When a function's incoming parameters are not in the input operands
list, gcc 4.5 does not load the parameters into registers before
calling this function, but the inline assembly assumes valid addresses
inside this function. This breaks the code because r0 and r1 are
invalid when execution enters v4wb_copy_user_page().
Also, the constant needs to be used as the third input operand, so
account for that as well.
Tested on qemu arm.
CC: <stable@kernel.org>
Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
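
For illustration, a minimal sketch of the operand arrangement this
describes (simplified: a small 16-byte-per-iteration loop with none of
the real routine's cache maintenance; the constant is tied to the third
operand via a matching constraint):

  #include <asm/page.h>

  static void v4wb_copy_user_page(void *kto, const void *kfrom)
  {
      int tmp;

      /* kto, kfrom and the loop count are all explicit operands, so
       * gcc must place them in registers before the asm executes -
       * nothing is assumed about r0/r1 on entry */
      asm volatile(
      "1: ldmia   %1!, {r2, r3, r4, r5}   @ load 16 bytes\n"
      "   stmia   %0!, {r2, r3, r4, r5}   @ store 16 bytes\n"
      "   subs    %2, %2, #1\n"
      "   bne     1b"
      : "+&r" (kto), "+&r" (kfrom), "=&r" (tmp)
      : "2" (PAGE_SIZE / 16)
      : "r2", "r3", "r4", "r5", "cc", "memory");
  }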
Steven Walter <stevenrwalter@gmail.com> writes:
> I've been tracking down an instance of userspace data corruption,
> and I believe I have found a window during fork where data can be
> lost. The corruption is occurring on an ARMv5 system with VIVT
> caches. Here's the scenario in question. Thread A is forking,
> Thread B is running in userspace:
>
> Thread A: flush_cache_mm() (dup_mmap)
> Thread B: writes to a page in the above mm
> Thread A: pte_wrprotect() the above page (copy_one_pte)
> Thread B: writes to the same page again
>
> During thread B's second write, he'll take a fault and enter the
> do_wp_page() case. We'll end up calling copy_page(), which notably
> uses the kernel virtual addresses for the old and new pages. This
> means that the new page does not necessarily have the data from the
> first write. Now there are two conflicting copies of the same
> cache-line in dcache. If the userspace cache-line flushes before
> the kernel cache-line, we lose the changes made during the first
> write. do_wp_page does call flush_dcache_page on the newly-copied
> page, but there's still a window where the CPU could flush the
> userspace cache-line before then.
Resolve this by flushing the user mapping before copying the page
on processors with a writeback VIVT cache.
Note: this does have a performance impact, and so needs further
consideration before being merged - can we optimize out some of
the cache flushes if, e.g., we know that the page isn't yet mapped?
Thread: <e06498070903061426o5875ad13hc6328aa0d3f08ed7@mail.gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
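
A minimal sketch of the resulting pattern (the function name is
illustrative, kmap_atomic() is shown in its modern single-argument
form, and the real implementations are per-processor):

  #include <linux/highmem.h>
  #include <linux/mm.h>
  #include <asm/cacheflush.h>

  static void vivt_copy_user_highpage(struct page *to, struct page *from,
                                      unsigned long vaddr,
                                      struct vm_area_struct *vma)
  {
      void *kto = kmap_atomic(to);
      void *kfrom = kmap_atomic(from);

      /* write back the dirty user-space cache lines of 'from' first,
       * so the copy through the kernel alias sees thread B's write */
      flush_cache_page(vma, vaddr, page_to_pfn(from));
      copy_page(kto, kfrom);

      kunmap_atomic(kfrom);
      kunmap_atomic(kto);
  }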
Our copy_user_highpage() implementations may require cache maintenance.
Ensure that implementations have all necessary details to perform this
maintenance.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
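
A sketch of the kind of interface change this implies (assuming, as in
the fix above, that the needed detail is the VMA of the user mapping):

  /* the copypage hook receives the VMA so an implementation can find
   * and maintain the user-space mapping of the source page */
  void (*cpu_copy_user_highpage)(struct page *to, struct page *from,
                                 unsigned long vaddr,
                                 struct vm_area_struct *vma);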
This is a fix for the following crash observed in 2.6.29-rc3:
http://lkml.org/lkml/2009/1/29/150
On ARM it doesn't make sense to trace a naked function because then
mcount is called without the stack and frame pointer being set up, and
there is no chance to restore the lr register to the value it had
before mcount was called.
Reported-by: Matthias Kaehlcke <matthias@kaehlcke.net>
Tested-by: Matthias Kaehlcke <matthias@kaehlcke.net>
Cc: Abhishek Sagar <sagar.abhishek@gmail.com>
Cc: Steven Rostedt <rostedt@home.goodmis.org>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
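
One way to express the constraint (a sketch, assuming the annotation is
attached to the compiler's naked-attribute wrapper rather than to each
function individually):

  /* naked functions have no stack or frame pointer set up, so mcount
   * must never be inserted into them: make __naked imply notrace */
  #define __naked __attribute__((naked)) notrace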
In all cases the kaddr is assigned an input register even though it is
modified in the assembly code. Let's assign a new variable to the
modified value and mark those inline asm statements volatile, otherwise
they get optimized away because the output variable is otherwise unused.
Also fix a few conversion errors in copypage-feroceon.c and
copypage-v4mc.c.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
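
A sketch of the pattern described (names are illustrative; the real
per-processor loops copy a page with more registers at a time):

  #include <asm/page.h>

  static void example_copy_page(void *kto, const void *kfrom)
  {
      void *kto_out;
      const void *kfrom_out;

      /* the asm advances both pointers, so bind the modified values
       * to new output variables ("0"/"1" tie them to the inputs) and
       * mark the asm volatile so it is not removed as dead code */
      asm volatile(
      "1: ldr     r2, [%1], #4\n"
      "   str     r2, [%0], #4\n"
      "   cmp     %1, %4\n"
      "   bne     1b"
      : "=&r" (kto_out), "=&r" (kfrom_out)
      : "0" (kto), "1" (kfrom), "r" (kfrom + PAGE_SIZE)
      : "r2", "cc", "memory");
  }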
For reasons similar to those for copy_user_page(), we want to avoid the
additional kmap_atomic if it's unnecessary.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We used to override the copy_user_page() function. However, this
is not only inefficient, it also causes additional complexity for
highmem support, since we convert from a struct page to a kernel
direct-mapped address and back to a struct page again.
Moreover, with highmem support, we end up pointlessly setting up
kmap entries for pages which we're going to remap. So, push the
kmapping down into the copypage implementation files where it's
required.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
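
A sketch of the shape of the change (hook names as in the ARM tree;
signatures abbreviated):

  /* before: the caller kmaps both pages and hands the implementation
   * kernel direct-mapped addresses, forcing page -> kaddr -> page
   * round trips */
  void (*cpu_copy_user_page)(void *kto, const void *kfrom,
                             unsigned long vaddr);

  /* after: the hook receives the struct pages and performs whatever
   * kmapping it actually needs itself */
  void (*cpu_copy_user_highpage)(struct page *to, struct page *from,
                                 unsigned long vaddr);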