We always have vma->vm_mm around.
Link: http://lkml.kernel.org/r/1466021202-61880-8-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|\
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Pull Xtensa updates from Chris Zankel:
"Xtensa improvements for 4.6:
- control whether perf IRQ is treated as NMI from Kconfig
- implement ioremap for regions outside KIO segment
- fix ISS serial port behaviour when EOF is reached
- fix preemption in {clear,copy}_user_highpage
- fix endianness issues for XTFPGA devices, big-endian cores are now
fully functional
- clean up debug infrastructure and add support for hardware
breakpoints and watchpoints
- add processor configurations for Three Core HiFi-2 MX and HiFi3
cpus"
* tag 'xtensa-next-20160320' of git://github.com/czankel/xtensa-linux:
xtensa: add test_kc705_hifi variant
xtensa: add Three Core HiFi-2 MX Variant.
xtensa: support hardware breakpoints/watchpoints
xtensa: use context structure for debug exceptions
xtensa: remove remaining non-functional KGDB bits
xtensa: clear all DBREAKC registers on start
xtensa: xtfpga: fix earlycon endianness
xtensa: xtfpga: fix i2c controller register width and endianness
xtensa: xtfpga: fix ethernet controller endianness
xtensa: xtfpga: fix serial port register width and endianness
xtensa: define CONFIG_CPU_{BIG,LITTLE}_ENDIAN
xtensa: fix preemption in {clear,copy}_user_highpage
xtensa: ISS: don't hang if stdin EOF is reached
xtensa: support ioremap for memory outside KIO region
xtensa: use XTENSA_INT_LEVEL macro in asm/timex.h
xtensa: make fake NMI configurable
Disabling page faults makes little sense there; disabling preemption is
what was meant.
Cc: stable@vger.kernel.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Map physical memory outside the KIO region into the vmalloc area.
Unmap it with vunmap.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
The define has a comment from Nick Piggin from 2007:
/* For backwards compat. Remove me quickly. */
I guess that after 9 years the removal can hardly count as too hurried a
reading of 'quickly', even by kernel standards.
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Let's define page_mapped() to be true for compound pages if any
sub-page of the compound page is mapped (with a PMD or PTE).
On the other hand, page_mapcount() returns the mapcount of that
particular small page.
This will make cases like page_get_anon_vma() behave correctly once we
allow huge pages to be mapped with PTEs.
Most users outside core mm should use page_mapcount() instead of
page_mapped().
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Tested-by: Sasha Levin <sasha.levin@oracle.com>
Tested-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Steve Capper <steve.capper@linaro.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
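To make the distinction above concrete, here is a minimal user-space
sketch of the two semantics; struct toy_page and its fields are invented
for illustration and do not reflect the kernel's struct page layout.

```c
/* Toy model of the page_mapped()/page_mapcount() distinction described
 * above: page_mapped() looks at the whole compound page, page_mapcount()
 * at one small page.  All names here are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

#define SUBPAGES_PER_COMPOUND 8

struct toy_page {
	int mapcount[SUBPAGES_PER_COMPOUND];	/* per small page */
};

/* page_mapcount(): mapcount of one particular small page */
static int toy_page_mapcount(const struct toy_page *p, int subpage)
{
	return p->mapcount[subpage];
}

/* page_mapped(): true if *any* sub-page of the compound page is mapped */
static bool toy_page_mapped(const struct toy_page *p)
{
	for (int i = 0; i < SUBPAGES_PER_COMPOUND; i++)
		if (p->mapcount[i] > 0)
			return true;
	return false;
}

int main(void)
{
	struct toy_page huge = { .mapcount = { 0 } };

	huge.mapcount[3] = 1;	/* one tail page mapped with a PTE */
	printf("page_mapped:   %d\n", toy_page_mapped(&huge));		/* 1 */
	printf("page_mapcount: %d\n", toy_page_mapcount(&huge, 0));	/* 0 */
	return 0;
}
```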
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Introduce faulthandler_disabled() and use it to check for irq context and
disabled pagefaults (via pagefault_disable()) in the pagefault handlers.
Please note that we keep the in_atomic() checks in place - to detect
whether we are in irq context (in which case preemption is always
properly disabled).
In contrast, preempt_disable() should never be used to disable pagefaults.
With !CONFIG_PREEMPT_COUNT, preempt_disable() doesn't modify the preempt
counter, and therefore the result of in_atomic() differs.
We validate that condition by using might_fault() checks when calling
might_sleep().
Therefore, add a comment to faulthandler_disabled(), describing why this
is needed.
faulthandler_disabled() and pagefault_disable() are defined in
linux/uaccess.h, so let's properly add that include to all relevant files.
This patch is based on a patch from Thomas Gleixner.
Reviewed-and-tested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: David.Laight@ACULAB.COM
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: airlied@linux.ie
Cc: akpm@linux-foundation.org
Cc: benh@kernel.crashing.org
Cc: bigeasy@linutronix.de
Cc: borntraeger@de.ibm.com
Cc: daniel.vetter@intel.com
Cc: heiko.carstens@de.ibm.com
Cc: herbert@gondor.apana.org.au
Cc: hocko@suse.cz
Cc: hughd@google.com
Cc: mst@redhat.com
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: schwidefsky@de.ibm.com
Cc: yang.shi@windriver.com
Link: http://lkml.kernel.org/r/1431359540-32227-7-git-send-email-dahi@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
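A minimal user-space model of the rule this change introduces: the fault
handler bails out both when page faults were disabled explicitly and when
running in irq context. The counters and names below are toy stand-ins,
not the kernel implementation.

```c
/* Toy model of the check described above: "page faults disabled via
 * pagefault_disable()" and "in irq context" are treated the same way
 * by the fault handler. */
#include <stdbool.h>
#include <stdio.h>

static int pagefault_disable_count;	/* bumped by pagefault_disable() */
static bool in_irq_context;		/* set while an irq handler runs */

static void toy_pagefault_disable(void) { pagefault_disable_count++; }
static void toy_pagefault_enable(void)  { pagefault_disable_count--; }

/* Roughly what faulthandler_disabled() expresses: don't try to handle
 * the fault (no sleeping, no mm lookups) in these contexts. */
static bool toy_faulthandler_disabled(void)
{
	return pagefault_disable_count > 0 || in_irq_context;
}

int main(void)
{
	printf("normal context: %d\n", toy_faulthandler_disabled());	/* 0 */

	toy_pagefault_disable();
	printf("pagefaults off: %d\n", toy_faulthandler_disabled());	/* 1 */
	toy_pagefault_enable();

	in_irq_context = true;
	printf("irq context:    %d\n", toy_faulthandler_disabled());	/* 1 */
	return 0;
}
```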
The existing code relies on pagefault_disable() implicitly disabling
preemption, so that no schedule will happen between kmap_atomic() and
kunmap_atomic().
Let's make this explicit, to prepare for pagefault_disable() not
touching preemption anymore.
Reviewed-and-tested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: David.Laight@ACULAB.COM
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: airlied@linux.ie
Cc: akpm@linux-foundation.org
Cc: benh@kernel.crashing.org
Cc: bigeasy@linutronix.de
Cc: borntraeger@de.ibm.com
Cc: daniel.vetter@intel.com
Cc: heiko.carstens@de.ibm.com
Cc: herbert@gondor.apana.org.au
Cc: hocko@suse.cz
Cc: hughd@google.com
Cc: mst@redhat.com
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: schwidefsky@de.ibm.com
Cc: yang.shi@windriver.com
Link: http://lkml.kernel.org/r/1431359540-32227-5-git-send-email-dahi@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
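A sketch of the resulting calling pattern, assuming toy stand-ins for the
kernel primitives: preemption is disabled explicitly around the atomic
mapping rather than being implied by pagefault_disable().

```c
/* User-space sketch of the pattern change described above; all functions
 * are hypothetical stand-ins, not the kernel API. */
#include <assert.h>
#include <stdio.h>

static int preempt_count_toy;
static int pagefault_count_toy;

static void toy_preempt_disable(void)   { preempt_count_toy++; }
static void toy_preempt_enable(void)    { preempt_count_toy--; }
static void toy_pagefault_disable(void) { pagefault_count_toy++; }
static void toy_pagefault_enable(void)  { pagefault_count_toy--; }

static void *toy_kmap_atomic(void *page)
{
	/* no schedule must happen until the matching kunmap: require that
	 * preemption is off explicitly, not as a side effect */
	assert(preempt_count_toy > 0);
	return page;
}

static void toy_kunmap_atomic(void *addr) { (void)addr; }

int main(void)
{
	char page[16];

	toy_preempt_disable();		/* explicit: the point of the patch */
	toy_pagefault_disable();
	void *va = toy_kmap_atomic(page);
	/* ... clear/copy through va ... */
	toy_kunmap_atomic(va);
	toy_pagefault_enable();
	toy_preempt_enable();

	printf("balanced: %d %d\n", preempt_count_toy, pagefault_count_toy);
	return 0;
}
```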
The core VM already knows about VM_FAULT_SIGBUS, but cannot return a
"you should SIGSEGV" error, because the SIGSEGV case was generally
handled by the caller - usually the architecture fault handler.
That results in lots of duplication - all the architecture fault
handlers end up doing very similar "look up vma, check permissions, do
retries etc" - but it generally works. However, there are cases where
the VM actually wants to SIGSEGV, and applications _expect_ SIGSEGV.
In particular, when accessing the stack guard page, libsigsegv expects a
SIGSEGV. And it usually got one, because the stack growth is handled by
that duplicated architecture fault handler.
However, when the generic VM layer started propagating the error return
from the stack expansion in commit fee7e49d4514 ("mm: propagate error
from stack expansion even for guard page"), that now exposed the
existing VM_FAULT_SIGBUS result to user space. And user space really
expected SIGSEGV, not SIGBUS.
To fix that case, we need to add a VM_FAULT_SIGSEGV, and teach all those
duplicate architecture fault handlers about it. They all already have
the code to handle SIGSEGV, so it's about just tying that new return
value to the existing code, but it's all a bit annoying.
This is the mindless minimal patch to do this. A more extensive patch
would be to try to gather up the mostly shared fault handling logic into
one generic helper routine, and long-term we really should do that
cleanup.
Just from this patch, you can generally see that most architectures just
copied (directly or indirectly) the old x86 way of doing things, but in
the meantime that original x86 model has been improved to hold the VM
semaphore for shorter times etc and to handle VM_FAULT_RETRY and other
"newer" things, so it would be a good idea to bring all those
improvements to the generic case and teach other architectures about
them too.
Reported-and-tested-by: Takashi Iwai <tiwai@suse.de>
Tested-by: Jan Engelhardt <jengelh@inai.de>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> # "s390 still compiles and boots"
Cc: linux-arch@vger.kernel.org
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
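A toy illustration of the extra case an architecture fault handler gains;
the enum names are hypothetical and only model the dispatch described
above.

```c
/* Sketch (user space, toy types) of the new dispatch: VM_FAULT_SIGSEGV is
 * routed to the existing "deliver SIGSEGV" path, next to the SIGBUS and
 * OOM cases that were already there. */
#include <stdio.h>

enum toy_fault { TOY_FAULT_OK, TOY_FAULT_OOM, TOY_FAULT_SIGBUS, TOY_FAULT_SIGSEGV };
enum toy_action { TOY_RETURN, TOY_KILL_SIGBUS, TOY_KILL_SIGSEGV, TOY_OOM_KILL };

static enum toy_action toy_handle_fault_result(enum toy_fault f)
{
	switch (f) {
	case TOY_FAULT_OOM:
		return TOY_OOM_KILL;
	case TOY_FAULT_SIGSEGV:		/* the new return value */
		return TOY_KILL_SIGSEGV;
	case TOY_FAULT_SIGBUS:
		return TOY_KILL_SIGBUS;
	default:
		return TOY_RETURN;
	}
}

int main(void)
{
	/* e.g. a failed stack-guard-page access now maps to SIGSEGV, not SIGBUS */
	printf("%d\n", toy_handle_fault_result(TOY_FAULT_SIGSEGV) == TOY_KILL_SIGSEGV);
	return 0;
}
```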
The noMMU configuration doesn't use a special area for vmalloc
allocations, so don't print it in the memory map.
PAGE_OFFSET is fixed to 0 in noMMU; use min_low_pfn and max_low_pfn for
the lowmem range display.
Make all XCHAL_KSEG_* constants unsigned long for consistency with other
addresses.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Memory accounting code can't handle pages below
PLATFORM_DEFAULT_MEM_START. Reserve those pages if they exist.
When PLATFORM_DEFAULT_MEM_START is zero, reserve one page at address 0
to make sure that successful memory allocations don't return NULL.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Most cache flushing code is only relevant with an MMU. Don't build it
for noMMU configurations.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Use __flush_invalidate_dcache_page_alias with the alias set to the
color of the page's physical address instead of
__flush_invalidate_dcache_page: this works for high memory pages, and
mapping/unmapping to the TLBTEMP area is virtually free.
Allow building configurations with aliasing cache and highmem enabled.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Define ARCH_PKMAP_COLORING and provide the corresponding macro
definitions on cores with an aliasing data cache.
Instead of a single last_pkmap_nr, maintain an array last_pkmap_nr_arr
of pkmap counters, one for each page color. Make sure that kmap maps a
physical page at a virtual address whose color matches that of its
physical address.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
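A user-space sketch of the per-color bookkeeping described above; the
sizes, names and slot layout are invented for the example.

```c
/* Toy per-color pkmap slot selection: one rotating counter per color,
 * and a page is only ever placed in a slot whose color matches the color
 * of its physical address. */
#include <stdio.h>

#define TOY_DCACHE_COLORS   4
#define TOY_PKMAP_PER_COLOR 8		/* slots of each color */

static unsigned last_pkmap_nr_arr[TOY_DCACHE_COLORS];	/* per-color counters */

/* color of a physical address: low bits of the page frame number */
static unsigned toy_color_of(unsigned long paddr, unsigned page_shift)
{
	return (paddr >> page_shift) % TOY_DCACHE_COLORS;
}

/* Return a slot index whose color equals 'color'; slots of the same color
 * are TOY_DCACHE_COLORS apart in the pkmap area. */
static unsigned toy_pick_pkmap_slot(unsigned color)
{
	unsigned n = last_pkmap_nr_arr[color];

	last_pkmap_nr_arr[color] = (n + 1) % TOY_PKMAP_PER_COLOR;
	return n * TOY_DCACHE_COLORS + color;
}

int main(void)
{
	unsigned long paddr = 0x5000;	/* with 4K pages: pfn 5, color 1 */
	unsigned color = toy_color_of(paddr, 12);

	printf("color %u -> slot %u\n", color, toy_pick_pkmap_slot(color));
	printf("color %u -> slot %u\n", color, toy_pick_pkmap_slot(color));
	return 0;
}
```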
Map high memory pages at virtual addresses whose color matches the
color of their physical address. Existing cache alias management
mechanisms may then be used with such pages.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
The existing clear_user_page and copy_user_page cannot be used with
highmem because they calculate the physical page address from its
virtual address, and do so incorrectly for a high memory page mapped
with kmap_atomic. Also, kmap is not needed, as the userspace mapping
color would most likely differ from the kmapped color.
Provide clear_user_highpage and copy_user_highpage functions that
determine whether a temporary mapping is needed for the pages. Move most
of the logic of the former clear_user_page and copy_user_page to
xtensa/mm/cache.c, leaving only temporary mapping setup, invalidation
and clearing/copying in xtensa/mm/misc.S. Rename these functions to
clear_page_alias and copy_page_alias.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
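A minimal sketch of the "is a temporary mapping needed" decision,
assuming a simple virtual-address-based color function; constants and
names are illustrative only.

```c
/* Toy version of the check described above: a temporary alias mapping
 * (and extra flushing) is only needed when the kernel virtual address and
 * the user virtual address of the page land in different cache colors. */
#include <stdbool.h>
#include <stdio.h>

#define TOY_PAGE_SHIFT    12
#define TOY_DCACHE_COLORS 4

static unsigned toy_color(unsigned long vaddr)
{
	return (vaddr >> TOY_PAGE_SHIFT) % TOY_DCACHE_COLORS;
}

/* true when clearing/copying through the plain kernel mapping would leave
 * stale lines in the alias the user will actually read through */
static bool toy_need_alias_mapping(unsigned long kvaddr, unsigned long uvaddr)
{
	return toy_color(kvaddr) != toy_color(uvaddr);
}

int main(void)
{
	printf("%d\n", toy_need_alias_mapping(0xd0001000, 0x00401000)); /* same color: 0 */
	printf("%d\n", toy_need_alias_mapping(0xd0001000, 0x00402000)); /* differ:     1 */
	return 0;
}
```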
To support aliasing caches, both kmap region sizes are multiplied by
the number of data cache colors. After that expansion, page tables that
cover the kmap regions may become larger than one page. Correctly
allocate and initialize page tables in this case.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
It's much easier to reason about the alignment and coloring of regions
located in the fixmap when the fixmap index is just a PFN within the
fixmap region. Change fixmap addressing so that index 0 corresponds to
FIXADDR_START instead of FIXADDR_TOP.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
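A toy pair of macros contrasting the two addressing schemes; the
constants are made up and only show how a bottom-up fixmap makes the
index a plain page offset from FIXADDR_START.

```c
/* Illustration of the addressing change: with a bottom-up fixmap, index 0
 * sits at FIXADDR_START and index N is simply N pages above it, which
 * makes color/alignment reasoning straightforward. */
#include <stdio.h>

#define TOY_PAGE_SHIFT    12
#define TOY_FIXADDR_START 0xcf000000UL
#define TOY_FIXADDR_TOP   0xcffff000UL

/* old style: index 0 just below the top, growing downwards */
#define TOY_FIX_TO_VIRT_TOPDOWN(idx) \
	(TOY_FIXADDR_TOP - ((unsigned long)(idx) << TOY_PAGE_SHIFT))

/* new style: index is a PFN offset from FIXADDR_START, growing upwards */
#define TOY_FIX_TO_VIRT_BOTTOMUP(idx) \
	(TOY_FIXADDR_START + ((unsigned long)(idx) << TOY_PAGE_SHIFT))

int main(void)
{
	printf("top-down  idx 3: 0x%lx\n", TOY_FIX_TO_VIRT_TOPDOWN(3));
	printf("bottom-up idx 3: 0x%lx\n", TOY_FIX_TO_VIRT_BOTTOMUP(3));
	return 0;
}
```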
When a sysmem reservation ends exactly at the end of an existing block,
that block is deleted, because it is incorrectly included in the range
of blocks to remove. Fix that by skipping such blocks.
Cc: stable@vger.kernel.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Introduce fixmap area just below the vmalloc region. Use it for atomic
mapping of high memory pages.
High memory on cores with cache aliasing is not supported and is still
to be implemented. Fail build for such configurations for now.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Don't flush whole TLB if only a small kernel range is requested.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Debug dump of the physical memory configuration. Useful for inspecting
the resulting memory map, especially in the presence of the memmap=
kernel option.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
This option is useful for reserving memory regions for secondary cores
in AMP configurations.
Implement the following memmap variants:
- memmap=nn[KMG]@ss[KMG]: force usage of a specific region of memory;
- memmap=nn[KMG]$ss[KMG]: mark specified memory as reserved;
- memmap=nn[KMG]: set end of memory.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
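A user-space sketch of parsing those three forms, using an invented
toy_memparse() helper in place of the kernel's memparse(); error handling
is omitted and the function names are made up for the example.

```c
/* Toy parser for the three memmap= forms listed above:
 * "nn[KMG]@ss[KMG]" (force region), "nn[KMG]$ss[KMG]" (reserve region)
 * and "nn[KMG]" (limit end of memory). */
#include <stdio.h>
#include <stdlib.h>

static unsigned long long toy_memparse(const char *s, char **end)
{
	unsigned long long v = strtoull(s, end, 0);

	switch (**end) {
	case 'K': case 'k': v <<= 10; (*end)++; break;
	case 'M': case 'm': v <<= 20; (*end)++; break;
	case 'G': case 'g': v <<= 30; (*end)++; break;
	}
	return v;
}

static void toy_parse_memmap(const char *arg)
{
	char *p;
	unsigned long long size = toy_memparse(arg, &p);

	if (*p == '@')
		printf("force   %#llx bytes at %#llx\n", size, toy_memparse(p + 1, &p));
	else if (*p == '$')
		printf("reserve %#llx bytes at %#llx\n", size, toy_memparse(p + 1, &p));
	else
		printf("limit end of memory to %#llx\n", size);
}

int main(void)
{
	toy_parse_memmap("64M@0x10000000");
	toy_parse_memmap("1M$0x1f000000");
	toy_parse_memmap("96M");
	return 0;
}
```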
Rewrite mem_reserve so that it keeps bank order.
Also make its return code more traditional.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Rewrite add_sysmem_bank so that it keeps bank order and merges
adjacent/overlapping banks.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
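A user-space sketch of that behaviour: sorted insertion plus merging of
adjacent or overlapping banks. The fixed-size array and function names
are invented for the example.

```c
/* Toy model of keeping memory banks ordered and merged. Banks are kept
 * sorted by start address; adjacent or overlapping banks become one. */
#include <stdio.h>

#define TOY_MAX_BANKS 8

struct toy_bank { unsigned long start, end; };	/* [start, end) */

static struct toy_bank banks[TOY_MAX_BANKS];
static int nr_banks;

static int toy_add_bank(unsigned long start, unsigned long end)
{
	int i = 0, j;

	/* find insertion point, keeping banks ordered by start */
	while (i < nr_banks && banks[i].start < start)
		i++;

	/* shift later banks up and insert the new one */
	if (nr_banks == TOY_MAX_BANKS)
		return -1;
	for (j = nr_banks; j > i; j--)
		banks[j] = banks[j - 1];
	banks[i] = (struct toy_bank){ start, end };
	nr_banks++;

	/* merge runs of adjacent/overlapping banks */
	for (i = 0; i + 1 < nr_banks; ) {
		if (banks[i].end >= banks[i + 1].start) {
			if (banks[i + 1].end > banks[i].end)
				banks[i].end = banks[i + 1].end;
			for (j = i + 1; j + 1 < nr_banks; j++)
				banks[j] = banks[j + 1];
			nr_banks--;
		} else {
			i++;
		}
	}
	return 0;
}

int main(void)
{
	toy_add_bank(0x10000000, 0x18000000);
	toy_add_bank(0x00000000, 0x08000000);
	toy_add_bank(0x08000000, 0x10000000);	/* adjacent: all three merge */

	for (int i = 0; i < nr_banks; i++)
		printf("bank %d: %#lx-%#lx\n", i, banks[i].start, banks[i].end);
	return 0;
}
```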
Bootparam meminfo is a bootloader ABI; kernel meminfo is for the
kernel's own bookkeeping, so keep them separate. The kernel doesn't care
about memory region types, so drop the type field and don't pass it to
add_sysmem_bank.
Move the kernel sysmem structures and prototypes to asm/sysmem.h, and
the sysmem variable and add_sysmem_bank to mm/init.c.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Xtensa fixes for 3.14:
- allow booting xtfpga on boards with new uBoot and >128MBytes memory;
- drop nonexistent GPIO32 support from fsf variant;
- don't select USE_GENERIC_SMP_HELPERS;
- enable common clock framework support, set up ethoc clock on xtfpga;
- wire up sched_setattr and sched_getattr syscalls.
Signed-off-by: Chris Zankel <chris@zankel.net>
This fixes a panic when booting on a machine with more than 128MB of
memory passed from the bootloader.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
The warning only shows up when building an MMUv3 configuration with OF
support disabled.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Use the simple-bus node to discover the io area, and remap the cached and
bypass io ranges. The parent-bus-address value of the first triplet in the
"ranges" property is used. This value is rounded down to the nearest 256MB
boundary. The length of the io area is fixed at 256MB; the "ranges" property
length value is ignored.
Other limitations: (1) only the first simple-bus node is considered, and (2)
only the first triplet of the "ranges" property is considered.
See ePAPR 1.1 §6.5 for the simple-bus node description, and §2.3.8 for the
"ranges" property description.
Signed-off-by: Baruch Siach <baruch@tkos.co.il>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
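A minimal sketch of the address selection rule, assuming the first
"ranges" triplet has already been read out of the device tree; the struct
layout is a simplification, not the real of_* parsing.

```c
/* Toy version of the io-area discovery rule described above: take the
 * parent-bus-address of the first "ranges" triplet, round it down to a
 * 256MB boundary, and use a fixed 256MB window. */
#include <stdio.h>

#define TOY_IO_AREA_SIZE 0x10000000UL	/* fixed 256MB */

struct toy_range { unsigned long child, parent, size; };	/* one triplet */

static unsigned long toy_io_base(const struct toy_range *first)
{
	/* round the parent bus address down to the nearest 256MB boundary */
	return first->parent & ~(TOY_IO_AREA_SIZE - 1);
}

int main(void)
{
	struct toy_range r = { 0x00000000, 0xfd050000, 0x01000000 };

	printf("io area: %#lx + %#lx\n", toy_io_base(&r), TOY_IO_AREA_SIZE);
	return 0;
}
```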
This is largely based on SMP code from the xtensa-2.6.29-smp tree by
Piet Delaney, Marc Gauthier, Joe Taylor, Christian Zankel (and possibly
other Tensilica folks).
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Chris Zankel <chris@zankel.net>
This fixes the following build warnings:
arch/xtensa/mm/misc.S: Assembler messages:
arch/xtensa/mm/misc.S:143: Warning: value 0xffffffff30000106 truncated to 0x30000106
arch/xtensa/mm/misc.S:197: Warning: value 0xffffffff30000106 truncated to 0x30000106
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Chris Zankel <chris@zankel.net>
At the moment xtensa uses the slab allocator for PTE tables. That
doesn't work with the split page table lock enabled: slab uses
page->slab_cache and page->first_page for its pages, and these fields
share storage with page->ptl.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Chris Zankel <chris@zankel.net>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Unlike global OOM handling, the memory cgroup code will invoke the OOM
killer in any OOM situation because it has no way of telling faults
occurring in kernel context - which could be handled more gracefully -
from user-triggered faults.
Pass a flag that identifies faults originating in user space from the
architecture-specific fault handlers to the generic code so that memcg
OOM handling can be improved.
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: azurIt <azurit@pobox.sk>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
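A toy model of the flag plumbing: the architecture handler marks
user-mode faults before calling the generic fault path. Flag values and
function names here are hypothetical.

```c
/* Sketch of passing a "this fault came from user space" bit from the
 * arch fault handler to generic fault code, so the latter can pick a
 * gentler path for kernel-context faults. */
#include <stdbool.h>
#include <stdio.h>

#define TOY_FAULT_FLAG_WRITE 0x1
#define TOY_FAULT_FLAG_USER  0x2	/* the new flag */

static void toy_handle_mm_fault(unsigned flags)
{
	if (flags & TOY_FAULT_FLAG_USER)
		printf("user fault: OOM killer may be invoked\n");
	else
		printf("kernel fault: can be handled more gracefully\n");
}

static void toy_arch_fault_handler(bool user_mode, bool write)
{
	unsigned flags = write ? TOY_FAULT_FLAG_WRITE : 0;

	if (user_mode)
		flags |= TOY_FAULT_FLAG_USER;	/* what the patch adds */
	toy_handle_mm_fault(flags);
}

int main(void)
{
	toy_arch_fault_handler(true, true);
	toy_arch_fault_handler(false, false);
	return 0;
}
```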
Pull Xtensa updates from Chris Zankel.
* tag 'xtensa-next-20130710' of git://github.com/czankel/xtensa-linux: (22 commits)
xtensa: remove the second argument of __bio_kmap_atomic()
xtensa: add static function tracer support
xtensa: Flat DeviceTree copy not future-safe
xtensa: check TLB sanity on return to userspace
xtensa: adjust boot parameters address when INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX is selected
xtensa: bootparams: fix typo
xtensa: tell git to ignore generated .dtb files
xtensa: ccount based sched_clock
xtensa: ccount based clockevent implementation
xtensa: consolidate ccount access routines
xtensa: cleanup ccount frequency tracking
xtensa: timex.h: remove unused symbols
xtensa: tell git to ignore copied zlib source files
xtensa: fix section mismatch in pcibios_fixup_bus
xtensa: ISS: fix section mismatch in iss_net_setup
arch: xtensa: include: asm: compiling issue, need cmpxchg64() defined.
xtensa: xtfpga: fix section mismatch
xtensa: remove unused platform_init_irq()
xtensa: tell git to ignore generated files
xtensa: flush TLB entries for pages of non-current mm correctly
...
- check that user TLB mappings correspond to the current page table;
- check that TLB mapping VPN is in the kernel/user address range
in accordance with its ASID.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Chris Zankel <chris@zankel.net>
Sometimes under high memory pressure one process gets a page of another
process, which manifests itself as an invalid instruction exception.
This happens because flush_tlb_page fails to clear TLB entries when
called with a vma that does not belong to the current mm, because it
does not set RASID appropriately. When the page reclaiming mechanism
swaps physical pages out, replacing their PTEs with none or swap PTEs,
it calls flush_tlb_page. Later the physical page may be reused
elsewhere, but the stale TLB mapping still refers to it, allowing the
process that owned the mapping to see the new state of that physical
page.
Put the ASID of the mm that owns the vma into RASID to fix that issue.
Also replace the otherwise meaningless local_save_flags with
local_irq_save.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Chris Zankel <chris@zankel.net>
Prepare for removing num_physpages and simplify mem_init().
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Concentrate the code that modifies totalram_pages in the mm core, so
the arch memory initialization code doesn't need to take care of it.
With these changes applied, only the following functions from the mm
core modify the global variable totalram_pages: free_bootmem_late(),
free_all_bootmem(), free_all_bootmem_node(),
adjust_managed_page_count().
With this patch applied, it will be much easier for us to keep
totalram_pages and zone->managed_pages consistent.
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Acked-by: David Howells <dhowells@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: <sworddragon2@aol.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Jianguo Wu <wujianguo@huawei.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Michel Lespinasse <walken@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Address more review comments from last round of code review.
1) Enhance free_reserved_area() to support poisoning freed memory with
pattern '0'. This could be used to get rid of poison_init_mem()
on ARM64.
2) A previous patch has disabled memory poison for initmem on s390
by mistake, so restore to the original behavior.
3) Remove redundant PAGE_ALIGN() when calling free_reserved_area().
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: <sworddragon2@aol.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Jianguo Wu <wujianguo@huawei.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Michel Lespinasse <walken@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
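A user-space model of a free_reserved_area()-style helper with the
optional poisoning described in item 1, where a negative pattern means
"don't poison". This is a sketch under those assumptions, not the kernel
code.

```c
/* Toy free_reserved_area(): walk the range page by page, optionally fill
 * each page with a poison pattern, and report how many pages were
 * released. */
#include <stdio.h>
#include <string.h>

#define TOY_PAGE_SIZE 4096

static unsigned long toy_free_reserved_area(void *start, void *end,
					    int poison, const char *what)
{
	unsigned long pages = 0;
	char *p;

	for (p = start; p + TOY_PAGE_SIZE <= (char *)end; p += TOY_PAGE_SIZE) {
		if (poison >= 0)
			memset(p, poison, TOY_PAGE_SIZE);	/* e.g. pattern 0 */
		/* here the kernel would hand the page back to the allocator */
		pages++;
	}
	if (what)
		printf("Freeing %s memory: %luK\n",
		       what, pages * (TOY_PAGE_SIZE / 1024));
	return pages;
}

int main(void)
{
	static char initmem[8 * TOY_PAGE_SIZE];

	toy_free_reserved_area(initmem, initmem + sizeof(initmem), 0, "unused");
	return 0;
}
```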
Change signature of free_reserved_area() according to Russell King's
suggestion to fix following build warnings:
arch/arm/mm/init.c: In function 'mem_init':
arch/arm/mm/init.c:603:2: warning: passing argument 1 of 'free_reserved_area' makes integer from pointer without a cast [enabled by default]
free_reserved_area(__va(PHYS_PFN_OFFSET), swapper_pg_dir, 0, NULL);
^
In file included from include/linux/mman.h:4:0,
from arch/arm/mm/init.c:15:
include/linux/mm.h:1301:22: note: expected 'long unsigned int' but argument is of type 'void *'
extern unsigned long free_reserved_area(unsigned long start, unsigned long end,
mm/page_alloc.c: In function 'free_reserved_area':
>> mm/page_alloc.c:5134:3: warning: passing argument 1 of 'virt_to_phys' makes pointer from integer without a cast [enabled by default]
In file included from arch/mips/include/asm/page.h:49:0,
from include/linux/mmzone.h:20,
from include/linux/gfp.h:4,
from include/linux/mm.h:8,
from mm/page_alloc.c:18:
arch/mips/include/asm/io.h:119:29: note: expected 'const volatile void *' but argument is of type 'long unsigned int'
mm/page_alloc.c: In function 'free_area_init_nodes':
mm/page_alloc.c:5030:34: warning: array subscript is below array bounds [-Warray-bounds]
Also address some minor code review comments.
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Reported-by: Arnd Bergmann <arnd@arndb.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: <sworddragon2@aol.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Jianguo Wu <wujianguo@huawei.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Michel Lespinasse <walken@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull xtensa updates from Chris Zankel:
"Support for the latest MMU architecture that allows for a larger
accessible memory region, and various bug-fixes"
* tag 'xtensa-next-20130508' of git://github.com/czankel/xtensa-linux:
xtensa: Switch to asm-generic/linkage.h
xtensa: fix redboot load address
xtensa: ISS: fix timer_lock usage in rs_open
xtensa: disable IRQs while IRQ handler is running
xtensa: enable lockdep support
xtensa: fix arch_irqs_disabled_flags implementation
xtensa: add irq flags trace support
xtensa: provide custom CALLER_ADDR* implementations
xtensa: add stacktrace support
xtensa: clean up stpill_registers
xtensa: don't use a7 in simcalls
xtensa: don't attempt to use unconfigured timers
xtensa: provide default platform_pcibios_init implementation
xtensa: remove KCORE_ELF again
xtensa: document MMUv3 setup sequence
xtensa: add MMU v3 support
xtensa: fix ibreakenable register update
xtensa: fix oprofile building as module
MMUv3 comes out of reset with identity vaddr -> paddr mapping in the TLB
way 6:
Way 6 (512 MB)
Vaddr Paddr ASID Attr RWX Cache
---------- ---------- ---- ---- --- -------
0x00000000 0x00000000 0x01 0x03 RWX Bypass
0x20000000 0x20000000 0x01 0x03 RWX Bypass
0x40000000 0x40000000 0x01 0x03 RWX Bypass
0x60000000 0x60000000 0x01 0x03 RWX Bypass
0x80000000 0x80000000 0x01 0x03 RWX Bypass
0xa0000000 0xa0000000 0x01 0x03 RWX Bypass
0xc0000000 0xc0000000 0x01 0x03 RWX Bypass
0xe0000000 0xe0000000 0x01 0x03 RWX Bypass
This patch adds remapping code at the reset vector or at the kernel
_start (depending on CONFIG_INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX) that
reconfigures MMUv3 as MMUv2:
Way 5 (128 MB)
Vaddr Paddr ASID Attr RWX Cache
---------- ---------- ---- ---- --- -------
0xd0000000 0x00000000 0x01 0x07 RWX WB
0xd8000000 0x00000000 0x01 0x03 RWX Bypass
Way 6 (256 MB)
Vaddr Paddr ASID Attr RWX Cache
---------- ---------- ---- ---- --- -------
0xe0000000 0xf0000000 0x01 0x07 RWX WB
0xf0000000 0xf0000000 0x01 0x03 RWX Bypass
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Chris Zankel <chris@zankel.net>
Use common helper functions to free reserved pages.
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Remove leading and trailing spaces, trim trailing lines, and wrap lines
that are longer than 80 characters.
Signed-off-by: Chris Zankel <chris@zankel.net>
set_rasid_register accepts the new RASID SR value, but ASID_USER_FIRST
is the ASID value for ring 1; the RASID value is made with the
ASID_INSERT macro.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Chris Zankel <chris@zankel.net>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Chris Zankel <chris@zankel.net>
Use ENDPROC() to mark the end of assembler functions.
Signed-off-by: Chris Zankel <chris@zankel.net>
Signed-off-by: Marc Gauthier <marc@tensilica.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Chris Zankel <chris@zankel.net>
|