path: root/arch/arm/include/asm
Commit message | Author | Age | Files | Lines
Merge branch 'omap-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap-2.6 | Linus Torvalds | 2011-01-06 | 1 | -1/+11
    * 'omap-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap-2.6: (243 commits)
      omap2: Make OMAP2PLUS select OMAP_DM_TIMER
      OMAP4: hwmod data: Fix alignment and end of line in structure fields
      OMAP4: hwmod data: Move the DMA structures
      OMAP4: hwmod data: Move the smartreflex structures
      OMAP4: hwmod data: Fix missing SIDLE_SMART_WKUP in smartreflex sysc
      arm: omap: tusb6010: add name for MUSB IRQ
      arm: omap: craneboard: Add USB EHCI support
      omap2+: Initialize serial port for dynamic remuxing for n8x0
      omap2+: Add struct omap_board_data and use it for platform level serial init
      omap2+: Allow hwmod state changes to mux pads based on the state changes
      omap2+: Add support for hwmod specific muxing of devices
      omap2+: Add omap_mux_get_by_name
      OMAP2: PM: fix compile error when !CONFIG_SUSPEND
      MAINTAINERS: OMAP: hwmod: update hwmod code, data maintainership
      OMAP4: Smartreflex framework extensions
      OMAP4: hwmod: Add inital data for smartreflex modules.
      OMAP4: PM: Program correct init voltages for scalable VDDs
      OMAP4: Adding voltage driver support
      OMAP4: Register voltage PMIC parameters with the voltage layer
      OMAP3: PM: Program correct init voltages for VDD1 and VDD2
      ...
    Fix up trivial conflict in arch/arm/plat-omap/Kconfig
Merge branches 'devel-iommu-mailbox' and 'devel-l2x0' into omap-for-linus | Tony Lindgren | 2010-12-20 | 2 | -9/+16
ARM: l2x0: Add aux control register bitfields | Santosh Shilimkar | 2010-12-18 | 1 | -1/+11
    This patch adds the PL310 Auxiliary Control Register bitfields so that SoCs can use these bit fields to construct the AUXCTRL value to be passed/programmed instead of hardcoding it.
    Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Tony Lindgren <tony@atomide.com>
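    A minimal sketch of how an SoC might build its AUXCTRL value from such bitfield definitions; the macro names, shift values and field contents below are illustrative assumptions, not quoted from the patch:

        /* Illustrative bitfield definitions (assumed names and shifts). */
        #define L2X0_AUX_CTRL_WAY_SIZE_SHIFT            17
        #define L2X0_AUX_CTRL_SHARE_OVERRIDE_SHIFT      22
        #define L2X0_AUX_CTRL_DATA_PREFETCH_SHIFT       28
        #define L2X0_AUX_CTRL_INSTR_PREFETCH_SHIFT      29
        #define L2X0_AUX_CTRL_EARLY_BRESP_SHIFT         30

        /* An SoC constructs the value from named fields instead of a magic number. */
        static unsigned int build_l2x0_auxctrl(void)
        {
                return (3 << L2X0_AUX_CTRL_WAY_SIZE_SHIFT) |    /* way size field, SoC-specific */
                       (1 << L2X0_AUX_CTRL_SHARE_OVERRIDE_SHIFT) |
                       (1 << L2X0_AUX_CTRL_DATA_PREFETCH_SHIFT) |
                       (1 << L2X0_AUX_CTRL_INSTR_PREFETCH_SHIFT) |
                       (1 << L2X0_AUX_CTRL_EARLY_BRESP_SHIFT);
        }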
Merge branch 'devel-stable' into devel | Russell King | 2011-01-06 | 1 | -2/+2
    Conflicts:
      arch/arm/mach-pxa/clock.c
      arch/arm/mach-pxa/clock.h
Merge branch 'hw-breakpoint' of git://repo.or.cz/linux-2.6/linux-wd into devel-stable | Russell King | 2010-12-18 | 1 | -2/+2
ARM: hw_breakpoint: unify single-stepping code for watchpoints and breakpoints | Will Deacon | 2010-12-06 | 1 | -2/+2
    The single-stepping code is currently different depending on whether we are stepping over a breakpoint or a watchpoint. There is no good reason for this, so let's sort it out.
    This patch adds functions for enabling/disabling single-step for a particular hw_breakpoint and integrates this with the exception handling code.
    Signed-off-by: Will Deacon <will.deacon@arm.com>
ARM: hw_breakpoint: do not allocate new breakpoints with preemption disabled | Will Deacon | 2010-12-06 | 1 | -1/+1
    The watchpoint single-stepping code calls register_user_hw_breakpoint to register a mismatch breakpoint for stepping over the watchpoint. This is performed with preemption disabled, which is unsafe as we may end up scheduling whilst in_atomic(). Furthermore, using the perf API is rather overkill since we are already in the hw-breakpoint backend and only require access to reserved breakpoints anyway.
    This patch reworks the watchpoint stepping code so that we don't require another perf_event for the mismatch breakpoint. Instead, we hold a separate arch_hw_breakpoint_ctrl struct inside the watchpoint which is used exclusively for stepping. We can check whether or not stepping is enabled when installing or uninstalling the watchpoint and operate on the breakpoint accordingly.
    Signed-off-by: Will Deacon <will.deacon@arm.com>
Merge branch 'pgt' (early part) into devel | Russell King | 2011-01-06 | 3 | -190/+181
ARM: pgtable: provide RDONLY page table bit rather than WRITE bit | Russell King | 2010-12-22 | 1 | -19/+19
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM: pgtable: invert L_PTE_EXEC to L_PTE_XN | Russell King | 2010-12-22 | 1 | -22/+22
    The hardware page tables use an XN bit 'execute never'. Historically, we've had a Linux 'execute allow' bit, in the positive sense. Get rid of this artifact as future hardware will continue to have the XN sense.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM: pgtable: remove FIRST_USER_PGD_NR | Russell King | 2010-12-22 | 1 | -2/+1
    FIRST_USER_PGD_NR is now unnecessary, as this has been replaced by FIRST_USER_ADDRESS except in the architecture code. Fix up the last usage of FIRST_USER_PGD_NR, and remove the definition.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM: pgtable: collect up identity mapping functions | Russell King | 2010-12-22 | 1 | -0/+3
    We have two places where we create identity mappings - one when we bring secondary CPUs online, and one where we setup some mappings for soft-reboot. Combine these two into a single implementation. Also collect the identity mapping deletion function.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM: pgtable: switch order of Linux vs hardware page tables | Russell King | 2010-12-22 | 2 | -38/+32
    This switches the ordering of the Linux vs hardware page tables in each page, thereby eliminating some of the arithmetic in the page table walks. As we now place the Linux page table at the beginning of the page, we can deal with the offset in the pgt by simply masking it away, along with the other control bits. This also makes the arithmetic all be positive, rather than a mixture.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM: pgtable: introduce pteval_t to represent a pte value | Russell King | 2010-11-26 | 2 | -22/+25
    This makes everywhere dealing with pte values use the same type.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
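    A small hedged sketch of the kind of typedef this describes; the underlying type and the helper shown are assumptions for illustration, not the patch's exact code:

        /* Assumed underlying type; the point is one named type for raw PTE bits. */
        typedef unsigned long pteval_t;

        /* Code that manipulates raw pte bits can then be typed consistently. */
        static inline pteval_t pte_clear_bits(pteval_t pte, pteval_t mask)
        {
                return pte & ~mask;     /* hypothetical helper, illustration only */
        }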
ARM: pgtable: use phys_addr_t for physical addresses | Russell King | 2010-11-26 | 2 | -4/+6
    Ensure that physical addresses are typed as phys_addr_t.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM: pgtable: get rid of get_pgd_slow()/free_pgd_slow() | Russell King | 2010-11-26 | 1 | -5/+2
    These old names are just aliases for pgd_alloc/pgd_free. Just use the new names.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM: pgtable: directly pass pgd/pmd/pte to their error functions | Russell King | 2010-11-26 | 1 | -6/+6
    Rather than passing the pte value to __pte_error, pass the raw pte_t cookie instead. Do the same for pmd and pgd functions.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM: pgtable: group pte functions together | Russell King | 2010-11-26 | 1 | -59/+51
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM: pgtable: group pgd functions and data together | Russell King | 2010-11-26 | 1 | -21/+23
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM: pgtable: move pgprot functions to one place | Russell King | 2010-11-26 | 1 | -22/+21
    Rather than scattering them throughout the file, group them together.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Merge branch 'misc' into devel | Russell King | 2011-01-06 | 18 | -105/+239
    Conflicts:
      arch/arm/Kconfig
      arch/arm/common/Makefile
      arch/arm/kernel/Makefile
      arch/arm/kernel/smp.c
Merge branch 'smp' into misc | Russell King | 2011-01-06 | 11 | -45/+57
    Conflicts:
      arch/arm/kernel/entry-armv.S
      arch/arm/mm/ioremap.c
ARM: localtimer: clean up local timer on hot unplug | Russell King | 2010-12-20 | 2 | -13/+0
    When a CPU is hot unplugged, the generic tick code cleans up the clock event device, but fails to call down to the device's set_mode function to actually shut the device down. To work around this, we've historically had a local_timer_stop() callback out of the hotplug code. However, this adds needless complexity when we have the clock event device itself available.
    Explicitly call the clock event device's set_mode function with CLOCK_EVT_MODE_UNUSED, so that the hardware can be cleanly shutdown without any special external callbacks. When/if the generic code is fixed, percpu_timer_stop() can be killed off.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
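    A hedged sketch of the shutdown path described above; the per-CPU clockevent variable name is an assumption, not necessarily the symbol used by the patch:

        #include <linux/clockchips.h>
        #include <linux/percpu.h>
        #include <linux/smp.h>

        /*
         * Illustrative only: shut the local timer down through the clockevent
         * device itself rather than a platform local_timer_stop() callback.
         */
        static void percpu_timer_stop(void)
        {
                unsigned int cpu = smp_processor_id();
                struct clock_event_device *evt = &per_cpu(percpu_clockevent, cpu);

                evt->set_mode(CLOCK_EVT_MODE_UNUSED, evt);
        }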
ARM: 6516/1: Allow SMP_ON_UP to work with Thumb-2 kernels. | Dave Martin | 2010-12-20 | 1 | -3/+19
    * __fixup_smp_on_up has been modified with support for the THUMB2_KERNEL case. For THUMB2_KERNEL only, fixups are split into halfwords in case of misalignment, since we can't rely on unaligned accesses working before turning the MMU on. No attempt is made to optimise the aligned case, since the number of fixups is typically small, and it seems best to keep the code as simple as possible.
    * Add a rotate in the fixup_smp code in order to support CPU_BIG_ENDIAN, as suggested by Nicolas Pitre.
    * Add an assembly-time sanity-check to ALT_UP() to ensure that the content really is the right size (4 bytes). (No check is done for ALT_SMP(). Possibly, this could be fixed by splitting the two uses of ALT_SMP() (ALT_SMP...SMP_UP versus ALT_SMP...SMP_UP_B) into two macros. In the first case, ALT_SMP needs to expand to >= 4 bytes, not == 4.)
    * smp_mpidr.h (which implements ALT_SMP()/ALT_UP() manually due to macro limitations) has not been modified: the affected instruction (mov) has no 16-bit encoding, so the correct instruction size is satisfied in this case.
    * A "mode" parameter has been added to smp_dmb:
        smp_dmb arm   @ assumes 4-byte instructions (for ARM code, e.g. kuser)
        smp_dmb       @ uses W() to ensure 4-byte instructions for ALT_SMP()
      This avoids assembly failures due to use of W() inside smp_dmb, when assembling pure-ARM code in the vectors page. There might be a better way to achieve this.
    * Kconfig: make SMP_ON_UP depend on (!THUMB2_KERNEL || !BIG_ENDIAN), i.e. THUMB2_KERNEL is now supported, but only if !BIG_ENDIAN (the fixup code for Thumb-2 currently assumes little-endian order).
    Tested using a single generic realview kernel on:
      ARM RealView PB-A8 (CONFIG_THUMB2_KERNEL={n,y})
      ARM RealView PBX-A9 (SMP)
    Signed-off-by: Dave Martin <dave.martin@linaro.org>
    Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM: SMP: remove smp_mpidr.h | Russell King | 2010-12-20 | 1 | -17/+0
    With "ARM: CPU hotplug: remove bug checks in platform_cpu_die()", we now do not use hard_smp_processor_id(), so we no longer need to read the hardware processor ID. Remove the include providing this function.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM: SMP: consolidate the common parts of smp_prepare_cpus() | Russell King | 2010-12-20 | 1 | -4/+5
    There is a certain amount of smp_prepare_cpus() which doesn't belong in the platform support code - that is, code which is invariant to the SMP implementation. Move this code into arch/arm/kernel/smp.c, and add a platform_ prefix to the original function.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM: SMP: collect IPI and local timer IRQs for /proc/stat | Russell King | 2010-12-20 | 1 | -0/+8
    The IPI and local timer interrupts weren't being properly accounted for in /proc/stat. Collect them from the irq_stat structure, and return their sum.
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM: SMP: provide individual IPI interrupt statistics | Russell King | 2010-12-20 | 1 | -1/+3
    This separates out the individual IPI interrupt counts from the total IPI count, which allows better visibility of what IPIs are being used for.
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM: fix /proc/interrupts formatting | Russell King | 2010-12-20 | 2 | -3/+3
    As per x86, align the initial column according to how many IRQs we have. Also, provide an English explanation for the 'LOC:' and 'IPI:' lines.
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM: SMP: move ipi_count into irq_stat structure | Russell King | 2010-12-20 | 1 | -0/+3
    Move the ipi_count into irq_stat, which allows the ipi_data structure to be entirely removed.
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM: SMP: provide accessors for irq_stat data | Russell King | 2010-12-20 | 1 | -0/+3
    Provide __inc_irq_stat() and __get_irq_stat() to increment and read the irq stat counters.
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
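    A minimal sketch of what such accessors could look like, assuming the generic __IRQ_STAT(cpu, member) helper from linux/irq_cpustat.h; not necessarily the exact definitions added here:

        /* Illustrative accessors built on the per-CPU irq_stat array. */
        #define __inc_irq_stat(cpu, member)     __IRQ_STAT(cpu, member)++
        #define __get_irq_stat(cpu, member)     __IRQ_STAT(cpu, member)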
ARM: include local timer irq stats only when local timers configured | Russell King | 2010-12-20 | 1 | -0/+2
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM: SMP: pass an ipi number to smp_cross_call() | Russell King | 2010-12-03 | 1 | -2/+2
    This allows us to use smp_cross_call() to trigger a number of different software generated interrupts, rather than combining them all on one SGI. Recover the SGI number via do_IPI.
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
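    A hedged sketch of the idea: distinct IPI message types map onto distinct SGIs, and smp_cross_call() takes the IPI number. The enum names and values below are illustrative assumptions:

        #include <linux/cpumask.h>

        /* Illustrative IPI numbering; actual names/values may differ. */
        enum ipi_msg_type {
                IPI_TIMER = 2,
                IPI_RESCHEDULE,
                IPI_CALL_FUNC,
                IPI_CALL_FUNC_SINGLE,
                IPI_CPU_STOP,
        };

        extern void smp_cross_call(const struct cpumask *mask, int ipi);

        void smp_send_reschedule(int cpu)
        {
                /* One SGI per IPI type, instead of multiplexing on a single SGI. */
                smp_cross_call(cpumask_of(cpu), IPI_RESCHEDULE);
        }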
ARM: DMA: add support for DMA debugging | Russell King | 2011-01-06 | 1 | -12/+53
    Add ARM support for the DMA debug infrastructure, which allows the DMA API usage to be debugged.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM: DMA: Replace page_to_dma()/dma_to_page() with pfn_to_dma()/dma_to_pfn() | Russell King | 2011-01-03 | 1 | -15/+19
    Replace the page_to_dma() and dma_to_page() macros with their PFN equivalents. This allows us to map parts of memory which do not have a struct page allocated to them to bus addresses. This will be used internally by dma_alloc_coherent()/dma_alloc_writecombine().
    Build tested on Versatile, OMAP1, IOP13xx and KS8695.
    Tested-by: Janusz Krzysztofik <jkrzyszt@tis.icnet.pl>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
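    A rough sketch, assuming a simple linear PFN-to-bus mapping, of what PFN-based conversions can look like; __pfn_to_bus/__bus_to_pfn stand in for whatever the platform provides, and platforms may override these helpers:

        #include <linux/types.h>

        struct device;

        /*
         * Illustrative: convert between PFNs and bus/DMA addresses directly,
         * so no struct page is needed for the memory being mapped.
         */
        static inline dma_addr_t pfn_to_dma(struct device *dev, unsigned long pfn)
        {
                return (dma_addr_t)__pfn_to_bus(pfn);
        }

        static inline unsigned long dma_to_pfn(struct device *dev, dma_addr_t addr)
        {
                return __bus_to_pfn(addr);
        }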
ARM: 6541/1: move sev definition to common system.h include file | Shiraz Hashim | 2010-12-24 | 1 | -0/+7
    sev is used to send a wakeup event to other cores in ARMv6K and above. It has been moved from the platform-specific code to the standard common ARM header file (asm/system.h). Also introduced wfi() and wfe().
    Signed-off-by: Shiraz Hashim <shiraz.hashim@st.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
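    A hedged sketch of how such helpers are typically defined, as thin inline-assembly wrappers around the ARMv6K+ hint instructions; presented as an assumption rather than verbatim from the patch:

        /* Illustrative wrappers around the sev/wfe/wfi hint instructions. */
        #define sev()   __asm__ __volatile__ ("sev" : : : "memory")
        #define wfe()   __asm__ __volatile__ ("wfe" : : : "memory")
        #define wfi()   __asm__ __volatile__ ("wfi" : : : "memory")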
ARM: provide an early platform initialization hook | Russell King | 2010-12-24 | 1 | -0/+1
    This allows platforms to hook into the initialization early to set up things like scheduler clocks, etc.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM: simplify early machine init hooks | Russell King | 2010-12-24 | 3 | -3/+5
    Rather than storing each machine init hook separately, store a pointer to the machine description record and dereference this instead. This pointer is only available while the init sections are present, which is not a problem as we only use it from init code.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM: 6538/1: Subarch IRQ handler macros V3 | Magnus Damm | 2010-12-24 | 1 | -0/+44
    Per subarch interrupt handler macros V3. This patch breaks out code from the irq_handler macro into arch_irq_handler and arch_irq_handler_default. The macros are put in the header file "entry-macro-multi.S".
    The arch_irq_handler_default macro is designed to be used by irq_handler in entry-armv.S, while arch_irq_handler is suitable for per-subarch use.
    Signed-off-by: Magnus Damm <damm@opensource.se>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM: 6532/1: Allow machine to specify it's own IRQ handlers at run-time | eric miao | 2010-12-24 | 2 | -0/+7
    Normally, different ARM platforms have different ways to decode the IRQ hardware status and demultiplex it to the corresponding IRQ handler. This is highly optimized by the irq_handler macro in entry-armv.S, and each machine defines its own macro to decode the IRQ number. However, this prevents multiple machine classes from being built into a single kernel.
    By allowing each machine to specify its own handler, and making the function pointer 'handle_arch_irq' point to it at run time, this can be solved. Also introduce CONFIG_MULTI_IRQ_HANDLER to allow both solutions to work.
    Compared with the highly optimized irq_handler macro, the new function must be written with care not to lose too much performance. The IPI handling on SMP is expected to move to the provided arch IRQ handler as well.
    The assembly code to invoke handle_arch_irq is optimized by Russell King.
    Signed-off-by: Eric Miao <eric.miao@canonical.com>
    Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
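    A hedged sketch of the run-time dispatch idea: a global function pointer set from the machine description during early setup. The field and function names here are assumptions, not quoted from the patch:

        /* Illustrative: one global entry point the low-level IRQ assembly calls. */
        void (*handle_arch_irq)(struct pt_regs *regs);

        /* During boot, point it at the handler provided by the machine record. */
        static void __init init_irq_handler(const struct machine_desc *mdesc)
        {
                if (mdesc->handle_irq)
                        handle_arch_irq = mdesc->handle_irq;
        }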
ARM: implement support for read-mostly sections | Russell King | 2010-12-05 | 1 | -0/+2
    As our SMP implementation uses MESI protocols, grouping together data which is mostly only read means that we avoid unnecessary cache line bouncing when this data shares a cache line with other data. In other words, cache lines associated with read-mostly data are expected to spend most of their time in the shared state.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
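    A hedged illustration of what a read-mostly annotation usually amounts to; the exact section name is an assumption, and the tagged variable is a hypothetical example:

        /*
         * Illustrative: place the variable in a section grouped with other
         * rarely-written data, away from frequently-modified cache lines.
         */
        #define __read_mostly __attribute__((__section__(".data..read_mostly")))

        static int some_tunable __read_mostly = 1;      /* hypothetical example */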
ARM: 6483/1: arm & sh: factorised duplicated clkdev.c | Jean-Christophe PLAGNIOL-VILLARD | 2010-11-26 | 1 | -16/+6
    Factorise some generic infrastructure to assist looking up struct clks for the ARM and SH architectures. As the code is 99% identical, the arch-specific allocation code is kept as an example in asm/clkdev.h.
    Signed-off-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com>
    Acked-by: Paul Mundt <lethal@linux-sh.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM: 6384/1: Remove the domain switching on ARMv6k/v7 CPUs | Catalin Marinas | 2010-11-04 | 5 | -20/+51
    This patch removes the domain switching functionality via the set_fs and __switch_to functions on cores that have a TLS register.
    Currently, the ioremap and vmalloc areas share the same level 1 page tables and therefore have the same domain (DOMAIN_KERNEL). When the kernel domain is modified from Client to Manager (via the __set_fs or in the __switch_to function), the XN (eXecute Never) bit is overridden and newer CPUs can speculatively prefetch the ioremap'ed memory.
    Linux performs the kernel domain switching to allow user-specific functions (copy_to/from_user, get/put_user etc.) to access kernel memory. In order for these functions to work with the kernel domain set to Client, the patch modifies the LDRT/STRT and related instructions to the LDR/STR ones.
    The user pages access rights are also modified for kernel read-only access rather than read/write so that the copy-on-write mechanism still works. CPU_USE_DOMAINS gets disabled only if the hardware has a TLS register (CPU_32v6K is defined) since writing the TLS value to the high vectors page isn't possible.
    The user addresses passed to the kernel are checked by the access_ok() function so that they do not point to the kernel space.
    Tested-by: Anton Vorontsov <cbouatmailru@gmail.com>
    Cc: Tony Lindgren <tony@atomide.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Merge branch 'clksrc' into devel | Russell King | 2011-01-05 | 3 | -2/+144
    Conflicts:
      arch/arm/mach-vexpress/v2m.c
      arch/arm/plat-omap/counter_32k.c
      arch/arm/plat-versatile/Makefile
ARM: sched_clock: provide common infrastructure for sched_clock() | Russell King | 2010-12-22 | 1 | -0/+118
    Provide common sched_clock() infrastructure for platforms to use to create a 64-bit ns based sched_clock() implementation from a counter running at a non-variable clock rate.
    This implementation is based upon maintaining an epoch for the counter and an epoch for the nanosecond time. When we desire a sched_clock() time, we calculate the number of counter ticks since the last epoch update, convert this to nanoseconds and add to the epoch nanoseconds.
    We regularly refresh these epochs within the counter wrap interval. We perform a similar calculation as above, and store the new epochs.
    We read and write the epochs in such a way that sched_clock() can easily (and locklessly) detect when an update is in progress, and repeat the loading of these constants when they're known not to be stable. The one caveat is that sched_clock() is not called in the middle of an update. We achieve that by disabling IRQs.
    Finally, if the clock rate is known at compile time, the counter to ns conversion factors can be specified, allowing sched_clock() to be tightly optimized.
    We ensure that these factors are correct by providing an initialization function which performs a run-time check.
    Acked-by: Peter Zijlstra <peterz@infradead.org>
    Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
    Tested-by: Will Deacon <will.deacon@arm.com>
    Tested-by: Mikael Pettersson <mikpe@it.uu.se>
    Tested-by: Eric Miao <eric.y.miao@gmail.com>
    Tested-by: Olof Johansson <olof@lixom.net>
    Tested-by: Jamie Iles <jamie@jamieiles.com>
    Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
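    A hedged sketch of the epoch arithmetic described above; the function names and the fixed-point mult/shift convention are assumptions rather than the exact code added by the patch:

        #include <linux/types.h>

        /* Illustrative fixed-point conversion: ns = cyc * mult >> shift. */
        static inline u64 cyc_to_ns(u64 cyc, u32 mult, u32 shift)
        {
                return (cyc * mult) >> shift;
        }

        /*
         * Illustrative read path: ticks elapsed since the counter epoch are
         * converted to ns and added to the ns epoch.  The epochs themselves
         * are refreshed periodically, well inside the counter wrap interval.
         */
        static inline u64 sched_clock_from(u32 counter, u32 mask,
                                           u32 epoch_cyc, u64 epoch_ns,
                                           u32 mult, u32 shift)
        {
                return epoch_ns + cyc_to_ns((counter - epoch_cyc) & mask, mult, shift);
        }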
Merge branch 'ftrace' of git://github.com/rabinv/linux-2.6 into devel-stable | Russell King | 2010-11-26 | 2 | -2/+26
ARM: place C irq handlers in IRQ_ENTRY for ftrace | Rabin Vincent | 2010-11-19 | 2 | -2/+26
    When FUNCTION_GRAPH_TRACER is enabled, place do_IRQ() and friends in the IRQ_ENTRY section so that the irq-related features of the function graph tracer work.
    Signed-off-by: Rabin Vincent <rabin@rab.in>
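    A hedged sketch of the kind of section annotation this refers to; the macro name, section name and handler signature are assumptions for illustration:

        /*
         * Illustrative: tag an exception-level C handler so it lands in the
         * .irqentry.text section that the function graph tracer checks.
         */
        #define __exception_irq_entry \
                __attribute__((__section__(".irqentry.text")))

        asmlinkage void __exception_irq_entry asm_do_IRQ(unsigned int irq,
                                                         struct pt_regs *regs)
        {
                /* ... dispatch the interrupt ... */
        }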
Merge branches 'ftrace', 'gic', 'io', 'kexec', 'mod', 'sa11x0', 'sh' and 'versatile' into devel | Russell King | 2011-01-05 | 7 | -24/+108
ARM: 6432/1: move timer-sp.c from versatile to common | Rob Herring | 2010-11-04 | 1 | -0/+2
    From: Rob Herring <rob.herring@smooth-stone.com>
    The timer-sp h/w used on versatile platforms can also be used for other platforms, so move it to a common location.
    Signed-off-by: Rob Herring <rob.herring@smooth-stone.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM: module: clean up handling of ELF unwind tables | Russell King | 2010-12-01 | 1 | -10/+5
    There's no need to keep pointers to the ELF sections available while the module is loaded - we only need the section pointers while we're finding and registering the unwind tables, which can all be done during the finalize stage of loading.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>