path: root/arch/arm64
Commit log (each entry: subject, author, date, files changed, lines -/+)
...
* ARM/ARM64: arch_timer: add macros for bits in control register (Sudeep KarkadaNagesha, 2013-09-26; 1 file, -4/+8)
    Add macros to describe the bitfields in the ARM architected timer control register to make the code easy to understand.

    Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: Olof Johansson <olof@lixom.net>
    Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
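    A sketch of the bit definitions in question, with names assumed from the mainline arch_timer header (the CNTx_CTL control registers share this layout; treat the details as illustrative):

        /* sketch; names assumed from the mainline arch_timer header */
        #define ARCH_TIMER_CTRL_ENABLE   (1 << 0)  /* timer enabled */
        #define ARCH_TIMER_CTRL_IT_MASK  (1 << 1)  /* timer IRQ masked */
        #define ARCH_TIMER_CTRL_IT_STAT  (1 << 2)  /* IRQ condition met */

    Code that previously tested magic numbers such as (1 << 1) can then say what it means, e.g. "if (ctrl & ARCH_TIMER_CTRL_IT_MASK)".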
* Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2013-11-12; 1 file, -0/+1)
    Pull scheduler changes from Ingo Molnar:
     "The main changes in this cycle are:

      - (much) improved CONFIG_NUMA_BALANCING support from Mel Gorman, Rik van Riel, Peter Zijlstra et al. Yay!

      - optimize preemption counter handling: merge the NEED_RESCHED flag into the preempt_count variable, by Peter Zijlstra.

      - wait.h fixes and code reorganization from Peter Zijlstra

      - cfs_bandwidth fixes from Ben Segall

      - SMP load-balancer cleanups from Peter Zijlstra

      - idle balancer improvements from Jason Low

      - other fixes and cleanups"

    * 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (129 commits)
      ftrace, sched: Add TRACE_FLAG_PREEMPT_RESCHED
      stop_machine: Fix race between stop_two_cpus() and stop_cpus()
      sched: Remove unnecessary iteration over sched domains to update nr_busy_cpus
      sched: Fix asymmetric scheduling for POWER7
      sched: Move completion code from core.c to completion.c
      sched: Move wait code from core.c to wait.c
      sched: Move wait.c into kernel/sched/
      sched/wait: Fix __wait_event_interruptible_lock_irq_timeout()
      sched: Avoid throttle_cfs_rq() racing with period_timer stopping
      sched: Guarantee new group-entities always have weight
      sched: Fix hrtimer_cancel()/rq->lock deadlock
      sched: Fix cfs_bandwidth misuse of hrtimer_expires_remaining
      sched: Fix race on toggling cfs_bandwidth_used
      sched: Remove extra put_online_cpus() inside sched_setaffinity()
      sched/rt: Fix task_tick_rt() comment
      sched/wait: Fix build breakage
      sched/wait: Introduce prepare_to_wait_event()
      sched/wait: Add ___wait_cond_timeout() to wait_event*_timeout() too
      sched: Remove get_online_cpus() usage
      sched: Fix race in migrate_swap_stop()
      ...
* Merge tag 'v3.12-rc4' into sched/core (Ingo Molnar, 2013-10-09; 9 files, -21/+32)
    Merge Linux v3.12-rc4 to fix a conflict and also to refresh the tree before applying more scheduler patches.

    Conflicts:
        arch/avr32/include/asm/Kbuild

    Signed-off-by: Ingo Molnar <mingo@kernel.org>
* sched, arch: Create asm/preempt.h (Peter Zijlstra, 2013-09-25; 1 file, -0/+1)
    In order to prepare for per-arch implementations of preempt_count, move the required bits into an asm-generic header and use this for all archs.

    Signed-off-by: Peter Zijlstra <peterz@infradead.org>
    Link: http://lkml.kernel.org/n/tip-h5j0c1r3e3fk015m30h8f1zx@git.kernel.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
* ARM64: /proc/interrupts: display IPIs of online CPUs only (Sudeep KarkadaNagesha, 2013-11-07; 1 file, -1/+1)
    The non-IPI interrupts are displayed only for the online CPUs by show_interrupts in kernel/irq/proc.c before calling arch_show_interrupts(). As a result, the column headers and the IPI counts don't match if any CPU is offline. This patch fixes show_ipi_list to display IPIs for online CPUs only.

    Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
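    The shape of the fix, sketched in simplified form (statistics helpers hedged; the essential change is the choice of CPU iterator):

        /* simplified sketch of show_ipi_list() after the fix */
        void show_ipi_list(struct seq_file *p, int prec)
        {
                unsigned int cpu, i;

                for (i = 0; i < NR_IPI; i++) {
                        seq_printf(p, "%*s%u:", prec - 1, "IPI", i);
                        for_each_online_cpu(cpu)  /* was for_each_present_cpu() */
                                seq_printf(p, "%10u ",
                                           __get_irq_stat(cpu, ipi_irqs[i]));
                        seq_printf(p, "      %s\n", ipi_types[i]);
                }
        }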
* arm64: locks: Remove CONFIG_GENERIC_LOCKBREAK (Catalin Marinas, 2013-11-06; 1 file, -4/+0)
    Commit 52ea2a560a9d (arm64: locks: introduce ticket-based spinlock implementation) introduces the arch_spin_is_contended() function, making CONFIG_GENERIC_LOCKBREAK unnecessary.

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
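    With ticket locks, contention can be read straight out of the lock word, which is why the generic lock-break machinery is no longer needed. A minimal sketch of such a check (field names per the ticket-lock patch further down this log):

        static inline int arch_spin_is_contended(arch_spinlock_t *lock)
        {
                arch_spinlock_t lockval = ACCESS_ONCE(*lock);

                /* more than one ticket outstanding: someone is waiting */
                return (lockval.next - lockval.owner) > 1;
        }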
* arm64: KVM: vgic: byteswap GICv2 access on world switch if BE (Marc Zyngier, 2013-11-06; 1 file, -0/+13)
    Ensure that accesses to the GICH_* registers are byteswapped when the kernel is compiled as big-endian.

    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: KVM: initialize HYP mode following the kernel endianness (Marc Zyngier, 2013-11-06; 1 file, -1/+4)
    Force SCTLR_EL2.EE to 1 if the kernel is compiled as BE.

    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: compat: Clear the IT state independent of the 32-bit ARM or Thumb-2 mode (T.J. Purtell, 2013-11-05; 1 file, -4/+5)
    The ARM Architecture Reference Manual specifies that the IT state bits in the PSR must be all zeros in ARM mode, or behavior is unspecified. If an ARM function is registered as a signal handler, and that signal is delivered inside a block of instructions following an IT instruction, some of the instructions at the beginning of the signal handler may be skipped if the IT state bits of the Program Status Register are not cleared by the kernel.

    Signed-off-by: T.J. Purtell <tj@mobisocial.us>
    [catalin.marinas@arm.com: code comment and commit log updated]
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
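    A minimal sketch of the idea, using the COMPAT_PSR_* masks from the arm64 ptrace header (the surrounding function is illustrative, not the patch itself):

        /* when building a compat signal frame: always drop the IT bits */
        static void compat_setup_return_sketch(struct pt_regs *regs,
                                               unsigned long handler)
        {
                u32 spsr = regs->pstate & ~COMPAT_PSR_IT_MASK;

                if (handler & 1)                /* Thumb entry point */
                        spsr |= COMPAT_PSR_T_BIT;
                else                            /* ARM entry point */
                        spsr &= ~COMPAT_PSR_T_BIT;

                regs->pstate = spsr;
        }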
* arm64: Use 42-bit address space with 64K pages (Catalin Marinas, 2013-11-05; 3 files, -6/+11)
    This patch expands VA_BITS to 42 when the 64K page configuration is enabled, allowing a 2TB kernel linear mapping. Linux still uses 2 levels of page tables in this configuration, with the pgd now being a full page.

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
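    The arithmetic behind the 42-bit figure, as a back-of-envelope check (not text from the patch):

        /* 64K granule, 2 translation levels:
         *   page offset : 16 bits (64K = 2^16)
         *   PTE level   : 13 bits (64K / 8-byte entries = 8192 = 2^13)
         *   PGD level   : 13 bits (the pgd is now a full 64K page)
         *   VA_BITS     : 16 + 13 + 13 = 42
         * A 42-bit space is 4TB, half of which (2TB) is available for
         * the kernel linear mapping.
         */
        #define PAGE_SHIFT      16
        #define VA_BITS         42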
* arm64: module: ensure instruction is little-endian before manipulation (Will Deacon, 2013-11-05; 1 file, -1/+4)
    Relocations that require an instruction immediate to be re-encoded must ensure that the instruction pattern is represented in a little-endian format for the manipulation code to work correctly. This patch converts the loaded instruction into native endianness prior to encoding and then converts back to little-endian byte order before updating memory.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Tested-by: Matthew Leach <matthew.leach@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
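    The pattern, sketched (AArch64 instructions are always stored little-endian in memory, whatever the data endianness of the kernel; the encode step below is a hypothetical helper):

        u32 insn = le32_to_cpu(*(__le32 *)place);  /* LE memory -> native */
        insn = encode_insn_immediate(insn, imm);   /* hypothetical helper */
        *(__le32 *)place = cpu_to_le32(insn);      /* native -> LE memory */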
* arm64: defconfig: Enable CONFIG_PREEMPT by default (Catalin Marinas, 2013-11-05; 1 file, -1/+1)
    This way we can spot early bugs when just testing with the default config.

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: fix access to preempt_count from assembly code (Marc Zyngier, 2013-11-05; 1 file, -11/+11)
    preempt_count is defined as an int. Oddly enough, we access it as a 64-bit value. Things become interesting when running a BE kernel and looking at the current CPU number, which is stored as an int next to preempt_count. Like in a per-cpu interrupt handler, for example... Using a 32-bit access fixes the issue for good.

    Cc: Matthew Leach <matthew.leach@arm.com>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
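    Why the width matters on big-endian, sketched with an illustrative fragment of the thread_info layout:

        struct thread_info_fragment {   /* illustrative layout only */
                int preempt_count;      /* 4 bytes */
                int cpu;                /* the adjacent 4 bytes */
        };

        /* A 64-bit load at &preempt_count ("ldr x0, [tsk, #TI_PREEMPT]")
         * reads both fields; on BE, preempt_count lands in the UPPER
         * half of x0, so tests against it actually see the cpu field.
         * A 32-bit load ("ldr w0, [tsk, #TI_PREEMPT]") reads exactly
         * the intended int on either endianness. */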
* arm64: move enabling of GIC before CPUs are set online (Marc Zyngier, 2013-11-04; 1 file, -5/+5)
    Commit 53ae3acd (arm64: Only enable local interrupts after the CPU is marked online) moved the enabling of the GIC after the CPUs are marked online. This has an interesting effect:

        [...]
        [<ffffffc0002eefd8>] gic_raise_softirq+0xf8/0x160
        [<ffffffc000088f58>] smp_send_reschedule+0x38/0x40
        [<ffffffc0000c8728>] resched_task+0x84/0xc0
        [<ffffffc0000c8cdc>] check_preempt_curr+0x58/0x98
        [<ffffffc0000c8d38>] ttwu_do_wakeup+0x1c/0xf4
        [<ffffffc0000c8f90>] ttwu_do_activate.constprop.84+0x64/0x70
        [<ffffffc0000cad30>] try_to_wake_up+0x1d4/0x2b4
        [<ffffffc0000cae6c>] default_wake_function+0x10/0x18
        [<ffffffc0000c5ca4>] __wake_up_common+0x60/0xa0
        [<ffffffc0000c7784>] complete+0x48/0x64
        [<ffffffc000088bec>] secondary_start_kernel+0xe8/0x110
        [...]

    Here, we end up calling gic_raise_softirq without having initialized the interrupt controller for this CPU. While this goes unnoticed with GICv2 (the distributor is always accessible), it explodes with GICv3. The fix is to move the call to notify_cpu_starting before we set the secondary CPU online.

    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
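    The resulting ordering in the secondary boot path, heavily simplified:

        /* simplified sketch of secondary_start_kernel() after the fix */
        notify_cpu_starting(cpu);    /* CPU_STARTING notifiers run the
                                      * per-CPU GIC init, among others */
        set_cpu_online(cpu, true);   /* only now may other CPUs IPI us */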
* arm64: use generic RW_DATA_SECTION macro in linker script (Mark Salter, 2013-11-04; 1 file, -24/+7)
    The .data section in the arm64 linker script currently lacks a definition for page-aligned data. This leads to a .page_aligned section being placed between the end of data and the start of bss. This patch corrects that by using the generic RW_DATA_SECTION macro, which includes support for page-aligned data.

    Signed-off-by: Mark Salter <msalter@redhat.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: Slightly improve the warning on CPU0 enable-method (Catalin Marinas, 2013-10-31; 1 file, -2/+2)
    Commit e8765b265a69 (arm64: read enable-method for CPU0) introduced checks for the enable method on CPU0 (to be used later with CPU suspend). However, if the kernel is compiled for UP and a DT file is used with a method like 'spin-table', Linux complains about an 'invalid enable method'. This patch turns it into an 'unsupported enable method' warning.

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* ARM64: simplify cpu_read_bootcpu_ops using OF/DT helper (Sudeep KarkadaNagesha, 2013-10-30; 1 file, -17/+5)
    Once the cpu_logical_map for any logical CPU is populated with the corresponding physical identifier (i.e. mpidr), its device node can be retrieved using the DT helper 'of_get_cpu_node'. Currently the device tree parsing code to get the boot CPU node is duplicated in 'cpu_read_bootcpu_ops'. This patch replaces the code parsing the device tree for the boot CPU with of_get_cpu_node.

    Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
    Acked-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* ARM64: DT: define ARM64 specific arch_match_cpu_phys_id (Sudeep KarkadaNagesha, 2013-10-30; 1 file, -0/+5)
    The OF/DT core library provides an architecture-specific hook to match the logical CPU index with the corresponding physical identifier. On ARM64, MPIDR_EL1 contains specific bitfields (MPIDR_EL1.Aff{3..0}) which uniquely identify a CPU, in addition to some non-identifying information and reserved bits. The ARM CPU binding defines the 'reg' property to contain only the affinity bits, and any cpu nodes with other bits set in their 'reg' entry are skipped. This patch overrides the weak definition of arch_match_cpu_phys_id with an ARM64-specific version using MPIDR_EL1.Aff{3..0} as the CPU physical identifiers.

    Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
    Acked-by: Mark Rutland <mark.rutland@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
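    The override is tiny; a sketch of its likely shape, assuming cpu_logical_map() already holds the affinity bits of MPIDR_EL1:

        bool arch_match_cpu_phys_id(int cpu, u64 phys_id)
        {
                return phys_id == cpu_logical_map(cpu);
        }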
* arm64: allow ioremap_cache() to use existing RAM mappings (Mark Salter, 2013-10-30; 2 files, -3/+19)
    Some drivers (ACPI notably) use ioremap_cache() to map an area which could either be outside of kernel RAM or in an already mapped reserved area of RAM. To avoid aliases with different caching attributes, ioremap() does not allow RAM to be remapped. But for ioremap_cache(), the existing kernel mapping may be used.

    Signed-off-by: Mark Salter <msalter@redhat.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
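    A sketch of the idea (helper names hedged from the mainline implementation; the point is the pfn_valid() short-circuit):

        void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size)
        {
                /* RAM already in the linear map: reuse that mapping
                 * rather than creating a differently-attributed alias */
                if (pfn_valid(__phys_to_pfn(phys_addr)))
                        return (void __iomem *)__phys_to_virt(phys_addr);

                return __ioremap_caller(phys_addr, size,
                                        __pgprot(PROT_NORMAL),
                                        __builtin_return_address(0));
        }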
* arm64: update 32-bit kuser helpers to ARMv8 (Robin Murphy, 2013-10-28; 1 file, -9/+6)
    This patch updates the barrier semantics in the kuser helper functions to take advantage of the ARMv8 additions to AArch32, which are guaranteed to be available in situations where these functions will be called. Note that this slightly changes the cmpxchg functions in that they are no longer necessarily full barriers if they return 1. However, the documentation only states that they include their own barriers "as needed", not that they are obligated to act as a full barrier for the caller.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
    CC: Matthew Leach <matthew.leach@arm.com>
    CC: Dave Martin <dave.martin@arm.com>
    CC: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: perf: fix event number mask (Vinayak Kale, 2013-10-25; 1 file, -3/+4)
    This patch fixes the ARMV8_EVTYPE_* macros, since the evtCount (event number) field width is 10 bits in the event selection register.

    Signed-off-by: Vinayak Kale <vkale@apm.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
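    The corrected masks, as a sketch (values hedged from the mainline fix; an 8-bit mask would silently truncate event numbers above 0xff):

        #define ARMV8_EVTYPE_MASK   0xc80003ff  /* writable bits */
        #define ARMV8_EVTYPE_EVENT  0x3ff       /* 10-bit event number */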
* arm64: kconfig: allow CPU_BIG_ENDIAN to be selected (Will Deacon, 2013-10-25; 1 file, -0/+5)
    This patch wires up CONFIG_CPU_BIG_ENDIAN for the AArch64 kernel configuration. Selecting this option builds a big-endian kernel which can boot into a big-endian userspace.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: Fix the endianness of arch_spinlock_t (Catalin Marinas, 2013-10-25; 1 file, -0/+5)
    The owner and next members of the arch_spinlock_t structure need to be swapped when compiling for big endian.

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Reported-by: Matthew Leach <matthew.leach@arm.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
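    A sketch of the layout after the fix (matching the mainline pattern): the lock is also manipulated as one 32-bit word, so the ticket halves must sit in the same byte positions on either endianness.

        typedef struct {
        #ifdef __AARCH64EB__
                u16 next;       /* big-endian: next comes first */
                u16 owner;
        #else
                u16 owner;      /* little-endian: owner comes first */
                u16 next;
        #endif
        } __aligned(4) arch_spinlock_t;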
* arm64: big-endian: write CPU holding pen address as LE (Matthew Leach, 2013-10-25; 1 file, -1/+10)
    Currently, when CPUs are brought online via a spin-table, the address they should jump to is written to the cpu-release-addr in the kernel's native endianness. As the kernel may switch endianness, secondaries might read the value byte-reversed from what was intended, and they would jump to the wrong address. As the only current arm64 spin-table implementations are little-endian, tighten up the arm64 spin-table definition such that the value written to cpu-release-addr is _always_ little-endian regardless of the endianness of any CPU. If a spinning CPU is operating big-endian, it must byte-reverse the value before jumping to handle this.

    Signed-off-by: Matthew Leach <matthew.leach@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: big-endian: set correct endianness on kernel entry (Matthew Leach, 2013-10-25; 2 files, -5/+16)
    The endianness of memory accesses at EL2 and EL1 is configured by SCTLR_EL2.EE and SCTLR_EL1.EE respectively. When the kernel is booted, the state of SCTLR_EL{2,1}.EE is unknown, and thus the kernel must ensure that they are set before performing any memory accesses. This patch ensures that SCTLR_EL{2,1} are configured appropriately at boot for kernels of either endianness.

    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Matthew Leach <matthew.leach@arm.com>
    [catalin.marinas@arm.com: fix SCTLR_EL1.E0E bit setting in head.S]
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: head: create a new function for setting the boot_cpu_mode flag (Matthew Leach, 2013-10-25; 2 files, -10/+27)
    Currently, the code for setting the __cpu_boot_mode flag is munged in with el2_setup. This makes things difficult on a BE bringup, as a memory access has to have occurred before el2_setup, which is the place where we'd like to set the endianness on the current EL. Create a new function for setting __cpu_boot_mode and have el2_setup return the mode the CPU booted in. Also define a new constant in virt.h, BOOT_CPU_MODE_EL1, for readability.

    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Matthew Leach <matthew.leach@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: asm: add CPU_LE & CPU_BE assembler helpers (Matthew Leach, 2013-10-25; 1 file, -0/+19)
    Add CPU_LE and CPU_BE to select assembler code in little- and big-endian configurations respectively.

    Signed-off-by: Matthew Leach <matthew.leach@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
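    The helpers follow the usual conditional-assembly pattern (a sketch matching the mainline definitions in asm/assembler.h):

        #ifdef CONFIG_CPU_BIG_ENDIAN
        #define CPU_BE(code...) code    /* assembled only on BE */
        #define CPU_LE(code...)         /* dropped on BE */
        #else
        #define CPU_BE(code...)
        #define CPU_LE(code...) code
        #endif

    A typical use is byte-reversing a value only on big-endian kernels, e.g. "CPU_BE( rev w0, w0 )" in assembly.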
* arm64: big-endian: don't treat code as data when copying sigret code (Matthew Leach, 2013-10-25; 3 files, -29/+46)
    Currently the sigreturn compat code is copied to an offset in the vectors table. When using a BE kernel, this data will be stored in the wrong endianness, so when returning from a signal on a 32-bit BE system, arbitrary code will be executed. Instead of declaring the code inside a struct and copying that, use the assembler's .byte directives to store the code in the correct endianness regardless of platform endianness.

    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Matthew Leach <matthew.leach@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: compat: correct register concatenation for syscall wrappers (Matthew Leach, 2013-10-25; 2 files, -11/+23)
    The arm64 port contains wrappers for arm32 syscalls that pass 64-bit values. These wrappers concatenate two registers to hold a 64-bit value in a single X register. On BE, however, the lower and higher words are swapped. Create a new assembler macro, regs_to_64, that on BE systems swaps the registers in the orr instruction.

    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Matthew Leach <matthew.leach@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
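    A C analogue of what regs_to_64 computes (a sketch; the real macro is an assembler macro emitting a single orr). AArch32 passes a 64-bit argument in a register pair, and which register of the pair holds the high word depends on the task's endianness:

        #ifdef CONFIG_CPU_BIG_ENDIAN
        #define regs_to_64(first, second) (((u64)(first) << 32) | (u32)(second))
        #else
        #define regs_to_64(first, second) (((u64)(second) << 32) | (u32)(first))
        #endif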
* arm64: compat: add support for big-endian (BE8) AArch32 binaries (Will Deacon, 2013-10-25; 4 files, -0/+25)
    This patch adds support for BE8 AArch32 tasks to the compat layer.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: setup: report ELF_PLATFORM as the machine for utsname (Will Deacon, 2013-10-25; 1 file, -1/+1)
    uname -m reports the machine field from the current utsname, which should reflect the endianness of the system. This patch reports ELF_PLATFORM for the field, so that everything appears consistent from userspace.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: ELF: add support for big-endian executables (Will Deacon, 2013-10-25; 1 file, -0/+13)
    This patch adds support for the aarch64_be ELF format to the AArch64 ELF loader.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: big-endian: fix byteorder include (Will Deacon, 2013-10-25; 1 file, -0/+4)
    For big-endian processors, we must include linux/byteorder/big_endian.h to get the relevant definitions for swabbing between CPU order and a defined endianness.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
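    The fixed header reduces to an endianness-keyed include, along these lines (sketch of the uapi byteorder header):

        #ifdef __AARCH64EB__
        #include <linux/byteorder/big_endian.h>
        #else
        #include <linux/byteorder/little_endian.h>
        #endif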
* arm64: big-endian: add big-endian support to top-level arch Makefile (Will Deacon, 2013-10-25; 1 file, -0/+6)
    This patch adds big-endian support to the AArch64 top-level Makefile. This currently just passes the relevant flags to the toolchain and is predicated on a Kconfig option that will be introduced later on.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: add PSCI CPU_OFF-based hotplug support (Mark Rutland, 2013-10-25; 1 file, -0/+30)
    This patch adds support for using PSCI CPU_OFF calls for CPU hotplug. With this code it is possible to hot-unplug CPUs with "psci" as their boot-method, as long as there's an appropriate cpu_off function id specified in the psci node.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: add CPU_HOTPLUG infrastructure (Mark Rutland, 2013-10-25; 8 files, -1/+187)
    This patch adds the basic infrastructure necessary to support CPU_HOTPLUG on arm64, based on the arm implementation. Actual hotplug support will depend on an implementation's cpu_operations (e.g. PSCI).

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: read enable-method for CPU0 (Mark Rutland, 2013-10-25; 4 files, -19/+58)
    With the advent of CPU_HOTPLUG, the enable-method property for CPU0 may tell us something useful (i.e. how to hotplug it back on), so we must read it along with the enable-method for all the other CPUs. Even on UP the enable-method may tell us useful information (e.g. if a core has some mechanism that might be usable for cpuidle), so we should always read it. This patch factors out the reading of the enable method, and ensures that CPU0's enable method is read regardless of whether the kernel is built with SMP support.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: factor out spin-table boot method (Mark Rutland, 2013-10-25; 7 files, -73/+117)
    The arm64 kernel has an internal holding pen, which is necessary for some systems where we can't bring CPUs online individually and must hold multiple CPUs in a safe area until the kernel is able to handle them. The current SMP infrastructure for arm64 is closely coupled to this holding pen, and alternative boot methods must launch CPUs into the pen, where they sit before they are launched into the kernel proper.

    With PSCI (and possibly other future boot methods), we can bring CPUs online individually, and need not perform the secondary_holding_pen dance. Instead, this patch factors the holding pen management code out to the spin-table boot method code, as it is the only boot method requiring the pen.

    A new entry point for secondaries, secondary_entry, is added for other boot methods to use, which bypasses the holding pen and its associated overhead when bringing CPUs online. The smp.pen.text section is also removed, as the pen can live in head.text without problem.

    The cpu_operations structure is extended with two new functions, cpu_boot and cpu_postboot, for bringing a cpu into the kernel and performing any post-boot cleanup required by a bootmethod (e.g. resetting the secondary_holding_pen_release to INVALID_HWID). Documentation is added for cpu_operations.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: reorganise smp_enable_ops (Mark Rutland, 2013-10-25; 7 files, -52/+102)
    For hotplug support, we're going to want a place to store operations that do more than bring CPUs online, and it makes sense to group these with our current smp_enable_ops. For cpuidle support, we'll want to group additional functions, and we may want them even for UP kernels.

    This patch renames smp_enable_ops to the more general cpu_operations, and pulls the definitions out of smp code such that they can be used in UP kernels. While we're at it, fix up instances of the cpu parameter to be an unsigned int, drop the init markings and rename the *_cpu functions to cpu_* to reduce future churn when cpu_operations is extended.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: unify smp_psci.c and psci.c (Mark Rutland, 2013-10-25; 4 files, -74/+54)
    The functions in psci.c are only used from smp_psci.c, and smp_psci cannot function without psci.c. Additionally, psci.c is built when !SMP, where it's expected that cpu_suspend may be useful. This patch unifies the two files, removing pointless duplication and paving the way for PSCI support in UP systems.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: Export __copy_in_user() to modules (Catalin Marinas, 2013-10-24; 1 file, -0/+1)
    This function may be called from loadable modules, so it needs exporting.

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Reported-by: Loc Ho <lho@apm.com>
* arm64: cmpxchg: implement cmpxchg64_relaxed (Will Deacon, 2013-10-24; 1 file, -0/+2)
    This patch introduces cmpxchg64_relaxed for arm64 using the existing cmpxchg_local macro, which performs a cmpxchg operation (up to 64 bits) without barrier semantics.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: lockref: add support for lockless lockrefs using cmpxchg (Will Deacon, 2013-10-24; 2 files, -2/+7)
    Our spinlocks are only 32-bit (2x 16-bit tickets) and our cmpxchg can deal with 8 bytes (as one would hope!). This patch wires up the cmpxchg-based lockless lockref implementation for arm64.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: locks: introduce ticket-based spinlock implementation (Will Deacon, 2013-10-24; 2 files, -31/+58)
    This patch introduces a ticket lock implementation for arm64, along the same lines as the implementation for arch/arm/.

    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: check for number of arguments in syscall_get/set_arguments() (AKASHI Takahiro, 2013-10-23; 1 file, -0/+6)
    In ftrace_syscall_enter(),

        syscall_get_arguments(..., 0, n, ...)

    ends up doing:

        if (i == 0) { <handle orig_x0> ...; n--; }
        memcpy(..., n * sizeof(args[0]));

    If the number of arguments (n) is zero and the argument index (i) is also zero in syscall_get_arguments(), none of the arguments should be copied by memcpy(). Otherwise 'n--' becomes a big positive number and an unexpected amount of data is copied. Tracing system calls which take no argument, say sync(void), may hit this case and eventually leave the system corrupted. This patch fixes the issue both in syscall_get_arguments() and syscall_set_arguments().

    Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
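    The failure mode and the guard, sketched (n is unsigned, so decrementing it at zero wraps to UINT_MAX):

        static void syscall_get_arguments_sketch(struct pt_regs *regs,
                                                 unsigned int i,
                                                 unsigned int n,
                                                 unsigned long *args)
        {
                if (n == 0)
                        return;         /* the added check: nothing to copy */

                if (i == 0) {
                        args[0] = regs->orig_x0;
                        args++;
                        i++;
                        n--;            /* unguarded, 0 - 1 == UINT_MAX here */
                }

                memcpy(args, &regs->regs[i], n * sizeof(args[0]));
        }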
* arm64: Remove duplicate DEBUG_STACK_USAGE config (Stephen Boyd, 2013-10-02; 1 file, -7/+0)
    This config item already exists generically in lib/Kconfig.debug. Remove the duplicate config in arm64.

    Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: include VIRTIO_{MMIO,BLK} in defconfig (Ramkumar Ramachandra, 2013-09-30; 1 file, -1/+3)
    Currently, development on arm64 is aided by the Foundation_v8 emulator distributed by ARM [1]. To run their kernels, users will execute:

        $ ./Foundation_v8 --image linux-system.axf --block-device raring-rootfs

    To mount the raring-rootfs filesystem, the kernel parameters should typically include:

        root=/dev/vda

    For this device to be present, the kernel must be compiled with VIRTIO_{MMIO,BLK}. To make this work out of the box, make them part of the default configuration.

    [1]: https://silver.arm.com/browse/FM00A

    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Andreas Schwab <schwab@linux-m68k.org>
    Signed-off-by: Ramkumar Ramachandra <artagnon@gmail.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: include EXT4 in defconfig (Ramkumar Ramachandra, 2013-09-30; 1 file, -0/+1)
    Most readily available root filesystems are formatted as EXT4 these days. For example, see the raring rootfs that the Debian folks are preparing [1].

    [1]: http://people.debian.org/~wookey/bootstrap/rootfs/

    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Andreas Schwab <schwab@linux-m68k.org>
    Signed-off-by: Ramkumar Ramachandra <artagnon@gmail.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: fix possible invalid FPSIMD initialization state (Jiang Liu, 2013-09-27; 1 file, -0/+2)
    If a context switch happens while executing fpsimd_flush_thread(), stale values in the FPSIMD registers will be saved into the current thread's fpsimd_state by fpsimd_thread_switch(). That may cause an invalid initialization state for the new process, so disable preemption when executing fpsimd_flush_thread().

    Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
    Cc: Jiang Liu <liuj97@gmail.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
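    A sketch of the fixed function, assuming the pre-existing body was a memset followed by a register reload (the fix is the preempt_disable()/preempt_enable() bracket):

        void fpsimd_flush_thread(void)
        {
                preempt_disable();      /* no context switch mid-flush */
                memset(&current->thread.fpsimd_state, 0,
                       sizeof(struct fpsimd_state));
                fpsimd_load_state(&current->thread.fpsimd_state);
                preempt_enable();
        }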
* arm64: use correct register width when retrieving ASID (Matthew Leach, 2013-09-25; 1 file, -1/+1)
    The ASID is represented as an unsigned int in mm_context_t, and we currently use the mmid assembler macro to access this element of the struct. It should be accessed with a register of 32-bit width. If the incorrect register width is used, the ASID will be returned in bits [32:63] of the register when running under big-endian. Fix a use of the mmid macro in tlb.S to use a 32-bit access.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Matthew Leach <matthew.leach@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>