Commit message | Author | Age | Files | Lines
* ARM: 7587/1: implement optimized percpu variable access | Rob Herring | 2012-12-03 | 4 | -2/+54

    Use the previously unused TPIDRPRW register to store percpu offsets. TPIDRPRW is only accessible in PL1, so it can only be used in the kernel. This replaces two loads with an mrc instruction for each percpu variable access.

    With hackbench, the performance improvement is 1.4% on Cortex-A9 (highbank). Taking an average of 30 runs of "hackbench -l 1000" yields: Before: 6.2191, After: 6.1348. Will Deacon reported a similar delta on v6 with 11MPCore.

    The asm "memory" clobber is needed here to ensure the percpu offset gets reloaded. Testing by Will found that this would not happen in __schedule(), which is a bit of a special case as preemption is disabled but execution can move cores.

    Signed-off-by: Rob Herring <rob.herring@calxeda.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Acked-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

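A minimal sketch of the kind of accessor described above, assuming the TPIDRPRW encoding (CP15, c13, c0, 4); the function names are illustrative, not the patch's actual symbols:

/*
 * Read the per-CPU offset back from TPIDRPRW instead of loading it from
 * memory.  The "memory" clobber keeps the compiler from caching the result
 * across points where the task may have migrated to another core.
 */
static inline unsigned long sketch_my_cpu_offset(void)
{
        unsigned long off;

        asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off) : : "memory");
        return off;
}

/* Boot-time store of this CPU's offset into TPIDRPRW (illustrative). */
static inline void sketch_set_my_cpu_offset(unsigned long off)
{
        asm volatile("mcr p15, 0, %0, c13, c0, 4" : : "r" (off));
}
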
* ARM: 7582/2: rename kvm_seq to vmalloc_seq so to avoid confusion with KVM | Nicolas Pitre | 2012-11-26 | 4 | -14/+14

    The kvm_seq value has nothing whatsoever to do with this other KVM. Given that KVM support on ARM is imminent, it's best to rename kvm_seq into something else to clearly identify what it is about, i.e. a sequence number for vmalloc section mappings.

    Signed-off-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: 7585/1: kernel: fix nr_cpu_ids check in DT logical map init | Lorenzo Pieralisi | 2012-11-23 | 1 | -3/+7

    If a kernel is configured with a DT containing more /cpu nodes than nr_cpu_ids, the number of cpus must be capped in the DT parsing code. The current code carries out the check but fails to cap the value, and the check is executed after the cpu logical index is used, which can lead to memory corruption due to index overflow. This patch refactors the check against nr_cpu_ids and moves it before any computed index is used in the parsing code.

    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Acked-by: Grant Likely <grant.likely@secretlab.ca>
    Reported-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: 7584/1: perf: fix link error when CONFIG_HW_PERF_EVENTS is not selected | Marc Zyngier | 2012-11-23 | 1 | -0/+2

    Commit e50c541 (ARM: perf: add guest vs host discrimination) broke the link, as perf_instruction_pointer and perf_misc_flags are not defined when CONFIG_HW_PERF_EVENTS is not selected. As it makes little sense to try and profile a guest without any HW event, just fall back to the original code when this config option is not selected.

    Reported-by: Russell King <linux@arm.linux.org.uk>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* Merge branch 'bl-cpuinfo' of git://linux-arm.org/linux-2.6-lp into devel-stable | Russell King | 2012-11-20 | 3 | -35/+37
|\
| * ARM: kernel: update cpuinfo to print all online CPUs features | Lorenzo Pieralisi | 2012-11-19 | 1 | -35/+35

    Currently, reading /proc/cpuinfo provides userspace with the CPU ID of the CPU carrying out the read from the file. This is fine as long as all CPUs in the system are the same. With the advent of big.LITTLE and heterogeneous ARM systems this approach provides user space with incorrect bits of information, since CPU IDs in the system might differ from the one provided by the CPU reading the file.

    This patch updates the cpuinfo show function so that a read from /proc/cpuinfo prints HW information for all online CPUs at once, mirroring x86 behaviour.

    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Acked-by: Nicolas Pitre <nico@linaro.org>

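A hedged sketch of the show() shape this implies: a seq_file iteration over online CPUs rather than a dump of the reading CPU only. The cpu_midr() helper is an assumption standing in for however the stashed per-CPU MIDR is retrieved:

#include <linux/seq_file.h>
#include <linux/cpumask.h>

extern unsigned int cpu_midr(int cpu);          /* hypothetical helper */

static int sketch_c_show(struct seq_file *m, void *v)
{
        int i;

        /* one block per online CPU, mirroring x86 /proc/cpuinfo */
        for_each_online_cpu(i) {
                seq_printf(m, "processor\t: %d\n", i);
                seq_printf(m, "CPU ID\t\t: 0x%08x\n", cpu_midr(i));
                seq_puts(m, "\n");
        }
        return 0;
}
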
| * ARM: kernel: add MIDR to per-CPU information data | Lorenzo Pieralisi | 2012-11-19 | 2 | -0/+2

    The advent of big.LITTLE ARM platforms requires the kernel to be able to identify the MIDRs of all online CPUs upon request. MIDRs are stashed at boot time so that kernel subsystems can detect the MIDR of online CPUs by simply retrieving per-CPU data updated by all booted CPUs.

    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Acked-by: Nicolas Pitre <nico@linaro.org>

* | Merge branch 'cluster-boot-protocol' of git://linux-arm.org/linux-2.6-lp into devel-stable | Russell King | 2012-11-20 | 8 | -48/+256
|\ \
| * | ARM: gic: use a private mapping for CPU target interfaces | Nicolas Pitre | 2012-11-19 | 1 | -9/+36

    The GIC interface numbering does not necessarily follow the logical CPU numbering, especially for complex topologies such as multi-cluster systems. Fortunately we can easily probe the GIC to create a mapping, as the Interrupt Processor Targets Registers for the first 32 interrupts are read-only, and each field returns a value that always corresponds to the processor reading the register.

    Initially all mappings target all CPUs in case an IPI is required to boot secondary CPUs. It is refined as those CPUs discover what their actual mapping is.

    Signed-off-by: Nicolas Pitre <nico@linaro.org>
    Acked-by: Will Deacon <will.deacon@arm.com>

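A sketch of the probing idea: read back one of the read-only Interrupt Processor Targets Registers to learn the mask of the CPU doing the read. The 0x800 ITARGETSR offset and the helper name are assumptions here, not quoted from the patch:

#include <linux/io.h>

#define SKETCH_GIC_DIST_TARGET  0x800   /* ITARGETSRn base offset (assumed) */

static unsigned int sketch_gic_get_cpumask(void __iomem *dist_base)
{
        unsigned int i, mask = 0;

        /*
         * Each byte of ITARGETSR0..7 covers one of the first 32 interrupts,
         * is read-only, and reads back as the mask of the reading CPU, so
         * OR the words together and fold down to a single byte.
         */
        for (i = 0; i < 32; i += 4)
                mask |= readl_relaxed(dist_base + SKETCH_GIC_DIST_TARGET + i);

        mask |= mask >> 16;
        mask |= mask >> 8;
        return mask & 0xff;
}
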
| * | ARM: kernel: add logical mappings look-up | Lorenzo Pieralisi | 2012-11-19 | 1 | -0/+17

    In ARM SMP systems the MPIDR register (bits [23:0]) is used to uniquely identify CPUs. In order to retrieve the logical CPU index corresponding to a given MPIDR value and guarantee a consistent translation throughout the kernel, this patch adds a look-up based on MPIDR[23:0] so that kernel subsystems can use it whenever the logical cpu index corresponding to a given MPIDR value is needed.

    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Acked-by: Nicolas Pitre <nico@linaro.org>

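A minimal sketch of such a look-up, assuming a boot-time array of stashed MPIDR[23:0] values (the array and function names are placeholders):

#include <linux/types.h>
#include <linux/cpumask.h>
#include <linux/errno.h>

#define SKETCH_MPIDR_HWID_MASK  0xffffff        /* MPIDR[23:0] */

extern u32 sketch_cpu_logical_map[NR_CPUS];     /* filled during boot (assumption) */

/* Return the logical CPU index for a given MPIDR, or -EINVAL if unknown. */
static inline int sketch_get_logical_index(u32 mpidr)
{
        int cpu;

        for (cpu = 0; cpu < nr_cpu_ids; cpu++)
                if (sketch_cpu_logical_map[cpu] == (mpidr & SKETCH_MPIDR_HWID_MASK))
                        return cpu;
        return -EINVAL;
}
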
| * | ARM: kernel: add cpu logical map DT init in setup_arch | Lorenzo Pieralisi | 2012-11-19 | 1 | -0/+1

    As soon as the device tree is unflattened, the cpu logical-to-physical mapping is carried out in setup_arch to build a proper array of MPIDR and corresponding logical indexes. The mapping could have been carried out using the flattened DT blob and related primitives, but since the mapping is not needed by early boot code it can safely be executed when the device tree has been uncompressed to its tree data structure.

    This patch adds the arm_dt_init_cpu_maps() function call in setup_arch(). If the kernel is not compiled with DT support the function is empty and no logical mapping takes place through it; the mapping carried out in smp_setup_processor_id() is left unchanged. If DT is supported, the mapping created in smp_setup_processor_id() is overridden.

    The DT mapping also sets the possible cpus mask, hence platform code need not set it again in the respective smp_init_cpus() functions.

    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Acked-by: Nicolas Pitre <nico@linaro.org>

| * | ARM: kernel: add device tree init map function | Lorenzo Pieralisi | 2012-11-19 | 3 | -0/+179

    When booting through a device tree, the kernel cpu logical id map can be initialized using device tree data passed by FW or through an embedded blob.

    This patch adds a function that parses device tree "cpu" nodes and retrieves the corresponding CPUs' hardware identifiers (MPIDR). It sets the possible cpus and the cpu logical map values according to the number of CPUs defined in the device tree and the respective properties.

    The device tree HW identifiers are considered valid if all CPU nodes contain a "reg" property, there are no duplicate "reg" entries, and the DT defines a CPU node whose "reg" property matches the MPIDR[23:0] of the boot CPU. The primary CPU is assigned cpu logical number 0 to keep the current convention valid.

    Current bindings documentation is included in the patch: Documentation/devicetree/bindings/arm/cpus.txt

    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Acked-by: Nicolas Pitre <nico@linaro.org>

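A condensed sketch of the DT walk this describes; duplicate detection, the boot-CPU cross-check and the device_type test are trimmed, and the array and function names are placeholders:

#include <linux/of.h>
#include <linux/cpumask.h>

extern u32 sketch_cpu_logical_map[NR_CPUS];     /* placeholder map, as above */

static void sketch_arm_dt_init_cpu_maps(void)
{
        struct device_node *cpus, *cpu;
        int i, found = 1;               /* logical 0 stays the boot CPU */
        u32 hwid;

        cpus = of_find_node_by_path("/cpus");
        if (!cpus)
                return;

        for_each_child_of_node(cpus, cpu) {
                /*
                 * Nodes without a "reg" (MPIDR[23:0]) property are skipped
                 * here; the real code treats that as an invalid DT map.
                 */
                if (of_property_read_u32(cpu, "reg", &hwid))
                        continue;
                if (found < nr_cpu_ids)
                        sketch_cpu_logical_map[found++] = hwid & 0xffffff;
        }
        of_node_put(cpus);

        for (i = 0; i < found; i++)
                set_cpu_possible(i, true);
}
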
| * | ARM: kernel: smp_setup_processor_id() updates | Lorenzo Pieralisi | 2012-11-19 | 1 | -3/+4

    This patch applies some basic changes to the smp_setup_processor_id() ARM implementation to make the code that builds cpu_logical_map more uniform across the kernel. The function now prints the full extent of the boot CPU MPIDR[23:0] and initializes the cpu_logical_map for CPUs up to nr_cpu_ids.

    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Acked-by: Nicolas Pitre <nico@linaro.org>
    Acked-by: Will Deacon <will.deacon@arm.com>

| * | ARM: kernel: update topology to use new MPIDR macros | Lorenzo Pieralisi | 2012-11-19 | 1 | -10/+5

    This patch updates the topology initialization code to use the newly defined accessors to retrieve the MPIDR affinity levels.

    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Acked-by: Nicolas Pitre <nico@linaro.org>

| * | ARM: kernel: enhance MPIDR macro definitions | Lorenzo Pieralisi | 2012-11-19 | 2 | -26/+14
| |/

    Kernel subsystems other than the topology layer need the MPIDR mask definitions to access the MPIDR without relying on hardcoded masks. This patch moves the MPIDR register mask definitions to a header file and defines a macro to simplify access to the MPIDR bit fields representing affinity levels.

    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Acked-by: Nicolas Pitre <nico@linaro.org>

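The kind of accessor this describes can be sketched as below; the 8-bit level width matches the ARMv7 MPIDR layout, and the macro names are approximations rather than quotes of the patch:

#define MPIDR_LEVEL_BITS        8
#define MPIDR_LEVEL_MASK        ((1 << MPIDR_LEVEL_BITS) - 1)

/* Extract affinity level 0, 1 or 2 from an MPIDR value. */
#define MPIDR_AFFINITY_LEVEL(mpidr, level) \
        (((mpidr) >> ((level) * MPIDR_LEVEL_BITS)) & MPIDR_LEVEL_MASK)

With something like this, topology code can write MPIDR_AFFINITY_LEVEL(mpidr, 1) for the cluster field instead of open-coded shifts and masks.
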
* | Merge branch 'asid-allocation' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into devel-stable | Russell King | 2012-11-19 | 3 | -185/+115
|\ \
| * | ARM: mm: use bitmap operations when allocating new ASIDs | Will Deacon | 2012-11-05 | 1 | -19/+35

    When allocating a new ASID, we must take care not to re-assign a reserved ASID-value to a new mm. This requires us to check each candidate ASID against those currently reserved by other cores before assigning a new ASID to the current mm.

    This patch improves the ASID allocation algorithm by using a bitmap-based approach. Rather than iterating over the reserved ASID array for each candidate ASID, we simply find the first zero bit, ensuring that those indices corresponding to reserved ASIDs are set when flushing during a rollover event.

    Tested-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

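A sketch of the bitmap-based selection step, assuming an asid_map bitmap sized to the number of hardware ASIDs (names are illustrative):

#include <linux/bitops.h>

/*
 * Find the first ASID not currently reserved; the search starts at 1 so
 * ASID 0 stays reserved.  A return value of num_asids signals that a
 * rollover (flush plus bitmap reset) is needed.
 */
static unsigned long sketch_new_asid(unsigned long *asid_map,
                                     unsigned long num_asids)
{
        unsigned long asid = find_next_zero_bit(asid_map, num_asids, 1);

        if (asid != num_asids)
                __set_bit(asid, asid_map);
        return asid;
}
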
| * | ARM: mm: avoid taking ASID spinlock on fastpath | Will Deacon | 2012-11-05 | 1 | -8/+15

    When scheduling a new mm, we take a spinlock so that we can:

      1. Safely allocate a new ASID, if required
      2. Update our active_asids field without worrying about parallel updates to reserved_asids
      3. Ensure that we flush our local TLB, if required

    However, this has the nasty effect of serialising context-switch across all CPUs in the system. The usual (fast) case is where the next mm has a valid ASID for the current generation. In such a scenario, we can avoid taking the lock and instead use atomic64_xchg to update the active_asids variable for the current CPU. If a rollover occurs on another CPU (which would take the lock), when copying the active_asids into the reserved_asids another atomic64_xchg is used to replace each active_asids entry with 0. The fast path can then detect this case and fall back to spinning on the lock.

    Tested-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

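In outline, the lock-free fast path looks something like the fragment below. This is a sketch only: the generation bookkeeping, rollover and slow path are elided, the 8-bit ASID width is an assumption, and the variable names are not the file's real ones:

#include <linux/atomic.h>
#include <linux/percpu.h>
#include <linux/types.h>

#define SKETCH_ASID_BITS        8

/* per-CPU publication slot for the ASID currently in use (assumption) */
static DEFINE_PER_CPU(atomic64_t, sketch_active_asid);

static bool sketch_try_fast_switch(atomic64_t *mm_context_id, u64 generation)
{
        u64 asid = atomic64_read(mm_context_id);

        if ((asid ^ generation) >> SKETCH_ASID_BITS)
                return false;           /* stale generation: take the lock */

        /*
         * Publish the ASID with an xchg.  A rollover on another CPU zeroes
         * these slots while holding the lock; reading 0 back here means we
         * raced with it and must fall back to the locked slow path.
         */
        return atomic64_xchg(this_cpu_ptr(&sketch_active_asid), asid) != 0;
}
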
| * | ARM: mm: remove IPI broadcasting on ASID rollover | Will Deacon | 2012-11-05 | 3 | -186/+93

    ASIDs are allocated to MMU contexts based on a rolling counter. This means that after 255 allocations we must invalidate all existing ASIDs via an expensive IPI mechanism to synchronise all of the online CPUs and ensure that all tasks execute with an ASID from the new generation.

    This patch changes the rollover behaviour so that we rely instead on the hardware broadcasting of the TLB invalidation to avoid the IPI calls. This works by keeping track of the active ASID on each core, which is then reserved in the case of a rollover so that currently scheduled tasks can continue to run. For cores without hardware TLB broadcasting, we keep track of pending flushes in a cpumask, so cores can flush their local TLB before scheduling a new mm.

    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Tested-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

* | | Merge branch 'for-rmk/prot-none' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into devel-stable | Russell King | 2012-11-19 | 7 | -12/+25
|\ \ \
| * | | ARM: mm: introduce present, faulting entries for PAGE_NONE | Will Deacon | 2012-11-09 | 6 | -3/+16

    PROT_NONE mappings apply the page protection attributes defined by _P000, which translate to PAGE_NONE for ARM. These attributes specify an XN, RDONLY pte that is inaccessible to userspace. However, on kernels configured without support for domains, such a pte *is* accessible to the kernel and can be read via get_user, allowing tasks to read PROT_NONE pages via syscalls such as read/write over a pipe.

    This patch introduces a new software pte flag, L_PTE_NONE, that is set to identify faulting, present entries.

    Signed-off-by: Will Deacon <will.deacon@arm.com>

| * | | ARM: mm: introduce L_PTE_VALID for page table entries | Will Deacon | 2012-11-09 | 5 | -6/+6

    For long-descriptor translation table formats, the ARMv7 architecture defines the last two bits of the second- and third-level descriptors to be:

      x0b - Invalid
      01b - Block (second-level), Reserved (third-level)
      11b - Table (second-level), Page (third-level)

    This allows us to define L_PTE_PRESENT as (3 << 0) and use this value to create ptes directly. However, when determining whether a given pte value is present in the low-level page table accessors, we only need to check the least significant bit of the descriptor, allowing us to write faulting, present entries which are required for PROT_NONE mappings.

    This patch introduces L_PTE_VALID, which can be used to test whether a pte should fault, and updates the low-level page table accessors accordingly.

    Signed-off-by: Will Deacon <will.deacon@arm.com>

| * | | ARM: mm: don't use the access flag permissions mechanism for classic MMU | Will Deacon | 2012-11-09 | 1 | -2/+2

    The simplified access permissions model is not used for the classic MMU translation regime, so ensure that it is turned off in the sctlr prior to turning on address translation for ARMv7.

    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

| * | | ARM: mm: use pteval_t to represent page protection values | Will Deacon | 2012-11-09 | 1 | -1/+1
| |/ /

    When updating the page protection map after calculating the user_pgprot value, the base protection map is temporarily stored in an unsigned long type, causing truncation of the protection bits when LPAE is enabled. This effectively means that calls to mprotect() will corrupt the upper page attributes, clearing the XN bit unconditionally.

    This patch uses pteval_t to store the intermediate protection values, preserving the upper bits for 64-bit descriptors.

    Cc: stable@vger.kernel.org
    Acked-by: Nicolas Pitre <nico@linaro.org>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

* | | Merge branch 'hw-breakpoint' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into devel-stable | Russell King | 2012-11-19 | 3 | -91/+91
|\ \ \
| * | | ARM: cti: fix manipulation of debug lock registers | Will Deacon | 2012-11-15 | 1 | -18/+2

    The LOCKSTATUS register for memory-mapped coresight devices indicates whether or not the device in question implements hardware locking. If not, locking is not present (i.e. LSR.SLI == 0) and LAR is write-ignore, so software doesn't actually need to check the status register at all.

    This patch removes the broken LSR checks.

    Cc: Ming Lei <ming.lei@canonical.com>
    Reported-by: Mike Williams <michael.williams@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

| * | | ARM: hw_breakpoint: kill WARN_ONCE usage | Will Deacon | 2012-11-09 | 1 | -5/+10

    WARN_ONCE is a bit over the top for some of the simple failure cases encountered in hw_breakpoint, so use either pr_warning or pr_warn_once instead.

    Reported-by: Stephen Boyd <sboyd@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

| * | | ARM: hw_breakpoint: use CRn as argument for debug reg accessor macros | Dietmar Eggemann | 2012-11-09 | 2 | -24/+24

    The coprocessor register CRn for accesses to the debug registers can be one other than C0. Take this into account in the ARM_DBG_READ and ARM_DBG_WRITE macros. The inline assembler calls which used a coprocessor register CRn other than C0 are replaced by the ARM_DBG_READ or ARM_DBG_WRITE macro.

    Tested-by: Stephen Boyd <sboyd@codeaurora.org>
    Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

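The macro shape this implies, sketched with CRn as a parameter that is token-pasted into the mrc/mcr operand list; this is an approximation of the interface, not a quote of the patch:

/* Read or write an ARM debug register through CP14, naming both CRn and CRm. */
#define SKETCH_ARM_DBG_READ(N, M, OP2, VAL) do { \
        asm volatile("mrc p14, 0, %0, " #N ", " #M ", " #OP2 : "=r" (VAL)); \
} while (0)

#define SKETCH_ARM_DBG_WRITE(N, M, OP2, VAL) do { \
        asm volatile("mcr p14, 0, %0, " #N ", " #M ", " #OP2 : : "r" (VAL)); \
} while (0)

For example, SKETCH_ARM_DBG_READ(c0, c1, 0, dscr) would read DBGDSCRint, whose CRn is c0 and CRm is c1.
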
| * | | ARM: hw_breakpoint: check if monitor mode is enabled during validation | Will Deacon | 2012-11-09 | 1 | -13/+15

    Rather than attempt to enable monitor mode explicitly when scheduling in a breakpoint event (which could raise an undefined exception trap when accessing DBGDSCRext), instead check that DBGDSCRint.MDBGen is set during event validation and report an error to the caller if not.

    Signed-off-by: Will Deacon <will.deacon@arm.com>

| * | | ARM: hw_breakpoint: make boot quieter without CPUID feature registers | Will Deacon | 2012-11-09 | 1 | -2/+2

    Booting on a v6 core without the CPUID feature registers (e.g. 1136) leads to a noisy dmesg complaining about their absence. This patch changes the pr_warning into a pr_warn_once to keep the log quieter.

    Signed-off-by: Will Deacon <will.deacon@arm.com>

| * | | ARM: hw_breakpoint: don't try to clear v6 debug registers during boot | Will Deacon | 2012-11-09 | 1 | -3/+3

    v6 cores do not provide a way to clear the debug registers without first enabling monitor mode, meaning that we could take spurious debug exceptions. Instead, rely on the registers being in a sane state when we boot, as they are defined to be disabled out of reset anyway.

    Tested-by: Stephen Boyd <sboyd@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

| * | | ARM: hw_breakpoint: fix ordering of debug register reset sequence | Will Deacon | 2012-11-09 | 1 | -10/+26

    The debug register reset sequence for v7 and v7.1 is congruent with tap-dancing through a minefield. Rather than wait until we've blown ourselves to pieces, this patch instead checks the debug_err_mask after each potentially faulting operation. We also move the enabling of monitor_mode to the end of the sequence in order to prevent spurious debug events generated by UNKNOWN register values.

    Reported-by: Stephen Boyd <sboyd@codeaurora.org>
    Tested-by: Stephen Boyd <sboyd@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

| * | | ARM: hw_breakpoint: fix monitor mode detection with v7.1 | Will Deacon | 2012-11-09 | 1 | -20/+5

    Detecting whether halting debug is enabled is no longer possible via the DBGDSCR in v7.1, which returns an UNKNOWN value for the HDBGen bit via CP14 when the OS lock is clear. This patch removes the halting mode check and ensures that accesses to the internal and external views of the DBGDSCR are serialised with an instruction barrier.

    Tested-by: Stephen Boyd <sboyd@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

| * | | ARM: hw_breakpoint: only clear OS lock when implemented on v7 | Will Deacon | 2012-11-09 | 1 | -6/+14
| |/ /

    The OS save and restore registers are optional in debug architecture v7, so check the status register before attempting to clear the OS lock.

    Tested-by: Stephen Boyd <sboyd@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

* | | Merge branch 'perf/updates' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into devel-stable | Russell King | 2012-11-19 | 8 | -338/+385
|\ \ \
| * | | ARM: PMU: fix runtime PM enable | Jon Hunter | 2012-11-09 | 2 | -2/+1

    Commit 7be2958 (ARM: PMU: Add runtime PM Support) updated the ARM PMU code to use runtime PM, which was prototyped and validated on the OMAP devices. In this commit, there is no call to pm_runtime_enable(), and for OMAP devices pm_runtime_enable() is currently being called from the OMAP PMU code when the PMU device is created. However, there are two problems with this:

      1. Any other ARM device wishing to use runtime PM for PMU will need to call pm_runtime_enable() for runtime PM to work.
      2. When booting with device-tree and using device-tree to create the PMU device, pm_runtime_enable() needs to be called from within the ARM PERF driver, as we are no longer calling any device-specific code to create the device. Hence, PMU does not work on OMAP devices that use the runtime PM callbacks when using device-tree to create the PMU device.

    Therefore, call pm_runtime_enable() directly from the ARM PMU driver when registering the device. For platforms that do not use runtime PM, pm_runtime_enable() does nothing, and for platforms that do use runtime PM but may not require it specifically for PMU, this will just add a little overhead when initialising and uninitialising the PMU device.

    Tested with PERF on OMAP2420, OMAP3430 and OMAP4460.

    Acked-by: Kevin Hilman <khilman@ti.com>
    Acked-by: Tony Lindgren <tony@atomide.com>
    Signed-off-by: Jon Hunter <jon-hunter@ti.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

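A sketch of where the call lands, assuming the usual platform-driver probe shape; the probe function name is illustrative:

#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

static int sketch_cpu_pmu_device_probe(struct platform_device *pdev)
{
        /*
         * Enable runtime PM as the PMU device is registered, so DT-created
         * devices work without platform code having to do it for us.  On
         * platforms that do not use runtime PM this is effectively a no-op.
         */
        pm_runtime_enable(&pdev->dev);

        /* ... the usual PMU probing and registration continues here ... */
        return 0;
}
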
| * | | ARM: perf: consistently use arm_pmu->name for PMU name | Will Deacon | 2012-11-09 | 3 | -5/+5

    Perf has three ways to name a PMU: either by passing an explicit char *, reading arm_pmu->name or accessing arm_pmu->pmu.name. Just use arm_pmu->name consistently in the ARM backend.

    Signed-off-by: Will Deacon <will.deacon@arm.com>

| * | | ARM: perf: return NOTIFY_DONE from cpu notifier when no available PMU | Will Deacon | 2012-11-09 | 1 | -0/+2

    When attempting to reset the PMU state for either a NULL PMU or a PMU implementation without a reset function, return NOTIFY_DONE from the CPU notifier, as we don't care about the hotplug event.

    Signed-off-by: Will Deacon <will.deacon@arm.com>

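The guard amounts to something like this sketch (the struct and variable names are placeholders for the driver's PMU state):

#include <linux/notifier.h>

struct sketch_pmu { void (*reset)(void *); };
static struct sketch_pmu *sketch_cpu_pmu;       /* the registered CPU PMU, if any */

/* Called from the CPU hotplug notifier when a core comes online. */
static int sketch_pmu_reset_notify(void)
{
        /* no PMU, or no reset hook: this hotplug event is not ours to handle */
        if (!sketch_cpu_pmu || !sketch_cpu_pmu->reset)
                return NOTIFY_DONE;

        sketch_cpu_pmu->reset(sketch_cpu_pmu);
        return NOTIFY_OK;
}
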
| * | | ARM: perf: register cpu_notifier at driver init | Mark Rutland | 2012-11-09 | 1 | -2/+11

    The current practice of registering the cpu hotplug notifier at PMU registration time won't be safe with multiple PMUs, as we'll repeatedly attempt to register the notifier. This has the unfortunate effect of silently corrupting the notifier list, leading to boot stalling.

    Instead, register the notifier at init time. Its sanity checks will prevent anything bad from happening if the notifier is called before we have any PMUs registered.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

| * | | ARM: perf: check ARMv7 counter validity on a per-pmu basis | Sudeep KarkadaNagesha | 2012-11-09 | 1 | -64/+30

    Multi-cluster ARMv7 systems may have CPU PMUs with different numbers of counters. This patch updates armv7_pmnc_counter_valid so that it takes a pmu argument and checks the counter validity against that. We also remove a number of redundant counter checks where the current PMU is not easily retrievable.

    Signed-off-by: Sudeep KarkadaNagesha <Sudeep.KarkadaNagesha@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

| * | | ARM: perf: consistently use struct perf_event in arm_pmu functions | Sudeep KarkadaNagesha | 2012-11-09 | 6 | -121/+142

    The arm_pmu functions have wildly varied parameters which can often be derived from struct perf_event. This patch changes the arm_pmu function prototypes so that struct perf_event pointers are passed in preference to fields that can be derived from the event.

    Signed-off-by: Sudeep KarkadaNagesha <Sudeep.KarkadaNagesha@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

| * | | ARM: perf: allocate CPU PMU dynamically at probe time | Sudeep KarkadaNagesha | 2012-11-09 | 4 | -144/+153

    Supporting multiple, heterogeneous CPU PMUs requires us to allocate the arm_pmu structures dynamically as the devices are probed. This patch removes the static structure definitions for each CPU PMU type and instead passes pointers to the PMU-specific init functions.

    Signed-off-by: Sudeep KarkadaNagesha <Sudeep.KarkadaNagesha@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

| * | | ARM: perf: add guest vs host discrimination | Marc Zyngier | 2012-11-09 | 2 | -0/+41
| |/ /

    Add minimal guest support to perf, so it can distinguish whether the PMU interrupt was in the host or the guest, as well as collecting some very basic information (guest PC, user vs kernel mode). This is not feature complete though, as it doesn't support backtracing in the guest.

    Based on the x86 implementation, tested with KVM/ARM.

    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

* | | fanotify: fix FAN_Q_OVERFLOW case of fanotify_read() | Al Viro | 2012-11-18 | 1 | -1/+2

    If the FAN_Q_OVERFLOW bit is set in event->mask, the fanotify event metadata will not contain a valid file descriptor, but copy_event_to_user() didn't check for that and unconditionally did an fd_install() on the file descriptor. Which in turn will cause a BUG_ON() in __fd_install().

    Introduced by commit 352e3b249284 ("fanotify: sanitize failure exits in copy_event_to_user()"). Mea culpa - missed that path ;-/

    Reported-by: Alex Shi <lkml.alex@gmail.com>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

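Conceptually the guard looks like the sketch below; the helper name is made up, and the allocation and error handling of the real copy_event_to_user() path are omitted:

#include <linux/fanotify.h>
#include <linux/file.h>

/*
 * Only install a file descriptor when the event is not a queue-overflow
 * pseudo-event, which carries no file; overflow events report FAN_NOFD.
 */
static int sketch_fanotify_event_fd(__u32 mask, struct file *f, int fd)
{
        if (mask & FAN_Q_OVERFLOW)
                return FAN_NOFD;

        fd_install(fd, f);
        return fd;
}
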
* | | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs | Linus Torvalds | 2012-11-18 | 2 | -4/+3
|\ \ \

    Pull misc VFS fixes from Al Viro: "Remove a bogus BUG_ON() that can trigger spuriously + alpha bits of do_mount() constification I'd missed during the merge window."

    This pull request came in a week ago, I missed it for some reason.

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
      kill bogus BUG_ON() in do_close_on_exec()
      missing const in alpha callers of do_mount()

| * | | kill bogus BUG_ON() in do_close_on_exec() | Al Viro | 2012-11-12 | 1 | -1/+0

    It can be legitimately triggered via procfs access. Now, at least 2 of the 3 get_files_struct() callers in procfs are useless, but when and if we get rid of those we can always add WARN_ON() here. BUG_ON() at that spot is simply wrong.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

| * | | missing const in alpha callers of do_mount() | Al Viro | 2012-10-20 | 1 | -3/+3

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

* | | | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/geert/linux-m68k | Linus Torvalds | 2012-11-18 | 1 | -3/+3
|\ \ \ \

    Pull m68k fix from Geert Uytterhoeven: "This is a bug fix for asm constraints that affect sending RT signals, also destined for -stable."

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/geert/linux-m68k:
      m68k: fix sigset_t accessor functions

| * | | | m68k: fix sigset_t accessor functions | Andreas Schwab | 2012-11-18 | 1 | -3/+3

    The sigaddset/sigdelset/sigismember functions that are implemented with bitfield instructions cannot allow the sigset argument to be placed in a data register, since the sigset is wider than 32 bits. Remove the "d" constraint from the asm statements.

    The effect of the bug is that sending RT signals does not work; the signal number is truncated modulo 32.

    Signed-off-by: Andreas Schwab <schwab@linux-m68k.org>
    Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
    Cc: stable@vger.kernel.org

* | | | | Merge tag 'gpio-fixes-for-v3.7' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio | Linus Torvalds | 2012-11-18 | 3 | -4/+27
|\ \ \ \ \

    Pull last minute GPIO fixes from Linus Walleij:

      - Disable blinking on the Orion GPIO driver
      - Two Kconfig-style fixes to avoid broken builds

    * tag 'gpio-fixes-for-v3.7' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio:
      gpio-mcp23s08: Build I2C support even when CONFIG_I2C=m
      gpio: adnp: Depend on OF_GPIO instead of OF
      mvebu-gpio: Disable blinking when enabling a GPIO for output
