|  |  |  |
|---|---|---|
| author | Linus Torvalds <torvalds@linux-foundation.org> | 2012-03-23 16:59:10 -0700 |
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2012-03-23 16:59:10 -0700 |
| commit | 8e3ade251bc7c0a4f0777df4dd34343a03efadba (patch) | |
| tree | 6c0b78731e3d6609057951d07660efbd90992ad0 /arch/x86/kernel/cpu | |
| parent | e317234975cb7463b8ca21a93bb6862d9dcf113f (diff) | |
| parent | e075f59152890ffd7e3d704afc997dd686c8a781 (diff) | |
| download | blackbird-op-linux-8e3ade251bc7c0a4f0777df4dd34343a03efadba.tar.gz blackbird-op-linux-8e3ade251bc7c0a4f0777df4dd34343a03efadba.zip | |
Merge branch 'akpm' (Andrew's patch-bomb)
Merge second batch of patches from Andrew Morton:
- various misc things
- core kernel changes to prctl, exit, exec, init, etc.
- kernel/watchdog.c updates
- get_maintainer
- MAINTAINERS
- the backlight driver queue
- core bitops code cleanups
- the led driver queue
- some core prio_tree work
- checkpatch updates
- largeish crc32 update
- a new poll() feature for the v4l guys
- the rtc driver queue
- fatfs
- ptrace
- signals
- kmod/usermodehelper updates
- coredump
- procfs updates
* emailed from Andrew Morton <akpm@linux-foundation.org>: (141 commits)
seq_file: add seq_set_overflow(), seq_overflow()
proc-ns: use d_set_d_op() API to set dentry ops in proc_ns_instantiate().
procfs: speed up /proc/pid/stat, statm
procfs: add num_to_str() to speed up /proc/stat
proc: speed up /proc/stat handling
fs/proc/kcore.c: make get_sparsemem_vmemmap_info() static
coredump: add VM_NODUMP, MADV_NODUMP, MADV_CLEAR_NODUMP
coredump: remove VM_ALWAYSDUMP flag
kmod: make __request_module() killable
kmod: introduce call_modprobe() helper
usermodehelper: ____call_usermodehelper() doesn't need do_exit()
usermodehelper: kill umh_wait, renumber UMH_* constants
usermodehelper: implement UMH_KILLABLE
usermodehelper: introduce umh_complete(sub_info)
usermodehelper: use UMH_WAIT_PROC consistently
signal: zap_pid_ns_processes: s/SEND_SIG_NOINFO/SEND_SIG_FORCED/
signal: oom_kill_task: use SEND_SIG_FORCED instead of force_sig()
signal: cosmetic, s/from_ancestor_ns/force/ in prepare_signal() paths
signal: give SEND_SIG_FORCED more power to beat SIGNAL_UNKILLABLE
Hexagon: use set_current_blocked() and block_sigmask()
...
Diffstat (limited to 'arch/x86/kernel/cpu')
-rw-r--r-- | arch/x86/kernel/cpu/perf_event.c | 4 |
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
index 0a18d16cb58d..fa2900c0e398 100644
--- a/arch/x86/kernel/cpu/perf_event.c
+++ b/arch/x86/kernel/cpu/perf_event.c
@@ -643,14 +643,14 @@ static bool __perf_sched_find_counter(struct perf_sched *sched)
 	/* Prefer fixed purpose counters */
 	if (x86_pmu.num_counters_fixed) {
 		idx = X86_PMC_IDX_FIXED;
-		for_each_set_bit_cont(idx, c->idxmsk, X86_PMC_IDX_MAX) {
+		for_each_set_bit_from(idx, c->idxmsk, X86_PMC_IDX_MAX) {
 			if (!__test_and_set_bit(idx, sched->state.used))
 				goto done;
 		}
 	}
 	/* Grab the first unused counter starting with idx */
 	idx = sched->state.counter;
-	for_each_set_bit_cont(idx, c->idxmsk, X86_PMC_IDX_FIXED) {
+	for_each_set_bit_from(idx, c->idxmsk, X86_PMC_IDX_FIXED) {
 		if (!__test_and_set_bit(idx, sched->state.used))
 			goto done;
 	}
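The only change in this file is the rename of for_each_set_bit_cont() to for_each_set_bit_from(), part of the "core bitops code cleanups" listed above: the macro iterates over the set bits of a bitmap starting from the current value of the iterator rather than from bit 0. Below is a minimal userspace sketch of that semantics, for illustration only; it is not the kernel implementation, and the find_next_set_bit() helper, the bitmask value, and the bit numbers are made up for the example.

```c
#include <stdio.h>

/* Illustrative stand-in for the kernel's find_next_bit():
 * returns the index of the next set bit >= start, or 'size' if none.
 * Works on a single unsigned long word for simplicity.
 */
static unsigned int find_next_set_bit(unsigned long word, unsigned int size,
				      unsigned int start)
{
	for (; start < size; start++)
		if (word & (1UL << start))
			return start;
	return size;
}

/* Sketch of for_each_set_bit_from() semantics: iteration starts at the
 * *current* value of 'bit', not at bit 0, and visits each set bit once.
 */
#define for_each_set_bit_from(bit, word, size)				\
	for ((bit) = find_next_set_bit((word), (size), (bit));		\
	     (bit) < (size);						\
	     (bit) = find_next_set_bit((word), (size), (bit) + 1))

int main(void)
{
	unsigned long idxmsk = 0xb4;	/* bits 2, 4, 5 and 7 are set */
	unsigned int idx = 4;		/* start scanning at bit 4 (illustrative) */

	/* Prints 4, 5, 7 -- bit 2 is skipped because iteration starts at idx. */
	for_each_set_bit_from(idx, idxmsk, 8 * sizeof(idxmsk))
		printf("counter idx %u is available\n", idx);

	return 0;
}
```

This mirrors how the hunk above uses the macro: idx is preloaded with a starting counter index (X86_PMC_IDX_FIXED, or the last counter position in sched->state.counter), and the loop resumes scanning the constraint mask from that point instead of rescanning from bit 0.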