path: root/kernel/trace/trace_sched_wakeup.c
* tracing, function_graph: Remove dependency of abstime and duration fields on latency (Jiri Olsa, 2011-06-14; 1 file, -1/+3)

The display of absolute time and duration fields is based on the
latency field. This was added during the irqsoff/wakeup tracers' graph
support changes. It's causing confusion about which fields will be
displayed for the function_graph tracer itself. So I'm removing this
dependency, and adding absolute time and duration fields to the
preemptirqsoff, preemptoff, irqsoff and wakeup tracers.

With the following commands:

	# echo function_graph > ./current_tracer
	# cat trace

this is what it looked like before:

	# tracer: function_graph
	#
	#     TIME        CPU  DURATION                  FUNCTION CALLS
	#      |          |     |   |                     |   |   |   |
	 0)   0.068 us    |                } /* page_add_file_rmap */
	 0)               |                _raw_spin_unlock() {
	...

This is what it looks like now:

	# tracer: function_graph
	#
	# CPU  DURATION                  FUNCTION CALLS
	# |     |   |                     |   |   |   |
	 0)   0.068 us    |          } /* add_preempt_count */
	 0)   0.993 us    |        } /* vfsmount_lock_local_lock */
	...

For the preemptirqsoff, preemptoff, irqsoff and wakeup tracers, this
is what it looked like before:

	SNIP
	#                      _-----=> irqs-off
	#                     / _----=> need-resched
	#                    | / _---=> hardirq/softirq
	#                    || / _--=> preempt-depth
	#                    ||| / _-=> lock-depth
	#                    |||| /
	# CPU  TASK/PID      ||||| DURATION                  FUNCTION CALLS
	# |    |    |        |||||  |   |                     |   |   |   |
	 1)    <idle>-0    | d..1  0.000 us |  acpi_idle_enter_simple();
	...

This is what it looks like now:

	SNIP
	#
	#                                     _-----=> irqs-off
	#                                    / _----=> need-resched
	#                                   | / _---=> hardirq/softirq
	#                                   || / _--=> preempt-depth
	#                                   ||| /
	#     TIME        CPU  TASK/PID     ||||  DURATION                  FUNCTION CALLS
	#      |          |     |    |      ||||   |   |                     |   |   |   |
	   19.847735 |   1)    <idle>-0   | d..1  0.000 us |  acpi_idle_enter_simple();
	...

Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1307113131-10045-2-git-send-email-jolsa@redhat.com
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* ftrace: Implement separate user function filtering (Steven Rostedt, 2011-05-18; 1 file, -0/+1)

ftrace_ops that are registered to trace functions can now be agnostic
to each other in respect to what functions they trace. Each ops has
its own hash of the functions it wants to trace and a hash of the
functions it does not want to trace. An empty hash for the functions
to trace denotes that all functions should be traced that are not in
the notrace hash.

Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
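For illustration, a user of the per-ops filtering might look roughly
like this. The two-argument callback is the signature of that era
(later kernels also pass the ops and register state), and the
ops-taking form of ftrace_set_filter() landed around this series, so
treat signatures as a sketch rather than the exact API of this commit:

	#include <linux/ftrace.h>
	#include <linux/init.h>
	#include <linux/string.h>

	/* Called only for functions that pass my_ops' filter hash and
	 * are absent from its notrace hash. */
	static void my_trace_func(unsigned long ip, unsigned long parent_ip)
	{
		/* count or record the call here */
	}

	static struct ftrace_ops my_ops __read_mostly = {
		.func = my_trace_func,
	};

	static int __init my_tracer_init(void)
	{
		/* This filter belongs to my_ops alone; any other
		 * registered ftrace_ops keeps its own, independent
		 * filter/notrace hashes. */
		ftrace_set_filter(&my_ops, "vfs_read", strlen("vfs_read"), 1);
		return register_ftrace_function(&my_ops);
	}
	late_initcall(my_tracer_init);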
* tracing: Fix compile issue for trace_sched_wakeup.c (Steven Rostedt, 2010-10-19; 1 file, -2/+1)

The function start_func_tracer() was incorrectly added in the #ifdef
CONFIG_FUNCTION_TRACER condition, but is still used even when function
tracing is not enabled.

The calls to register_ftrace_function() and register_ftrace_graph()
become nops (and their arguments are even ignored), thus there is no
reason to hide start_func_tracer() when function tracing is not
enabled.

Reported-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: Use one prologue for the wakeup tracer function tracers (Steven Rostedt, 2010-10-18; 1 file, -52/+50)

The wakeup tracer has three types of function tracers: the normal
function tracer, function graph entry, and function graph return. Each
of these uses a complex dance to prevent recursion and to decide
whether to trace the data or not (depending on the wake_task
variable).

This patch moves the duplicate code into a single routine, to prevent
future mistakes with modifying duplicate complex code.

Cc: Jiri Olsa <jolsa@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
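The shared prologue ended up looking roughly as follows (reconstructed
from the file's history; details may differ). It returns 1 when the
caller should trace and has preemption disabled, 0 otherwise:

	static int
	func_prolog_preempt_disable(struct trace_array *tr,
				    struct trace_array_cpu **data,
				    int *pc)
	{
		long disabled;
		int cpu;

		if (likely(!wakeup_task))
			return 0;

		*pc = preempt_count();
		preempt_disable_notrace();

		/* only trace on the CPU the woken task runs on */
		cpu = raw_smp_processor_id();
		if (cpu != wakeup_current_cpu)
			goto out_enable;

		*data = tr->data[cpu];
		disabled = atomic_inc_return(&(*data)->disabled);
		if (unlikely(disabled != 1))
			goto out;

		return 1;

	out:
		atomic_dec(&(*data)->disabled);
	out_enable:
		preempt_enable_notrace();
		return 0;
	}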
* tracing: Graph support for wakeup tracer (Jiri Olsa, 2010-10-18; 1 file, -10/+221)

Add function graph support for the wakeup latency tracer. The graph
output is enabled by setting the 'display-graph' trace option.

Signed-off-by: Jiri Olsa <jolsa@redhat.com>
LKML-Reference: <1285243253-7372-4-git-send-email-jolsa@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: Shrink max latency ringbuffer if unnecessary (KOSAKI Motohiro, 2010-07-21; 1 file, -0/+2)

Documentation/trace/ftrace.txt says:

	buffer_size_kb:

	This sets or displays the number of kilobytes each CPU buffer
	can hold. The tracer buffers are the same size for each CPU.
	The displayed number is the size of the CPU buffer and not
	total size of all buffers. The trace buffers are allocated in
	pages (blocks of memory that the kernel uses for allocation,
	usually 4 KB in size). If the last page allocated has room for
	more bytes than requested, the rest of the page will be used,
	making the actual allocation bigger than requested. (Note, the
	size may not be a multiple of the page size due to buffer
	management overhead.)

	This can only be updated when the current_tracer is set to
	"nop".

But it's incorrect: currently, total memory consumption is
'buffer_size_kb x CPUs x 2'. Why the factor of two? Because ftrace
implicitly allocates a buffer for the max latency too.

That produces a sad result when an admin wants to use a large buffer
for full logging and detailed analysis. For example, if an admin has a
24-CPU machine and writes 200MB to buffer_size_kb, the system consumes
~10GB of memory (200MB x 24 x 2). Roughly 5GB of wasted memory is
usually unacceptable.

Fortunately, almost all users don't use the max latency feature, and
the max latency buffer can be disabled easily. This patch shrinks the
buffer size of the max latency buffer if it is unnecessary.

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
LKML-Reference: <20100701104554.DA2D.A69D9226@jp.fujitsu.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: Remove ftrace_preempt_disable/enable (Steven Rostedt, 2010-06-03; 1 file, -3/+2)

The ftrace_preempt_disable/enable functions were to address a
recursive race caused by the function tracer. The function tracer
traces all functions, which makes it easily susceptible to recursion.
One area was preempt_enable(). This would call the scheduler, and the
scheduler would call the function tracer and loop. (Or so it was
thought.)

The ftrace_preempt_disable/enable was made to protect against
recursion inside the scheduler by storing the NEED_RESCHED flag. If it
was set before the ftrace_preempt_disable(), it would not call
schedule on ftrace_preempt_enable(), thinking that if it was set
before then it would have already scheduled unless it was already in
the scheduler.

This worked fine except in the case of SMP, where another task would
set the NEED_RESCHED flag for a task on another CPU, and then kick off
an IPI to trigger it. This could cause the NEED_RESCHED to be saved at
ftrace_preempt_disable() but the IPI to arrive in the preempt-disabled
section. The ftrace_preempt_enable() would not call the scheduler
because the flag was already set before entering the section. This bug
would cause a missed preemption check and longer latencies.

Investigating further, I found that the recursion caused by the
function tracer was not due to schedule(), but due to
preempt_schedule(). Now that preempt_schedule is completely annotated
with notrace, the recursion no longer is an issue.

Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
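For reference, the removed helpers were essentially the following
(reconstructed from the era's kernel/trace/trace.h; a sketch):

	static inline int ftrace_preempt_disable(void)
	{
		int resched;

		/* remember whether a reschedule was already pending */
		resched = need_resched();
		preempt_disable_notrace();

		return resched;
	}

	static inline void ftrace_preempt_enable(int resched)
	{
		if (resched)
			/* skip the schedule that could recurse into the tracer */
			preempt_enable_no_resched_notrace();
		else
			preempt_enable_notrace();
	}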
* tracing: Let tracepoints have data passed to tracepoint callbacks (Steven Rostedt, 2010-05-14; 1 file, -13/+15)

This patch adds data to be passed to tracepoint callbacks. The created
functions from DECLARE_TRACE() now need a mandatory data parameter.
For example:

	DECLARE_TRACE(mytracepoint, int value, value)

will create the register function:

	int register_trace_mytracepoint((void(*)(void *data, int value))probe,
					void *data);

As the first argument, all callbacks (probes) must take a (void *data)
parameter. So a callback for the above tracepoint will look like:

	void myprobe(void *data, int value)
	{
	}

The callback may choose to ignore the data parameter. This change
allows callbacks to register a private data pointer along with the
function probe:

	void mycallback(void *data, int value);

	register_trace_mytracepoint(mycallback, mydata);

Then mycallback() will receive "mydata" as the first parameter, before
the args.

A more detailed example:

	DECLARE_TRACE(mytracepoint, TP_PROTO(int status), TP_ARGS(status));

	/* In the C file */

	DEFINE_TRACE(mytracepoint, TP_PROTO(int status), TP_ARGS(status));

	[...]

		trace_mytracepoint(status);

	/* In a file registering this tracepoint */

	int my_callback(void *data, int status)
	{
		struct my_struct *my_data = data;
		[...]
	}

	[...]
		my_data = kmalloc(sizeof(*my_data), GFP_KERNEL);
		init_my_data(my_data);
		register_trace_mytracepoint(my_callback, my_data);

The same callback can also be registered to the same tracepoint as
long as the data registered is different. Note, the data must also be
used to unregister the callback:

	unregister_trace_mytracepoint(my_callback, my_data);

Because of the data parameter, tracepoints declared this way can not
have no args. That is:

	DECLARE_TRACE(mytracepoint, TP_PROTO(void), TP_ARGS());

will cause an error. If no arguments are needed, a new macro can be
used instead:

	DECLARE_TRACE_NOARGS(mytracepoint);

Since there are no arguments, the proto and args fields are left out.

This is part of a series to make the tracepoint footprint smaller:

	   text	   data	    bss	    dec	    hex	filename
	4913961	1088356	 861512	6863829	 68bbd5	vmlinux.orig
	4914025	1088868	 861512	6864405	 68be15	vmlinux.class
	4918492	1084612	 861512	6864616	 68bee8	vmlinux.tracepoint

Again, this patch also increases the size of the kernel, but lays the
groundwork for decreasing it.

v5: Fixed net/core/drop_monitor.c to handle these updates.

v4: Moved the DECLARE_TRACE() and DECLARE_TRACE_NOARGS out of the
    #ifdef CONFIG_TRACE_POINTS, since the two are the same in both
    cases. The __DECLARE_TRACE() is what changes.
    Thanks to Frederic Weisbecker for pointing this out.

v3: Made all register_* functions require data to be passed and all
    callbacks to take a void * parameter as its first argument. This
    makes the calling functions comply with C standards. Also added
    more comments to the modifications of DECLARE_TRACE().

v2: Made the DECLARE_TRACE() have the ability to pass arguments and
    added a new DECLARE_TRACE_NOARGS() for tracepoints that do not
    need any arguments.

Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Neil Horman <nhorman@tuxdriver.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* sched: Remove rq argument to the tracepoints (Peter Zijlstra, 2010-05-07; 1 file, -3/+2)

struct rq isn't visible outside of sched.o, so it's nearly useless to
expose the pointer; also, there are no users of it, so remove it.

Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1272997616.1642.207.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
* locking: Convert __raw_spin* functions to arch_spin* (Thomas Gleixner, 2009-12-14; 1 file, -6/+6)

Name space cleanup. No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: linux-arch@vger.kernel.org
* locking: Rename __RAW_SPIN_LOCK_UNLOCKED to __ARCH_SPIN_LOCK_UNLOCKED (Thomas Gleixner, 2009-12-14; 1 file, -1/+1)

Further name space cleanup. No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: linux-arch@vger.kernel.org
* locking: Convert raw_spinlock to arch_spinlock (Thomas Gleixner, 2009-12-14; 1 file, -2/+2)

The raw_spin* namespace was taken by lockdep for the architecture
specific implementations. raw_spin_* would be the ideal name space for
the spinlocks which are not converted to sleeping locks in preempt-rt.

Linus suggested converting the raw_ locks to arch_ locks and cleaning
up the name space instead of using an artificial name like core_spin,
atomic_spin or whatever.

No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: linux-arch@vger.kernel.org
* tracing: do not update tracing_max_latency when tracer is stopped (Carsten Emde, 2009-09-12; 1 file, -2/+4)

The state of the function pair tracing_stop()/tracing_start() is
correctly considered when tracer data are updated. However, the global
and externally accessible variable tracing_max_latency is always
updated - even when tracing is stopped. The update should only occur
if tracing was not stopped.

Signed-off-by: Carsten Emde <C.Emde@osadl.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: remove unused local variables in tracer probe functions (Carsten Emde, 2009-09-12; 1 file, -10/+0)

When the nsecs_to_usecs() conversion in probe_wakeup_sched_switch()
and check_critical_timing() was moved to a later stage in order to
avoid unnecessary computing, the original variables, assignments and
comments were overlooked and left in place.

Signed-off-by: Carsten Emde <C.Emde@osadl.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: do not grab lock in wakeup latency function tracing (Steven Rostedt, 2009-09-09; 1 file, -14/+22)

The wakeup tracer, when enabled, has its own function tracer. It only
traces the functions on the CPU where the task it is following is on.
If a task is woken on one CPU but then migrates to another CPU before
it wakes up, the latency tracer will then start tracing functions on
the other CPU.

To find which CPU the task is on, the wakeup function tracer performs
a task_cpu(wakeup_task). But to make sure the task does not disappear,
it grabs the wakeup_lock, which is also taken when the task wakes up.
By taking this lock, the function tracer does not need to worry about
the task being freed as it checks its CPU.

Jan Blunck found a problem with this approach on his 32-CPU box. When
a task is being traced by the wakeup tracer, all functions take this
lock. That means that on all 32 CPUs, each function call is taking
this one lock to see if the task is on that CPU. This lock has just
serialized all functions on all 32 CPUs. Needless to say, this caused
major issues on that box. It would even lock up.

This patch changes the wakeup latency to insert a probe on the migrate
task tracepoint. When a task changes the CPU that it will run on, the
probe will take note. Now the wakeup function tracer no longer needs
to take the lock. It only compares the current CPU with a variable
that holds the current CPU the task is on. We don't worry about races
since it is OK to add or miss a function trace.

Reported-by: Jan Blunck <jblunck@suse.de>
Tested-by: Jan Blunck <jblunck@suse.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
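A sketch of the lockless scheme (names follow the file; the probe
signature shown is the later, data-passing form, so treat the details
as illustrative):

	/* updated by a probe on the sched_migrate_task tracepoint */
	static int wakeup_current_cpu;

	static void
	probe_wakeup_migrate_task(void *ignore, struct task_struct *task, int cpu)
	{
		if (task != wakeup_task)
			return;
		wakeup_current_cpu = cpu;	/* plain store; no lock needed */
	}

	static void wakeup_tracer_call(unsigned long ip, unsigned long parent_ip)
	{
		int cpu = raw_smp_processor_id();

		/* Compare instead of taking wakeup_lock: a racy read
		 * costs at most one extra or one missing function
		 * entry, which is acceptable here. */
		if (cpu != wakeup_current_cpu)
			return;
		/* ... record the function entry ... */
	}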
* tracing: use timestamp to determine start of latency traces (Steven Rostedt, 2009-09-04; 1 file, -5/+2)

Currently the latency tracers reset the ring buffer. Unfortunately, if
a commit is in process (due to a trace event), this can corrupt the
ring buffer. When this happens, the ring buffer will detect the
corruption and then permanently disable the ring buffer.

The bug does not crash the system, but it does prevent further tracing
after the bug is hit.

Instead of resetting the trace buffers, the timestamp of the start of
the trace is used. The buffers will still contain the previous data,
but the output will not count any data that is before the timestamp of
the trace.

Note, this only affects the static trace output (trace) and not the
runtime trace output (trace_pipe). The runtime trace output does not
make sense for the latency tracers anyway.

Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing/wakeup: move access to wakeup_cpu into spinlock (Steven Rostedt, 2009-04-23; 1 file, -3/+3)

The code had the following outside the lock:

	if (next != wakeup_task)
		return;

	pc = preempt_count();

	/* The task we are waiting for is waking up */
	data = wakeup_trace->data[wakeup_cpu];

On initialization, wakeup_task is NULL and wakeup_cpu -1. This code is
not under a lock. If wakeup_task is set on another CPU as that task is
waking up, we can see the wakeup_task before wakeup_cpu is set. If we
read wakeup_cpu while it is still -1 then we will have a bad data
pointer.

This patch moves the reading of wakeup_cpu within the protection of
the spinlock used to protect the writing of wakeup_cpu and
wakeup_task.

[ Impact: remove possible race causing invalid pointer dereference ]

Reported-by: Maneesh Soni <maneesh@in.ibm.com>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
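After the fix, the read of wakeup_cpu happens only under the same lock
that serializes its writers; roughly (lock primitives as they were
named at the time; a sketch):

	local_irq_save(flags);
	__raw_spin_lock(&wakeup_lock);

	/* the checks and the wakeup_cpu read are now serialized against
	 * the writer that publishes wakeup_task and wakeup_cpu */
	if (unlikely(!tracer_enabled || next != wakeup_task))
		goto out_unlock;

	/* The task we are waiting for is waking up */
	data = wakeup_trace->data[wakeup_cpu];

	/* ... record the context switch ... */

out_unlock:
	__raw_spin_unlock(&wakeup_lock);
	local_irq_restore(flags);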
* tracing/events: move trace point headers into include/trace/events (Steven Rostedt, 2009-04-14; 1 file, -1/+1)

Impact: clean up

Create a subdirectory in include/trace called events to keep the trace
point headers in their own separate directory. Only headers that
declare trace points should be defined in this directory.

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Neil Horman <nhorman@tuxdriver.com>
Cc: Zhao Lei <zhaolei@cn.fujitsu.com>
Cc: Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: remove CALLER_ADDR2 from wakeup tracer (Steven Rostedt, 2009-04-07; 1 file, -1/+7)

Maneesh Soni was getting a crash when running the wakeup tracer. We
debugged it down to the recording of the function with the
CALLER_ADDR2 macro. This is used to get the location of the caller to
schedule.

But the problem comes when schedule is called by assembly. In the case
that Maneesh had, retint_careful would call schedule. But
retint_careful does not set up a proper frame pointer. CALLER_ADDR2 is
defined as __builtin_return_address(2). This produces the following
assembly in the wakeup tracer code:

	mov    0x0(%rbp),%rcx  <--- get the frame pointer of the caller
	mov    %r14d,%r8d
	mov    0xf2de8e(%rip),%rdi

	mov    0x8(%rcx),%rsi  <-- this is __builtin_return_address(1)
	mov    0x28(%rdi,%rax,8),%rbx

	mov    (%rcx),%rax  <-- get the frame pointer of the caller's caller
	mov    %r12,%rcx

	mov    0x8(%rax),%rdx  <-- this is __builtin_return_address(2)

At the reading of 0x8(%rax), Maneesh's machine would take a fault. The
reason is that retint_careful did not set up the return address and
the content of %rax here was zero.

To verify this, I sent Maneesh a patch to create a frame pointer in
retint_careful. He ran the test again, but this time he would take the
same type of fault from sysret_careful. The retint_careful was no
longer an issue, but there are other callers that still have issues.

Instead of adding frame pointers for all callers to schedule (in
possibly all archs), it is much safer to simply not use CALLER_ADDR2.
This loses out on knowing what called schedule, but the function
tracer will help there if needed.

Reported-by: Maneesh Soni <maneesh@in.ibm.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
* tracing: have latency tracers set the latency format (Steven Rostedt, 2009-03-04; 1 file, -0/+8)

The latency tracers (irqsoff, preemptoff, preemptirqsoff, and wakeup)
are pretty useless with the default output format. This patch makes
them automatically enable the latency format when they are selected.
They also record the state of the latency option, and if it was not
enabled when selected, they disable it on reset.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
* tracing/core: use appropriate waiting on trace_pipe (Frederic Weisbecker, 2009-02-18; 1 file, -0/+1)

Impact: api and pipe waiting change

Currently, the waiting used in tracing_read_pipe() is done through a
100 msecs schedule_timeout() loop which periodically checks if there
are traces on the buffer. This can cause small latencies for programs
which are reading the incoming events.

This patch makes the reader wait on the trace_wait waitqueue, except
for a few tracers such as the sched and function tracers which might
already hold the runqueue lock while waking up the reader.

This is performed through a new callback wait_pipe() on struct tracer.
If a tracer does not implement it, the default waiting on the
trace_wait queue is attached.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
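Schematically, a tracer that may hold the runqueue lock opts into
polling instead of waitqueue sleeping (callback and helper names
reconstructed from the era's code; a sketch, not the exact in-tree
version):

	/* poll instead of sleeping on trace_wait, since the waker may
	 * hold the runqueue lock when the reader is woken */
	static void poll_wait_pipe(struct trace_iterator *iter)
	{
		set_current_state(TASK_INTERRUPTIBLE);
		schedule_timeout(HZ / 10);	/* recheck every 100 msecs */
	}

	static struct tracer wakeup_tracer __read_mostly = {
		.name		= "wakeup",
		.wait_pipe	= poll_wait_pipe,
		/* .init, .reset, ... */
	};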
* tracing: fix typing mistake in hint message and comments (Wenji Huang, 2009-02-17; 1 file, -1/+1)

Impact: cleanup

Fix incorrect hint message in code and typos in comments.

Signed-off-by: Wenji Huang <wenji.huang@oracle.com>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
* trace: Remove unused trace_array_cpu parameter (Arnaldo Carvalho de Melo, 2009-02-05; 1 file, -7/+5)

Impact: cleanup

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
* trace_sched_wakeup: Remove unused variable (Arnaldo Carvalho de Melo, 2009-01-29; 1 file, -4/+1)

Impact: cleanup

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
* wakeup-tracer: show scheduling data in output (Steven Rostedt, 2009-01-22; 1 file, -3/+8)

Impact: better data for wakeup tracer

This patch adds the wakeup and schedule calls that are used by the
scheduler tracer to make the wakeup tracer more readable.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
* trace: separate out rt tasks from wakeup tracer (Steven Rostedt, 2009-01-22; 1 file, -2/+32)

Impact: add option to trace all tasks or just RT tasks

The current wakeup tracer only traces RT task wakeups. This is fine
for those interested in wake up timings of RT tasks, but it is useless
for those that are interested in the causes of long wakeups for non-RT
tasks.

This patch creates a "wakeup_rt" tracer to implement the tracing of
just RT tasks (as the current "wakeup" does), and makes "wakeup" now
trace all tasks, as an average developer would expect.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
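Illustratively, the split amounts to one check in the wakeup probe
(signature simplified; wakeup_rt stands for a flag set by the
variant's init, as in the file):

	static int wakeup_rt;	/* 1 when the "wakeup_rt" tracer is active */

	static void probe_wakeup(void *ignore, struct task_struct *p, int success)
	{
		/* "wakeup_rt" measures only realtime tasks; plain
		 * "wakeup" considers every task */
		if (wakeup_rt && !rt_task(p))
			return;

		/* ... start timing wakeup-to-schedule latency for p ... */
	}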
* trace: do not disable wake up tracer on output of trace (Steven Rostedt, 2009-01-22; 1 file, -31/+2)

Impact: fix to erased trace output

To try not to have the output of a trace interfere with the wakeup
tracer, it would disable tracing while the output was printing. But if
a trace had started when it was disabled, it can show a partial trace.
To try to solve this, on closing of the tracer, it would clear the
trace buffer.

The latency tracers (wakeup and irqsoff) have two buffers. One for
recording and one for holding the max trace that is printed. The
clearing of the trace above should only affect the recording buffer.
But for some reason it would move the erased trace to the print
buffer. Probably due to a race with the closing of the trace and the
saving of the max trace.

The above is all pretty useless, and if the user does not want the
printing of the trace to be traced itself, then the user can manually
disable tracing. This patch removes all the code that tries to keep
the output of the tracer from modifying the trace.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
* trace: set max latency variable to zero on default (Steven Rostedt, 2009-01-16; 1 file, -0/+1)

Impact: trace max latencies on start of latency tracing

This patch sets the max latency to zero whenever one of the irq
variant tracers or the wakeup tracer is set as the current tracer.

Most developers expect to see output when starting up a latency
tracer. But since the max_latency is already set to max, and it takes
a latency greater than max_latency to be recorded, there is no trace.
This is not the expected behavior and has even confused me.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
* sched, trace: update trace_sched_wakeup() (Peter Zijlstra, 2008-12-25; 1 file, -1/+1)

Impact: extend the wakeup tracepoint with the info whether the wakeup was real

Add the information needed to distinguish 'real' wakeups from 'false'
wakeups.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
* tracing/ftrace: change the type of the init() callback (Frederic Weisbecker, 2008-11-16; 1 file, -1/+2)

Impact: extend the ->init() method with the ability to fail

This brings a way to know if the initialization of a tracer succeeded.
A tracer must return 0 on success and a traditional error (ie:
-ENOMEM) if it fails. If a tracer fails to init, it is free to print a
detailed warning; the tracing API itself will not, and switching to a
new tracer will just return the error from the init callback.

Note: this will be used for the return tracer.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
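A sketch of a failing ->init() under the new signature
(allocate_wakeup_state() is a hypothetical helper standing in for
whatever setup a tracer needs):

	static int wakeup_tracer_init(struct trace_array *tr)
	{
		if (!allocate_wakeup_state(tr))	/* hypothetical helper */
			return -ENOMEM;	/* selecting this tracer fails cleanly */

		start_wakeup_tracer(tr);
		return 0;
	}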
* ftrace: remove trace array ctrl (Steven Rostedt, 2008-11-08; 1 file, -8/+4)

Impact: remove obsolete variable in trace_array structure

With the new start/stop method of ftrace, the ctrl variable in the
trace_array structure is now obsolete. Remove it.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
* ftrace: remove ctrl_update method (Steven Rostedt, 2008-11-08; 1 file, -9/+0)

Impact: remove the ctrl_update tracer method

With the new quick start/stop method of tracing, the ctrl_update
method is out of date.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
* ftrace: restructure tracing start/stop infrastructure (Steven Rostedt, 2008-11-06; 1 file, -5/+34)

Impact: change where tracing is started up and stopped

Currently, when a new tracer is selected via echo'ing a tracer name
into the current_tracer file, the startup is only done if
tracing_enabled is set to one. If tracing_enabled is changed to zero
(by echo'ing 0 into the tracing_enabled file), a full shutdown is
performed.

The full startup and shutdown of a tracer can be expensive, and the
user can lose traces when echo'ing in 0 to the tracing_enabled file,
because the process takes too long. There can also be places where the
user would like to start and stop the tracer several times, and doing
the full startup and shutdown of a tracer might be too expensive.

This patch performs the full startup and shutdown when a tracer is
selected. It also adds a way to do a quick start or stop of a tracer.
The quick version is just a flag that prevents the tracing from taking
place, but the overhead of the code is still there.

For example, the startup of a tracer may enable tracepoints, or enable
the function tracer. The stop and start will just set a flag to have
the tracer ignore the calls when the tracepoint or function trace is
called. The overhead of the tracer may still be present when the
tracer is stopped, but no tracing will occur. Setting the tracer to
the 'nop' tracer (or any other tracer) will perform the shutdown of
the tracer, which will disable the tracepoint or disable the function
tracer.

The tracing_enabled file will simply start or stop tracing.

This change is all internal. The end result for the user should be the
same as before: if tracing_enabled is not set, no trace will happen;
if tracing_enabled is set, then the trace will happen. The
tracing_enabled variable is static between tracers. Enabling
tracing_enabled and going to another tracer will keep tracing_enabled
enabled. The same is true with disabling tracing_enabled.

This patch will now provide a fast start/stop method to the users for
enabling or disabling tracing.

Note: There were two methods on struct tracer that were never used:
start and stop. These were to be used as a hook to the reading of the
trace output, but ended up not being necessary. These two methods are
now used to enable the start and stop of each tracer, in case the
tracer needs to do more than just not write into the buffer. For
example, the irqsoff tracer must stop recording max latencies when
tracing is stopped.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
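For this file, the end state looks roughly like the following (method
names as used by the wakeup tracer; a sketch, not the full
definition):

	static struct tracer wakeup_tracer __read_mostly = {
		.name	= "wakeup",
		.init	= wakeup_tracer_init,	/* full startup: register hooks */
		.reset	= wakeup_tracer_reset,	/* full shutdown */
		.start	= wakeup_tracer_start,	/* quick: re-arm recording */
		.stop	= wakeup_tracer_stop,	/* quick: stop recording, keep hooks */
	};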
* ftrace: insert in the ftrace_preempt_disable()/enable() functions (Steven Rostedt, 2008-11-04; 1 file, -11/+2)

Impact: use new, consolidated APIs in ftrace plugins

This patch replaces the schedule safe preempt disable code with the
ftrace_preempt_disable() and ftrace_preempt_enable() safe functions.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
* ftrace: rename FTRACE to FUNCTION_TRACER (Steven Rostedt, 2008-10-20; 1 file, -2/+2)

Due to confusion between the ftrace infrastructure and the gcc
profiling tracer "ftrace", this patch renames the config options from
FTRACE to FUNCTION_TRACER. The other two names that are offspring from
FTRACE, DYNAMIC_FTRACE and FTRACE_MCOUNT_RECORD, will stay the same.

This patch was generated mostly by script, and partially by hand.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
* ftrace: preempt disable over interrupt disable (Steven Rostedt, 2008-10-14; 1 file, -3/+10)

With the new ring buffer infrastructure in ftrace, I'm trying to make
ftrace a little more lightweight. This patch converts a lot of the
local_irq_save/restore into preempt_disable/enable. The original
preempt count in a lot of cases has to be sent in as a parameter so
that it can be recorded correctly. Some places were recording it
incorrectly before anyway.

This is also laying the groundwork to make ftrace a little bit more
reentrant, and remove all locking. The function tracers must still
protect from reentrancy.

Note: All the function tracers must be careful when using
preempt_disable. They must do the following:

	resched = need_resched();
	preempt_disable_notrace();
	[...]
	if (resched)
		preempt_enable_no_resched_notrace();
	else
		preempt_enable_notrace();

The reason is that if this function traces schedule() itself, the
preempt_enable_notrace() will cause a schedule, which will lead us
into a recursive failure.

If we needed to reschedule before calling preempt_disable, we should
have already scheduled. Since we did not, this most likely means that
we should not and are probably inside a schedule function. If resched
was not set, we still need to catch the need resched flag being set
when preemption was off, and the if case at the end will catch that
for us.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
* ftrace: make work with new ring buffer (Steven Rostedt, 2008-10-14; 1 file, -1/+1)

This patch ports ftrace over to the new ring buffer.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
* ftrace: port to tracepoints (Mathieu Desnoyers, 2008-10-14; 1 file, -101/+34)

Porting the trace_mark() used by ftrace to tracepoints. (cleanup)

Changelog:
- Change error messages: marker -> tracepoint

[ mingo@elte.hu: conflict resolutions ]
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Acked-by: 'Peter Zijlstra' <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
* ftrace: fix 4d3702b6 (post-v2.6.26): WARNING: at kernel/lockdep.c:2731 check_flags (ftrace) (Steven Rostedt, 2008-07-18; 1 file, -11/+16)

On Wed, 16 Jul 2008, Vegard Nossum wrote:

> When booting 4d3702b6, I got this huge thing:
>
> Testing tracer wakeup: <4>------------[ cut here ]------------
> WARNING: at kernel/lockdep.c:2731 check_flags+0x123/0x160()
> Modules linked in:
> Pid: 1, comm: swapper Not tainted 2.6.26-crashing-02127-g4d3702b6 #30
>  [<c015c349>] warn_on_slowpath+0x59/0xb0
>  [<c01276c6>] ? ftrace_call+0x5/0x8
>  [<c012d800>] ? native_read_tsc+0x0/0x20
>  [<c0158de2>] ? sub_preempt_count+0x12/0xf0
>  [<c01814eb>] ? trace_hardirqs_off+0xb/0x10
>  [<c0182fbc>] ? __lock_acquire+0x2cc/0x1120
>  [<c01814eb>] ? trace_hardirqs_off+0xb/0x10
>  [<c01276af>] ? mcount_call+0x5/0xa
>  [<c017ff53>] check_flags+0x123/0x160
>  [<c0183e61>] lock_acquire+0x51/0xd0
>  [<c01276c6>] ? ftrace_call+0x5/0x8
>  [<c0613d4f>] _spin_lock_irqsave+0x5f/0xa0
>  [<c01a8d45>] ? ftrace_record_ip+0xf5/0x220
>  [<c02d5413>] ? debug_locks_off+0x3/0x50
>  [<c01a8d45>] ftrace_record_ip+0xf5/0x220
>  [<c01276af>] mcount_call+0x5/0xa
>  [<c02d5418>] ? debug_locks_off+0x8/0x50
>  [<c017ff27>] check_flags+0xf7/0x160
>  [<c0183e61>] lock_acquire+0x51/0xd0
>  [<c01276c6>] ? ftrace_call+0x5/0x8
>  [<c0613d4f>] _spin_lock_irqsave+0x5f/0xa0
>  [<c01affcd>] ? wakeup_tracer_call+0x6d/0xf0
>  [<c01625e2>] ? _local_bh_enable+0x62/0xb0
>  [<c0158ddd>] ? sub_preempt_count+0xd/0xf0
>  [<c01affcd>] wakeup_tracer_call+0x6d/0xf0
>  [<c0162724>] ? __do_softirq+0xf4/0x110
>  [<c01afff1>] ? wakeup_tracer_call+0x91/0xf0
>  [<c01276c6>] ftrace_call+0x5/0x8
>  [<c0162724>] ? __do_softirq+0xf4/0x110
>  [<c0158de2>] ? sub_preempt_count+0x12/0xf0
>  [<c01625e2>] _local_bh_enable+0x62/0xb0
>  [<c0162724>] __do_softirq+0xf4/0x110
>  [<c01627ed>] do_softirq+0xad/0xb0
>  [<c0162a15>] irq_exit+0xa5/0xb0
>  [<c013a506>] smp_apic_timer_interrupt+0x66/0xa0
>  [<c02d3fac>] ? trace_hardirqs_off_thunk+0xc/0x10
>  [<c0127449>] apic_timer_interrupt+0x2d/0x34
>  [<c018007b>] ? find_usage_backwards+0xb/0xf0
>  [<c0613a09>] ? _spin_unlock_irqrestore+0x69/0x80
>  [<c014ef32>] tg_shares_up+0x132/0x1d0
>  [<c014d2a2>] walk_tg_tree+0x62/0xa0
>  [<c014ee00>] ? tg_shares_up+0x0/0x1d0
>  [<c014a860>] ? tg_nop+0x0/0x10
>  [<c015499d>] update_shares+0x5d/0x80
>  [<c0154a2f>] try_to_wake_up+0x6f/0x280
>  [<c01a8b90>] ? __ftrace_modify_code+0x0/0xc0
>  [<c01a8b90>] ? __ftrace_modify_code+0x0/0xc0
>  [<c0154c94>] wake_up_process+0x14/0x20
>  [<c01725f6>] kthread_create+0x66/0xb0
>  [<c0195400>] ? do_stop+0x0/0x200
>  [<c0195320>] ? __stop_machine_run+0x30/0xb0
>  [<c0195340>] __stop_machine_run+0x50/0xb0
>  [<c0195400>] ? do_stop+0x0/0x200
>  [<c01a8b90>] ? __ftrace_modify_code+0x0/0xc0
>  [<c061242d>] ? mutex_unlock+0xd/0x10
>  [<c01953cc>] stop_machine_run+0x2c/0x60
>  [<c01a94d3>] unregister_ftrace_function+0x103/0x180
>  [<c01b0517>] stop_wakeup_tracer+0x17/0x60
>  [<c01b056f>] wakeup_tracer_ctrl_update+0xf/0x30
>  [<c01ab8d5>] trace_selftest_startup_wakeup+0xb5/0x130
>  [<c01ab950>] ? trace_wakeup_test_thread+0x0/0x70
>  [<c01aadf5>] register_tracer+0x135/0x1b0
>  [<c0877d02>] init_wakeup_tracer+0xd/0xf
>  [<c085d437>] kernel_init+0x1a9/0x2ce
>  [<c061397b>] ? _spin_unlock_irq+0x3b/0x60
>  [<c02d3f9c>] ? trace_hardirqs_on_thunk+0xc/0x10
>  [<c0877cf5>] ? init_wakeup_tracer+0x0/0xf
>  [<c0182646>] ? trace_hardirqs_on_caller+0x126/0x180
>  [<c02d3f9c>] ? trace_hardirqs_on_thunk+0xc/0x10
>  [<c01269c8>] ? restore_nocheck_notrace+0x0/0xe
>  [<c085d28e>] ? kernel_init+0x0/0x2ce
>  [<c085d28e>] ? kernel_init+0x0/0x2ce
>  [<c01275fb>] kernel_thread_helper+0x7/0x10
>  =======================
> ---[ end trace a7919e7f17c0a725 ]---
> irq event stamp: 579530
> hardirqs last enabled at (579528): [<c01826ab>] trace_hardirqs_on+0xb/0x10
> hardirqs last disabled at (579529): [<c01814eb>] trace_hardirqs_off+0xb/0x10
> softirqs last enabled at (579530): [<c0162724>] __do_softirq+0xf4/0x110
> softirqs last disabled at (579517): [<c01627ed>] do_softirq+0xad/0xb0
> irq event stamp: 579530
> hardirqs last enabled at (579528): [<c01826ab>] trace_hardirqs_on+0xb/0x10
> hardirqs last disabled at (579529): [<c01814eb>] trace_hardirqs_off+0xb/0x10
> softirqs last enabled at (579530): [<c0162724>] __do_softirq+0xf4/0x110
> softirqs last disabled at (579517): [<c01627ed>] do_softirq+0xad/0xb0
> PASSED
>
> Incidentally, the kernel also hung while I was typing in this report.

Things get weird between lockdep and ftrace because ftrace can be
called within lockdep internal code (via the mcount pointer) and
lockdep can be called with ftrace (via spin_locks).

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Tested-by: Vegard Nossum <vegard.nossum@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
* ftrace: start wakeup tracing after setting function tracer (Steven Rostedt, 2008-07-11; 1 file, -1/+2)

Enabling the wakeup tracer before enabling the function tracing causes
some strange results due to the dynamic enabling of the functions.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Cc: Steven Rostedt <srostedt@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
* ftrace: add function tracing to wake up tracing (Steven Rostedt, 2008-05-26; 1 file, -1/+66)

This patch adds function tracing to the functions that are called on
the CPU of the task being traced.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Cc: pq@iki.fi
Cc: proski@gnu.org
Cc: sandmann@redhat.com
Cc: a.p.zijlstra@chello.nl
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
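In outline, the tracer attaches its own callback while it is active
(two-argument callback signature of the period; a sketch):

	static void wakeup_tracer_call(unsigned long ip, unsigned long parent_ip)
	{
		/* record the call only if this CPU is the one the woken
		 * task runs on, then write the entry to the buffer */
	}

	static struct ftrace_ops trace_ops __read_mostly = {
		.func = wakeup_tracer_call,
	};

	static void start_wakeup_tracer(struct trace_array *tr)
	{
		/* registered while the wakeup tracer is active,
		 * unregistered again on reset */
		register_ftrace_function(&trace_ops);
	}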
* Port ftrace to markers (Mathieu Desnoyers, 2008-05-23; 1 file, -6/+100)

Porting ftrace to the marker infrastructure. We don't need to chain
from the sched tracer to the wakeup tracer, because markers support
multiple connected probes.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* ftrace: printk and trace irqsoff and wakeups (Steven Rostedt, 2008-05-23; 1 file, -13/+0)

A printk called from the wakeup critical timings and irqsoff tracers
can cause deadlocks, since printk might do a wakeup itself. If the
call to printk happens with the runqueue lock held, it can deadlock.

This patch protects the printk from being called in trace irqs off
with a test to see if the runqueue for the current CPU is locked. If
it is locked, the printk is skipped. The wakeup always holds the
runqueue lock, so the printk is simply removed.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* ftrace: sched tracer fix (Ingo Molnar, 2008-05-23; 1 file, -10/+3)

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* ftrace: add wakeup events to sched tracer (Ingo Molnar, 2008-05-23; 1 file, -1/+1)

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* ftrace: remove notrace (Ingo Molnar, 2008-05-23; 1 file, -14/+14)

Now that we have a kbuild method for notrace, there is no need to
pollute the C code with the annotations.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* ftrace: add trace_function api for other tracers to use (Steven Rostedt, 2008-05-23; 1 file, -2/+3)

A new check was added in the ftrace function that won't trace if the
CPU trace buffer is disabled. Unfortunately, other tracers used
ftrace() to write to the buffer after they disabled it. The new
disable check makes these calls into a nop.

This patch changes the __ftrace that is called without the check into
a new API for the other tracers to use, called "trace_function". The
other tracers use this interface instead when the trace CPU buffer is
already disabled.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* ftrace: timestamp syncing, prepare (Ingo Molnar, 2008-05-23; 1 file, -2/+2)

Rename and uninline now() to ftrace_now().

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* ftrace: cleanups (Ingo Molnar, 2008-05-23; 1 file, -12/+6)

Factor out code and clean it up.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* ftrace: add self-tests (Steven Rostedt, 2008-05-23; 1 file, -0/+3)

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>