path: root/include/linux/cpuidle.h
Log (most recent first). Each entry shows the commit subject, author, date, and the per-file diffstat (files changed, -deleted/+added lines).
* cpuidle: introduce CPU_PM_CPU_IDLE_ENTER macro for ARM{32,64} (Sudeep Holla, 2016-07-21; 1 file, -0/+18)
  The function arm_enter_idle_state is exactly the same in both the generic ARM{32,64} CPUIdle drivers and will be the same even in the ARM64 backend for the ACPI processor idle driver. So we can unify it and move it to a common place by introducing a CPU_PM_CPU_IDLE_ENTER macro that can be used in all places, avoiding duplication. This is in preparation for reusing the generic cpuidle entry function for ACPI LPI support on ARM64.
  Suggested-by: Rafael J. Wysocki <rjw@rjwysocki.net> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
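  A rough sketch of the unified entry helper the commit describes; the exact macro body in cpuidle.h may differ in detail, but the idea is to bracket the platform's low-level entry function with the CPU PM notifier calls:

      /*
       * Sketch of CPU_PM_CPU_IDLE_ENTER: index 0 is assumed to be a simple
       * WFI-class state; deeper states go through the cpu_pm notifiers.
       */
      #define CPU_PM_CPU_IDLE_ENTER(low_level_idle_enter, idx)        \
      ({                                                              \
              int __ret;                                              \
                                                                      \
              if (!idx) {                                             \
                      cpu_do_idle();          /* shallow state: WFI */\
                      __ret = idx;                                    \
              } else {                                                \
                      __ret = cpu_pm_enter();                         \
                      if (!__ret) {                                   \
                              __ret = low_level_idle_enter(idx);      \
                              cpu_pm_exit();                          \
                      }                                               \
              }                                                       \
              __ret;                                                  \
      })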
* cpuidle: Do not access cpuidle_devices when !CONFIG_CPU_IDLE (Catalin Marinas, 2016-06-02; 1 file, -0/+3)
  The cpuidle_devices per-CPU variable is only defined when CPU_IDLE is enabled. Commit c8cc7d4de7a4 ("sched/idle: Reorganize the idle loop") removed the #ifdef CONFIG_CPU_IDLE around cpuidle_idle_call(), with the compiler optimising away __this_cpu_read(cpuidle_devices). However, with CONFIG_UBSAN && !CONFIG_CPU_IDLE, this optimisation no longer happens and the kernel fails to link since cpuidle_devices is not defined. This patch introduces an accessor function for the current CPU's cpuidle device (returning NULL when !CONFIG_CPU_IDLE) and uses it in cpuidle_idle_call().
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Cc: 4.5+ <stable@vger.kernel.org> # 4.5+ Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
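  A minimal sketch of such an accessor, assuming the usual cpuidle_get_device() naming; the stub variant keeps the link working when cpuidle is compiled out:

      #ifdef CONFIG_CPU_IDLE
      /* cpuidle_devices exists only when CPU_IDLE is enabled */
      static inline struct cpuidle_device *cpuidle_get_device(void)
      {
              return __this_cpu_read(cpuidle_devices);
      }
      #else
      static inline struct cpuidle_device *cpuidle_get_device(void)
      {
              return NULL;
      }
      #endif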
* cpuidle/coupled: Remove cpuidle_device::safe_state_index (Xunlei Pang, 2015-08-28; 1 file, -1/+0)
  cpuidle_device::safe_state_index needs to be initialized before use, and it should be the same as cpuidle_driver::safe_state_index. Tackle this by removing safe_state_index from the cpuidle_device structure and using the one in the cpuidle_driver structure instead.
  Suggested-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Xunlei Pang <pang.xunlei@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* Merge branches 'pm-sleep' and 'pm-runtime' (Rafael J. Wysocki, 2015-06-19; 1 file, -6/+10)
  * pm-sleep:
      PM / sleep: trace_device_pm_callback coverage in dpm_prepare/complete
      PM / wakeup: add a dummy wakeup_source to record statistics
      PM / sleep: Make suspend-to-idle-specific code depend on CONFIG_SUSPEND
      PM / sleep: Return -EBUSY from suspend_enter() on wakeup detection
      PM / tick: Add tracepoints for suspend-to-idle diagnostics
      PM / sleep: Fix symbol name in a comment in kernel/power/main.c
      leds / PM: fix hibernation on arm when gpio-led used with CPU led trigger
      ARM: omap-device: use SET_NOIRQ_SYSTEM_SLEEP_PM_OPS
      bus: omap_l3_noc: add missed callbacks for suspend-to-disk
      PM / sleep: Add macro to define common noirq system PM callbacks
      PM / sleep: Refine diagnostic messages in enter_state()
      PM / wakeup: validate wakeup source before activating it.
  * pm-runtime:
      PM / Runtime: Update last_busy in rpm_resume
      PM / runtime: add note about re-calling in during device probe()
  * PM / sleep: Make suspend-to-idle-specific code depend on CONFIG_SUSPEND (Rafael J. Wysocki, 2015-05-19; 1 file, -6/+10)
    Since idle_should_freeze() is defined to always return 'false' for CONFIG_SUSPEND unset, all of the code depending on it in cpuidle_idle_call() is not necessary in that case. Make that code depend on CONFIG_SUSPEND too, to avoid building it when it is not going to be used.
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Thomas Gleixner <tglx@linutronix.de>
* sched / idle: Call default_idle_call() from cpuidle_enter_state() (Rafael J. Wysocki, 2015-05-14; 1 file, -0/+1)
  The check of the cpuidle_enter() return value against -EBUSY made in call_cpuidle() will not be necessary any more if cpuidle_enter_state() calls default_idle_call() directly when it is about to return -EBUSY, so make that happen and eliminate the check.
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com> Tested-by: Preeti U Murthy <preeti@linux.vnet.ibm.com> Tested-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Kevin Hilman <khilman@linaro.org>
* sched / idle: Call idle_set_state() from cpuidle_enter_state() (Rafael J. Wysocki, 2015-05-14; 1 file, -0/+3)
  Introduce a wrapper function around idle_set_state() called sched_idle_set_state() that will pass this_rq() to it as the first argument, and make cpuidle_enter_state() call the new function before and after entering the target state. At the same time, remove direct invocations of idle_set_state() from call_cpuidle(). This will allow the invocation of default_idle_call() to be moved from call_cpuidle() to cpuidle_enter_state() safely and call_cpuidle() to be simplified a bit as a result.
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com> Tested-by: Preeti U Murthy <preeti@linux.vnet.ibm.com> Tested-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Kevin Hilman <khilman@linaro.org>
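  Conceptually, the wrapper and its call site reduce to something like the following simplified sketch (time accounting and error handling omitted):

      /* kernel/sched/idle.c: hide this_rq() from the cpuidle core */
      void sched_idle_set_state(struct cpuidle_state *idle_state)
      {
              idle_set_state(this_rq(), idle_state);
      }

      /* drivers/cpuidle/cpuidle.c: bracket the actual state entry */
      int cpuidle_enter_state(struct cpuidle_device *dev,
                              struct cpuidle_driver *drv, int index)
      {
              struct cpuidle_state *target_state = &drv->states[index];
              int entered_state;

              sched_idle_set_state(target_state);
              entered_state = target_state->enter(dev, drv, index);
              sched_idle_set_state(NULL);

              return entered_state;
      }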
* cpuidle: remove state_count field from struct cpuidle_device (Bartlomiej Zolnierkiewicz, 2015-04-03; 1 file, -1/+0)
  Thomas Schlichter reports the following issue on his Samsung NC20: "The C-states C1 and C2 are provided to the OS when connected to AC, and the C3 C-state is additionally provided when disconnected from AC. However, the number of C-states shown in sysfs is fixed to the number of C-states present at boot. If I boot with AC connected, I always only see the C-states up to C2 even if I disconnect AC. The reason is commit 130a5f692425 (ACPI / cpuidle: remove dev->state_count setting). It removes the update of dev->state_count, but sysfs uses exactly this variable to show the C-states. The fix is to use drv->state_count in sysfs. As this is currently the last user of dev->state_count, this variable can be completely removed." Remove dev->state_count as per the above.
  Reported-by: Thomas Schlichter <thomas.schlichter@web.de> Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: 3.14+ <stable@vger.kernel.org> # 3.14+ [ rjw: Changelog ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle / sleep: Use broadcast timer for states that stop local timer (Rafael J. Wysocki, 2015-03-05; 1 file, -2/+15)
  Commit 381063133246 (PM / sleep: Re-implement suspend-to-idle handling) overlooked the fact that entering some sufficiently deep idle states may cause CPUs' local timers to stop, and in those cases it is necessary to switch over to a broadcast timer prior to entering the idle state. If the cpuidle driver in use does not provide the new ->enter_freeze callback for any of the idle states, that problem affects suspend-to-idle too, but it is not taken into account after the changes made by commit 381063133246. Fix that by changing the definition of cpuidle_enter_freeze() and re-arranging the code in cpuidle_idle_call(), so the former does not call cpuidle_enter() any more and the fallback case is handled by cpuidle_idle_call() directly.
  Fixes: 381063133246 (PM / sleep: Re-implement suspend-to-idle handling) Reported-and-tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
* PM / sleep: Make it possible to quiesce timers during suspend-to-idle (Rafael J. Wysocki, 2015-02-15; 1 file, -0/+9)
  The efficiency of suspend-to-idle depends on being able to keep CPUs in the deepest available idle states for as much time as possible. Ideally, they should only be brought out of idle by system wakeup interrupts. However, timer interrupts occurring periodically prevent that from happening and it is not practical to chase all of the "misbehaving" timers in a whack-a-mole fashion. A much more effective approach is to suspend the local ticks for all CPUs and the entire timekeeping along the lines of what is done during full suspend, which also helps to keep suspend-to-idle and full suspend reasonably similar.
  The idea is to suspend the local tick on each CPU executing cpuidle_enter_freeze() and to make the last of them suspend the entire timekeeping. That should prevent timer interrupts from triggering until an IO interrupt wakes up one of the CPUs. It needs to be done with interrupts disabled on all of the CPUs, though, because otherwise the suspended clocksource might be accessed by an interrupt handler, which might lead to fatal consequences.
  Unfortunately, the existing ->enter callbacks provided by cpuidle drivers generally cannot be used for implementing that, because some of them re-enable interrupts temporarily and some idle entry methods cause interrupts to be re-enabled automatically on exit. Also some of these callbacks manipulate local clock event devices of the CPUs, which really shouldn't be done after suspending their ticks. To overcome that difficulty, introduce a new cpuidle state callback, ->enter_freeze, that will be guaranteed (1) to keep interrupts disabled all the time (and return with interrupts disabled) and (2) not to touch the CPU timer devices. Modify cpuidle_enter_freeze() to look for the deepest available idle state with ->enter_freeze present and to make the CPU execute that callback with suspended tick (and the last of the online CPUs to execute it with suspended timekeeping).
  Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
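  A sketch of how a driver would advertise the new callback for one of its states; the state table and the enter functions below are hypothetical, the point being that ->enter_freeze must keep interrupts off and leave clock event devices alone:

      static struct cpuidle_driver example_idle_driver = {
              .name = "example_idle",
              .states = {
                      [0] = {
                              .name  = "C1",
                              .enter = example_enter_c1,  /* may re-enable IRQs */
                      },
                      [1] = {
                              .name  = "C6",
                              .enter = example_enter_c6,
                              /*
                               * Used during suspend-to-idle: keeps IRQs
                               * disabled and does not touch the local
                               * clock event device.
                               */
                              .enter_freeze = example_enter_c6_freeze,
                      },
              },
              .state_count = 2,
      };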
* PM / sleep: Re-implement suspend-to-idle handling (Rafael J. Wysocki, 2015-02-13; 1 file, -2/+2)
  In preparation for adding support for quiescing timers in the final stage of suspend-to-idle transitions, rework the freeze_enter() function making the system wait on a wakeup event, the freeze_wake() function terminating the suspend-to-idle loop, and the mechanism by which deep idle states are entered during suspend-to-idle. First of all, introduce a simple state machine for suspend-to-idle and make the code in question use it. Second, prevent freeze_enter() from losing wakeup events due to race conditions and ensure that the number of online CPUs won't change while it is being executed. In addition to that, make it force all of the CPUs re-enter the idle loop in case they are in idle states already (so they can enter deeper idle states if possible). Next, drop cpuidle_use_deepest_state() and replace use_deepest_state checks in cpuidle_select() and cpuidle_reflect() with a single suspend-to-idle state check in cpuidle_idle_call(). Finally, introduce cpuidle_enter_freeze() that will simply find the deepest idle state available to the given CPU and enter it using cpuidle_enter().
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
* cpuidle / ACPI: remove unused CPUIDLE_FLAG_TIME_INVALID (Len Brown, 2014-12-17; 1 file, -3/+0)
  CPUIDLE_FLAG_TIME_INVALID is no longer checked by the menu or ladder cpuidle governors, so don't bother setting or defining it. It was originally invented to account for the fact that acpi_safe_halt() enables interrupts to invoke HLT. That would allow interrupt service routines to be included in the last_idle duration measurements made in cpuidle_enter_state(), potentially returning a duration much larger than reality. But menu and ladder can gracefully handle erroneously large duration intervals without checking for CPUIDLE_FLAG_TIME_INVALID. Further, if they don't check CPUIDLE_FLAG_TIME_INVALID, they can also benefit from the instances when the duration interval is not erroneously large.
  Signed-off-by: Len Brown <len.brown@intel.com> Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: Invert CPUIDLE_FLAG_TIME_VALID logic (Daniel Lezcano, 2014-11-12; 1 file, -2/+2)
  The only place where the time is invalid is when the ACPI_CSTATE_FFH entry method is not set. Otherwise, for all the drivers, the time can be correctly measured. Instead of duplicating the CPUIDLE_FLAG_TIME_VALID flag in all the drivers for all the states, just invert the logic by replacing it with the flag CPUIDLE_FLAG_TIME_INVALID; hence we can set this flag only for the acpi idle driver, remove the former flag from all the drivers, and invert the logic with this flag in the different governors.
  Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus (Linus Torvalds, 2014-06-09; 1 file, -0/+1)
  Pull MIPS updates from Ralf Baechle:
   - three fixes for 3.15 that didn't make it in time
   - limited Octeon 3 support
   - paravirtualization support
   - improvements to platform support for NetLogic SoCs
   - add support for powering down the Malta eval board in software
   - add many instructions to the in-kernel microassembler
   - add support for the BPF JIT
   - minor cleanups of the BCM47xx code
   - large cleanup of math emu code resulting in significant code size reduction, better readability of the code and more accurate emulation
   - improvements to the MIPS CPS code
   - support C3 power status for the R4k count/compare clock device
   - improvements to the GIO support for older SGI workstations
   - increase number of supported CPUs to 256; this can be reached on certain embedded multithreaded ccNUMA configurations
   - various small cleanups, updates and fixes
  * 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (173 commits)
      MIPS: IP22/IP28: Improve GIO support
      MIPS: Octeon: Add twsi interrupt initialization for OCTEON 3XXX, 5XXX, 63XX
      DEC: Document the R4k MB ASIC mini interrupt controller
      DEC: Add self as the maintainer
      MIPS: Add microMIPS MSA support.
      MIPS: Replace calls to obsolete strict_strto call with kstrto* equivalents.
      MIPS: Replace obsolete strict_strto call with kstrto
      MIPS: BFP: Simplify code slightly.
      MIPS: Call find_vma with the mmap_sem held
      MIPS: Fix 'write_msa_##' inline macro.
      MIPS: Fix MSA toolchain support detection.
      mips: Update the email address of Geert Uytterhoeven
      MIPS: Add minimal defconfig for mips_paravirt
      MIPS: Enable build for new system 'paravirt'
      MIPS: paravirt: Add pci controller for virtio
      MIPS: Add code for new system 'paravirt'
      MIPS: Add functions for hypervisor call
      MIPS: OCTEON: Add OCTEON3 to __get_cpu_type
      MIPS: Add function get_ebase_cpunum
      MIPS: Add minimal support for OCTEON3 to c-r4k.c
      ...
  * cpuidle: declare cpuidle_dev in cpuidle.h (Paul Burton, 2014-05-28; 1 file, -0/+1)
    Declaring this allows drivers which need to initialise each struct cpuidle_device at initialisation time to make use of the structures already defined in cpuidle.c, rather than having to wastefully define their own.
    Signed-off-by: Paul Burton <paul.burton@imgtec.com>
* PM / suspend: Always use deepest C-state in the "freeze" sleep state (Rafael J. Wysocki, 2014-05-07; 1 file, -0/+2)
  If freeze_enter() is called, we want to bypass the current cpuidle governor and always use the deepest available (that is, not disabled) C-state, because we want to save as much energy as reasonably possible then, and runtime latency constraints don't matter at that point, since the system is in a sleep state anyway.
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Tested-by: Aubrey Li <aubrey.li@linux.intel.com>
* cpuidle: Combine cpuidle_enabled() with cpuidle_select() (Rafael J. Wysocki, 2014-05-01; 1 file, -5/+0)
  Since both cpuidle_enabled() and cpuidle_select() are only called by cpuidle_idle_call(), it is not really useful to keep them separate, and combining them will help to avoid complicating cpuidle_idle_call() even further if governors are changed to return error codes sometimes. This code modification shouldn't lead to any functional changes.
  Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* sched/idle: Reorganize the idle loop (Daniel Lezcano, 2014-03-11; 1 file, -0/+2)
  Now that we have the main cpuidle function in idle.c, move some code from the idle main loop to this function for the sake of clarity. That removes the if/then/else indentation that was difficult to follow when looking at the code. This patch does not change the current behavior.
  Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: tglx@linutronix.de Cc: rjw@rjwysocki.net Cc: preeti@linux.vnet.ibm.com Link: http://lkml.kernel.org/r/1393832934-11625-3-git-send-email-daniel.lezcano@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
* cpuidle/idle: Move the cpuidle_idle_call function to idle.c (Daniel Lezcano, 2014-03-11; 1 file, -2/+0)
  The cpuidle_idle_call function does nothing more than call the three individual functions and is no longer used by any arch-specific code, only by the cpuidle framework code. We can move this function into the idle task code to ensure better proximity to the scheduler code.
  Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: rjw@rjwysocki.net Cc: preeti@linux.vnet.ibm.com Link: http://lkml.kernel.org/r/1393832934-11625-2-git-send-email-daniel.lezcano@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
* idle/cpuidle: Split cpuidle_idle_call main function into smaller functions (Daniel Lezcano, 2014-03-11; 1 file, -0/+19)
  In order to allow better integration between the cpuidle framework and the scheduler, reducing the distance between these two sub-components will facilitate this integration by moving part of the cpuidle code into the idle task file; and, because idle.c is in the sched directory, we have access to the scheduler's private structures. This patch splits the cpuidle_idle_call main entry function into three calls to a newly added API:
   1. select the idle state
   2. enter the idle state
   3. reflect the idle state
  The cpuidle_idle_call function calls these three functions to implement the main idle entry function.
  Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: rjw@rjwysocki.net Cc: preeti@linux.vnet.ibm.com Link: http://lkml.kernel.org/r/1393832934-11625-1-git-send-email-daniel.lezcano@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
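  The resulting select/enter/reflect pattern on the scheduler side looks roughly like this heavily simplified sketch (locking, polling fallbacks, and error handling omitted):

      static void cpuidle_idle_call(void)
      {
              struct cpuidle_device *dev = __this_cpu_read(cpuidle_devices);
              struct cpuidle_driver *drv = cpuidle_get_cpu_driver(dev);
              int next_state, entered_state;

              /* 1. ask the governor for a target state */
              next_state = cpuidle_select(drv, dev);

              /* 2. enter the state through the driver */
              entered_state = cpuidle_enter(drv, dev, next_state);

              /* 3. give the governor feedback about what actually happened */
              cpuidle_reflect(dev, entered_state);
      }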
* cpuidle: remove cpuidle_unregister_governor() (Viresh Kumar, 2013-10-30; 1 file, -6/+0)
  cpuidle_unregister_governor() and cpuidle_replace_governor() aren't used anymore and can be removed. They were used by cpufreq governors earlier, but since the governors can't be compiled as modules any more, these two functions aren't necessary.
  Suggested-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: fix indentation of cpumask (Viresh Kumar, 2013-10-30; 1 file, -1/+1)
  Use tabs for cpumask indentation in struct cpuidle_driver.
  [rjw: Changelog] Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: Add missing forward declarations of structures (Daniel Lezcano, 2013-07-15; 1 file, -0/+2)
  Add missing forward declarations of struct cpuidle_state_kobj and struct cpuidle_driver_kobj in cpuidle.h.
  [rjw: Changelog] Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: Make cpuidle's sysfs directory dynamically allocated (Daniel Lezcano, 2013-07-15; 1 file, -4/+3)
  The cpuidle sysfs code is designed to have a single instance of the per-CPU cpuidle directory. It is not possible to remove the sysfs entry and create it again. This is not a problem with the current code, but future changes will add CPU hotplug support to enable/disable the device, so it will need to remove the sysfs entry like other subsystems do. That won't be possible without this change, because the kobj is a static object which can't be reused for kobj_init_and_add(). Add cpuidle_device_kobj to be allocated dynamically when adding/removing a sysfs entry, which is consistent with cpuidle's other sysfs entries. An added benefit is that the sysfs code is now more self-contained, and the includes needed for sysfs can be moved from cpuidle.h directly into sysfs.c so as to reduce the total number of headers dragged along with cpuidle.h.
  [rjw: Changelog] Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: simplify multiple driver support (Daniel Lezcano, 2013-06-11; 1 file, -3/+3)
  Commit bf4d1b5 (cpuidle: support multiple drivers) introduced support for using multiple cpuidle drivers at the same time. It added a couple of new APIs to register the driver per CPU, but that led to some unnecessary code complexity related to the kernel config options deciding whether or not the multiple driver support is enabled. The code has to work as it did before when the multiple driver support is not enabled, and the multiple driver support has to be compatible with the previously existing API. Remove the new API, not used by any driver in the tree yet (but needed for the HMP cpuidle drivers that will be submitted soon), and add a new cpumask pointer to the cpuidle driver structure that will point to the mask of CPUs handled by the given driver. That will allow the cpuidle_[un]register_driver() API to be used for the multiple driver support along with the cpuidle_[un]register() functions added recently.
  [rjw: Changelog] Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux (Linus Torvalds, 2013-05-11; 1 file, -1/+1)
  Pull idle update from Len Brown: "Add support for new Haswell-ULT CPU idle power states"
  * 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux:
      intel_idle: initial C8, C9, C10 support
      tools/power turbostat: display C8, C9, C10 residency
  * intel_idle: initial C8, C9, C10 support (Len Brown, 2013-04-17; 1 file, -1/+1)
    Allow intel_idle and cpuidle to utilize C8, C9, C10 when they are present on... "Fourth Generation Intel(R) Core(TM) Processors", which are based on Intel(R) microarchitecture code name Haswell.
    Signed-off-by: Len Brown <len.brown@intel.com>
* cpuidle: make a single register function for all (Daniel Lezcano, 2013-04-23; 1 file, -2/+7)
  The usual scheme to initialize a cpuidle driver on an SMP system is:

      cpuidle_register_driver(drv);
      for_each_possible_cpu(cpu) {
              device = &per_cpu(cpuidle_dev, cpu);
              cpuidle_register_device(device);
      }

  This code is duplicated in each cpuidle driver. On UP systems, it is done this way:

      cpuidle_register_driver(drv);
      device = &per_cpu(cpuidle_dev, cpu);
      cpuidle_register_device(device);

  On UP, the macro 'for_each_cpu' does one iteration:

      #define for_each_cpu(cpu, mask) \
              for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask)

  Hence, the initialization loop is the same for UP as for SMP. Besides, we saw different bugs, mis-initializations and unchecked return codes in the different drivers; the code is duplicated, including its bugs. After fixing all of these, it appears the initialization pattern is the same for everyone. Please note, some drivers are doing dev->state_count = drv->state_count. This is not necessary because it is done by the cpuidle_enable_device function in the cpuidle framework. This holds as long as all the devices have the same states; otherwise, the 'low level' API should be used instead, with the specific initialization for the driver. Let's add a wrapper function doing this initialization with a cpumask parameter for the coupled idle states and use it for all the drivers. That will save a lot of LOC, consolidate the code, and allow future modifications to be made in a single place. Another benefit is the consolidation of the cpuidle_device variable, which is now in the cpuidle framework and no longer spread across the different arch-specific drivers.
  Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
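  The wrapper described above ends up looking roughly like the sketch below (error unwinding trimmed; the per-cpu cpuidle_dev array is the one later declared in cpuidle.h):

      int cpuidle_register(struct cpuidle_driver *drv,
                           const struct cpumask *const coupled_cpus)
      {
              struct cpuidle_device *device;
              int ret, cpu;

              ret = cpuidle_register_driver(drv);
              if (ret)
                      return ret;

              for_each_possible_cpu(cpu) {
                      device = &per_cpu(cpuidle_dev, cpu);
                      device->cpu = cpu;
      #ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED
                      /* optional: tie the coupled states to this mask */
                      if (coupled_cpus)
                              device->coupled_cpus = *coupled_cpus;
      #endif
                      ret = cpuidle_register_device(device);
                      if (ret)
                              break;  /* the real code also unregisters here */
              }

              return ret;
      }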
* cpuidle: remove en_core_tk_irqen flag (Daniel Lezcano, 2013-04-23; 1 file, -11/+0)
  The en_core_tk_irqen flag is set in all the cpuidle drivers, which means it is not necessary to specify this flag. Remove the flag and the code related to it.
  Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Kevin Hilman <khilman@linaro.org> # for mach-omap2/* Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: initialize the broadcast timer framework (Daniel Lezcano, 2013-04-01; 1 file, -0/+2)
  Commit 89878baa73f0f1c679355006bd8632e5d78f96c2 introduced the CPUIDLE_FLAG_TIMER_STOP flag, with which we specify that a given idle state stops the local timer. Now use this flag to check at init time whether one state will need the broadcast timer and, in that case, set up the broadcast timer framework. That prevents multiple code duplication in the drivers.
  Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: handle clockevent notify from the cpuidle framework (Daniel Lezcano, 2013-04-01; 1 file, -0/+1)
  When a cpu enters a deep idle state, the local timers are stopped and the time framework falls back to the timer device used as a broadcast timer. The different cpuidle drivers are calling clockevents_notify ENTER/EXIT when the idle state stops the local timer. Add a new flag CPUIDLE_FLAG_TIMER_STOP which can be set by the cpuidle drivers. If the flag is set, the cpuidle core code takes care of the notification on behalf of the driver to avoid pointless code duplication.
  Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
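  Conceptually, the core-side handling looks like the sketch below; the wrapper name is made up for illustration, and the era-appropriate clockevents_notify() calls are the ones the individual drivers used to issue themselves:

      static int cpuidle_enter_maybe_broadcast(struct cpuidle_driver *drv,
                                               struct cpuidle_device *dev,
                                               int index)
      {
              bool broadcast = drv->states[index].flags & CPUIDLE_FLAG_TIMER_STOP;
              int entered_state;

              if (broadcast)
                      clockevents_notify(CLOCKEVENTS_NOTIFY_BROADCAST_ENTER,
                                         &dev->cpu);

              entered_state = cpuidle_enter_state(dev, drv, index);

              if (broadcast)
                      clockevents_notify(CLOCKEVENTS_NOTIFY_BROADCAST_EXIT,
                                         &dev->cpu);

              return entered_state;
      }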
* cpuidle: remove vestigial definition of cpuidle_state_usage.driver_data (Len Brown, 2013-02-11; 1 file, -22/+0)
  This field is no longer used.
  Signed-off-by: Len Brown <len.brown@intel.com> Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
* cpuidle: remove the power_specified field in the driver (Daniel Lezcano, 2013-01-15; 1 file, -1/+1)
  We realized that the power usage field is never filled, and when it is filled for tegra, the power_specified flag is not set, causing all of these values to be reset when the driver is initialized with set_power_state(). However, the power_specified flag can be simply removed under the assumption that the states are always backward sorted, which is the case with the current code. This change allows the menu governor select function and cpuidle_play_dead() to be simplified. Moreover, the set_power_states() function can be removed as it does not make sense any more. Drop the power_specified flag from struct cpuidle_driver and make the related changes as described above. As a consequence, this also fixes the bug where, on systems with dynamic C-states, the power fields are not initialized.
  [rjw: Changelog] References: https://bugzilla.kernel.org/show_bug.cgi?id=42870 References: https://bugzilla.kernel.org/show_bug.cgi?id=43349 References: https://lkml.org/lkml/2012/10/16/518 Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: support multiple drivers (Daniel Lezcano, 2012-11-15; 1 file, -2/+5)
  With the tegra3 and the big.LITTLE [1] new architectures, several cpus with different characteristics (latencies and states) can co-exist on the system. The cpuidle framework has the limitation of handling only identical cpus. This patch removes this limitation by introducing multiple driver support for cpuidle. This option is configurable at compile time and should be enabled for the architectures mentioned above, so there is no impact on the other platforms if the option is disabled. The option defaults to 'n'. Note the multiple driver support is also compatible with the existing drivers: even if just one driver is needed, all the cpus will be tied to this driver using an extra small chunk of processor memory. The multiple driver support uses a per-cpu driver pointer instead of a global variable, and the accesses to this variable are done from a cpu context. In order to keep compatibility with the existing drivers, the functions 'cpuidle_register_driver' and 'cpuidle_unregister_driver' will register the specified driver for all the cpus. The semantics of the output of /sys/devices/system/cpu/cpuidle/current_driver remain the same, except the driver name will be related to the current cpu. The /sys/devices/system/cpu/cpu[0-9]/cpuidle/driver/name files are added, allowing the per-cpu driver name to be read.
  [1] http://lwn.net/Articles/481055/ Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Peter De Schrijver <pdeschrijver@nvidia.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
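  Internally this boils down to a per-cpu pointer with a config-dependent accessor, roughly along these lines (a simplified sketch of the driver-management code after the change):

      #ifdef CONFIG_CPU_IDLE_MULTIPLE_DRIVERS

      static DEFINE_PER_CPU(struct cpuidle_driver *, cpuidle_drivers);

      static struct cpuidle_driver *__cpuidle_get_cpu_driver(int cpu)
      {
              return per_cpu(cpuidle_drivers, cpu);
      }

      #else

      static struct cpuidle_driver *cpuidle_curr_driver;

      static struct cpuidle_driver *__cpuidle_get_cpu_driver(int cpu)
      {
              return cpuidle_curr_driver;     /* one driver for every cpu */
      }

      #endif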
* cpuidle: move driver's refcount to cpuidle (Daniel Lezcano, 2012-11-15; 1 file, -0/+1)
  We want to support different cpuidle drivers co-existing together. In this case we should move the refcount to the cpuidle_driver structure to handle several drivers at a time.
  Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Peter De Schrijver <pdeschrijver@nvidia.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle / sysfs: move structure declaration into the sysfs.c file (Daniel Lezcano, 2012-11-15; 1 file, -7/+0)
  The structure cpuidle_state_kobj is not used anywhere except in the sysfs.c file. The definition of this structure is not needed in the cpuidle header file. This patch moves it to the sysfs.c file in order to encapsulate the code a bit more.
  Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* ARM: omap: allow building omap44xx without SMP (Arnd Bergmann, 2012-08-23; 1 file, -0/+4)
  The new omap4 cpuidle implementation currently requires ARCH_NEEDS_CPU_IDLE_COUPLED, which only works on SMP. This patch makes it possible to build a non-SMP kernel for that platform. This is not normally desired for end-users but can be useful for testing. Without this patch, building rand-0y2jSKT results in:

      drivers/cpuidle/coupled.c: In function 'cpuidle_coupled_poke':
      drivers/cpuidle/coupled.c:317:3: error: implicit declaration of function '__smp_call_function_single' [-Werror=implicit-function-declaration]

  It's not clear if this patch is the best solution for the problem at hand. I have made sure that we can now build the kernel in all configurations, but that does not mean it will actually work on an OMAP44xx.
  Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Cc: Kevin Hilman <khilman@ti.com> Cc: Tony Lindgren <tony@atomide.com>
* Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux (Linus Torvalds, 2012-07-26; 1 file, -0/+11)
  Pull ACPI & power management update from Len Brown:
   "Re-write of the turbostat tool. Lower overhead was necessary for measuring very large systems when they are very idle.
    IVB support in intel_idle. It's what I run on my IVB, others should be able to also :-)
    ACPICA core update. We have found some bugs due to divergence between Linux and the upstream ACPICA base. Most of these patches are to reduce that divergence to reduce the risk of future bugs.
    Some cpuidle updates, mostly for non-Intel. More will be coming, as they depend on this part.
    Some thermal management changes needed by non-ACPI systems.
    Some _OST (OS Status Indication) updates for ACPI hot-plug."
  * 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux: (51 commits)
      Thermal: Documentation update
      Thermal: Add Hysteresis attributes
      Thermal: Make Thermal trip points writeable
      ACPI/AC: prevent OOPS on some boxes due to missing check power_supply_register() return value check
      tools/power: turbostat: fix large c1% issue
      tools/power: turbostat v2 - re-write for efficiency
      ACPICA: Update to version 20120711
      ACPICA: AcpiSrc: Fix some translation issues for Linux conversion
      ACPICA: Update header files copyrights to 2012
      ACPICA: Add new ACPI table load/unload external interfaces
      ACPICA: Split file: tbxface.c -> tbxfload.c
      ACPICA: Add PCC address space to space ID decode function
      ACPICA: Fix some comment fields
      ACPICA: Table manager: deploy new firmware error/warning interfaces
      ACPICA: Add new interfaces for BIOS(firmware) errors and warnings
      ACPICA: Split exception code utilities to a new file, utexcep.c
      ACPI: acpi_pad: tune round_robin_time
      ACPICA: Update to version 20120620
      ACPICA: Add support for implicit notify on multiple devices
      ACPICA: Update comments; no functional change
      ...
  * cpuidle: coupled: add parallel barrier function (Colin Cross, 2012-06-02; 1 file, -0/+4)
    Adds cpuidle_coupled_parallel_barrier, which can be used by coupled cpuidle state enter functions to handle resynchronization after determining if any cpu needs to abort. The normal use case will be:

        static bool abort_flag;
        static atomic_t abort_barrier;

        int arch_cpuidle_enter(struct cpuidle_device *dev, ...)
        {
                if (arch_turn_off_irq_controller()) {
                        /* returns an error if an irq is pending and would
                           be lost if idle continued and turned off power */
                        abort_flag = true;
                }

                cpuidle_coupled_parallel_barrier(dev, &abort_barrier);

                if (abort_flag) {
                        /* One of the cpus didn't turn off its irq controller */
                        arch_turn_on_irq_controller();
                        return -EINTR;
                }

                /* continue with idle */
                ...
        }

    This will cause all cpus to abort idle together if one of them needs to abort.
    Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Reviewed-by: Kevin Hilman <khilman@ti.com> Tested-by: Kevin Hilman <khilman@ti.com> Signed-off-by: Colin Cross <ccross@android.com> Signed-off-by: Len Brown <len.brown@intel.com>
  * cpuidle: add support for states that affect multiple cpus (Colin Cross, 2012-06-02; 1 file, -0/+7)
    On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the cpus cannot be independently powered down, either due to sequencing restrictions (on Tegra 2, cpu 0 must be the last to power down), or due to HW bugs (on OMAP4460, a cpu powering up will corrupt the gic state unless the other cpu runs a work around). Each cpu has a power state that it can enter without coordinating with the other cpu (usually Wait For Interrupt, or WFI), and one or more "coupled" power states that affect blocks shared between the cpus (L2 cache, interrupt controller, and sometimes the whole SoC). Entering a coupled power state must be tightly controlled on both cpus.
    The easiest solution to implementing coupled cpu power states is to hotplug all but one cpu whenever possible, usually using a cpufreq governor that looks at cpu load to determine when to enable the secondary cpus. This causes problems, as hotplug is an expensive operation, so the number of hotplug transitions must be minimized, leading to very slow response to loads, often on the order of seconds.
    This file implements an alternative solution, where each cpu will wait in the WFI state until all cpus are ready to enter a coupled state, at which point the coupled state function will be called on all cpus at approximately the same time. Once all cpus are ready to enter idle, they are woken by an smp cross call. At this point, there is a chance that one of the cpus will find work to do, and choose not to enter idle. A final pass is needed to guarantee that all cpus will call the power state enter function at the same time. During this pass, each cpu will increment the ready counter, and continue once the ready counter matches the number of online coupled cpus. If any cpu exits idle, the other cpus will decrement their counter and retry.
    To use coupled cpuidle states, a cpuidle driver must (a sketch follows this entry):
     - Set struct cpuidle_device.coupled_cpus to the mask of all coupled cpus, usually the same as cpu_possible_mask if all cpus are part of the same cluster. The coupled_cpus mask must be set in the struct cpuidle_device for each cpu.
     - Set struct cpuidle_device.safe_state to a state that is not a coupled state. This is usually WFI.
     - Set CPUIDLE_FLAG_COUPLED in struct cpuidle_state.flags for each state that affects multiple cpus.
     - Provide a struct cpuidle_state.enter function for each state that affects multiple cpus. This function is guaranteed to be called on all cpus at approximately the same time. The driver should ensure that the cpus all abort together if any cpu tries to abort once the function is called.
    update1: cpuidle: coupled: fix count of online cpus. online_count was never incremented on boot, and was also counting cpus that were not part of the coupled set. Fix both issues by introducing a new function that counts online coupled cpus, and call it from register as well as the hotplug notifier.
    update2: cpuidle: coupled: fix decrementing ready count. cpuidle_coupled_set_not_ready sometimes refuses to decrement the ready count in order to prevent a race condition. This makes it unsuitable for use when finished with idle. Add a new function cpuidle_coupled_set_done that decrements both the ready count and waiting count, and call it after idle is complete.
    Cc: Amit Kucheria <amit.kucheria@linaro.org> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: Trinabh Gupta <g.trinabh@gmail.com> Cc: Deepthi Dharwar <deepthi@linux.vnet.ibm.com> Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Reviewed-by: Kevin Hilman <khilman@ti.com> Tested-by: Kevin Hilman <khilman@ti.com> Signed-off-by: Colin Cross <ccross@android.com> Acked-by: Rafael J. Wysocki <rjw@sisk.pl> Signed-off-by: Len Brown <len.brown@intel.com>
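    A minimal sketch of driver-side setup following the rules above; the function and state names are hypothetical, and the coupled_cpus/safe_state_index fields only exist when CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED is enabled:

        /* The coupled state itself carries CPUIDLE_FLAG_COUPLED */
        static struct cpuidle_state example_coupled_state = {
                .name  = "C2-coupled",
                .flags = CPUIDLE_FLAG_COUPLED,
                .enter = example_enter_coupled, /* runs on all cpus together */
        };

        static int __init example_coupled_init(void)
        {
                int cpu;

                for_each_possible_cpu(cpu) {
                        struct cpuidle_device *dev = &per_cpu(cpuidle_dev, cpu);

                        dev->cpu = cpu;
                        /* every cpu in the cluster takes part in the coupled state */
                        cpumask_copy(&dev->coupled_cpus, cpu_possible_mask);
                        /* index of a non-coupled state, e.g. plain WFI */
                        dev->safe_state_index = 0;
                        cpuidle_register_device(dev);
                }
                return 0;
        }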
* Merge branch 'pm-domains' (Rafael J. Wysocki, 2012-07-19; 1 file, -0/+1)
  * pm-domains:
      PM / Domains: Fix build warning for CONFIG_PM_RUNTIME unset
      PM / Domains: Replace plain integer with NULL pointer in domain.c file
      PM / Domains: Add missing static storage class specifier in domain.c file
      PM / Domains: Allow device callbacks to be added at any time
      PM / Domains: Add device domain data reference counter
      PM / Domains: Add preliminary support for cpuidle, v2
      PM / Domains: Do not stop devices after restoring their states
      PM / Domains: Use subsystem runtime suspend/resume callbacks by default
  * PM / Domains: Add preliminary support for cpuidle, v2 (Rafael J. Wysocki, 2012-07-03; 1 file, -0/+1)
    On some systems there are CPU cores located in the same power domains as I/O devices. Then, power can only be removed from the domain if all I/O devices in it are not in use and the CPU core is idle. Add preliminary support for that to the generic PM domains framework.
    First, the platform is expected to provide a cpuidle driver with one extra state designated for use with the generic PM domains code. This state should be initially disabled and its exit_latency value should be set to whatever time is needed to bring up the CPU core itself after restoring power to it, not including the domain's power on latency. Its .enter() callback should point to a procedure that will remove power from the domain containing the CPU core at the end of the CPU power transition. The remaining characteristics of the extra cpuidle state, referred to as the "domain" cpuidle state below (e.g. power usage, target residency), should be populated in accordance with the properties of the hardware.
    Next, the platform should execute genpd_attach_cpuidle() on the PM domain containing the CPU core. That will cause the generic PM domains framework to treat that domain in a special way such that:
     - When all devices in the domain have been suspended and it is about to be turned off, the states of the devices will be saved, but power will not be removed from the domain. Instead, the "domain" cpuidle state will be enabled so that power can be removed from the domain when the CPU core is idle and the state has been chosen as the target by the cpuidle governor.
     - When the first I/O device in the domain is resumed and __pm_genpd_poweron() is called for the first time after power has been removed from the domain, the "domain" cpuidle state will be disabled to avoid subsequent surprise power removals via cpuidle.
    The effective exit_latency value of the "domain" cpuidle state depends on the time needed to bring up the CPU core itself after restoring power to it as well as on the power on latency of the domain containing the CPU core. Thus the "domain" cpuidle state's exit_latency has to be recomputed every time the domain's power on latency is updated, which may happen every time power is restored to the domain, if the measured power on latency is greater than the latency stored in the corresponding generic_pm_domain structure.
    Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Reviewed-by: Kevin Hilman <khilman@ti.com>
* PM / cpuidle: System resume hang fix with cpuidle (Preeti U Murthy, 2012-07-10; 1 file, -0/+4)
  On certain BIOSes, resume hangs if cpus are allowed to enter idle states during suspend [1]. This was fixed in the acpi idle driver [2], but the intel_idle driver does not have this fix. Thus instead of replicating the fix in both idle drivers, or in more platform-specific idle drivers if needed, the more general cpuidle infrastructure could handle this.
  A suspend callback in cpuidle_driver could handle this fix. But a cpuidle_driver provides only basic functionalities like platform idle state detection capability and mechanisms to support entry and exit into CPU idle states. All other cpuidle functions are found in the cpuidle generic infrastructure, for the good reason that all cpuidle drivers, irrespective of their platforms, will support these functions. One option therefore would be to register a suspend callback in cpuidle which handles this fix. This could be called through a PM_SUSPEND_PREPARE notifier, but this is too generic a notifier for a driver to handle. Also, ideally the job of cpuidle is not to handle side effects of suspend. It should expose the interfaces which "handle cpuidle 'during' suspend" or any other operation, which the subsystems call during that respective operation.
  The fix demands that during suspend, no cpus should be allowed to enter deep C-states. The interface cpuidle_uninstall_idle_handler() in cpuidle ensures that. Not just that, it also kicks all the cpus which are already in idle out of their idle states, which was being done during cpu hotplug through CPU_DYING_FROZEN callbacks. Now the question arises about when during suspend cpuidle_uninstall_idle_handler() should be called. Since we are dealing with drivers it seems best to call this function during dpm_suspend(). Delaying the call till dpm_suspend_noirq() does no harm, as long as it is before cpu_hotplug_begin(), to avoid race conditions with cpu hotplug operations. In dpm_suspend_noirq(), it would be wise to place this call before suspend_device_irqs() to avoid ugly interactions with the same. Analogously, during resume.
  References: [1] https://bugs.launchpad.net/ubuntu/+source/linux/+bug/674075. [2] http://marc.info/?l=linux-pm&m=133958534231884&w=2 Reported-and-tested-by: Dave Hansen <dave@linux.vnet.ibm.com> Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com> Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
* ACPI: intel_idle: break dependency between modules (Daniel Lezcano, 2012-07-05; 1 file, -7/+0)
  When the system is booted with some cpus offline, the idle driver is not initialized. When a cpu is set online, the acpi code calls the intel_idle init function. Unfortunately this introduces a dependency between intel_idle and acpi. This patch is intended to remove this dependency by using the notifier of intel_idle. This patch has the benefit of encapsulating the intel_idle driver and removing some exported functions.
  Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
* PM / cpuidle: Add driver reference counter (Rafael J. Wysocki, 2012-07-03; 1 file, -1/+5)
  Add a reference counter for the cpuidle driver, so that it can't be unregistered when it is in use.
  Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
* cpuidle: move field disable from per-driver to per-cpu (ShuoX Liu, 2012-07-03; 1 file, -1/+1)
  Andrew J. Schorr raises a question: when he changes the disable setting on a single CPU, it affects all the other CPUs. Basically, currently, the disable field is per-driver instead of per-cpu, so all the C-states of the same driver are shared by all CPUs in the same machine. The patch changes the 'disable' field to per-cpu, so we can set this separately for each cpu.
  Signed-off-by: ShuoX Liu <shuox.liu@intel.com> Reported-by: Andrew J. Schorr <aschorr@telemetry-investments.com> Reviewed-by: Yanmin Zhang <yanmin_zhang@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
* cpuidle: power_usage should be declared signed integer (Boris Ostrovsky, 2012-03-30; 1 file, -1/+1)
  power_usage is always assigned a negative value and should be declared a signed integer.
  Signed-off-by: Boris Ostrovsky <boris.ostrovsky@amd.com> Signed-off-by: Len Brown <len.brown@intel.com>
* idle, x86: Allow off-lined CPU to enter deeper C states (Boris Ostrovsky, 2012-03-30; 1 file, -0/+5)
  Currently when a CPU is off-lined it enters either MWAIT-based idle or, if MWAIT is not desired or supported, HLT-based idle (which places the processor in C1 state). This patch allows processors without MWAIT support to stay in states deeper than C1.
  Signed-off-by: Boris Ostrovsky <boris.ostrovsky@amd.com> Signed-off-by: Len Brown <len.brown@intel.com>
* cpuidle: remove unused 'governor_data' field (Daniel Lezcano, 2012-03-30; 1 file, -1/+0)
  As far as I can see, this field is never used in the code.
  Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Len Brown <len.brown@intel.com>
* cpuidle: remove useless array definition in cpuidle_structure (Daniel Lezcano, 2012-03-30; 1 file, -1/+1)
  All the module names are ro-data; a name is never copied into the array, e.g.:

      static struct cpuidle_driver intel_idle_driver = {
              .name  = "intel_idle",
              .owner = THIS_MODULE,
      };

  It is safe to assign the pointer of this ro-data to a const char *. This way we save 12 bytes.
  Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com> Tested-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com> Signed-off-by: Len Brown <len.brown@intel.com>