path: root/arch/sh/include
Commit message  (Author, Date, Files, Lines changed)
*   Merge branch 'master' into sh/hw-breakpoints  (Paul Mundt, 2009-12-08, 34 files, -587/+461)
|\
    Conflict between FPU thread flag migration and debug thread flag addition.

    Conflicts:
        arch/sh/include/asm/thread_info.h
        arch/sh/include/asm/ubc.h
        arch/sh/kernel/process_32.c
| * sh: mach-ecovec24: Remove un-defined settings for VPU  (Kuninori Morimoto, 2009-12-04, 1 file, -1/+0)
    The VPU setting does not need to be changed from its default, and the
    current setting value is not defined on SH7724.

    Reported-by: Goda Yusuke <goda.yusuke@renesas.com>
    Signed-off-by: Kuninori Morimoto <morimoto.kuninori@renesas.com>
    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| * sh: Break out SuperH PFC code  (Magnus Damm, 2009-11-30, 1 file, -81/+1)
    This file breaks out the SuperH PFC code from arch/sh/kernel/gpio.c +
    arch/sh/include/asm/gpio.h to drivers/sh/pfc.c + include/linux/sh_pfc.h.
    Similar to the INTC stuff. The non-SuperH specific file location makes it
    possible to share the code between multiple architectures.

    Signed-off-by: Magnus Damm <damm@opensource.se>
    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| * sh: Move KEYSC header file  (Magnus Damm, 2009-11-30, 1 file, -14/+0)
    This patch moves the KEYSC header file from the SuperH specific asm
    directory to a place where it can be shared by multiple architectures.

    Signed-off-by: Magnus Damm <damm@opensource.se>
    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| * sh: mach-ecovec24: modify address map  (Kuninori Morimoto, 2009-11-30, 1 file, -1/+1)
    The ecovec24 board expects address map 2 instead of map 1.

    Signed-off-by: Mizukawa Tatsuo <mizukawa.tatsuo@renesas.com>
    Signed-off-by: Kuninori Morimoto <morimoto.kuninori@renesas.com>
    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| * sh: Fix up the FPU emulation build.  (Paul Mundt, 2009-11-25, 1 file, -0/+1)
    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| * sh: Minor optimisations to FPU handling  (Stuart Menefy, 2009-11-24, 3 files, -14/+19)
    A number of small optimisations to FPU handling, in particular:

     - move the task USEDFPU flag from the thread_info flags field (which is
       accessed asynchronously to the thread) to a new status field, which is
       only accessed by the thread itself. This allows locking to be removed
       in most cases, or can be reduced to a preempt_lock(). This mimics the
       i386 behaviour.

     - move the modification of regs->sr and thread_info->status flags out of
       save_fpu() to __unlazy_fpu(). This gives the compiler a better chance
       to optimise things, as well as making save_fpu() symmetrical with
       restore_fpu() and init_fpu().

     - implement prepare_to_copy(), so that when creating a thread, we can
       unlazy the FPU prior to copying the thread data structures.

    Also make sure that the FPU is disabled while in the kernel, in
    particular while booting, and for newly created kernel threads.

    In a very artificial benchmark, the execution time for 2500000 context
    switches was reduced from 50 to 45 seconds.

    Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
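    As a rough illustration of the __unlazy_fpu() change described above (a
    sketch only: the TS_USEDFPU, save_fpu() and release_fpu() names follow
    the description, and the details are approximate rather than the exact
    patch):

        static inline void __unlazy_fpu(struct task_struct *tsk,
                                        struct pt_regs *regs)
        {
                if (task_thread_info(tsk)->status & TS_USEDFPU) {
                        task_thread_info(tsk)->status &= ~TS_USEDFPU;
                        save_fpu(tsk);          /* now a plain register dump */
                        release_fpu(regs);      /* set SR.FD so the FPU traps again */
                }
        }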
| * sh: Improve performance of SH4 versions of copy/clear_user_highpage  (Stuart Menefy, 2009-11-24, 1 file, -1/+7)
    The previous implementation of clear_user_highpage and copy_user_highpage
    checked to see if there was a D-cache aliasing issue between the user and
    kernel mappings of a page, but if there was they always did a flush with
    writeback on the dirtied kernel alias.

    However as we now have the ability to map a page into kernel space with
    the same cache colour as the user mapping, there is no need to write back
    this data. Currently we also invalidate the kernel alias as a precaution,
    however I'm not sure if this is actually required.

    Also correct the definition of FIX_CMAP_END so that the mappings created
    by kmap_coherent() are actually at the correct colour.

    Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| * Merge branch 'master' into sh/st-integration  (Paul Mundt, 2009-11-24, 29 files, -478/+432)
| |\
| | * sh64: Fix up the CONFIG_GENERIC_BUG=n build.  (Paul Mundt, 2009-11-12, 1 file, -4/+0)
    sh64 doesn't use GENERIC_BUG, which presently causes the handle_BUG()
    code to blow up. Fix up the dependencies and get it all building again.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * sh: Use the generic I/O port base for slowdown.  (Paul Mundt, 2009-11-12, 1 file, -9/+3)
    This fixes up the build and behaviour for various configurations, namely
    the CONFIG_32BIT cases where legacy mappings do not exist, as well as the
    sh64 build.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * Merge branch 'sh/stable-updates'  (Paul Mundt, 2009-11-09, 1 file, -1/+1)
| | |\
| | * | sh: mach-se: Convert SE7722 FPGA to dynamic IRQ allocation.  (Paul Mundt, 2009-11-04, 1 file, -9/+2)
    This gets rid of the arbitrary set of vectors used by the SE7722 FPGA
    interrupt controller and switches over to a completely dynamic set. No
    assumptions regarding a contiguous range are made, and the platform
    resources themselves need to be filled in lazily.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * | sh: Add R-standby sleep mode support  (Magnus Damm, 2009-10-30, 1 file, -0/+5)
    Add R-standby specific bits to the SuperH Mobile sleep code.

    Signed-off-by: Magnus Damm <damm@opensource.se>
    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * | sh: Add MMU and Cache handling sleep mode code  (Magnus Damm, 2009-10-30, 1 file, -0/+15)
    Add MMU and cache handling functionality to the SuperH Mobile sleep code.
    The MMU and cache registers are saved and restored. The MMU is disabled
    and the cache is flushed and disabled before entering sleep modes if the
    SUSP_SH_MMU flag is set. This flag should be set in the case of R-standby
    and most likely for future U-standby support as well.

    Signed-off-by: Magnus Damm <damm@opensource.se>
    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * | sh: Keep track of allowed sleep modes  (Magnus Damm, 2009-10-30, 1 file, -0/+3)
    Add code to keep track of supported sleep modes. This is done so that
    only cpuidle modes that are backed by board support code are exported.
    Also, do not allow suspend-to-RAM if the sdram board code is missing.

    Signed-off-by: Magnus Damm <damm@opensource.se>
    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * | sh: Rework SuperH Mobile sleep mode code  (Magnus Damm, 2009-10-30, 1 file, -0/+27)
    Rework the SuperH Mobile sleep code from including board specific code to
    allowing each board to provide pre/post code snippets. These snippets
    should contain sdram management code to enter and leave self-refresh.

    Signed-off-by: Magnus Damm <damm@opensource.se>
    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * | sh: Allow boards to register memory pre/post sleep code  (Magnus Damm, 2009-10-30, 1 file, -0/+4)
    Add code to allow boards to register self-contained functions for going
    to/from self-refresh. At this point the board code is unused. When all
    supported boards have been converted, the new sleep code will make use of
    these functions.

    Signed-off-by: Magnus Damm <damm@opensource.se>
    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * | sh: Add notifier chains for cpu/board code  (Magnus Damm, 2009-10-30, 1 file, -0/+11)
    This patch adds atomic notifier chains for pre/post sleep events. These
    are useful for cpu code and boards that need to save and restore register
    state before and after entering a sleep mode.

    Signed-off-by: Magnus Damm <damm@opensource.se>
    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
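    As a rough sketch of how board code can hook such a chain (the chain,
    event names and callback below are illustrative, not the identifiers the
    patch introduces):

        #include <linux/init.h>
        #include <linux/notifier.h>

        static ATOMIC_NOTIFIER_HEAD(sh_sleep_notifier);    /* hypothetical chain */

        #define SH_SLEEP_EVENT_PRE      1                   /* hypothetical events */
        #define SH_SLEEP_EVENT_POST     2

        static int board_sleep_notify(struct notifier_block *nb,
                                      unsigned long event, void *unused)
        {
                switch (event) {
                case SH_SLEEP_EVENT_PRE:
                        /* save board-specific register state */
                        break;
                case SH_SLEEP_EVENT_POST:
                        /* restore it on resume */
                        break;
                }
                return NOTIFY_DONE;
        }

        static struct notifier_block board_sleep_nb = {
                .notifier_call = board_sleep_notify,
        };

        static int __init board_sleep_init(void)
        {
                return atomic_notifier_chain_register(&sh_sleep_notifier,
                                                      &board_sleep_nb);
        }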
| | * | sh: perf events: Add preliminary support for SH-4A counters.  (Paul Mundt, 2009-10-28, 1 file, -2/+29)
    This adds in preliminary support for the SH-4A performance counters.
    Presently only the first 2 counters are supported, as these are the ones
    of the most interest to the perf tool and end users. Counter chaining is
    not presently handled, so these are simply implemented as 32-bit
    counters.

    This also establishes a perf event support framework for other hardware
    counters, which the existing SH-4 oprofile code will migrate over to as
    the SH-4A support evolves.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * | sh: Fix up dma_is_consistent().  (Paul Mundt, 2009-10-27, 1 file, -0/+5)
    This fixes up the dma_is_consistent() definition for the various
    coherence options.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * | sh: Revamp PCI DMA coherence Kconfig bits.  (Paul Mundt, 2009-10-27, 1 file, -13/+7)
    Leaving this configurable caused more trouble than it was ever worth, so
    just make it explicit. Boards that are verified one way or the other can
    fix up their selects accordingly. We presently default to non-coherent
    for most platforms.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * | sh: fix watchdog timer for sh7780/sh7785  (Valentin R Sitsikov, 2009-10-27, 2 files, -1/+71)
    Signed-off-by: Valentin Sitdikov <valentin.sitdikov@siemens.com>
    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * | sh: Add dma-mapping support for dma_alloc/free_coherent() overrides.  (Paul Mundt, 2009-10-26, 1 file, -8/+40)
    This moves the current dma_alloc/free_coherent() calls to a generic
    variant and plugs them in for the nommu default. Other variants can
    override the defaults in the dma mapping ops directly.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * | sh: Convert to asm-generic/dma-mapping-common.h  (Paul Mundt, 2009-10-20, 2 files, -183/+27)
    This converts the old DMA mapping support to the new generic
    dma-mapping-common.h abstraction.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * | sh: Fix up smp_mb__xxx() memory barriers for SH-4A SMP.  (Paul Mundt, 2009-10-18, 2 files, -7/+6)
    In the past these were simply wrapping to barrier() which was sufficient
    on SH SMP platforms predating SH-4A. Unfortunately due to ll/sc semantics
    an explicit synco is needed in these cases, which is sorted for us by
    just switching these over to smp_mb(). smp_mb() also has the benefit of
    being wrapped to barrier() in the UP and non-SH4A cases, so old behaviour
    is maintained for those parts.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
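    A minimal before/after sketch of the change being described, using the
    usual bitops barrier macros (illustrative, not a verbatim diff of the
    patch):

        /* before: a plain compiler barrier, fine only on pre-SH-4A SMP
         *   #define smp_mb__before_clear_bit()   barrier()
         *   #define smp_mb__after_clear_bit()    barrier()
         */

        /* after: smp_mb() emits a synco on SH-4A SMP and still collapses
         * to barrier() on UP and non-SH-4A builds */
        #define smp_mb__before_clear_bit()      smp_mb()
        #define smp_mb__after_clear_bit()       smp_mb()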
| | * | sh: Convert to asm-generic/irqflags.h.  (Paul Mundt, 2009-10-17, 5 files, -212/+58)
    This simplifies the irqflags support by switching over to the asm-generic
    version. The necessary support functions are brought out-of-line for both
    SHcompact and SHmedia instruction sets.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * | sh: Kill off legacy UBC wakeup cruft.  (Paul Mundt, 2009-10-16, 1 file, -11/+0)
    This code was added for some ancient SH-4 solution engines with peculiar
    boot ROMs that did silly things to the UBC MSTP bits. None of these have
    been in the wild for years, and these days the clock framework wraps up
    the MSTP bits, meaning that the UBC code is one of the few interfaces
    that is stomping MSTP bits underneath the clock framework. At this point
    the risks far outweigh any benefit this code provided, so just kill it
    off.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * | sh: Support SCHED_MC for SH-X3 multi-cores.  (Paul Mundt, 2009-10-16, 1 file, -0/+8)
    This enables SCHED_MC support for SH-X3 multi-cores. Presently this is
    just a simple wrapper around the possible map, but this allows for tying
    in support for some of the more exotic NUMA clusters where we can
    actually do something with the topology.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * | sh: Idle loop chainsawing for SMP-based light sleep.  (Paul Mundt, 2009-10-16, 1 file, -0/+4)
    This does a bit of chainsawing of the idle loop code to get light sleep
    working on SMP. Previously this was forcing secondary CPUs into sleep
    mode with them not coming back if they didn't have their own local
    timers.

    Given that we use clockevents broadcasting by default, the CPU managing
    the clockevents can't have IRQs disabled before entering its sleep state.

    This unfortunately leaves us with the age-old need_resched() race in
    between local_irq_enable() and cpu_sleep(), but at present this is
    unavoidable. After some more experimentation it may be possible to layer
    on SR.BL bit manipulation over top of this scheme to inhibit the race
    condition, but given the current potential for missing wakeups, this is
    left as a future exercise.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * | sh: Count NMIs in irq_cpustat_t.  (Paul Mundt, 2009-10-14, 1 file, -3/+10)
    This plugs in support for NMI counting per-CPU via irq_cpustat_t.
    Modelled after the x86 implementation.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
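    The shape this usually takes, modelled on the x86 definition the message
    refers to (field names are illustrative, not the exact SH header):

        typedef struct {
                unsigned int __softirq_pending;
                unsigned int __nmi_count;       /* bumped from the NMI handler */
        } ____cacheline_aligned irq_cpustat_t;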
| | * | sh: TS_RESTORE_SIGMASK conversion.  (Paul Mundt, 2009-10-14, 1 file, -4/+22)
    Replace TIF_RESTORE_SIGMASK with TS_RESTORE_SIGMASK and define our own
    set_restore_sigmask() function. This saves the costly SMP-safe set_bit
    operation, which we do not need for the sigmask flag since TIF_SIGPENDING
    always has to be set too. Based on the x86 and powerpc change.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
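    Sketched after the x86/powerpc versions the message cites (the flag value
    is illustrative): because thread_info->status is only touched by the
    owning thread, setting TS_RESTORE_SIGMASK needs no atomic operation; only
    TIF_SIGPENDING still does.

        #define TS_RESTORE_SIGMASK      0x0008          /* illustrative bit value */

        static inline void set_restore_sigmask(void)
        {
                struct thread_info *ti = current_thread_info();

                ti->status |= TS_RESTORE_SIGMASK;       /* thread-local, plain OR */
                set_bit(TIF_SIGPENDING, (unsigned long *)&ti->flags); /* still atomic */
        }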
| | * | sh: Provide CALLER_ADDRx definitions even when ftrace is disabled.  (Paul Mundt, 2009-10-14, 1 file, -1/+5)
    Despite being located in the ftrace header, the CALLER_ADDRx definitions
    are used by generic code. As such, we have to provide them generically,
    and given that there is no real dependence on ftrace in the first place,
    the definitions can just be moved out.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * | sh: Tidy up the dwarf module helpers.  (Paul Mundt, 2009-10-13, 1 file, -2/+9)
    This enables us to build the dwarf unwinder both with modules enabled and
    disabled in addition to reducing code size in the latter case. The
    helpers are also consolidated, and modified to resemble the BUG module
    helpers.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * | sh: Generalize CALLER_ADDRx support.  (Paul Mundt, 2009-10-13, 2 files, -42/+13)
    This splits out the unwinder implementation and adds a new
    return_address() abstraction modelled after the ARM code. The DWARF
    unwinder is tied in to this, returning NULL otherwise in the case of
    being unable to support arbitrary depths.

    This enables us to get correct behaviour with the unwinder enabled, as
    well as disabling the arbitrary depth support when frame pointers are
    enabled, as arbitrary depths with __builtin_return_address() are not
    supported regardless.

    With this abstraction it's also possible to layer on a simplified
    implementation with frame pointers in the event that the unwinder isn't
    enabled, although this is left as a future exercise.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
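    Roughly, the abstraction described above ends up looking like this sketch
    (modelled on the ARM-style scheme the message cites, not the exact SH
    header):

        #ifdef CONFIG_DWARF_UNWINDER
        extern unsigned long return_address(unsigned int depth); /* walks DWARF frames */
        #else
        static inline unsigned long return_address(unsigned int depth)
        {
                return 0;       /* arbitrary depths unsupported without the unwinder */
        }
        #endif

        /* CALLER_ADDR0 can always use the compiler builtin; deeper levels go
         * through return_address(), since __builtin_return_address(n > 0) is
         * unreliable on SH. */
        #define CALLER_ADDR0 ((unsigned long)__builtin_return_address(0))
        #define CALLER_ADDR1 ((unsigned long)return_address(1))
        #define CALLER_ADDR2 ((unsigned long)return_address(2))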
| | * | Merge branch 'sh/ftrace' of git://github.com/mfleming/linux-2.6  (Paul Mundt, 2009-10-13, 1 file, -0/+47)
| | |\ \
| | | * | sh: tracing: Use the DWARF unwinder for CALLER_ADDRx  (Matt Fleming, 2009-10-11, 1 file, -0/+47)
    The major reason for implementing the DWARF unwinder in the first place
    was so that we could stop using __builtin_return_address(n), which
    doesn't work on SH for n > 0.

    Signed-off-by: Matt Fleming <matt@console-pimps.org>
| | * | | Merge branch 'sh/dwarf-unwinder'  (Paul Mundt, 2009-10-12, 1 file, -0/+16)
| | |\ \ \
    Conflicts:
        arch/sh/kernel/dwarf.c
| | | * \ \ Merge branch 'sh/dwarf-unwinder' of git://github.com/mfleming/linux-2.6 into sh/dwarf-unwinder  (Paul Mundt, 2009-10-12, 1 file, -0/+16)
| | | |\ \ \
| | | | * | | sh: Fix memory leak in dwarf_unwind_stack()  (Matt Fleming, 2009-10-11, 1 file, -0/+1)
    If we broke out of the while (1) loop because the return address of
    "frame" was zero, then "frame" needs to be free'd before we return.

    Signed-off-by: Matt Fleming <matt@console-pimps.org>
| | | | * | | sh: Teach the DWARF unwinder about modules  (Matt Fleming, 2009-10-11, 1 file, -0/+15)
    Pass a module's .eh_frame section to the DWARF unwinder at module load
    time so that the section's FDEs and CIEs can be registered with the DWARF
    unwinder. This allows us to unwind the stack through module code when
    generating backtraces.

    Signed-off-by: Matt Fleming <matt@console-pimps.org>
| | * | | | | sh: Shut up CONFIG_32BIT=n compiler warnings.  (Paul Mundt, 2009-10-10, 1 file, -1/+1)
    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * | | | | sh: Fold fixed-PMB support into dynamic PMB support  (Matt Fleming, 2009-10-10, 1 file, -0/+2)
    The initialisation process differs for CONFIG_PMB and for
    CONFIG_PMB_FIXED. For CONFIG_PMB_FIXED we need to register the PMB
    entries that were allocated by the bootloader.

    Signed-off-by: Matt Fleming <matt@console-pimps.org>
    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * | | | | sh: Remap physical memory into P1 and P2 in pmb_init()  (Matt Fleming, 2009-10-10, 1 file, -2/+2)
    Eventually we'll have complete control over what physical memory gets
    mapped where and we can probably do other interesting things. For now
    though, when the MMU is in 32-bit mode, we map physical memory into the
    P1 and P2 virtual address ranges with the same semantics as they have in
    29-bit mode.

    Signed-off-by: Matt Fleming <matt@console-pimps.org>
    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * | | | | sh: Make most PMB functions static  (Matt Fleming, 2009-10-10, 1 file, -7/+1)
    There's no need to export the internal PMB functions for allocating,
    freeing and modifying PMB entries, etc. This way we can restrict the
    interface for PMB.

    Also remove the static from pmb_init() so that we have more freedom in
    setting up the initial PMB entries and turning on MMU 32bit mode.

    Signed-off-by: Matt Fleming <matt@console-pimps.org>
    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * | | | | sh: Prepare for dynamic PMB support  (Matt Fleming, 2009-10-10, 5 files, -7/+32)
    To allow the MMU to be switched between 29bit and 32bit mode at runtime,
    some constants need to be swapped for functions that return a runtime
    value.

    Signed-off-by: Matt Fleming <matt@console-pimps.org>
    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * | | | | sh: Obliterate the P1 area macros  (Matt Fleming, 2009-10-10, 1 file, -3/+0)
| | | |_|/ /
| | |/| | |
    Replace the use of PHYSADDR() with __pa(). PHYSADDR() is based on the
    idea that all addresses in P1SEG are untranslated, so we can access an
    address's physical page as an offset from P1SEG. This doesn't work for
    CONFIG_PMB/CONFIG_PMB_FIXED because pages in P1SEG and P2SEG are used for
    PMB mappings and so can be translated to any physical address.

    Likewise, replace a P1SEGADDR() use with virt_to_phys().

    Signed-off-by: Matt Fleming <matt@console-pimps.org>
    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| | * | | | Merge branch 'sh/cachetlb'  (Paul Mundt, 2009-10-10, 1 file, -3/+3)
| | |\ \ \ \
| | | * | | | sh: Factor in cpu id for selection of cache colour fixmap.  (Paul Mundt, 2009-09-09, 1 file, -3/+3)
    In the SMP VIPT case the page copy/clear ops still perform colouring, so
    care needs to be taken that CPUs don't end up stepping on each other, and
    we give them a bit of room to work with. At the same time, we reduce the
    worst-case colouring given that these pages are always consumed.

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| * | | | | | sh: add sleazy FPU optimization  (Giuseppe CAVALLARO, 2009-11-24, 1 file, -0/+3)
| | |_|_|_|/
| |/| | | |
    sh port of the sLeAZY-fpu feature currently implemented for some
    architectures such as i386.

    Right now the SH kernel has a 100% lazy fpu behaviour. This is of course
    great for applications that have very sporadic or no FPU use. However for
    very frequent FPU users... you take an extra trap every context switch.

    The patch below adds a simple heuristic to this code: after 5 consecutive
    context switches of FPU use, the lazy behavior is disabled and the
    context gets restored every context switch. After 256 switches, this is
    reset and the 100% lazy behavior is returned.

    Tests with LMbench showed no regression. I saw a little improvement due
    to the prefetching (~2%).

    The tests below also show that, with this sLeazy patch, the number of FPU
    exceptions is indeed reduced. To test this, I hacked the lat_ctx LMbench
    test to use the FPU a little more.

    sLeazy implementation
    ===========================================
    switch_to calls            |  79326
    sleasy calls               |  42577
    do_fpu_state_restore calls |  59232
    restore_fpu calls          |  59032
    Exceptions: 0x800 (FPU disabled):  16604

    100% lazy (default implementation)
    ===========================================
    switch_to calls            |  79690
    do_fpu_state_restore calls |  53299
    restore_fpu calls          |  53101
    Exceptions: 0x800 (FPU disabled):  53273

    Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
    Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
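    The heuristic is simple enough to sketch standalone (the 5/256 thresholds
    come from the message above; everything else, including the names, is
    illustrative rather than the kernel code):

        #include <stdbool.h>
        #include <stdio.h>

        struct task { unsigned int fpu_counter; };

        /* Called on every context switch of this task. */
        static bool restore_fpu_eagerly(struct task *t, bool used_fpu_last_slice)
        {
                if (!used_fpu_last_slice || t->fpu_counter > 256)
                        t->fpu_counter = 0;     /* fall back to fully lazy behaviour */
                else
                        t->fpu_counter++;

                return t->fpu_counter > 5;      /* eager restore after 5 FPU-using switches */
        }

        int main(void)
        {
                struct task t = { 0 };

                for (int i = 0; i < 12; i++)
                        printf("switch %2d: eager restore = %d\n", i,
                               restore_fpu_eagerly(&t, true));
                return 0;
        }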