Conflict between FPU thread flag migration and debug thread flag addition.
Conflicts:
    arch/sh/include/asm/thread_info.h
    arch/sh/include/asm/ubc.h
    arch/sh/kernel/process_32.c
The VPU setting need not be changed from the default, and the current
setting value is not defined on SH7724.
Reported-by: Goda Yusuke <goda.yusuke@renesas.com>
Signed-off-by: Kuninori Morimoto <morimoto.kuninori@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This breaks the SuperH PFC code out of
arch/sh/kernel/gpio.c + arch/sh/include/asm/gpio.h
and into drivers/sh/pfc.c + include/linux/sh_pfc.h,
similar to the INTC code. The non-SuperH-specific
file location makes it possible to share the code
between multiple architectures.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch moves the KEYSC header file from the
SuperH-specific asm directory to a place where
it can be shared by multiple architectures.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The ecovec24 board expects address map 2 instead of map 1.
Signed-off-by: Mizukawa Tatsuo <mizukawa.tatsuo@renesas.com>
Signed-off-by: Kuninori Morimoto <morimoto.kuninori@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
A number of small optimisations to FPU handling, in particular:
- move the task USEDFPU flag from the thread_info flags field (which
  is accessed asynchronously to the thread) to a new status field,
  which is only accessed by the thread itself. This allows the locking
  to be removed in most cases, or reduced to a preempt_lock(). This
  mimics the i386 behaviour (sketched below).
- move the modification of regs->sr and thread_info->status flags out
  of save_fpu() to __unlazy_fpu(). This gives the compiler a better
  chance to optimise things, as well as making save_fpu() symmetrical
  with restore_fpu() and init_fpu().
- implement prepare_to_copy(), so that when creating a thread, we can
  unlazy the FPU prior to copying the thread data structures.
Also make sure that the FPU is disabled while in the kernel, in
particular while booting, and for newly created kernel threads.
In a very artificial benchmark, the execution time for 2500000
context switches was reduced from 50 to 45 seconds.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
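A rough sketch of the USEDFPU move described in the first item above (the
flag name, bit value and helper names are assumed here for illustration,
not taken from the patch): because the ->status word is only ever touched
by its owning task, a plain read-modify-write can replace the locked
set_bit() that the asynchronously-accessed flags word requires.

    #include <linux/sched.h>
    #include <linux/thread_info.h>

    #define TS_USEDFPU  0x0002  /* assumed bit value, illustration only */

    static inline void set_used_fpu(struct task_struct *tsk)
    {
        /* old scheme: set_tsk_thread_flag(tsk, TIF_USEDFPU) -> atomic set_bit() */
        task_thread_info(tsk)->status |= TS_USEDFPU;  /* thread-local, no atomics */
    }

    static inline void clear_used_fpu(struct task_struct *tsk)
    {
        task_thread_info(tsk)->status &= ~TS_USEDFPU;
    }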
The previous implementation of clear_user_highpage and copy_user_highpage
checked to see if there was a D-cache aliasing issue between the user
and kernel mappings of a page, but if there was, they always did a
flush with writeback on the dirtied kernel alias.
However, as we now have the ability to map a page into kernel space
with the same cache colour as the user mapping, there is no need to
write back this data.
Currently we also invalidate the kernel alias as a precaution; however,
I'm not sure if this is actually required.
Also correct the definition of FIX_CMAP_END so that the mappings created
by kmap_coherent() are actually at the correct colour.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
sh64 doesn't use GENERIC_BUG, which presently causes the handle_BUG()
code to blow up. Fix up the dependencies and get it all building again.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
This fixes up the build and behaviour for various configurations, namely
the CONFIG_32BIT cases where legacy mappings do not exist, as well as the
sh64 build.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This gets rid of the arbitrary set of vectors used by the SE7722 FPGA
interrupt controller and switches over to a completely dynamic set.
No assumptions regarding a contiguous range are made, and the platform
resources themselves need to be filled in lazily.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Add R-standby specific bits to the SuperH Mobile sleep code.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Add MMU and cache handling functionality to the SuperH Mobile
sleep code. The MMU and cache registers are saved and restored.
The MMU is disabled and the cache is flushed and disabled before
entering sleep modes if the SUSP_SH_MMU flag is set. This flag
should be set in the case of R-standby and most likely for future
U-standby support as well.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Add code to keep track of supported sleep modes. This is used to export
only those cpuidle modes that are backed by board support code. Also, do
not allow suspend-to-RAM if the SDRAM board code is missing.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Rework the SuperH Mobile sleep code from including
board-specific code to allowing each board to provide
pre/post code snippets. These snippets should contain
SDRAM management code to enter and leave self-refresh.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Add code to allow boards to register self-contained
functions for going to/from self-refresh. At this
point the board code is unused. When all supported
boards have been converted, the new sleep code
will make use of these functions.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch adds atomic notifier chains for pre/post sleep events, useful
for CPU code and boards that need to save and restore register state
before and after entering a sleep mode.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
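The mechanism being added here is the stock atomic notifier chain. A
hedged sketch of how a board might hook a pre-sleep event (the chain and
handler names below are made up for illustration; only the notifier API
itself is the standard kernel interface):

    #include <linux/init.h>
    #include <linux/notifier.h>

    ATOMIC_NOTIFIER_HEAD(sh_mobile_pre_sleep_notifier);  /* hypothetical chain */

    static int board_pre_sleep(struct notifier_block *nb,
                               unsigned long event, void *data)
    {
        /* save board/CPU register state before the sleep mode is entered */
        return NOTIFY_DONE;
    }

    static struct notifier_block board_pre_sleep_nb = {
        .notifier_call = board_pre_sleep,
    };

    static int __init board_sleep_init(void)
    {
        return atomic_notifier_chain_register(&sh_mobile_pre_sleep_notifier,
                                              &board_pre_sleep_nb);
    }

    /* the sleep path would then run the registered handlers with:
     *   atomic_notifier_call_chain(&sh_mobile_pre_sleep_notifier, 0, NULL);
     */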
This adds in preliminary support for the SH-4A performance counters.
Presently only the first 2 counters are supported, as these are the ones
of most interest to the perf tool and end users. Counter chaining is
not presently handled, so these are simply implemented as 32-bit
counters.
This also establishes a perf event support framework for other hardware
counters, which the existing SH-4 oprofile code will migrate over to as
the SH-4A support evolves.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This fixes up the dma_is_consistent() definition for the various
coherence options.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Leaving this configurable caused more trouble than it was ever worth, so
just make it explicit. Boards that are verified one way or the other can
fix up their selects accordingly. We presently default to non-coherent
for most platforms.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Valentin Sitdikov <valentin.sitdikov@siemens.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This moves the current dma_alloc/free_coherent() calls to a generic
variant and plugs them in for the nommu default. Other variants can
override the defaults in the dma mapping ops directly.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This converts the old DMA mapping support to the new generic
dma-mapping-common.h abstraction.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
In the past these were simply wrapping to barrier(), which was sufficient
on SH SMP platforms predating SH-4A. Unfortunately, due to LL/SC semantics,
an explicit synco is needed in these cases, which is sorted for us by
just switching these over to smp_mb(). smp_mb() also has the benefit of
being wrapped to barrier() in the UP and non-SH-4A cases, so the old
behaviour is maintained for those parts.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
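Roughly what the switch buys us, as a sketch rather than the literal
header: on SH-4A smp_mb() expands to a real synco instruction, while UP
and pre-SH-4A builds still collapse to a compiler barrier, preserving the
old behaviour there. The config symbol names follow the usual conventions
and are assumed.

    #ifdef CONFIG_CPU_SH4A
    #define mb()      __asm__ __volatile__ ("synco" : : : "memory")
    #else
    #define mb()      __asm__ __volatile__ ("" : : : "memory")  /* compiler barrier */
    #endif

    #ifdef CONFIG_SMP
    #define smp_mb()  mb()
    #else
    #define smp_mb()  barrier()
    #endif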
This simplifies the irqflags support by switching over to the asm-generic
version. The necessary support functions are brought out-of-line for both
SHcompact and SHmedia instruction sets.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This code was added for some ancient SH-4 solution engines with peculiar
boot ROMs that did silly things to the UBC MSTP bits. None of these have
been in the wild for years, and these days the clock framework wraps up
the MSTP bits, meaning that the UBC code is one of the few interfaces
that is stomping MSTP bits underneath the clock framework. At this point
the risks far outweigh any benefit this code provided, so just kill it
off.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This enables SCHED_MC support for SH-X3 multi-cores. Presently this is
just a simple wrapper around the possible map, but this allows for
tying in support for some of the more exotic NUMA clusters where we can
actually do something with the topology.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
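As a sketch of what "a simple wrapper around the possible map" can look
like (the function name and mask symbol are assumptions, not lifted from
the patch): with no real topology data yet, every possible CPU is reported
as one core group, which is what the SCHED_MC scheduler domains consume.

    #include <linux/cpumask.h>

    const struct cpumask *cpu_coregroup_mask(unsigned int cpu)
    {
        /* no per-core topology information yet: one big core group */
        return cpu_possible_mask;
    }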
This does a bit of chainsawing of the idle loop code to get light sleep
working on SMP. Previously this was forcing secondary CPUs into sleep
mode with them not coming back if they didn't have their own local
timers. Given that we use clockevents broadcasting by default, the CPU
managing the clockevents can't have IRQs disabled before entering its
sleep state.
This unfortunately leaves us with the age-old need_resched() race
between local_irq_enable() and cpu_sleep(), but at present this is
unavoidable. After some more experimentation it may be possible to layer
SR.BL bit manipulation on top of this scheme to inhibit the race
condition, but given the current potential for missing wakeups, this is
left as a future exercise.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This plugs in support for per-CPU NMI counting via irq_cpustat_t,
modelled after the x86 implementation.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
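A sketch of the x86-style scheme referred to above; the field and macro
names are assumed rather than copied from the patch.

    #include <linux/cache.h>

    typedef struct {
        unsigned int __softirq_pending;
        unsigned int __nmi_count;        /* per-CPU NMI counter */
    } ____cacheline_aligned irq_cpustat_t;

    extern irq_cpustat_t irq_stat[];     /* one entry per possible CPU */

    #define nmi_count(cpu) (irq_stat[cpu].__nmi_count)

    /* bumped from the NMI entry path */
    static inline void account_nmi(unsigned int cpu)
    {
        irq_stat[cpu].__nmi_count++;
    }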
Replace TIF_RESTORE_SIGMASK with TS_RESTORE_SIGMASK and define our own
set_restore_sigmask() function. This saves the costly SMP-safe set_bit
operation, which we do not need for the sigmask flag since TIF_SIGPENDING
always has to be set too.
Based on the x86 and powerpc change.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
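Modelled on the x86/powerpc versions mentioned above, the helper boils
down to something like the following sketch (the bit value is assumed):
the restore-sigmask flag lives in the thread-local ->status word, so a
plain OR suffices, and only TIF_SIGPENDING still needs the atomic set,
since other CPUs may inspect it.

    #define TS_RESTORE_SIGMASK  0x0001  /* assumed bit value */

    static inline void set_restore_sigmask(void)
    {
        struct thread_info *ti = current_thread_info();

        ti->status |= TS_RESTORE_SIGMASK;  /* thread-local word, no set_bit() */
        set_bit(TIF_SIGPENDING, (unsigned long *)&ti->flags);
    }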
Despite being located in the ftrace header, the CALLER_ADDRx definitions
are used by generic code. As such, we have to provide them generically,
and given that there is no real dependence on ftrace in the first place,
the definitions can just be moved out.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This enables us to build the dwarf unwinder both with modules enabled and
disabled in addition to reducing code size in the latter case. The
helpers are also consolidated, and modified to resemble the BUG module
helpers.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This splits out the unwinder implementation and adds a new
return_address() abstraction modelled after the ARM code. The DWARF
unwinder is tied in to this, and returns NULL in the case of being
unable to support arbitrary depths.
This enables us to get correct behaviour with the unwinder enabled,
as well as disabling the arbitrary depth support when frame pointers are
enabled, as arbitrary depths with __builtin_return_address() are not
supported regardless.
With this abstraction it's also possible to layer on a simplified
implementation with frame pointers in the event that the unwinder isn't
enabled, although this is left as a future exercise.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
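A sketch of how such an abstraction is typically consumed (the depth-based
prototype follows the ARM model this entry cites; the exact SH definitions
may differ): callers that relied on __builtin_return_address(n) for n > 0,
such as the CALLER_ADDRx users, can be routed through the new helper,
which the DWARF unwinder backs and which returns NULL when a given depth
cannot be resolved.

    /* assumed prototype, modelled on ARM's return_address(level) */
    extern void *return_address(unsigned int depth);

    #define CALLER_ADDR0 ((unsigned long)__builtin_return_address(0))
    #define CALLER_ADDR1 ((unsigned long)return_address(1))
    #define CALLER_ADDR2 ((unsigned long)return_address(2))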
The major reason for implementing the DWARF unwinder in the first place
was so that we could stop using __builtin_return_address(n), which
doesn't work on SH for n > 0.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Conflicts:
    arch/sh/kernel/dwarf.c
sh/dwarf-unwinder
If we broke out of the while (1) loop because the return address of
"frame" was zero, then "frame" needs to be freed before we return.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Pass a module's .eh_frame section to the DWARF unwinder at module load
time so that the section's FDEs and CIEs can be registered with the
DWARF unwinder. This allows us to unwind the stack through module code
when generating backtraces.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The initialisation process differs for CONFIG_PMB and for
CONFIG_PMB_FIXED. For CONFIG_PMB_FIXED we need to register the PMB
entries that were allocated by the bootloader.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Eventually we'll have complete control over what physical memory gets
mapped where and we can probably do other interesting things. For now
though, when the MMU is in 32-bit mode, we map physical memory into the
P1 and P2 virtual address ranges with the same semantics as they have in
29-bit mode.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
There's no need to export the internal PMB functions for allocating,
freeing and modifying PMB entries, etc. This way we can restrict the
interface for PMB.
Also remove the static from pmb_init() so that we have more freedom in
setting up the initial PMB entries and turning on MMU 32bit mode.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
To allow the MMU to be switched between 29-bit and 32-bit mode at runtime,
some constants need to be swapped for functions that return a runtime
value.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Replace the use of PHYSADDR() with __pa(). PHYSADDR() is based on the
idea that all addresses in P1SEG are untranslated, so we can access an
address's physical page as an offset from P1SEG. This doesn't work for
CONFIG_PMB/CONFIG_PMB_FIXED because pages in P1SEG and P2SEG are used
for PMB mappings and so can be translated to any physical address.
Likewise, replace a P1SEGADDR() use with virt_to_phys().
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
In the SMP VIPT case the page copy/clear ops still perform colouring, so
care needs to be taken that CPUs don't end up stepping on each other; we
give them a bit of room to work with.
At the same time, we reduce the worst-case colouring given that these
pages are always consumed.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
SH port of the sleazy FPU feature currently implemented for some
architectures such as i386.
Right now the SH kernel has 100% lazy FPU behaviour. This is of course
great for applications that have very sporadic or no FPU use. However,
for very frequent FPU users you take an extra trap every context switch.
The patch below adds a simple heuristic to this code: after 5 consecutive
context switches of FPU use, the lazy behaviour is disabled and the
context gets restored every context switch. After 256 switches, this is
reset and the 100% lazy behaviour is returned (see the sketch after this
entry).
Tests with LMbench showed no regression. I saw a little improvement due
to the prefetching (~2%).
The tests below also show that, with this sleazy patch, the number of FPU
exceptions is indeed reduced. To test this, I hacked the LMbench lat_ctx
benchmark to use the FPU a little more.

Sleazy implementation
===========================================
switch_to calls            | 79326
sleazy calls               | 42577
do_fpu_state_restore calls | 59232
restore_fpu calls          | 59032
Exceptions: 0x800 (FPU disabled): 16604

100% lazy (default implementation)
===========================================
switch_to calls            | 79690
do_fpu_state_restore calls | 53299
restore_fpu calls          | 53101
Exceptions: 0x800 (FPU disabled): 53273

Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
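A hedged sketch of the heuristic described above (the field and helper
names are assumptions, not the patch itself): an 8-bit per-task counter is
bumped on every FPU state restore; once it passes 5 the context switch
restores the FPU eagerly instead of waiting for the FPU-disabled trap, and
the natural wrap of the counter at 256 drops the task back to fully lazy
behaviour.

    #include <linux/sched.h>

    /* assumed per-task field: unsigned char fpu_counter */

    /* context-switch path: frequent FPU users skip the trap entirely */
    static void switch_in_fpu(struct task_struct *next)
    {
        if (next->fpu_counter > 5) {
            next->fpu_counter++;   /* unsigned char: wraps after 256 -> lazy again */
            restore_fpu(next);     /* assumed existing helper; eager restore */
        } else {
            disable_fpu();         /* stay lazy; the first FPU insn will trap */
        }
    }

    /* FPU-disabled trap path (do_fpu_state_restore): count consecutive use */
    static void fpu_trap_restore(struct task_struct *tsk)
    {
        tsk->fpu_counter++;
        restore_fpu(tsk);
    }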