path: root/arch/sh
Commit message  (Author, Date, Files, Lines)
* Merge branch 'sh/stable-updates'  (Paul Mundt, 2009-10-14, 4 files, -6/+9)
| * sh: Fix a TRACE_IRQS_OFF typo.  (Paul Mundt, 2009-10-14, 1 file, -1/+1)
      The resume_userspace path had TRACE_IRQS_OFF written incorrectly and so
      never handled the transition properly. This was fixed once before but
      seems to have made it back in the tree. Fix it for good.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| * sh: Optimize the setup_rt_frame() I-cache flush.  (Paul Mundt, 2009-10-14, 1 file, -2/+1)
      This only needs to flush the return code via the legacy path, and just
      invalidates uselessly otherwise. This makes the behaviour consistent
      for all of the trampoline setup paths.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| * sh: Populate initial secondary CPU info from boot_cpu_data.  (Paul Mundt, 2009-10-14, 1 file, -0/+2)
      The secondary CPU info was seeing corrupted results due to not entering
      all of the setup paths taken by the boot CPU. So we just memcpy() the
      boot cpu data over directly, and then fix up the per-CPU bits.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| * sh: Tidy up SMP cpuinfo.  (Paul Mundt, 2009-10-14, 1 file, -0/+2)
      Trivial change for cleaning up the cpuinfo pretty printing on SMP; adds
      a newline between CPUs.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| * sh: Use boot_cpu_data for FPU tests in sigcontext paths.  (Paul Mundt, 2009-10-14, 1 file, -3/+3)
      We do not want to use smp_processor_id() from these paths, as they trip
      preempt BUGs. Switch the test over to the boot cpu directly.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* | sh: Only invalidate the I-cache range for secondary CPUs stack_start.  (Paul Mundt, 2009-10-14, 1 file, -1/+3)
      Secondary CPUs already take care of the D-cache bits through the common
      cache initialization path, and the only thing that is necessary after
      twiddling around with stack_start is ensuring that the I-cache changes
      are visible (particularly since this tends to be the only part lacking
      coherency).
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* | sh: Provide CALLER_ADDRx definitions even when ftrace is disabled.  (Paul Mundt, 2009-10-14, 1 file, -1/+5)
      Despite being located in the ftrace header, the CALLER_ADDRx
      definitions are used by generic code. As such, we have to provide them
      generically, and given that there is no real dependence on ftrace in
      the first place, the definitions can simply be moved out.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
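      For illustration, the ftrace-independent fallback amounts to macros
      along these lines (a minimal sketch, not the exact header layout; an
      arch such as sh that cannot use __builtin_return_address(n) for n > 0
      overrides these with its own helper):

          /* generic fallback, usable with or without ftrace */
          #define CALLER_ADDR0 ((unsigned long)__builtin_return_address(0))
          #define CALLER_ADDR1 ((unsigned long)__builtin_return_address(1))
          #define CALLER_ADDR2 ((unsigned long)__builtin_return_address(2))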
* | sh: ftrace: Make code modification NMI safe.  (Paul Mundt, 2009-10-13, 2 files, -1/+146)
      This cribs the x86 implementation of ftrace_nmi_enter() and friends to
      make ftrace_modify_code() NMI safe, particularly on SMP configurations.
      For additional notes on the problems involved, see the comment below
      ftrace_call_replace().
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
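      The x86 scheme being cribbed works roughly as sketched below (names
      simplified, synchronisation details elided): an NMI that lands while a
      modification is in flight performs the pending write itself, so no NMI
      ever runs half-patched code.

          static atomic_t nmi_running = ATOMIC_INIT(0);
          static void *mod_code_ip;      /* non-NULL while a write is pending */
          static void *mod_code_newcode;

          static void ftrace_mod_code(void)
          {
                  /* the actual text write; safe to repeat from NMI context */
                  memcpy(mod_code_ip, mod_code_newcode, MCOUNT_INSN_SIZE);
          }

          void ftrace_nmi_enter(void)
          {
                  atomic_inc(&nmi_running);
                  smp_mb();
                  if (mod_code_ip)
                          ftrace_mod_code();
          }

          void ftrace_nmi_exit(void)
          {
                  smp_mb();
                  atomic_dec(&nmi_running);
          }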
* | sh: Don't profile return_address().  (Paul Mundt, 2009-10-13, 1 file, -0/+2)
      This adds return_address.c to the -pg exclusion list; as it is the
      building block for CALLER_ADDRx, we do not want to profile it.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* | sh: Tidy up the dwarf module helpers.  (Paul Mundt, 2009-10-13, 3 files, -36/+53)
      This enables us to build the dwarf unwinder both with modules enabled
      and disabled, in addition to reducing code size in the latter case. The
      helpers are also consolidated, and modified to resemble the BUG module
      helpers.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* | sh: Generalize CALLER_ADDRx support.  (Paul Mundt, 2009-10-13, 4 files, -42/+68)
      This splits out the unwinder implementation and adds a new
      return_address() abstraction modelled after the ARM code. The DWARF
      unwinder is tied in to this, and returns NULL when it is unable to
      support arbitrary depths. This enables us to get correct behaviour with
      the unwinder enabled, as well as disabling the arbitrary depth support
      when frame pointers are enabled, since arbitrary depths with
      __builtin_return_address() are not supported regardless. With this
      abstraction it is also possible to layer on a simplified implementation
      with frame pointers in the event that the unwinder isn't enabled,
      although this is left as a future exercise.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
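      A rough sketch of the resulting abstraction (simplified; the field and
      function names only approximate the DWARF unwinder interface): unwind
      one frame per level and return NULL when the unwinder cannot go deeper.

          void *return_address(unsigned int depth)
          {
                  struct dwarf_frame *frame = NULL;
                  unsigned long ra = 0;
                  unsigned int i;

                  for (i = 0; i <= depth; i++) {
                          frame = dwarf_unwind_stack(ra, frame);
                          if (!frame || !frame->return_addr)
                                  return NULL;    /* depth not supported */
                          ra = frame->return_addr;
                  }

                  return (void *)ra;
          }

      CALLER_ADDRx then maps onto it, e.g.
      #define CALLER_ADDR1 ((unsigned long)return_address(1)).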
* | Merge branch 'sh/stable-updates'  (Paul Mundt, 2009-10-13, 1 file, -10/+27)
| * sh: ftrace: Fix up syscall tracepoint support.  (Paul Mundt, 2009-10-13, 1 file, -10/+27)
      Sync up with the latest core changes in the syscall tracing area:
        - tracing: Map syscall name to number (syscall_name_to_nr())
        - tracing: Call arch_init_ftrace_syscalls at boot
        - tracing: Add support for tracepoint ids (set_syscall_{enter,exit}_id())
      Taken from the s390 change.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
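      Presumably the arch-side hook inherited from the s390 change is little
      more than a syscall-table lookup; a sketch of the shape (the exact
      sys_call_table declaration varies per arch):

          unsigned long __init arch_syscall_addr(int nr)
          {
                  /* map a syscall number to the handler's address */
                  return (unsigned long)sys_call_table[nr];
          }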
* | Merge branch 'sh/stable-updates'  (Paul Mundt, 2009-10-13, 2 files, -4/+5)
| * sh: force dcache flush if dcache_dirty bit set.  (Paul Mundt, 2009-10-13, 1 file, -1/+1)
      This too follows the ARM change, given that the issue at hand applies
      to all platforms that implement lazy D-cache writeback. This fixes up
      the case when a page mapping disappears between the flush_dcache_page()
      call (when PG_dcache_dirty is set for the page) and the
      update_mmu_cache() call -- such as in the case of swap cache being
      freed early. This kills off the mapping test in update_mmu_cache() and
      switches to simply testing for PG_dcache_dirty.
      Reported-by: Nitin Gupta <ngupta@vflare.org>
      Reported-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
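      The resulting test reduces to something like this sketch (helper names
      illustrative): flush purely on the dirty bit, so a mapping that has
      already vanished still gets its dirty d-cache lines written back.

          if (pfn_valid(pfn)) {
                  struct page *page = pfn_to_page(pfn);

                  /* no page_mapping() test any more */
                  if (test_and_clear_bit(PG_dcache_dirty, &page->flags))
                          __flush_purge_region(page_address(page), PAGE_SIZE);
          }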
| * sh: update die() output.  (Paul Mundt, 2009-10-13, 1 file, -3/+4)
      This follows the ARM change, as SH had all of the same issues. Make
      die() better match x86:
        - add printing of the last accessed sysfs file
        - ensure console_verbose() is called under the lock
        - ensure we panic outside of oops_exit()
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* | Merge branch 'sh/ftrace' of git://github.com/mfleming/linux-2.6  (Paul Mundt, 2009-10-13, 1 file, -0/+47)
| * | sh: tracing: Use the DWARF unwinder for CALLER_ADDRx  (Matt Fleming, 2009-10-11, 1 file, -0/+47)
      The major reason for implementing the DWARF unwinder in the first place
      was so that we could stop using __builtin_return_address(n), which
      doesn't work on SH for n > 0.
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
* | | Merge branch 'sh/dwarf-unwinder'  (Paul Mundt, 2009-10-12, 3 files, -50/+183)
      Conflicts:
          arch/sh/kernel/dwarf.c
| * Merge branch 'sh/dwarf-unwinder' of git://github.com/mfleming/linux-2.6 into sh/dwarf-unwinder  (Paul Mundt, 2009-10-12, 3 files, -47/+180)
| | * sh: Remove any reference to recursive functions from comments  (Matt Fleming, 2009-10-11, 1 file, -11/+11)
      Originally, dwarf_unwind_stack() was a recursive function, and it seems
      that some of the old comments were never updated.
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
| | * sh: Fix memory leak in dwarf_unwind_stack()  (Matt Fleming, 2009-10-11, 2 files, -6/+17)
      If we broke out of the while (1) loop because the return address of
      "frame" was zero, then "frame" needs to be freed before we return.
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
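      Shape of the fix, for illustration (dwarf_frame_free() is a
      hypothetical name standing in for the real teardown helper):

          while (1) {
                  /* ... unwind one frame ... */
                  if (!frame->return_addr) {
                          /* previously leaked: hand "frame" back to its
                           * allocator before bailing out */
                          dwarf_frame_free(frame);
                          break;
                  }
                  /* ... */
          }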
| | * sh: Teach the DWARF unwinder about modules  (Matt Fleming, 2009-10-11, 3 files, -30/+152)
      Pass a module's .eh_frame section to the DWARF unwinder at module load
      time so that the section's FDEs and CIEs can be registered with the
      DWARF unwinder. This allows us to unwind the stack through module code
      when generating backtraces.
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
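      The registration presumably hangs off the arch module loader; a sketch
      of the section walk (dwarf_parse_section() is illustrative):

          int module_dwarf_finalize(const Elf_Ehdr *hdr,
                                    const Elf_Shdr *sechdrs,
                                    struct module *me)
          {
                  const char *secstrings = (void *)hdr +
                                  sechdrs[hdr->e_shstrndx].sh_offset;
                  unsigned int i;

                  /* find the module's .eh_frame and register its CIEs/FDEs */
                  for (i = 1; i < hdr->e_shnum; i++) {
                          if (strcmp(secstrings + sechdrs[i].sh_name,
                                     ".eh_frame"))
                                  continue;
                          return dwarf_parse_section(
                                  (char *)sechdrs[i].sh_addr,
                                  (char *)sechdrs[i].sh_addr +
                                          sechdrs[i].sh_size, me);
                  }

                  return 0;
          }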
* | | | | sh: Reinstate ILSEL -> IRL intc mappings for SH-X3 proto CPU.  (Paul Mundt, 2009-10-10, 1 file, -10/+18)
      In the multi-evt conversion for the SH-X3 proto CPU, IRLs were dropped
      down to a single unique masking source, which ended up blowing up on
      ILSEL-based IRQs, which have special semantics that otherwise confuse
      the intc code. While this does result in intc spewing about not having
      a unique masking source, we don't really care.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* | | | | sh: Shut up CONFIG_32BIT=n compiler warnings.  (Paul Mundt, 2009-10-10, 1 file, -1/+1)
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* | | | | sh: Fold fixed-PMB support into dynamic PMB support  (Matt Fleming, 2009-10-10, 5 files, -53/+64)
      The initialisation process differs for CONFIG_PMB and for
      CONFIG_PMB_FIXED. For CONFIG_PMB_FIXED we need to register the PMB
      entries that were allocated by the bootloader.
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* | | | | sh: Fix the offset from P1SEG/P2SEG where we map RAM  (Matt Fleming, 2009-10-10, 1 file, -6/+7)
      We need to map the gap between 0x00000000 and __MEMORY_START in the
      PMB, as well as RAM. With this change my 7785LCR board can switch to
      32bit MMU mode at runtime.
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* | | | | sh: Remap physical memory into P1 and P2 in pmb_init()  (Matt Fleming, 2009-10-10, 3 files, -42/+18)
      Eventually we'll have complete control over what physical memory gets
      mapped where, and we can probably do other interesting things. For now
      though, when the MMU is in 32-bit mode, we map physical memory into the
      P1 and P2 virtual address ranges with the same semantics as they have
      in 29-bit mode.
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* | | | | sh: Get rid of the kmem cache code  (Matt Fleming, 2009-10-10, 1 file, -55/+26)
      Unfortunately, at the point during boot when we want to be setting up
      the PMB entries, the kmem subsystem hasn't been initialised. We now
      match pmb_map slots with pmb_entry_list slots: when we find an empty
      slot in pmb_map, we set the bit, thereby acquiring the corresponding
      pmb_entry_list entry. There is a benefit in using this static array of
      struct pmb_entry's; we don't need to acquire any locks in order to
      traverse the list of struct pmb_entry's.
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
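      The slot scheme reduces to a lock-free find-and-set, roughly as
      sketched below (array names follow the commit text; NR_PMB_ENTRIES is
      the hardware entry count):

          static DECLARE_BITMAP(pmb_map, NR_PMB_ENTRIES);
          static struct pmb_entry pmb_entry_list[NR_PMB_ENTRIES];

          static int pmb_alloc_entry(void)
          {
                  unsigned int pos;

          repeat:
                  pos = find_first_zero_bit(pmb_map, NR_PMB_ENTRIES);
                  if (pos >= NR_PMB_ENTRIES)
                          return -ENOSPC;

                  /* a nonzero return means another CPU claimed the slot */
                  if (test_and_set_bit(pos, pmb_map))
                          goto repeat;

                  return pos;     /* index into pmb_entry_list[] */
          }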
* | | | | sh: Make most PMB functions static  (Matt Fleming, 2009-10-10, 3 files, -16/+13)
      There's no need to export the internal PMB functions for allocating,
      freeing and modifying PMB entries, etc. This way we can restrict the
      interface for PMB. Also remove the static from pmb_init() so that we
      have more freedom in setting up the initial PMB entries and turning on
      MMU 32bit mode.
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* | | | | sh: CONFIG_PMB doesn't mean the MMU is in 32bit mode  (Matt Fleming, 2009-10-10, 2 files, -3/+1)
      CONFIG_PMB will eventually allow the MMU to be switched between 29-bit
      and 32-bit mode dynamically at runtime.
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* | | | | sh: Prepare for dynamic PMB support  (Matt Fleming, 2009-10-10, 7 files, -10/+43)
      To allow the MMU to be switched between 29bit and 32bit mode at
      runtime, some constants need to be swapped for functions that return a
      runtime value.
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* | | | | sh: Obliterate the P1 area macros  (Matt Fleming, 2009-10-10, 5 files, -7/+4)
      Replace the use of PHYSADDR() with __pa(). PHYSADDR() is based on the
      idea that all addresses in P1SEG are untranslated, so we can access an
      address's physical page as an offset from P1SEG. This doesn't work for
      CONFIG_PMB/CONFIG_PMB_FIXED because pages in P1SEG and P2SEG are used
      for PMB mappings and so can be translated to any physical address.
      Likewise, replace a P1SEGADDR() use with virt_to_phys().
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
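      In practice the conversion is a one-liner per call site, e.g.:

          /* before: only correct when the address is an untranslated
           * P1SEG offset */
          phys = PHYSADDR(vaddr);

          /* after: stays correct when the page is reached through a
           * PMB mapping */
          phys = __pa(vaddr);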
* | | | | sh: Allocate PMB entry slot earlier  (Matt Fleming, 2009-10-10, 1 file, -41/+39)
      Simplify set_pmb_entry() by removing the possibility of not finding a
      free slot in the PMB. Instead we now allocate a slot in pmb_alloc(), so
      that if there are no free slots we fail at allocation time, rather than
      in set_pmb_entry().
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* | | | Merge branch 'sh/cachetlb'  (Paul Mundt, 2009-10-10, 4 files, -425/+87)
| * | | | sh: Factor in cpu id for selection of cache colour fixmap.  (Paul Mundt, 2009-09-09, 2 files, -4/+6)
      In the SMP VIPT case the page copy/clear ops still perform colouring;
      care needs to be taken that CPUs don't end up stepping on each other,
      so we give them a bit of room to work with. At the same time, we reduce
      the worst-case colouring given that these pages are always consumed.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| * | | | sh: Fix up redundant cache flushing for PAGE_SIZE > 4k.  (Paul Mundt, 2009-09-09, 1 file, -1/+1)
      If PAGE_SIZE is presently over 4k we do a lot of extra flushing, given
      that we purge the cache 4k at a time. Make it explicitly 4k per
      iteration, rather than iterating for PAGE_SIZE before looping over
      again.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
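      The loop shape after the fix is roughly the following sketch
      (flush_dcache_chunks() is a hypothetical wrapper; flush_cache_4096()
      is the existing SH-4 helper):

          static void flush_dcache_chunks(unsigned long addr,
                                          unsigned long phys)
          {
                  unsigned int i, n = PAGE_SIZE / 4096;

                  /* one flush per 4k chunk of the page, rather than
                   * re-running the whole PAGE_SIZE loop */
                  for (i = 0; i < n; i++, addr += 4096)
                          flush_cache_4096(CACHE_OC_ADDRESS_ARRAY | addr,
                                           phys);
          }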
| * | | | sh: Rework sh4_flush_cache_page() for coherent kmap mapping.  (Paul Mundt, 2009-09-09, 1 file, -27/+48)
      This builds on top of the MIPS r4k code that does roughly the same
      thing. This permits the use of kmap_coherent() for mapped pages with
      dirty dcache lines and falls back on kmap_atomic() otherwise. This also
      fixes up a problem with the alias check and defers to shm_align_mask
      directly.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| * | | | sh: Kill off segment-based d-cache flushing on SH-4.  (Paul Mundt, 2009-09-09, 1 file, -271/+20)
      This kills off the unrolled segment-based flushers on SH-4 and switches
      over to a generic unrolled approach derived from the writethrough
      segment flusher.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| * | | | sh: Kill off broken PHYSADDR() usage in sh4_flush_dcache_page().  (Paul Mundt, 2009-09-09, 1 file, -2/+2)
      PHYSADDR() runs into issues in 32-bit mode when we do not have the
      legacy P1/P2 areas mapped; as such, we need to use page_to_phys()
      directly, which also happens to do the right thing in legacy 29-bit
      mode.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
| * | | | sh: sh4_flush_cache_mm() optimizations.  (Paul Mundt, 2009-09-09, 2 files, -120/+10)
      The i-cache flush in the case of VM_EXEC was added way back when as a
      sanity measure, and in practice we only care about evicting aliases
      from the d-cache. As a result, it's possible to drop the i-cache flush
      completely here.

      After careful profiling it's also come up that all of the work
      associated with hunting down aliases and doing ranged flushing ends up
      generating more overhead than simply blasting away the entire dcache,
      particularly if there are many mm's that need to be iterated over. As a
      result of that, just move back to flush_dcache_all() in these cases,
      which restores the old behaviour, and vastly simplifies the path.

      Additionally, on platforms without aliases at all, this can simply be
      nopped out. Presently we have the alias check in the SH-4 specific
      version, but this is true for all of the platforms, so move the check
      up to a generic location. This cuts down quite a bit on superfluous
      cacheop IPIs.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
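      After the simplification, the SH-4 path presumably collapses to
      something like this sketch:

          static void sh4_flush_cache_mm(void *arg)
          {
                  struct mm_struct *mm = arg;

                  if (cpu_context(smp_processor_id(), mm) == NO_CONTEXT)
                          return;

                  /* no alias hunting or ranged flushing; just blast
                   * the whole d-cache */
                  flush_dcache_all();
          }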
* | | | | SH: add support for the RJ54N1CB0C camera for the kfr2r09 platform  (Guennadi Liakhovetski, 2009-10-10, 1 file, -0/+139)
      Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* | | | sh: Don't allocate smaller sized mappings on every iteration  (Matt Fleming, 2009-10-09, 1 file, -0/+7)
      Currently, we've got the less than ideal situation where if we need to
      allocate a 256MB mapping we'll allocate four entries like so:
        entry 1: 128MB
        entry 2:  64MB
        entry 3:  16MB
        entry 4:  16MB
      This is because as we execute the loop in pmb_remap() we will
      progressively try mapping the remaining address space with smaller and
      smaller sizes. This isn't good, because the size we use on one
      iteration may be the perfect size to use on the next iteration, for
      instance when the initial size is divisible by one of the PMB mapping
      sizes. With this patch, we now only need two entries in the PMB to map
      256MB of address space:
        entry 1: 128MB
        entry 2: 128MB
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
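      The difference is easiest to see as a loop sketch (pmb_sizes[] as in
      the PMB code; entry setup elided): keep reusing a size while it still
      fits, instead of stepping down after every entry.

          for (i = 0; i < ARRAY_SIZE(pmb_sizes); i++) {
                  while (size >= pmb_sizes[i].size) {
                          /* ... set up one entry of pmb_sizes[i].size ... */
                          vaddr += pmb_sizes[i].size;
                          phys  += pmb_sizes[i].size;
                          size  -= pmb_sizes[i].size;
                  }
          }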
* | | | sh: Try PMB mapping based on physical address, not mapping size  (Matt Fleming, 2009-10-09, 1 file, -1/+1)
      We should favour PMB mappings when the physical address cannot be
      reached with 29 bits.
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* | | | sh: Plug PMB alloc memory leak  (Matt Fleming, 2009-10-09, 1 file, -6/+24)
      If we fail to allocate a PMB entry in pmb_remap() we must remember to
      clear and free any PMB entries that we may have previously allocated,
      e.g. if we were allocating a multiple entry mapping.
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
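      The fix follows the usual kernel error-unwind pattern; a fragment
      sketch (names only approximate the pmb_remap() internals):

          pmbe = pmb_alloc(vaddr, phys, pmb_flags | pmb_sizes[i].flag);
          if (IS_ERR(pmbe)) {
                  err = PTR_ERR(pmbe);
                  goto out;
          }
          /* ... program the entry, advance vaddr/phys/size ... */

      out:
          /* clear and free every entry already set up for this mapping
           * before propagating the error */
          pmb_unmap(orig_vaddr);
          return err;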
* | | | sh: Sprinkle __uses_jump_to_uncached  (Matt Fleming, 2009-10-09, 2 files, -3/+3)
      Fix some callers of jump_to_uncached() and back_to_cached() that were
      not annotated with __uses_jump_to_uncached.
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
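      Usage pattern, for reference (the function name is hypothetical; the
      annotation marks functions that run code via the uncached P2 alias):

          static void __uses_jump_to_uncached write_control_regs(void)
          {
                  jump_to_uncached();   /* execute from the uncached alias */
                  /* ... poke cache/MMU control registers safely ... */
                  back_to_cached();
          }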
* | | | sh: enable sleep state LEDs on Ecovec24  (Magnus Damm, 2009-10-09, 1 file, -0/+5)
      Extend the ecovec24 board code to enable Power Management LEDs showing
      the current sh7724 sleep state.
      Signed-off-by: Magnus Damm <damm@opensource.se>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* | | | sh: mach-ecovec24: Document DS2 switch settings.  (Kuninori Morimoto, 2009-10-05, 1 file, -0/+14)
      Signed-off-by: Kuninori Morimoto <morimoto.kuninori@renesas.com>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* | | | sh: Build fix: export __movmem  (Lubomir Rintel, 2009-09-30, 1 file, -0/+1)
      ERROR: "__movmem" [net/irda/irda.ko] undefined!
      ERROR: "__movmem" [fs/nfsd/nfsd.ko] undefined!
      ERROR: "__movmem" [fs/lockd/lockd.ko] undefined!
      ERROR: "__movmem" [crypto/sha1_generic.ko] undefined!
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>