Commit message  Author  Age  Files  Lines
* [SPARC64]: Sun4v specific ASI defines.  David S. Miller  2006-03-20  1  -0/+9
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Niagara optimized memcpy() and copy_{to,from}_user().  David S. Miller  2006-03-20  5  -0/+474
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Add Niagara init-store twin-load ASI defines.  David S. Miller  2006-03-20  1  -1/+8
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Add some hypervisor tlb_type checks.  David S. Miller  2006-03-20  2  -8/+30
    And more consistently check cheetah{,_plus} instead of assuming anything
    not spitfire is cheetah{,_plus}.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Add 'hypervisor' to ultra_tlb_type enumeration.  David S. Miller  2006-03-20  1  -0/+1
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: SUN4V hypervisor TLB flush support code.  David S. Miller  2006-03-20  1  -10/+214
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: SUN4V hypervisor interface defines.  David S. Miller  2006-03-20  1  -0/+2072
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Refine register window trap handling.  David S. Miller  2006-03-20  7  -440/+447
    When saving and restoring trap state, do the window spill/fill handling
    inline so that we never trap deeper than 2 trap levels.  This is important
    for chips like Niagara.
    The window fixup code is massively simplified, and many more improvements
    are now possible.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Add explicit register args to trap state loading macros.  David S. Miller  2006-03-20  6  -71/+64
    This, as well as making the code cleaner, allows a simplification in the
    TSB miss handling path.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Refine code sequences to get the cpu id.  David S. Miller  2006-03-20  9  -134/+144
    On uniprocessor, it's always zero, so optimize for that.  On SMP, the jmpl
    to the stub kills the return address stack in the cpu branch prediction
    logic, so expand the code sequence inline and use a code patching section
    to fix things up.  This also allows better and more explicit register
    selection, which will be taken advantage of in a future changeset.
    The hard_smp_processor_id() function is big, so do not inline it.
    Fix up the tests for Jalapeno to also match Serrano chips.  These tests
    want "jbus Ultra-IIIi" cases to match, so that is what we should test for.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Turn off TSB growing for now.  David S. Miller  2006-03-20  2  -15/+1
    There are several tricky races involved with growing the TSB, so just use
    base-size TSBs for user contexts and revisit enabling this later.
    One part of the SMP problem is that tsb_context_switch() can see partially
    updated TSB configuration state if tsb_grow() is running in parallel.
    That's easily solved with a seqlock taken as a writer by tsb_grow() and
    taken as a reader in tsb_context_switch() to capture all the TSB config
    state.
    Then there is flush_tsb_user() running in parallel with a tsb_grow().  In
    theory we could take the seqlock as a reader there too, and just resample
    the TSB pointer and reflush, but that looks really ugly.
    Lastly, I believe there is a case with threads that results in a TSB entry
    lock bit being set spuriously, which will cause the next access to that
    TSB entry to wedge the cpu (since the TSB entry lock bit will never
    clear).  It's either copy_tsb() or some bug elsewhere in the TSB assembly.
    Signed-off-by: David S. Miller <davem@davemloft.net>
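    A minimal sketch of the seqlock scheme proposed above, using the standard
    Linux seqlock API; the config structure and helper names here are
    hypothetical, not the actual sparc64 code:

        #include <linux/seqlock.h>

        struct my_tsb_config {                 /* hypothetical stand-in for mm context state */
                void *tsb;
                unsigned long tsb_nentries;
        };

        static seqlock_t tsb_config_lock = SEQLOCK_UNLOCKED;

        /* Writer: tsb_grow() would publish the new TSB base and size together. */
        static void publish_tsb_config(struct my_tsb_config *cfg,
                                       void *new_tsb, unsigned long nentries)
        {
                write_seqlock(&tsb_config_lock);
                cfg->tsb = new_tsb;
                cfg->tsb_nentries = nentries;
                write_sequnlock(&tsb_config_lock);
        }

        /* Reader: tsb_context_switch() would capture a consistent snapshot. */
        static void snapshot_tsb_config(struct my_tsb_config *cfg,
                                        void **tsb, unsigned long *nentries)
        {
                unsigned seq;

                do {
                        seq = read_seqbegin(&tsb_config_lock);
                        *tsb = cfg->tsb;
                        *nentries = cfg->tsb_nentries;
                } while (read_seqretry(&tsb_config_lock, seq));
        }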
* [SPARC64]: Correctable ECC errors cannot occur at trap level > 0.  David S. Miller  2006-03-20  3  -24/+4
    They are disrupting traps, which by the SPARC V9 definition means they can
    only occur when interrupts are enabled in the %pstate register.  This
    never happens in any of the trap handling code running at trap levels > 0,
    so just mark it as an unexpected trap.
    This allows us to kill off the cee_stuff member of struct thread_info.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Access TSB with physical addresses when possible.  David S. Miller  2006-03-20  9  -53/+234
    This way we don't need to lock the TSB into the TLB.  The trick is that
    every TSB load/store is registered into a special instruction patch
    section.  The default uses virtual addresses, and the patch instructions
    use physical address load/stores.
    We can't do this on all chips because only cheetah+ and later have the
    physical variant of the atomic quad load.
    Signed-off-by: David S. Miller <davem@davemloft.net>
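    A rough sketch of how such a boot-time instruction patch section can be
    applied from C; the section entry layout and helper names below are
    illustrative assumptions rather than the actual sparc64 implementation:

        #include <linux/init.h>

        /* One entry per patchable TSB access: where the instruction lives,
         * and the physical-address variant to substitute at boot. */
        struct my_tsb_patch_entry {
                unsigned int *insn_addr;   /* address of the default (virtual) insn */
                unsigned int  phys_insn;   /* replacement using a physical-ASI access */
        };

        extern struct my_tsb_patch_entry __my_tsb_patch_start[], __my_tsb_patch_end[];

        /* Called once at boot, only on chips with the physical atomic quad load. */
        static void __init my_patch_tsb_accesses(void)
        {
                struct my_tsb_patch_entry *p;

                for (p = __my_tsb_patch_start; p < __my_tsb_patch_end; p++) {
                        *p->insn_addr = p->phys_insn;
                        my_flush_icache(p->insn_addr);  /* hypothetical I-cache flush helper */
                }
        }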
* [SPARC64]: Kill out-of-date commentary in asm-sparc64/tsb.h  David S. Miller  2006-03-20  1  -8/+0
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Don't clobber alt-global %g4 on window fixups.  David S. Miller  2006-03-20  1  -1/+1
    If we are returning back to kernel mode, %g4 could be live (for example,
    in the case where we window spill in the etrap code).  So do not change
    its value if going back to the kernel.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Fix race in LOAD_PER_CPU_BASE()  David S. Miller  2006-03-20  4  -13/+18
    Since we use %g5 itself as a temporary, it can get clobbered if we take an
    interrupt mid-stream, and we can thus end up with the final %g5 value too
    early as a result of rtrap processing.
    Set %g5 at the very end, atomically, to avoid this problem.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Kill swapper_pgd_zero, totally unused.  David S. Miller  2006-03-20  1  -3/+0
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Fix too early reference to %g6  David S. Miller  2006-03-20  1  -2/+5
    %g6 is not necessarily set to current_thread_info() at
    sparc64_realfault_common.  So store the fault code and address after we
    invoke etrap and %g6 is properly set up.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Kill hard-coded %pstate setting in sparc_exit.  David S. Miller  2006-03-20  1  -2/+3
    Just flip the PSTATE_IE bit off of whatever %pstate is currently set to;
    PSTATE_IE is guaranteed to be enabled when we get here.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Increase swapper_tsb size to 32K.  David S. Miller  2006-03-20  4  -20/+16
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Kill sole argument passed to setup_tba().  David S. Miller  2006-03-20  3  -10/+4
    No longer used, and move the extern declaration to a header file.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Kill PROM locked TLB entry preservation code.  David S. Miller  2006-03-20  3  -296/+10
    It is totally unnecessary complexity.  After we take over the trap table,
    we handle all PROM tlb misses fully.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Use sparc64_highest_unlocked_tlb_ent in __tsb_context_switch()  David S. Miller  2006-03-20  1  -6/+8
    Instead of an ugly hard-coded value.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Fix bogus flush instruction usage.  David S. Miller  2006-03-20  5  -17/+30
    Some of the trap code was still assuming that alternate global %g6 was
    hard coded with current_thread_info().
    Let's just consistently flush at KERNBASE when we need a pipeline
    synchronization.  That address is locked into the TLB and will always
    work.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Fix incorrect TSB lock bit handling.  David S. Miller  2006-03-20  2  -3/+4
    The TSB_LOCK_BIT define is actually a special value shifted down by 32
    bits for the assembler code macros.  In C code, this isn't what we want.
    Signed-off-by: David S. Miller <davem@davemloft.net>
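    To illustrate the distinction (with made-up constant names, not the real
    sparc64 headers): the lock bit lives in the upper half of the 64-bit TSB
    tag, so assembler macros operating on the high 32-bit word use a
    pre-shifted value, while C code working on the full 64-bit tag must not:

        #define MY_TSB_TAG_LOCK_BIT   47                                /* bit position in the 64-bit tag    */
        #define MY_TSB_TAG_LOCK_HIGH  (1 << (MY_TSB_TAG_LOCK_BIT - 32)) /* for asm on the high 32-bit word   */

        /* C code must test the full 64-bit tag value. */
        static inline int my_tsb_tag_locked(unsigned long tag)
        {
                return (tag & (1UL << MY_TSB_TAG_LOCK_BIT)) != 0;
        }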
* [SPARC64]: Kill {save,restore}_alternate_globals()  David S. Miller  2006-03-20  2  -76/+1
    No longer needed now that we no longer have hard-coded alternate global
    register usage.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Preload TSB entries from update_mmu_cache().  David S. Miller  2006-03-20  3  -0/+29
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Dynamically grow TSB in response to RSS growth.  David S. Miller  2006-03-20  5  -10/+184
    As the RSS grows, grow the TSB in order to reduce the likelihood of hash
    collisions and thus poor hit rates in the TSB.
    This definitely needs some serious tuning.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Add infrastructure for dynamic TSB sizing.  David S. Miller  2006-03-20  7  -58/+142
    This also cleans up tsb_context_switch().  The assembler routine is now
    __tsb_context_switch() and the former is an inline function that picks out
    the bits from the mm_struct and passes them into the assembler code as
    arguments.
    setup_tsb_parms() computes the locked TLB entry to map the TSB.  Later,
    when we support using the physical address quad load instructions of
    Cheetah+ and later, we'll simply use the physical address for the TSB
    register value and set the map virtual address and PTE both to zero.
    Signed-off-by: David S. Miller <davem@davemloft.net>
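    A sketch of the split described above; the function and field names are
    illustrative assumptions, not the actual sparc64 definitions.  The C
    inline gathers the values out of the mm context and hands them to the
    assembler entry point as plain arguments:

        /* Assembler routine: takes the raw values it needs as arguments. */
        extern void __my_tsb_context_switch(unsigned long pgd_paddr,
                                            unsigned long tsb_reg_val,
                                            unsigned long tsb_map_vaddr,
                                            unsigned long tsb_map_pte);

        /* Hypothetical mm-context state holding the precomputed values. */
        struct my_mm_ctx {
                unsigned long pgd_paddr;
                unsigned long tsb_reg_val;
                unsigned long tsb_map_vaddr;
                unsigned long tsb_map_pte;
        };

        /* Inline wrapper: picks the bits out and calls the assembler code. */
        static inline void my_tsb_context_switch(struct my_mm_ctx *ctx)
        {
                __my_tsb_context_switch(ctx->pgd_paddr, ctx->tsb_reg_val,
                                        ctx->tsb_map_vaddr, ctx->tsb_map_pte);
        }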
* [SPARC64]: TSB refinements.  David S. Miller  2006-03-20  4  -30/+45
    Move {init_new,destroy}_context() out of line.
    Do not put huge pages into the TSB, only base page size translations.
    There are some clever things we could do here, but for now let's be
    correct instead of fancy.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Eliminate all usage of hard-coded trap globals.  David S. Miller  2006-03-20  16  -168/+287
    UltraSPARC has special sets of global registers which are switched to for
    certain trap types.  There is one set for MMU related traps, one set for
    Interrupt Vector processing, and another set (called the Alternate
    globals) for all other trap types.
    For what seems like forever we've hard coded the values in some of these
    trap registers.  Some examples include:
    1) Interrupt Vector global %g6 holds the current processor's interrupt
       work struct where received interrupts are managed for IRQ handler
       dispatch.
    2) MMU global %g7 holds the base of the page tables of the currently
       active address space.
    3) Alternate global %g6 held the current_thread_info() value.
    Such hard coding has resulted in some serious issues in many areas.  There
    are some code sequences where having another register available would help
    clean up the implementation.  Taking traps such as cross-calls from the
    OBP firmware requires some tricky code sequences wherein we have to save
    away and restore all of the special sets of global registers when we
    enter/exit OBP.  We were also using the IMMU TSB register on SMP to hold
    the per-cpu area base address, which doesn't work any longer now that we
    actually use the TSB facility of the cpu.
    The implementation is pretty straightforward.  One tricky bit is getting
    the current processor ID, as that is different on different cpu variants.
    We use a stub with a fancy calling convention which we patch at boot time.
    The calling convention is that the stub is branched to and the (PC - 4) to
    return to is in register %g1.  The cpu number is left in %g6.  This stub
    can be invoked by using the __GET_CPUID macro.
    We use an array of per-cpu trap state to store the current thread and the
    physical address of the current address space's page tables.  The
    TRAP_LOAD_THREAD_REG macro loads %g6 with the current thread from this
    table; it uses __GET_CPUID and also clobbers %g1.  TRAP_LOAD_IRQ_WORK is
    used by the interrupt vector processing to load the current processor's
    IRQ software state into %g6; it also uses __GET_CPUID and clobbers %g1.
    Finally, TRAP_LOAD_PGD_PHYS loads the physical address base of the current
    address space's page tables into %g7; it clobbers %g1 and uses
    __GET_CPUID.
    Many refinements are possible, as well as some tuning, with this stuff in
    place.
    Signed-off-by: David S. Miller <davem@davemloft.net>
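    A simplified sketch of the per-cpu trap state table idea described above;
    the structure and names are illustrative, not the exact sparc64
    definitions:

        #include <linux/threads.h>

        /* One entry per cpu, indexed by the cpu number __GET_CPUID provides. */
        struct my_trap_per_cpu {
                unsigned long thread;          /* current_thread_info() for this cpu   */
                unsigned long pgd_paddr;       /* physical base of current page tables */
                unsigned long irq_worklist;    /* interrupt vector software state      */
        } __attribute__((aligned(64)));        /* keep each entry on its own cache line */

        extern struct my_trap_per_cpu my_trap_block[NR_CPUS];

        /* C-side view of what TRAP_LOAD_PGD_PHYS computes in assembler. */
        static inline unsigned long my_current_pgd_paddr(int cpu)
        {
                return my_trap_block[cpu].pgd_paddr;
        }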
* [SPARC64]: Kill pgtable quicklists and use SLAB.  David S. Miller  2006-03-20  5  -166/+44
    Taking a nod from the powerpc port.
    With the per-cpu caching of both the page allocator and SLAB, the pgtable
    quicklist scheme becomes relatively silly and primitive.
    Signed-off-by: David S. Miller <davem@davemloft.net>
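    For illustration, SLAB-backed page table allocation in that era typically
    looks something like the following; the cache name and helpers are
    assumptions, not the exact sparc64 code:

        #include <linux/init.h>
        #include <linux/mm.h>
        #include <linux/slab.h>

        static kmem_cache_t *my_pgtable_cache;     /* 2.6.16-era slab cache type */

        void __init my_pgtable_cache_init(void)
        {
                my_pgtable_cache = kmem_cache_create("my_pgtable_cache",
                                                     PAGE_SIZE, PAGE_SIZE,
                                                     SLAB_HWCACHE_ALIGN,
                                                     NULL, NULL);
        }

        static inline pgd_t *my_pgd_alloc(struct mm_struct *mm)
        {
                return kmem_cache_alloc(my_pgtable_cache, GFP_KERNEL);
        }

        static inline void my_pgd_free(pgd_t *pgd)
        {
                kmem_cache_free(my_pgtable_cache, pgd);
        }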
* [SPARC64]: No need to D-cache color page tables any longer.  David S. Miller  2006-03-20  3  -122/+55
    Unlike the virtual page tables, the new TSB scheme does not require this
    ugly hack.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Move away from virtual page tables, part 1.  David S. Miller  2006-03-20  30  -888/+693
    We now use the TSB hardware assist features of the UltraSPARC MMUs.
    SMP is currently knowingly broken; we need to find another place to store
    the per-cpu base pointers.  We hid them away in the TSB base register, and
    that obviously will not work any more :-)
    Another known broken case is non-8KB base page size.
    Also noticed that flush_tlb_all() is not referenced anywhere, only the
    internal __flush_tlb_all() (local cpu only) is used by the sparc64 port,
    so we can get rid of flush_tlb_all().
    The kernel gets its own 8KB TSB (swapper_tsb) and each address space gets
    its own private 8K TSB.  Later we can add code to dynamically increase the
    size of the per-process TSB as the RSS grows.  An 8KB TSB is good enough
    for up to about a 4MB RSS, after which the TSB starts to incur many
    capacity and conflict misses.
    We even accumulate OBP translations into the kernel TSB.
    Another area for refinement is large page size support.  We could use a
    secondary address space TSB to handle those.
    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC]: BUG_ON() Conversion in arch/sparc/kernel/ioport.c  Eric Sesterhenn  2006-03-20  1  -25/+15
    This changes if () BUG(); constructs to BUG_ON(), which is cleaner and can
    be better optimized away.
    Signed-off-by: Eric Sesterhenn <snakebyte@gmx.de>
    Signed-off-by: David S. Miller <davem@davemloft.net>
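    The conversion pattern, shown on a made-up example rather than the actual
    ioport.c hunks:

        /* Before: open-coded check. */
        if (res == NULL)
                BUG();

        /* After: equivalent, and easier for the compiler to optimize away
         * entirely when CONFIG_BUG is disabled. */
        BUG_ON(res == NULL);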
* [SPARC64]: fix sparc_floppy_irq's auxio_register resetting  Bernhard R Link  2006-03-20  1  -1/+1
    The patch "[SPARC64]: Get rid of fast IRQ feature" moved the code from
    arch/sparc64/kernel/entry.S:
        lduba [%g7] ASI_PHYS_BYPASS_EC_E, %g5
        or    %g5, AUXIO_AUX1_FTCNT, %g5
        stba  %g5, [%g7] ASI_PHYS_BYPASS_EC_E
        andn  %g5, AUXIO_AUX1_FTCNT, %g5
        stba  %g5, [%g7] ASI_PHYS_BYPASS_EC_E
    to arch/sparc64/kernel/irq.c:
        val = readb(auxio_register);
        val |= AUXIO_AUX1_FTCNT;
        writeb(val, auxio_register);
        val &= AUXIO_AUX1_FTCNT;
        writeb(val, auxio_register);
    This looks like it is missing a bitwise not, which is reintroduced by this
    patch.  Due to lack of a floppy device, I could not test it, but it looks
    evident.
    Signed-off-by: Bernhard R Link <brlink@debian.org>
    Signed-off-by: David S. Miller <davem@davemloft.net>
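    In other words, the assembler's andn (and-not) has to become a C mask with
    an explicit complement; the corrected C sequence would read:

        val = readb(auxio_register);
        val |= AUXIO_AUX1_FTCNT;
        writeb(val, auxio_register);
        val &= ~AUXIO_AUX1_FTCNT;      /* the '~' is what the conversion dropped */
        writeb(val, auxio_register);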
* Linux 2.6.16  (tag: v2.6.16)  Linus Torvalds  2006-03-19  1  -1/+1
* [PATCH] Remove obsolete CREDITS address  Andrea Arcangeli  2006-03-19  1  -1/+0
    This address is going to be obsolete, so I should update it.
* Merge branch 'upstream' of git://ftp.linux-mips.org/pub/scm/upstream-linus  Linus Torvalds  2006-03-19  16  -134/+217
|\
    * 'upstream' of git://ftp.linux-mips.org/pub/scm/upstream-linus:
      [MIPS] SB1: Check for -mno-sched-prolog if building corelis debug kernel.
      [MIPS] Sibyte: Fix race in sb1250_gettimeoffset().
      [MIPS] Sibyte: Fix interrupt timer off by one bug.
      [MIPS] Sibyte: Fix M_SCD_TIMER_INIT and M_SCD_TIMER_CNT wrong field width.
      [MIPS] Protect more of timer_interrupt() by xtime_lock.
      [MIPS] Work around bad code generation for <asm/io.h>.
      [MIPS] Simple patch to power off DBAU1200
      [MIPS] Fix DBAu1550 software power off.
      [MIPS] local_r4k_flush_cache_page fix
      [MIPS] SB1: Fix interrupt disable hazard.
      [MIPS] Get rid of the IP22-specific code in arclib.
      Update MAINTAINERS entry for MIPS.
| * [MIPS] SB1: Check for -mno-sched-prolog if building corelis debug kernel.  Ralf Baechle  2006-03-18  1  -1/+2
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * [MIPS] Sibyte: Fix race in sb1250_gettimeoffset().  Ralf Baechle  2006-03-18  3  -18/+64
      From Dave Johnson <djohnson+linuxmips@sw.starentnetworks.com>:
      sb1250_gettimeoffset() simply reads the current cpu 0 timer remaining
      value.  However, once this counter reaches 0 and the interrupt is raised,
      it immediately resets and begins to count down again.
      If sb1250_gettimeoffset() is called on cpu 1 via do_gettimeofday() after
      the timer has reset, but prior to cpu 0 processing the interrupt and
      taking write_seqlock() in timer_interrupt(), it will return a full value
      (or close to it), causing time to jump backwards 1ms.  Once cpu 0 handles
      the interrupt and timer_interrupt() gets far enough along, it will jump
      forward 1ms.
      Fix this problem by implementing mips_hpt_*() on sb1250 using a spare
      timer unrelated to the existing periodic interrupt timers.  It runs at
      1 MHz with a full 23-bit counter.
      This eliminated the custom do_gettimeoffset() for sb1250 and allowed use
      of the generic fixed_rate_gettimeoffset() using mips_hpt_*() and
      timerhi/timerlo.
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * [MIPS] Sibyte: Fix interrupt timer off by one bug.  Ralf Baechle  2006-03-18  1  -2/+2
      From Dave Johnson <djohnson+linuxmips@sw.starentnetworks.com>:
      The timers need to be loaded with 1 less than the desired interval, not
      the interval itself.
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * [MIPS] Sibyte: Fix M_SCD_TIMER_INIT and M_SCD_TIMER_CNT wrong field width.  Ralf Baechle  2006-03-18  1  -2/+3
      From Dave Johnson <djohnson+linuxmips@sw.starentnetworks.com>:
      The field width should be 23 bits, not 20 bits.
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * [MIPS] Protect more of timer_interrupt() by xtime_lock.  Ralf Baechle  2006-03-18  1  -2/+4
      From Dave Johnson <djohnson+linuxmips@sw.starentnetworks.com>:
      * do_timer() expects the arch-specific handler to take the lock as it
        modifies jiffies[_64] and xtime.
      * writing timerhi/lo in timer_interrupt() will mess up
        fixed_rate_gettimeoffset() which reads timerhi/lo.
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
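      For reference, the usual 2.6-era shape of such a handler: the xtime_lock
      seqlock is held as a writer around do_timer() and around any state that
      gettimeoffset readers consume.  This is a sketch under those assumptions,
      not the actual Sibyte code:

        #include <linux/interrupt.h>
        #include <linux/seqlock.h>
        #include <linux/time.h>

        irqreturn_t my_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
        {
                write_seqlock(&xtime_lock);

                /* ... acknowledge the timer hardware here ... */

                do_timer(regs);          /* updates jiffies_64 and xtime */

                /* ... update timerhi/timerlo-style state inside the lock, so
                 * fixed_rate_gettimeoffset() readers see it consistently. */

                write_sequnlock(&xtime_lock);
                return IRQ_HANDLED;
        }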
| * [MIPS] Work around bad code generation for <asm/io.h>.  Ralf Baechle  2006-03-18  1  -3/+15
      If a call to set_io_port_base() was being followed by usage of
      mips_io_port_base in the same function, gcc was possibly using the old
      value due to some clever abuse of const.  Adding a barrier will keep the
      optimization and result in correct code with latest gcc.
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
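      A sketch of the workaround pattern (an assumed shape, not the exact
      asm/io.h hunk): a compiler barrier after the update stops gcc from
      reusing a value it cached across the const-cast trick:

        #include <linux/compiler.h>

        /* Declared const so gcc may treat reads as loop-invariant. */
        extern const unsigned long mips_io_port_base;

        static inline void my_set_io_port_base(unsigned long base)
        {
                /* Writes go through a cast that defeats the const; the barrier
                 * forces gcc to forget any copy it has already loaded. */
                *(unsigned long *) &mips_io_port_base = base;
                barrier();
        }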
| * [MIPS] Simple patch to power off DBAU1200  Matej Kupljen  2006-03-18  1  -0/+3
      Signed-off-by: Matej Kupljen <matej.kupljen@ultra.si>
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * [MIPS] Fix DBAu1550 software power off.  Sergei Shtylylov  2006-03-18  1  -3/+4
      Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * [MIPS] local_r4k_flush_cache_page fix  Atsushi Nemoto  2006-03-18  5  -6/+15
      If dcache_size != icache_size or dcache_size != scache_size, or the cache
      is set-associative, the icache/scache is not flushed properly.  Make
      blast_?cache_page_indexed() mask its index value correctly.  Also, use
      the physical address for physically indexed pcache/scache.
      Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * [MIPS] SB1: Fix interrupt disable hazard.  Ralf Baechle  2006-03-18  1  -77/+103
      The SB1 core has a three-cycle interrupt disable hazard, but we were
      wrongly treating it as fully interlocked.
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * [MIPS] Get rid of the IP22-specific code in arclib.  Ralf Baechle  2006-03-18  1  -19/+0
      The IP22-specific code breaks the kernel build if sgiwd93 is configured
      as a module.
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>