path: root/arch/sparc64/kernel/vmlinux.lds.S
* [SPARC]: Add missing NOTES section. (David S. Miller, 2007-07-24; 1 file changed, -0/+2)

    This fixes boot failures when the --build-id LD option is actually
    used: without the NOTES section we end up with multiple PT_LOAD
    sections, which the SILO boot loader cannot handle.

    Signed-off-by: David S. Miller <davem@davemloft.net>
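    As a rough sketch (not the exact kernel macro), a NOTES-style
    definition in the linker script collects every ELF note, including
    the .note.gnu.build-id note emitted by --build-id, into a single
    output section so the notes cannot spawn an extra loadable segment:

        /* sketch; the real macro lives in
         * include/asm-generic/vmlinux.lds.h and differs in detail */
        .notes : { *(.note.*) }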
* define new percpu interface for shared data (Fenghua Yu, 2007-07-19; 1 file changed, -4/+2)

    The per cpu data section contains two types of data: one set is
    accessed exclusively by the local cpu, the other is per cpu but also
    shared with remote cpus. In the current kernel these two sets are
    not clearly separated out, so the same data cacheline can end up
    shared between the two sets of data, which results in unnecessary
    bouncing of the cacheline between cpus.

    One way to fix the problem is to cacheline align the remotely
    accessed per cpu data, both at the beginning and at the end. Because
    of the padding at both ends this would likely waste some memory, and
    the interface to achieve it is not clean.

    This patch moves the remotely accessed per cpu data (currently
    marked as ____cacheline_aligned_in_smp) into a different section
    where all the data elements are cacheline aligned, cleanly
    separating local-only data from remotely accessed data.

    Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
    Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
    Cc: Rusty Russell <rusty@rustcorp.com.au>
    Cc: Christoph Lameter <clameter@sgi.com>
    Cc: <linux-arch@vger.kernel.org>
    Cc: "Luck, Tony" <tony.luck@intel.com>
    Cc: Andi Kleen <ak@suse.de>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
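    A minimal linker-script sketch of the resulting layout, assuming the
    .data.percpu and .data.percpu.shared_aligned section names this
    interface uses (the real PERCPU macro is more involved):

        . = ALIGN(PAGE_SIZE);
        __per_cpu_start = .;
        .data.percpu : {
                *(.data.percpu)                 /* local-cpu-only data */
                *(.data.percpu.shared_aligned)  /* remotely accessed, aligned */
        }
        __per_cpu_end = .;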
* sparc64: fix alignment bug in linker definition script (Sam Ravnborg, 2007-05-29; 1 file changed, -5/+6)

    The RO_DATA section was hardcoded to a specific alignment in
    include/asm-generic/vmlinux.lds.h, but for sparc64 this did not
    match the PAGE_SIZE.

    Introduce a new section definition named RO_DATA that takes the
    actual alignment as a parameter; RODATA is provided for backward
    compatibility. On top of this, avoid hardcoding alignment for
    sparc64 in the rest of the script.

    The fix is build-tested on sparc64 + x86_64.

    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
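    A simplified sketch of such a parameterized definition (the real
    macro covers many more read-only subsections):

        #define RO_DATA(align)                          \
                . = ALIGN(align);                       \
                .rodata : {                             \
                        *(.rodata) *(.rodata.*)         \
                }

        /* old fixed-alignment name kept for unconverted archs */
        #define RODATA RO_DATA(4096)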
* all-archs: consolidate .data section definition in asm-generic (Sam Ravnborg, 2007-05-19; 1 file changed, -1/+1)

    With this consolidation we can now modify the .data section
    definition in one spot for all archs.

    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
* all-archs: consolidate .text section definition in asm-generic (Sam Ravnborg, 2007-05-19; 1 file changed, -1/+1)

    Move the definition of the .text section to asm-generic.

    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
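    The pattern behind both consolidation entries above, sketched with
    the TEXT_TEXT/DATA_DATA macro names these patches introduced (treat
    the exact expansions as illustrative):

        /* include/asm-generic/vmlinux.lds.h */
        #define TEXT_TEXT       *(.text)
        #define DATA_DATA       *(.data)

        /* each arch/*/kernel/vmlinux.lds.S then expands the macros */
        .text : { TEXT_TEXT }
        .data : { DATA_DATA }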
* [PATCH] disable init/initramfs.c: architectures (Jean-Paul Saman, 2007-02-11; 1 file changed, -0/+4)

    Update all arch/*/kernel/vmlinux.lds.S to not include space for the
    initramfs when CONFIG_BLK_DEV_INITRAMFS is not selected. This saves
    another 4 kbytes on most platforms (some reserve PAGE_SIZE for the
    initramfs).

    Signed-off-by: Jean-Paul Saman <jean-paul.saman@nxp.com>
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Cc: <linux-arch@vger.kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
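    The per-arch change has roughly this shape (a sketch; the option
    name is taken from the message above, and details vary per arch):

        #ifdef CONFIG_BLK_DEV_INITRAMFS
                . = ALIGN(PAGE_SIZE);
                __initramfs_start = .;
                .init.ramfs : { *(.init.ramfs) }
                __initramfs_end = .;
        #endif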
* [PATCH] relocatable kernel: Kallsyms generate relocatable symbols (Eric W. Biederman, 2006-12-07; 1 file changed, -0/+1)

    Print the addresses of non-absolute symbols relative to _text so
    that ld will generate relocations, allowing a relocatable kernel to
    relocate them. We can't actually use the symbol names because
    kallsyms includes static symbols that are not exported from their
    object files.

    Add the _text symbol definition to the architectures which don't
    define it; otherwise the linker will fail.

    Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
    Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
    Signed-off-by: Andi Kleen <ak@suse.de>
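    On the linker-script side the per-arch addition is just an anchor
    symbol at the start of the image, so tools can emit every address as
    "_text + offset" (a sketch; KERNEL_BASE is an illustrative stand-in
    for the arch's real start address):

        SECTIONS
        {
                . = KERNEL_BASE;        /* illustrative base address */
                _text = .;              /* anchor for relative symbols */
                .text : { *(.text) }
        }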
* [PATCH] vmlinux.lds: consolidate initcall sections (Andrew Morton, 2006-10-27; 1 file changed, -7/+1)

    Add a vmlinux.lds.h helper macro for defining the seven-level
    initcall table, and teach all the architectures to use it.

    This is a prerequisite for a patch which performs initcall
    synchronisation for multithreaded probing.

    Cc: Greg KH <greg@kroah.com>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
    [ Added AVR32 as well ]
    Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>
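    The helper is essentially a list of the per-level input sections,
    expanded inside each arch's initcall output section (a sketch of the
    idea):

        #define INITCALLS               \
                *(.initcall1.init)      \
                *(.initcall2.init)      \
                *(.initcall3.init)      \
                *(.initcall4.init)      \
                *(.initcall5.init)      \
                *(.initcall6.init)      \
                *(.initcall7.init)

        __initcall_start = .;
        .initcall.init : { INITCALLS }
        __initcall_end = .;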
* [SPARC64]: Rename gl_{1,2}insn_patch --> sun4v_{1,2}insn_patch (David S. Miller, 2006-03-20; 1 file changed, -6/+6)

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Initial sun4v TLB miss handling infrastructure. (David S. Miller, 2006-03-20; 1 file changed, -0/+3)

    Things are a little tricky because, unlike sun4u, we have to:

    1) do a hypervisor trap to do the TLB load.
    2) do the TSB lookup calculations by hand.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Sanitize %pstate writes for sun4v. (David S. Miller, 2006-03-20; 1 file changed, -0/+3)

    If we're just switching between different alternate global sets, nop
    it out on sun4v. Also, get rid of all of the alternate global
    save/restore in the OBP CIF trampoline code.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Add initial code to twiddle %gl on trap entry/exit. (David S. Miller, 2006-03-20; 1 file changed, -0/+3)

    Instead of setting/clearing PSTATE_AG we have to change the %gl
    register value on sun4v.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Refine code sequences to get the cpu id. (David S. Miller, 2006-03-20; 1 file changed, -0/+3)

    On uniprocessor, it's always zero, so optimize for that. On SMP, the
    jmpl to the stub kills the return address stack in the cpu branch
    prediction logic, so expand the code sequence inline and use a code
    patching section to fix things up. This also allows better and
    explicit register selection, which will be taken advantage of in a
    future changeset.

    The hard_smp_processor_id() function is big, so do not inline it.

    Fix up the tests for Jalapeno to also match Serrano chips. These
    tests want the "jbus Ultra-IIIi" cases to match, so that is what we
    should test for.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Access TSB with physical addresses when possible. (David S. Miller, 2006-03-20; 1 file changed, -0/+4)

    This way we don't need to lock the TSB into the TLB. The trick is
    that every TSB load/store is registered into a special instruction
    patch section. The default uses virtual addresses, and the patch
    instructions use physical address load/stores.

    We can't do this on all chips because only cheetah+ and later have
    the physical variant of the atomic quad load.

    Signed-off-by: David S. Miller <davem@davemloft.net>
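    In the linker script, such a patch section is a delimited region the
    boot code can walk to rewrite the registered instructions in place;
    a sketch using the .tsb_phys_patch name from this change (the
    boundary symbol names are assumptions):

        __tsb_phys_patch = .;
        .tsb_phys_patch : { *(.tsb_phys_patch) }
        __tsb_phys_patch_end = .;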
* [SPARC64]: Increase swapper_tsb size to 32K. (David S. Miller, 2006-03-20; 1 file changed, -3/+0)

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Move away from virtual page tables, part 1. (David S. Miller, 2006-03-20; 1 file changed, -0/+3)

    We now use the TSB hardware assist features of the UltraSPARC MMUs.

    SMP is currently knowingly broken: we need to find another place to
    store the per-cpu base pointers. We hid them away in the TSB base
    register, and that obviously will not work any more :-)

    Another known broken case is non-8KB base page size.

    Also noticed that flush_tlb_all() is not referenced anywhere, only
    the internal __flush_tlb_all() (local cpu only) is used by the
    sparc64 port, so we can get rid of flush_tlb_all().

    The kernel gets its own 8KB TSB (swapper_tsb) and each address space
    gets its own private 8K TSB. Later we can add code to dynamically
    increase the size of the per-process TSB as the RSS grows. An 8KB
    TSB is good enough for up to about a 4MB RSS, after which the TSB
    starts to incur many capacity and conflict misses.

    We even accumulate OBP translations into the kernel TSB.

    Another area for refinement is large page size support. We could use
    a secondary address space TSB to handle those.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC]: Use STABS_DEBUG and DWARF_DEBUG macros in vmlinux.lds.S (David S. Miller, 2005-12-28; 1 file changed, -14/+4)

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [SPARC64]: Add CONFIG_DEBUG_PAGEALLOC support. (David S. Miller, 2005-09-25; 1 file changed, -2/+1)

    The trick is that we do the kernel linear mapping TLB miss starting
    with an instruction sequence like this:

        ba,pt   %xcc, kvmap_load
         xor    %g2, %g4, %g5

    succeeded by an instruction sequence which performs a full page
    table walk starting at swapper_pg_dir.

    We first take over the trap table from the firmware. Then, using
    this constant PTE generation for the linear mapping area above, we
    build the kernel page tables for the linear mapping.

    After this is set up, we patch that branch above into a "nop", which
    will cause TLB misses to fall through to the full page table walk.

    With this, the page unmapping for CONFIG_DEBUG_PAGEALLOC is trivial.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* [PATCH] Kprobes: prevent possible race conditions, sparc64 changes (Prasanna S Panchamukhi, 2005-09-07; 1 file changed, -0/+1)

    This patch contains the sparc64 architecture specific changes to
    prevent the possible race conditions.

    Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [SPARC64]: Add __read_mostly support. (David S. Miller, 2005-07-10; 1 file changed, -0/+2)

    Signed-off-by: David S. Miller <davem@davemloft.net>
* Linux-2.6.12-rc2 [tag: v2.6.12-rc2] (Linus Torvalds, 2005-04-16; 1 file changed, -0/+106)

    Initial git repository build. I'm not bothering with the full
    history, even though we have it. We can create a separate
    "historical" git archive of that later if we want to, and in the
    meantime it's about 3.2GB when imported into git - space that would
    just make the early git days unnecessarily complicated, when we
    don't have a lot of good infrastructure for it.

    Let it rip!