path: root/arch/ia64/kernel
Commit message (Author, Age; Files, Lines)
* [IA64] Put the space for cpu0 per-cpu area into .data section (Tony Luck, 2008-09-29; 2 files, -7/+10)
  The initial fix for making sure that we can access percpu variables in all C code
  (commit 10617bbe84628eb18ab5f723d3ba35005adde143) inadvertently allocated the memory in the
  "percpu" section of the vmlinux ELF executable. This confused kexec/dump.
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] kexec fails on systems with blocks of uncached memory (Jay Lan, 2008-09-22; 1 file, -2/+3)
  Currently a memory segment in the memory map with attribute EFI_MEMORY_UC is denoted as
  "System RAM" in /proc/iomem, while memory with attribute (EFI_MEMORY_WB|EFI_MEMORY_UC) is
  labeled the same. The kexec utility then includes uncached memory as part of vmcore, and the
  kdump kernel MCA'ed when it tried to save the vmcore to a disk: a normal "cached" access may
  cause MCAs. This patch labels memory with attribute EFI_MEMORY_UC only as "Uncached RAM" so
  that kexec knows not to include it in the vmcore. I will submit a separate kexec-tools patch
  to the kexec list.
  Signed-off-by: Jay Lan <jlan@sgi.com>
  Acked-by: Simon Horman <horms@verge.net.au>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Ski simulator doesn't need check_sal_cache_flush (Alex Chiang, 2008-09-22; 1 file, -0/+2)
  Peter Chubb reported that commit 3463a93def55c309f3c0d0a8aaf216be3be42d64
  ("Update check_sal_cache_flush to use platform_send_ipi()") broke Ski because it does not
  implement IPIs. Tony Luck suggested we just #ifndef out the call, since the simulator does not
  have the SAL bug that this code is attempting to detect and work around.
  Signed-off-by: Alex Chiang <achiang@hp.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] prevent ia64 from invoking irq handlers on offline CPUs (Paul E. McKenney, 2008-09-10; 1 file, -3/+1)
  Make ia64 refrain from clearing a given to-be-offlined CPU's bit in the cpu_online_mask until
  it has processed pending irqs. This change prevents other CPUs from being blindsided by an
  apparently offline CPU nevertheless changing globally visible state. Also remove the existing
  redundant cpu_clear(cpu, cpu_online_map).
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] fix compile failure with non modular builds (James Bottomley, 2008-09-10; 1 file, -21/+0)
  The previous change broke non-modular builds by moving an essential function into modules.c.
  Fix this by moving it out again and into asm/sections.h as an inline. To do this, the
  definitions of struct fdesc and struct got_val have been lifted out of modules.c and put in
  asm/elf.h where they belong.
  Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* lib: Correct printk %pF to work on all architectures (James Bottomley, 2008-09-09; 1 file, -0/+12)
  The %pF format was introduced by "vsprintf: add support for '%pS' and '%pF' pointer formats"
  in commit 0fe1ef24f7bd0020f29ffe287dfdb9ead33ca0b2. However, the way it is currently coded
  doesn't work on parisc64, for two reasons: 1) parisc isn't in the #ifdef and 2) parisc has a
  different format for function descriptors. Make dereference_function_descriptor() more
  accommodating by allowing architecture overrides. The three overrides (for parisc64, ppc64
  and ia64) go in arch/*/kernel/module.c because that's where the kernel-internal linker, which
  knows how to deal with function descriptors, sits.
  Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
  Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Acked-by: Tony Luck <tony.luck@intel.com>
  Acked-by: Kyle McMartin <kyle@mcmartin.ca>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
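  For readers unfamiliar with function-descriptor ABIs, here is a minimal sketch of what such an
  architecture override can look like, assuming an ia64-style struct fdesc whose first field,
  ip, holds the real entry address (not necessarily the exact code merged by this commit):

    #include <linux/uaccess.h>      /* probe_kernel_address() */
    #include <asm/elf.h>            /* struct fdesc, moved there by the commit above */

    /* On function-descriptor ABIs a "function pointer" points at a descriptor;
     * the real entry address lives in the descriptor's ip field.  Read it
     * safely and fall back to the original pointer if the access faults. */
    void *dereference_function_descriptor(void *ptr)
    {
            struct fdesc *desc = ptr;
            void *entry;

            if (!probe_kernel_address(&desc->ip, entry))
                    ptr = entry;
            return ptr;
    }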
* [IA64] Fix ia64 build failure when CONFIG_SFC=m (Robin Holt, 2008-08-25; 1 file, -0/+1)
  CONFIG_SFC=m uses topology_core_siblings() which, for ia64, expects cpu_core_map to be
  exported. It is not. This patch exports the needed symbol.
  Maintainers note: This really looks like the wrong thing to do ... it would be much better for
  the kernel to export an API to provide drivers like this with the data they need (which in the
  case of this driver seems to be an estimate of the effective parallelism available on the
  platform). But x86 has exported this forever ... so go with the flow until such an API is
  defined.
  Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
  Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
  Signed-off-by: Robin Holt <holt@sgi.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Shrink shadow_flush_counts to a short array to save 8k of per_cpu area (Robin Holt, 2008-08-18; 1 file, -4/+4)
  Making allmodconfig will break the current build. This patch shrinks
  per_cpu__shadow_flush_counts from 16k to 8k, which frees enough space to allow allmodconfig to
  complete successfully.
  Fixes http://bugzilla.kernel.org/show_bug.cgi?id=11338
  Signed-off-by: Robin Holt <holt@sgi.com>
  Acked-by: Jack Steiner <steiner@sgi.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Ensure cpu0 can access per-cpu variables in early boot code (Tony Luck, 2008-08-12; 4 files, -9/+40)
  ia64 handles per-cpu variables a little differently from other architectures in that it maps
  the physical memory allocated for each cpu at a constant virtual address
  (0xffffffffffff0000). This mapping is not enabled until the architecture-specific cpu_init()
  function is run, which causes problems since some generic code runs before that point. In
  particular, when CONFIG_PRINTK_TIME is enabled the boot cpu will trap on the access to
  per-cpu memory at the first printk() call, so the boot will fail without the kernel printing
  anything to the console. Fix this by allocating percpu memory for cpu0 in the kernel data
  section and doing all initialization to enable percpu access in head.S before calling any
  generic code. Other cpus must take care not to access per-cpu variables too early, but their
  code path from start_secondary() to cpu_init() is all in arch/ia64.
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Cleanup generated file not ignored by .gitignore (Robin Holt, 2008-08-04; 1 file, -0/+1)
  arch/ia64/kernel/vmlinux.lds is a generated file. Tell git to ignore it.
  Signed-off-by: Robin Holt <holt@sgi.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] pv_ops: fix ivt.S paravirtualization (Isaku Yamahata, 2008-08-04; 1 file, -2/+2)
  Recent kernels are not booting on some HP systems (though they do boot on others). James and
  Willy reported the problem, and James did the bisection to find the commit that caused it:
  498c5170472ff0c03a29d22dbd33225a0be038f4, "[IA64] pvops: paravirtualize ivt.S". Two
  instructions were wrongly paravirtualized such that the _FROM_ macro had been used where
  _TO_ was intended.
  Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
  Cc: "Wilcox, Matthew R" <matthew.r.wilcox@intel.com>
  Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Move include/asm-ia64 to arch/ia64/include/asm (Tony Luck, 2008-08-01; 6 files, -10/+10)
  After moving the include files there were a few clean-ups:
  1) Some files used #include <asm-ia64/xyz.h>; changed to <asm/xyz.h>.
  2) Some comments alerted maintainers to look at various header files to make matching updates
     if certain code were to be changed. Updated these comments to use the new include paths.
  3) Some header files mentioned their own names in initial comments. Just deleted these self
     references.
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* tracehook: wait_task_inactive (Roland McGrath, 2008-07-26; 1 file, -2/+2)
  This extends wait_task_inactive() with a new argument so it can be used in a "soft" mode where
  it will check for the task changing state unexpectedly and back off. There is no change to
  existing callers. This lays the groundwork to allow robust, noninvasive tracing that can try
  to sample a blocked thread but back off safely if it wakes up.
  Signed-off-by: Roland McGrath <roland@redhat.com>
  Cc: Oleg Nesterov <oleg@tv-sign.ru>
  Reviewed-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6 (Linus Torvalds, 2008-07-25; 1 file, -0/+6)
  * 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6:
      [IA64] Wire up new system calls
  * [IA64] Wire up new system calls (Tony Luck, 2008-07-25; 1 file, -0/+6)
    Six new system calls: signalfd4, eventfd2, epoll_create1, dup3, pipe2 and inotify_init1.
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* kprobes: improve kretprobe scalability with hashed locking (Srinivasa D S, 2008-07-25; 1 file, -4/+2)
  Currently the list of kretprobe instances is stored in the kretprobe object (as
  used_instances, free_instances) and in the kretprobe hash table. We have one global kretprobe
  lock to serialise access to these lists, which allows only one kretprobe handler to execute
  at a time. This hurts system performance, particularly on SMP systems and when return probes
  are set on a lot of functions (like on all system calls). The solution proposed here uses
  fine-grained locks that perform better on SMP systems than the present kretprobe
  implementation:
  1) Instead of one global lock protecting the kretprobe instances in the kretprobe object and
     the kretprobe hash table, use two locks: one for the kretprobe hash table and another for
     the kretprobe object.
  2) Hold the lock in the kretprobe object while modifying kretprobe instances in that object,
     and hold the per-hash-list lock while modifying kretprobe instances in that hash list. To
     prevent deadlock, never grab a per-hash-list lock while holding a kretprobe lock.
  3) used_instances can be removed from struct kretprobe, since used instances can be tracked
     via the kretprobe hash table.
  Time for a kernel compilation ("make -j 8") on an 8-way ppc64 system with return probes set
  on all system calls:
          Un-patched kernel   cacheline aligned patch   non-cacheline aligned patch
    real  9m46.784s           9m54.412s                 10m2.450s
    user  40m5.715s           40m7.142s                 40m4.273s
    sys   2m57.754s           2m58.583s                 3m17.430s
  Time for the same kernel compilation on the same system when the kernel is not probed:
  real 9m26.389s, user 40m8.775s, sys 2m7.283s.
  Signed-off-by: Srinivasa DS <srinivasa@in.ibm.com>
  Signed-off-by: Jim Keniston <jkenisto@us.ibm.com>
  Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
  Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
  Cc: David S. Miller <davem@davemloft.net>
  Cc: Masami Hiramatsu <mhiramat@redhat.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
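  The bucket-level locking can be pictured with a short sketch. This is approximate: the names
  follow the kprobes convention, but it is not a verbatim excerpt of the patch, and the locks
  would be initialised during subsystem setup.

    #include <linux/hash.h>
    #include <linux/list.h>
    #include <linux/sched.h>
    #include <linux/spinlock.h>

    #define KPROBE_HASH_BITS        6
    #define KPROBE_TABLE_SIZE       (1 << KPROBE_HASH_BITS)

    /* kretprobe instances are hashed by the probed task ... */
    static struct hlist_head kretprobe_inst_table[KPROBE_TABLE_SIZE];
    static spinlock_t kretprobe_table_locks[KPROBE_TABLE_SIZE];

    /* ... so two handlers touching different buckets no longer contend. */
    static void kretprobe_hash_lock(struct task_struct *tsk,
                                    struct hlist_head **head,
                                    unsigned long *flags)
    {
            unsigned long hash = hash_ptr(tsk, KPROBE_HASH_BITS);

            spin_lock_irqsave(&kretprobe_table_locks[hash], *flags);
            *head = &kretprobe_inst_table[hash];
    }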
* flag parameters: pipe (Ulrich Drepper, 2008-07-24; 1 file, -1/+1)
  This patch introduces the new syscall pipe2, which is like pipe but also takes an additional
  parameter which takes a flag value. This patch implements the handling of O_CLOEXEC for the
  flag. I did not add support for the new syscall for the architectures which have a special
  sys_pipe implementation. I think the maintainers of those archs have the chance to go with
  the unified implementation, but that's up to them.
  The implementation introduces do_pipe_flags. I did that instead of changing all callers of
  do_pipe because some of the callers are written in assembler; I would probably screw up
  changing the assembly code. To avoid breaking code, do_pipe is now a small wrapper around
  do_pipe_flags. Once all callers are changed over to do_pipe_flags, the old do_pipe function
  can be removed.
  The following test must be adjusted for architectures other than x86 and x86-64, and in case
  the syscall numbers changed.
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>
  #include <sys/syscall.h>

  #ifndef __NR_pipe2
  # ifdef __x86_64__
  #  define __NR_pipe2 293
  # elif defined __i386__
  #  define __NR_pipe2 331
  # else
  #  error "need __NR_pipe2"
  # endif
  #endif

  int
  main (void)
  {
    int fd[2];

    if (syscall (__NR_pipe2, fd, 0) != 0)
      {
        puts ("pipe2(0) failed");
        return 1;
      }
    for (int i = 0; i < 2; ++i)
      {
        int coe = fcntl (fd[i], F_GETFD);
        if (coe == -1)
          {
            puts ("fcntl failed");
            return 1;
          }
        if (coe & FD_CLOEXEC)
          {
            printf ("pipe2(0) set close-on-exit for fd[%d]\n", i);
            return 1;
          }
      }
    close (fd[0]);
    close (fd[1]);

    if (syscall (__NR_pipe2, fd, O_CLOEXEC) != 0)
      {
        puts ("pipe2(O_CLOEXEC) failed");
        return 1;
      }
    for (int i = 0; i < 2; ++i)
      {
        int coe = fcntl (fd[i], F_GETFD);
        if (coe == -1)
          {
            puts ("fcntl failed");
            return 1;
          }
        if ((coe & FD_CLOEXEC) == 0)
          {
            printf ("pipe2(O_CLOEXEC) does not set close-on-exit for fd[%d]\n", i);
            return 1;
          }
      }
    close (fd[0]);
    close (fd[1]);

    puts ("OK");
    return 0;
  }
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  Signed-off-by: Ulrich Drepper <drepper@redhat.com>
  Acked-by: Davide Libenzi <davidel@xmailserver.org>
  Cc: Michael Kerrisk <mtk.manpages@googlemail.com>
  Cc: <linux-arch@vger.kernel.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* sysdev: Pass the attribute to the low level sysdev show/store function (Andi Kleen, 2008-07-21; 1 file, -7/+15)
  This allows attributes to be generated dynamically and show/store functions to be shared
  between attributes. Right now most attributes are generated by special macros and lots of
  duplicated code. With the attribute passed in, it is instead possible to attach some data to
  the attribute and then use that in shared low level functions to do different things. I need
  this for the dynamically generated bank attributes in the x86 machine check code, but it will
  allow some further cleanups.
  I converted all users in tree to the new show/store prototype. It's a single huge patch to
  avoid unbisectable sections.
  Runtime tested: x86-32, x86-64. Compiled only: ia64, powerpc. Not compile tested / only grep
  converted: sh, arm, avr32.
  Signed-off-by: Andi Kleen <ak@linux.intel.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
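  As an illustration of the new calling convention, here is a hedged sketch. struct bank_attr,
  bank_show and the value field are invented for the example; only the extra sysdev_attribute
  argument in the prototype is what the commit actually adds.

    #include <linux/kernel.h>
    #include <linux/sysdev.h>

    /* Hypothetical per-attribute wrapper: container_of() on the passed-in
     * attribute lets one shared show routine serve many attributes. */
    struct bank_attr {
            struct sysdev_attribute attr;
            unsigned long value;
    };

    static ssize_t bank_show(struct sys_device *dev,
                             struct sysdev_attribute *attr, char *buf)
    {
            struct bank_attr *bank = container_of(attr, struct bank_attr, attr);

            return sprintf(buf, "%lu\n", bank->value);
    }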
* [IA64] Avoid overflowing ia64_cpu_to_sapicid in acpi_map_lsapic() (Alex Chiang, 2008-07-17; 1 file, -3/+2)
  acpi_map_lsapic() tries to stuff a long into ia64_cpu_to_sapicid[], which can only hold ints,
  so let's fix that. We need to update the signature of acpi_map_cpu2node() too.
  Signed-off-by: Alex Chiang <achiang@hp.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] adding parameter check to module_free() (Akiyama, Nobuyuki, 2008-07-17; 1 file, -1/+2)
  module_free() references its first parameter before checking it, but it is called as below
  (in kernel/kprobes.c) with the first parameter always NULL. This happens when many probe
  points (>1024) are set by kprobes. I encountered this while using SystemTap, which can set
  many probes easily.

  static int __kprobes collect_one_slot(struct kprobe_insn_page *kip, int idx)
  {
          ...
          if (kip->nused == 0) {
                  hlist_del(&kip->hlist);
                  if (hlist_empty(&kprobe_insn_pages)) {
                          ...
                  } else {
                          module_free(NULL, kip->insns); //<<< 1st param always NULL
                          kfree(kip);
                  }
                  return 1;
          }
          return 0;
  }

  Signed-off-by: Akiyama, Nobuyuki <akiyama.nobuyuk@jp.fujitsu.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
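  For context, this is approximately what the guarded ia64 module_free() looks like after the
  patch. The unwind-table handling shown is reconstructed from memory and hedged accordingly;
  the point of the fix is only the added "mod &&" NULL guard.

    #include <linux/module.h>
    #include <linux/vmalloc.h>
    #include <asm/unwind.h>

    void module_free(struct module *mod, void *module_region)
    {
            /* mod may legitimately be NULL when called from kprobes,
             * so check it before touching mod->arch. */
            if (mod && mod->arch.init_unw_table &&
                module_region == mod->module_init) {
                    unw_remove_unwind_table(mod->arch.init_unw_table);
                    mod->arch.init_unw_table = NULL;
            }
            vfree(module_region);
    }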
* [IA64] improper printk format in acpi-cpufreq (Denis V. Lunev, 2008-07-17; 1 file, -2/+2)
  When dprintk is enabled the following warnings are generated:
  arch/ia64/kernel/cpufreq/acpi-cpufreq.c: In function 'processor_set_pstate':
  arch/ia64/kernel/cpufreq/acpi-cpufreq.c:54: warning: format '%x' expects type 'unsigned int', but argument 3 has type 's64'
  arch/ia64/kernel/cpufreq/acpi-cpufreq.c: In function 'processor_get_pstate':
  arch/ia64/kernel/cpufreq/acpi-cpufreq.c:76: warning: format '%x' expects type 'unsigned int', but argument 2 has type 's64'
  Signed-off-by: Denis V. Lunev <den@openvz.org>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* Pull pvops into release branch (Tony Luck, 2008-07-17; 15 files, -303/+954)
  * [IA64] pv_ops: move some functions in ivt.S to avoid lack of space (Isaku Yamahata, 2008-05-28; 1 file, -128/+133)
    Move interrupt, page_fault, non_syscall, dispatch_unaligned_handler and
    dispatch_to_fault_handler to avoid running out of instruction space. Changeset
    4dcc29e1574d88f4465ba865ed82800032f76418 bloated SAVE_MIN_WITH_COVER and
    SAVE_MIN_WITH_COVER_R19, and so bloated the functions which use those macros. In the native
    case, only dispatch_illegal_op_fault had to be moved; in the paravirtualized case, all
    functions which use the macros need to be moved to avoid the lack of space.
    Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
  * [IA64] pvops: add to hooks, pv_time_ops, for steal time accounting (Isaku Yamahata, 2008-05-27; 2 files, -0/+38)
    Introduce pv_time_ops, which adds hooks for steal time accounting. On a virtualized
    environment, cpus are shared by many guests, and steal time is the time used by other
    guests; on such an environment steal time should be accounted.
    Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
  * [IA64] pvops: add hooks, pv_irq_ops, to paravirtualized irq related operations (Isaku Yamahata, 2008-05-27; 2 files, -5/+28)
    Introduce pv_irq_ops, which adds hooks to paravirtualize irq-related operations. On a
    virtualized environment, interruption may be replaced by something virtualization friendly,
    so the irq-related operations may also need paravirtualization. This patch adds the
    necessary hooks.
    Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com>
    Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
  * [IA64] pvops: add hooks, pv_iosapic_ops, to paravirtualize iosapic (Isaku Yamahata, 2008-05-27; 2 files, -16/+54)
    Add hooks to paravirtualize the iosapic, which is a real hardware resource. On a virtualized
    environment it may be replaced by something virtualization friendly. Define pv_iosapic_ops
    and add the hooks.
    Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
  * [IA64] pvops: define initialization hooks, pv_init_ops, for paravirtualized environment (Isaku Yamahata, 2008-05-27; 3 files, -0/+19)
    Define pv_init_ops hooks, which represent various initialization hooks for a paravirtualized
    environment, and add the hooks.
    Signed-off-by: Alex Williamson <alex.williamson@hp.com>
    Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
  * [IA64] pvops: paravirtualize NR_IRQS (Isaku Yamahata, 2008-05-27; 2 files, -0/+57)
    Make NR_IRQS overridable by each pv instance. A pv instance may need its own number of irqs,
    so NR_IRQS should be the maximum of the nr_irqs that each pv instance needs.
    Cc: Jes Sorensen <jes@sgi.com>
    Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
  * [IA64] pvops: paravirtualize entry.S (Isaku Yamahata, 2008-05-27; 4 files, -44/+152)
    Paravirtualize ia64_switch_to, ia64_leave_syscall and ia64_leave_kernel. They include
    sensitive or performance-critical privileged instructions, so they need paravirtualization.
    To paravirtualize them by single source and multi compile, they are converted into indirect
    jumps, and each pv instance defines its own versions.
    Cc: Keith Owens <kaos@ocs.com.au>
    Cc: "Dong, Eddie" <eddie.dong@intel.com>
    Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
  * [IA64] pvops: paravirtualize ivt.S (Isaku Yamahata, 2008-05-27; 1 file, -127/+122)
    Paravirtualize ivt.S, which implements the fault handlers in hand-written assembly code.
    They include sensitive or performance-critical privileged instructions, so they need
    paravirtualization.
    Cc: Keith Owens <kaos@ocs.com.au>
    Cc: tgingold@free.fr
    Cc: Akio Takebe <takebe_akio@jp.fujitsu.com>
    Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com>
    Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
  * [IA64] pvops: paravirtualize minstate.h (Isaku Yamahata, 2008-05-27; 2 files, -6/+36)
    Paravirtualize minstate.h, which is hand-written assembly code. It includes sensitive or
    performance-critical privileged instructions, so it is appropriate for paravirtualization.
    Cc: Keith Owens <kaos@ocs.com.au>
    Cc: Akio Takebe <takebe_akio@jp.fujitsu.com>
    Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com>
    Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
  * [IA64] pvops: preparation for paravirtualization of hand written assembly code (Isaku Yamahata, 2008-05-27; 1 file, -0/+9)
    Preparation for paravirtualization of hand-written assembly code. The code is
    paravirtualized from a single source compiled multiple times; to tell those files which
    target (including native) they are being built for, add a define per target.
    Cc: "Dong, Eddie" <eddie.dong@intel.com>
    Cc: Keith Owens <kaos@ocs.com.au>
    Cc: tgingold@free.fr
    Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
  * [IA64] pvops: introduce pv_cpu_ops to paravirtualize privileged instructions (Isaku Yamahata, 2008-05-27; 1 file, -0/+247)
    Introduce pv_cpu_ops to paravirtualize the privileged instructions which are defined by ia64
    intrinsics. Make them indirect C function calls by introducing a function table, pv_cpu_ops.
    Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com>
    Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
  * [IA64] pvops: add an early setup hook for pv_ops (Isaku Yamahata, 2008-05-27; 1 file, -0/+41)
    This patch adds a setup hook in the very early boot sequence, before start_kernel(), to
    initialize paravirtualization. The hook will be set by each pv loader code or by using a
    multi entry point.
    Signed-off-by: Qing He <qing.he@intel.com>
    Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
  * [IA64] pvops: introduce pv_info which describes some random info (Isaku Yamahata, 2008-05-27; 2 files, -0/+43)
    Introduce pv_info, which describes some random info about the underlying execution
    environment.
    Cc: Jes Sorensen <jes@sgi.com>
    Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
  * [IA64] pvops: preparation: move the constants, LOAD_OFFSET, to a header file (Isaku Yamahata, 2008-05-27; 1 file, -1/+0)
    Move the LOAD_OFFSET definition from vmlinux.lds.S into system.h. On paravirtualized
    environments it is necessary to detect the execution environment; one of the solutions is a
    multi entry point, which allows a boot loader to start kernel execution from an entry point
    different from the ELF entry point. The non-standard entry point will be defined as a
    specialized ELF note containing the LMA of the entry point symbol. The constant LOAD_OFFSET
    is necessary to calculate that symbol's LMA, so move the definition into the public header
    file to make it available to the multi entry point support.
    Cc: "He, Qing" <qing.he@intel.com>
    Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
  * [IA64] pvops: preparation: remove extern in irq_ia64.c (Isaku Yamahata, 2008-05-27; 1 file, -1/+0)
    Remove the extern declaration of handle_IPI() from irq_ia64.c and instead declare it in
    asm-ia64/smp.h. Later handle_IPI() will be referenced from another file.
    Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* ACPI : Create "idle=nomwait" bootparam (Zhao Yakui, 2008-07-16; 1 file, -0/+2)
  "idle=nomwait" disables the use of the MWAIT instruction from both C1 (C1_FFH) and deeper
  (C2C3_FFH) C-states. When MWAIT is unavailable, the BIOS and OS generally negotiate to use
  the HALT instruction for C1, and use IO accesses for deeper C-states. This option is useful
  for power and performance comparisons, and also to work around BIOS bugs where broken MWAIT
  support is advertised.
  http://bugzilla.kernel.org/show_bug.cgi?id=10807
  http://bugzilla.kernel.org/show_bug.cgi?id=10914
  Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
  Signed-off-by: Li Shaohua <shaohua.li@intel.com>
  Signed-off-by: Len Brown <len.brown@intel.com>
  Signed-off-by: Andi Kleen <ak@linux.intel.com>
* ACPI: Create "idle=halt" bootparam (Zhao Yakui, 2008-07-16; 1 file, -0/+2)
  "idle=halt" limits the idle loop to using the halt instruction. No MWAIT, no IO accesses, no
  C-states deeper than C1. If something is broken in the idle code, "idle=halt" is a less severe
  workaround than "idle=poll", which disables all power savings.
  Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
  Signed-off-by: Len Brown <len.brown@intel.com>
  Signed-off-by: Andi Kleen <ak@linux.intel.com>
* Merge branch 'generic-ipi' into generic-ipi-for-linus (Ingo Molnar, 2008-07-15; 7 files, -253/+28)
  Conflicts:
    arch/powerpc/Kconfig
    arch/s390/kernel/time.c
    arch/x86/kernel/apic_32.c
    arch/x86/kernel/cpu/perfctr-watchdog.c
    arch/x86/kernel/i8259_64.c
    arch/x86/kernel/ldt.c
    arch/x86/kernel/nmi_64.c
    arch/x86/kernel/smpboot.c
    arch/x86/xen/smp.c
    include/asm-x86/hw_irq_32.h
    include/asm-x86/hw_irq_64.h
    include/asm-x86/mach-default/irq_vectors.h
    include/asm-x86/mach-voyager/irq_vectors.h
    include/asm-x86/smp.h
    kernel/Makefile
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  * on_each_cpu(): kill unused 'retry' parameter (Jens Axboe, 2008-06-26; 3 files, -6/+6)
    It's not even passed on to smp_call_function() anymore, since that was removed. So kill it.
    Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
    Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
  * smp_call_function: get rid of the unused nonatomic/retry argument (Jens Axboe, 2008-06-26; 6 files, -8/+7)
    It's never used, and the comments refer to nonatomic and retry interchangeably. So get rid
    of it.
    Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
  * ia64: convert to generic helpers for IPI function calls (Jens Axboe, 2008-06-26; 2 files, -239/+15)
    This converts ia64 to use the new helpers for smp_call_function() and friends, and adds
    support for smp_call_function_single().
    Cc: Tony Luck <tony.luck@intel.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
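  For context, here is a hedged usage sketch of the generic cross-CPU call helpers that ia64 now
  plugs into, with the post-removal signatures (no retry argument, per the two commits above).
  The counter-draining callback and the CPU number are purely illustrative.

    #include <linux/smp.h>

    /* Illustrative callback: runs on the target CPU, typically with
     * interrupts disabled, so it must be short and must not sleep. */
    static void drain_local_counter(void *info)
    {
            unsigned long *total = info;

            (void)total;    /* fold this CPU's counter into *total here */
    }

    static void drain_counters(unsigned long *total)
    {
            /* run on one specific CPU and wait for it to finish ... */
            smp_call_function_single(1, drain_local_counter, total, 1);

            /* ... or on every other online CPU */
            smp_call_function(drain_local_counter, total, 1);
    }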
* [IA64] export account_system_vtime (Doug Chapman, 2008-06-30; 1 file, -0/+1)
  The symbol account_system_vtime is used by the kvm module but not exported. This breaks
  building with CONFIG_VIRT_CPU_ACCOUNTING and CONFIG_KVM=m.
  Signed-off-by: Doug Chapman <doug.chapman@hp.com>
  Acked-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Bugfix for system with 32 cpus (Tony Luck, 2008-06-30; 1 file, -1/+2)
  On a system where there are no hot-pluggable cpus, "additional_cpus" is still set to -1 at the
  point where we call per_cpu_scan_finalize(). If we didn't find an SRAT table and so pick the
  default of 32 for the number of cpus, then when we get to:
    high_cpu = min(high_cpu + reserve_cpus, NR_CPUS);
  we will end up initializing for just 31 cpus ... and so we will die horribly when bringing up
  cpu#32. Problem introduced by 2c6e6db41f01b6b4eb98809350827c9678996698, "Minimize per_cpu
  reservations".
  Acked-by: Robin Holt <holt@sgi.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Eliminate NULL test after alloc_bootmem in iosapic_alloc_rte() (Julia Lawall, 2008-06-24; 1 file, -2/+0)
  As noted by Akinobu Mita, alloc_bootmem and related functions never return NULL and always
  return a zeroed region of memory. Thus a NULL test or memset after calls to these functions
  is unnecessary.
  Signed-off-by: Julia Lawall <julia@diku.dk>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Fix boot failure on ia64/sn2 (Jes Sorensen, 2008-06-24; 1 file, -2/+1)
  Call check_sal_cache_flush() after platform_setup(), as check_sal_cache_flush() now relies on
  being able to call platform vector code. The problem was introduced by
  3463a93def55c309f3c0d0a8aaf216be3be42d64, "Update check_sal_cache_flush to use
  platform_send_ipi()".
  Signed-off-by: Jes Sorensen <jes@sgi.com>
  Tested-by: Alex Chiang <achiang@hp.com>
  Signed-off-by: Tony Luck <tony.luck@intel.com>
* Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6 (Linus Torvalds, 2008-06-16; 2 files, -9/+8)
  * 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6:
      [IA64] Fix CONFIG_IA64_SGI_UV build error
      [IA64] Update check_sal_cache_flush to use platform_send_ipi()
      [IA64] perfmon: fix async exit bug
  * [IA64] Update check_sal_cache_flush to use platform_send_ipi() (Alex Chiang, 2008-06-11; 1 file, -4/+3)
    check_sal_cache_flush is used to detect broken firmware that drops pending interrupts. The
    old implementation schedules a timer interrupt for itself in the future by taking the
    current value of the Interval Timer Counter + 1000 cycles, waits for the interrupt to be
    pended, calls SAL_CACHE_FLUSH, and finally checks to see if the interrupt is still pending.
    This implementation can cause problems for virtual machine code if the process of scheduling
    the timer interrupt takes more than 1000 cycles; the virtual machine can end up sleeping for
    several hundred years while waiting for the ITC to wrap around. The fix is to use
    platform_send_ipi. The processor will still send an interrupt to itself, using the
    IA64_IPI_DM_INT delivery mode, which causes the IPI to look like an external interrupt. The
    rest of the SAL_CACHE_FLUSH + checking to see if the interrupt is still pending remains
    unchanged.
    This fix has been boot tested successfully on:
      - intel tiger2
      - hp rx6600
      - hp rx5670
    The rx5670 has known buggy firmware, where SAL_CACHE_FLUSH drops pending interrupts. A boot
    test on this machine showed this message on the console:
      SAL: SAL_CACHE_FLUSH drops interrupts; PAL_CACHE_FLUSH will be used instead
    which proves that the self-inflicted IPI approach is viable. And as expected, the other
    tested platforms correctly did not display the warning.
    Signed-off-by: Alex Chiang <achiang@hp.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
  * [IA64] perfmon: fix async exit bug (Stephane Eranian, 2008-06-11; 1 file, -5/+5)
    Move the cleanup of the async queue to the close callback from the flush callback. This
    avoids losing asynchronous overflow notifications when the file descriptor is shared by
    multiple processes and one terminates.
    Signed-off-by: Stephane Eranian <eranian@gmail.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>