path: root/core/cpu.c
Commit message | Author | Date | Files | Lines
...
* cpu: Make init_hid() local to cpu.c | Benjamin Herrenschmidt | 2017-06-26 | 1 | -1/+3
    No point doing that from init on the main CPU while it's done already inside cpu.c for secondaries.
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Michael Neuling <mikey@neuling.org>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* cpu: Add a space to #threads message | Benjamin Herrenschmidt | 2017-06-26 | 1 | -3/+3
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Michael Neuling <mikey@neuling.org>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* Disable nap on P8 Mambo, public release has bugs | Stewart Smith | 2017-06-07 | 1 | -0/+4
    Fixes: 9567e18728d0559bc5f79ea927d684dc3b1e3555
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* Improve cpu_idle when PM is disabled | Nicholas Piggin | 2017-06-06 | 1 | -11/+49
    Split cpu_idle() into cpu_idle_delay() and cpu_idle_job() rather than requesting the idle type as a function argument. Have those functions provide a default polling (non-PM) implementation which spins at the lowest SMT priority.
    This moves all the decrementer delay code into the CPU idle code rather than the caller.
    Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* cpu: Introduce smt_lowest() | Nicholas Piggin | 2017-06-06 | 1 | -1/+1
    Recent CPUs have introduced a lower SMT priority. This uses the Linux pattern of executing priority nops in descending order to get a simple, portable way to put the CPU into the lowest SMT priority.
    Introduce smt_lowest() and use it in place of smt_very_low and of smt_low ; smt_very_low sequences.
    Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
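The priority-nop pattern this entry describes can be sketched as below. This is a hedged illustration of the idea, not the actual patch; the helper names and exact encodings in skiboot's processor.h may differ.

    /* PowerPC SMT priority hints are encoded as "or Rx,Rx,Rx" no-ops:
     * or 1,1,1 drops to low priority, or 31,31,31 to very low,
     * or 2,2,2 restores medium priority. */
    static inline void smt_low(void)      { asm volatile("or 1,1,1" ::: "memory"); }
    static inline void smt_very_low(void) { asm volatile("or 31,31,31" ::: "memory"); }
    static inline void smt_medium(void)   { asm volatile("or 2,2,2" ::: "memory"); }

    /* smt_lowest(): issue the priority nops in descending order, ending at
     * the lowest priority the implementation understands. */
    static inline void smt_lowest(void)
    {
        asm volatile("or 1,1,1; or 31,31,31" ::: "memory");
    }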
* cpu: Add iterators for "present" CPUs | Benjamin Herrenschmidt | 2017-01-05 | 1 | -0/+14
    Some code paths want to look at all the CPUs that are "present", which means they have been enabled by HB/Cronus and can be accessed via XSCOMs, even if they haven't called in yet.
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
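A sketch of what such iterators typically look like; the names mirror skiboot's existing "available" CPU iterators but are shown here as an assumption, not the exact patch.

    /* Walk every CPU thread that is "present" (enabled by HB/Cronus and
     * reachable via XSCOM), whether or not it has called in yet. */
    struct cpu_thread *first_present_cpu(void);
    struct cpu_thread *next_present_cpu(struct cpu_thread *cpu);

    #define for_each_present_cpu(cpu) \
        for (cpu = first_present_cpu(); cpu; cpu = next_present_cpu(cpu))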
* core/cpu.c: Use a device-tree node to detect nest mmu presence | Alistair Popple | 2016-12-16 | 1 | -14/+22
    The nest MMU SCOM address was hardcoded, which could lead to a boot failure on POWER9 systems without a nest MMU. For example, Mambo doesn't model the nest MMU, which results in the following failure when calling opal_nmmu_set_ptcr() during kernel load:
        WARNING: 20856759: (20856757): Invalid address 0x0000000028096258 in XSCOM range, SCOM=0x00280962b
        WARNING: 20856759: (20856757): Attempt to store non-existent address 0x00001A0028096258
        20856759: (20856757): 0x000000003002DA08 : stdcix r26,r0,r3
        FATAL ERROR: 20856759: (20856757): Check Stop for 0:0: Machine Check with ME bit of MSR off
    This patch instead reads the address from the device-tree and makes opal_nmmu_set_ptcr() return OPAL_UNSUPPORTED on systems without a nest MMU.
    Signed-off-by: Alistair Popple <alistair@popple.id.au>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
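The shape of the change is roughly the following sketch. The compatible string, signature and helper usage are illustrative assumptions, not the exact code from the patch.

    #include <device.h>
    #include <xscom.h>
    #include <opal.h>

    static int64_t opal_nmmu_set_ptcr(uint64_t chip_id, uint64_t ptcr)
    {
        struct dt_node *node;
        uint64_t scom_addr;

        /* Look the nest MMU up in the device tree instead of hardcoding its
         * SCOM address; a system with no nest MMU (e.g. Mambo) has no node. */
        node = dt_find_compatible_node(dt_root, NULL, "ibm,power9-nest-mmu");
        if (!node)
            return OPAL_UNSUPPORTED;

        scom_addr = dt_get_address(node, 0, NULL);
        return xscom_write(chip_id, scom_addr, ptcr);
    }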
* run pollers in cpu_process_local_jobs() if running job synchronously | Stewart Smith | 2016-11-24 | 1 | -0/+1
    In the event we only have 1 CPU thread, we run asynchronous jobs synchronously, and while we wait for them to finish, we run pollers. However, if the jobs themselves don't call pollers (e.g. by time_wait()) then we'll end up with long periods of not running pollers at all.
    To work around this, explicitly run pollers when we're the only CPU thread (i.e. when we run the job synchronously).
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* cpu: Don't enable nap mode/PM mode on non-P8 | Benjamin Herrenschmidt | 2016-11-15 | 1 | -0/+3
    We don't support it yet on others.
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* core/cpu.c: Add OPAL call to setup Nest MMU | Alistair Popple | 2016-09-06 | 1 | -0/+32
    POWER9 has an off-core MMU called the Nest MMU which allows other units within a chip to perform address translations. The context and setup for translations is handled by the requesting agents, however the Nest MMU does need to know where in system memory the page tables are located.
    This patch adds a call to setup the Nest MMU page table pointer on a per-chip basis.
    Signed-off-by: Alistair Popple <alistair@popple.id.au>
    Reviewed-by: Balbir Singh <bsingharora@gmail.com>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* cpu: Make endian switch message more informative | Benjamin Herrenschmidt | 2016-08-22 | 1 | -1/+6
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* cpu: Display number of started CPUs during boot | Benjamin Herrenschmidt | 2016-08-22 | 1 | -2/+4
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* cpu: Add support for nap mode on P8 | Benjamin Herrenschmidt | 2016-08-22 | 1 | -0/+93
    This allows us to send threads to nap mode either when idle (waiting for a job) or when in a sleep delay (time_wait*).
    We only enable the functionality after the 0x100 vector has been patched, and we disable it before transferring control to Linux.
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* cpu: Remove pollers calling heuristics from cpu_wait_job | Benjamin Herrenschmidt | 2016-08-22 | 1 | -8/+4
    This will be handled by time_wait_ms(). Also remove a useless smt_medium().
    Note that this introduces a difference in behaviour: time_wait will only call the pollers on the boot CPU while cpu_wait_job() could call them on any. However, I can't think of a case where this is a problem.
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* cpu: Remove global job queue | Benjamin Herrenschmidt | 2016-08-22 | 1 | -26/+87
    Instead, target a specific CPU for a global job at queuing time. This will allow us to wake up the target using an interrupt when implementing nap mode.
    The algorithm used is to look for idle primary threads first, then idle secondaries, and finally the least loaded thread. If nothing can be found, we fall back to a synchronous call.
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
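A hedged sketch of that selection order (idle primary, then idle secondary, then least loaded). The cpu_thread field names used here are assumptions based on the commit description, not skiboot's actual fields.

    #include <cpu.h>

    static struct cpu_thread *pick_job_target(void)
    {
        struct cpu_thread *cpu, *best = NULL;
        uint32_t best_count = (uint32_t)-1;

        /* 1st choice: an idle primary thread. */
        for_each_available_cpu(cpu)
            if (cpu->in_idle && cpu_is_thread0(cpu))
                return cpu;

        /* 2nd choice: any idle thread. */
        for_each_available_cpu(cpu)
            if (cpu->in_idle)
                return cpu;

        /* Last resort: the thread with the fewest queued jobs. */
        for_each_available_cpu(cpu) {
            if (cpu->job_count < best_count) {
                best_count = cpu->job_count;
                best = cpu;
            }
        }
        return best;    /* NULL would mean: run the job synchronously */
    }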
* cpu: Add cpu_idle() which we call when waiting for a job | Benjamin Herrenschmidt | 2016-08-22 | 1 | -0/+18
    For now a simple generic implementation using cpu_relax().
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* cpu: Add cpu_check_jobs() | Benjamin Herrenschmidt | 2016-08-22 | 1 | -2/+6
    Wrapper around list_empty_nocheck() to see if there's any job pending on a given CPU.
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
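The wrapper is presumably something like the sketch below; the job_queue field name is assumed for illustration.

    #include <cpu.h>
    #include <ccan/list/list.h>

    static inline bool cpu_check_jobs(struct cpu_thread *cpu)
    {
        /* Racy by design: callers only want a hint that work is queued. */
        return !list_empty_nocheck(&cpu->job_queue);
    }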
* cpu: Remove unused cpu_free_job() | Benjamin Herrenschmidt | 2016-08-22 | 1 | -9/+0
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* Disable mcount on some early functions | Benjamin Herrenschmidt | 2016-08-18 | 1 | -1/+1
    It doesn't work well to call it before the boot CPU structure is initialized.
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* list: Use list_empty_nocheck() when checking a list racily | Benjamin Herrenschmidt | 2016-08-18 | 1 | -1/+2
    Otherwise we might trigger an assertion when list debug is enabled.
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* Use additional checks in skiboot for pointers | Balbir Singh | 2016-08-17 | 1 | -0/+6
    The checks validate pointers passed in, using opal_addr_valid(), in the OPAL call APIs provided by the console, cpu, fdt, flash, i2c, interrupts, nvram, opal-msg, opal, opal-pci, xscom and cec modules.
    Signed-off-by: Balbir Singh <bsingharora@gmail.com>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
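The pattern added to those modules looks roughly like the sketch below; the OPAL call shown is a made-up example, only opal_addr_valid() itself comes from the commit.

    #include <skiboot.h>
    #include <opal.h>

    static int64_t opal_example_read(uint64_t *out_val)
    {
        /* Reject pointers that don't point into valid memory before
         * dereferencing anything handed to us by the OS. */
        if (!opal_addr_valid(out_val))
            return OPAL_PARAMETER;

        *out_val = 42;
        return OPAL_SUCCESS;
    }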
* Adjust top_of_ram when we know cpu_max_pir for the processor generation | Stewart Smith | 2016-08-17 | 1 | -0/+7
    This allows opal_addr_valid() to perform the < top_of_mem check and pass for early xscom_reads that read into CPU stacks.
    Arguably the more correct solution is to split opal_xscom_read from xscom_read and only validate for the OPAL call.
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* log_level: Reduce the in memory console log_level to lower priority | Pridhiviraj Paidipeddi | 2016-08-02 | 1 | -1/+1
    Below are the in-memory console log messages observed with error level (PR_ERROR):
        [54460318,3] HBRT: Mem region 'ibm,homer-image' not found !
        [54465404,3] HBRT: Mem region 'ibm,homer-image' not found !
        [54470372,3] HBRT: Mem region 'ibm,homer-image' not found !
        [54475369,3] HBRT: Mem region 'ibm,homer-image' not found !
        [11540917382,3] NVRAM: Layout appears sane
        [11694529822,3] OPAL: Trying a CPU re-init with flags: 0x2
        [61291003267,3] OPAL: Trying a CPU re-init with flags: 0x1
        [61394005956,3] OPAL: Trying a CPU re-init with flags: 0x2
    Lower the log level of the "Mem region not found" messages to PR_WARNING and the remaining messages to PR_INFO:
        [54811683,4] HBRT: Mem region 'ibm,homer-image' not found !
        [10923382751,6] NVRAM: Layout appears sane
        [55533988976,6] OPAL: Trying a CPU re-init with flags: 0x1
    Signed-off-by: Pridhiviraj Paidipeddi <ppaidipe@linux.vnet.ibm.com>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* OPAL_REINIT_CPUS: clarify that for P9 and above, we can have other flags | Stewart Smith | 2016-07-14 | 1 | -1/+3
    On P8 we got it a bit wrong and would fall into a workaround for P8 DD1 HILE setting if other bits were set in the flags to OPAL_REINIT_CPUS, limiting our opportunity to extend it in the future.
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* cpu: Don't call time_wait with lock held | Benjamin Herrenschmidt | 2016-07-13 | 1 | -7/+17
    Also make the locking around re-init safer, properly blocking the OS from restarting a thread that was caught for re-init.
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Acked-by: Michael Neuling <mikey@neuling.org>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* cpu: supply ibm,dec-bits via devicetree | Oliver O'Halloran | 2016-07-01 | 1 | -0/+48
    ISAv3 adds a mode to increase the size of the decrementer from 32 bits. The enlarged decrementer can be between 32 and 64 bits wide, with the exact value being implementation dependent.
    This patch adds support for detecting the size of the large decrementer and populating each CPU node with the "ibm,dec-bits" property.
    Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
    Reviewed-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
    [stewart@linux.vnet.ibm.com: rename enable_ld() to enable_large_dec()]
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
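Publishing the detected width ends up as a per-CPU device-tree property, roughly as sketched here; this assumes the width has already been probed into dec_bits and only illustrates the property mechanics.

    #include <device.h>

    static void add_dec_bits_property(struct dt_node *cpu_node, uint32_t dec_bits)
    {
        /* e.g. dec_bits = 32 on pre-ISAv3 cores, larger on later ones */
        dt_add_property_cells(cpu_node, "ibm,dec-bits", dec_bits);
    }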
* ATTN: Set attn bit instead of hile bit in enable/disable attn function | Vasant Hegde | 2016-06-20 | 1 | -2/+2
    Commit 3ff35034 (Abstract HILE and attn enable bit definitions for HID0) enabled the HILE bit instead of the ATTN bit in the enable/disable_attn function, hence an OPAL assert was failing.
    Fixes: 3ff35034 (Abstract HILE and attn enable bit definitions for HID0)
    CC: Michael Neuling <mikey@neuling.org>
    Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* core/cpu: Introduce DEBUG_SERIALIZE_CPU_JOBS | Gavin Shan | 2016-06-14 | 1 | -0/+5
    Currently, the PHB reset and PCI enumeration are done concurrently on multiple CPU cores. The output messages are interleaved and not readable enough. This adds an option to do the jobs in a serialized fashion, for debugging purposes only. The serialized mode should always be disabled in the field.
    Suggested-by: Stewart Smith <stewart@linux.vnet.ibm.com>
    Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* Add base POWER9 support | Michael Neuling | 2016-05-10 | 1 | -0/+12
    Add PVR detection, chip id and other misc bits for POWER9.
    POWER9 changes the location of the HILE and attn enable bits in the HID0 register, so add these definitions also.
    Signed-off-by: Michael Neuling <mikey@neuling.org>
    [stewart@linux.vnet.ibm.com: Fix Numbus typo, hdata_to_dt build fixes]
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* Abstract HILE and attn enable bit definitions for HID0 | Michael Neuling | 2016-05-10 | 1 | -4/+10
    Abstract the HILE and attn enable bit definitions for HID0 in case these locations randomly change in future chip revisions.
    Signed-off-by: Michael Neuling <mikey@neuling.org>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* Disable attn instruction on boot | Michael Neuling | 2016-05-10 | 1 | -0/+16
    Currently we don't touch the attn enable bit in HID0 on boot. When attn is enabled, it's available everywhere including HV=0. This is very dangerous for the host kernel.
    This explicitly disables the attn instruction on all CPUs on boot.
    Signed-off-by: Michael Neuling <mikey@neuling.org>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
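A sketch of the HID0 update involved. Here hid0_attn stands in for the per-generation attention-enable mask from the "Abstract HILE and attn enable bit definitions" entry above, and set_hid0() for whatever helper performs the synchronised HID0 write; both names are assumptions.

    #include <processor.h>

    static void disable_attn(void)
    {
        uint64_t hid0 = mfspr(SPR_HID0);

        hid0 &= ~hid0_attn;     /* clear the attention-enable bit */
        set_hid0(hid0);         /* write back HID0 with the required sync sequence */
    }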
* Make trigger_attn() enable attn also | Michael Neuling | 2016-05-10 | 1 | -0/+16
    This changes trigger_attn() to also enable attn via HID0, so callers don't have to do it themselves.
    Signed-off-by: Michael Neuling <mikey@neuling.org>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* cpu: Add helper function to return number of cores available in the chip | Shilpasri G Bhat | 2016-02-23 | 1 | -0/+11
    get_available_nr_cores_in_chip() takes 'chip_id' as an argument and returns the number of available cores in the chip.
    Signed-off-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
    Reviewed-by: Joel Stanley <joel@jms.id.au>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
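One plausible shape for the helper, counting primary threads on the requested chip; the iterator and cpu_thread fields used here are assumptions, not necessarily the implementation in the patch.

    #include <cpu.h>

    static uint32_t get_available_nr_cores_in_chip(uint32_t chip_id)
    {
        struct cpu_thread *t;
        uint32_t nr_cores = 0;

        for_each_available_cpu(t) {
            /* Count one thread per core: the primary thread points to itself. */
            if (t->chip_id == chip_id && t->primary == t)
                nr_cores++;
        }
        return nr_cores;
    }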
* Fix printf format warning | Stewart Smith | 2015-10-07 | 1 | -1/+1
    Fixes: 55ae15b
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* Ensure we run pollers in cpu_wait_job() | Stewart Smith | 2015-10-07 | 1 | -0/+9
    In root causing a bug on AST BMC, Alistair found that pollers weren't being run for around 3800ms. This was due to a wonderful accident that's probably about a year or more old where:
    In cpu_wait_job() we have:
        unsigned long ticks = usecs_to_tb(5);
        ...
        time_wait(ticks);
    While in time_wait(), deciding whether to run pollers:
        unsigned long period = msecs_to_tb(5);
        ...
        if (remaining >= period) {
    Obviously, this means we never run pollers. Not ideal.
    This patch ensures we run pollers every 5ms in cpu_wait_job(), as well as displaying how long we waited for a job if that wait was > 1 second.
    Reported-by: Alistair Popple <alistair@popple.id.au>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
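The gist of the fix, as a hedged sketch; names such as job->complete and opal_run_pollers() are assumptions about the surrounding code rather than a quote of the patch.

    /* Wait for the job, but guarantee pollers run at least every 5ms instead
     * of depending on time_wait()'s internal 5ms threshold, which a 5us wait
     * never reaches. */
    unsigned long period = msecs_to_tb(5);

    while (!job->complete) {
        opal_run_pollers();
        time_wait(period);
    }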
* verify that PIR in init_all_cpus() is within our bounds for cpu_stacks[pir] | Stewart Smith | 2015-07-08 | 1 | -0/+1
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* cpu_remove_node(): Fix potential null dereference | Kamalesh Babulal | 2015-06-24 | 1 | -0/+2
    Fix potential NULL dereference of the pointer returned from dt_find_property() in cpu_remove_node(). Fixes coverity defect #97849.
    Signed-off-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* Merge branch 'stable' | Stewart Smith | 2015-06-16 | 1 | -2/+1
* cpu: Fix hang issue in opal_reinit_cpus() | Hari Bathini | 2015-06-16 | 1 | -2/+1
    Commit 87690bd19dbb introduced a label "again" so as to avoid holding the lock while waiting. But this leads to a hang in scenarios like kdump, where 'cpu_state_os' will be the state of all offline cpus. Actually, the wait loop doesn't really take off with the goto statement in it.
    This patch tries to fix this problem by removing the goto statement and unlocking/locking within the wait loop instead.
    Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* Move prlog(PR_TRACE) in cpu job to be before freeing CPU job | Stewart Smith | 2015-06-15 | 1 | -1/+1
    Use-after-free bug.
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* Make cpu_relax() inline | Stewart Smith | 2015-05-29 | 1 | -12/+0
    This modifies the generated code when built with GCOV so that the store of counter data is hoisted out of some of the loops. It also likely makes us do even less work when relaxing, so probably a good thing.
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* Make cpu_relax() nop instructions in one asm block | Stewart Smith | 2015-05-28 | 1 | -4/+4
    With SKIBOOT_GCOV=1 and multiple asm blocks for the nop instructions, we end up with this:
        1a5f8: 60 00 00 00    nop
        1a5fc: 60 00 00 00    nop
        1a600: 60 00 00 00    nop
        1a604: 60 00 00 00    nop
        1a608: 3d 42 00 0e    addis   r10,r2,14
        1a60c: e9 2a 20 f8    ld      r9,8440(r10)
        1a610: 39 29 00 01    addi    r9,r9,1
        1a614: f9 2a 20 f8    std     r9,8440(r10)
        1a618: 60 00 00 00    nop
        1a61c: 60 00 00 00    nop
        1a620: 60 00 00 00    nop
        1a624: 60 00 00 00    nop
    which is not the desired code output for relaxing the CPU. By batching up the instructions into one asm block, we get the desired effect: just one set of mcount updates.
* Fix synchronous running of CPU jobs for NRCPUs=1 | Stewart Smith | 2015-05-07 | 1 | -8/+9
    i.e. currently only mambo.
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* Add global CPU job queue | Stewart Smith | 2015-05-07 | 1 | -10/+55
    When we have multiple systems trying to start concurrent jobs on different CPUs, they typically pick the first available (operating) CPU to schedule the job on. This works fine when there's only one set of jobs or when we want to bind jobs to specific CPUs.
    When we have jobs such as asynchronously loading LIDs and scanning PHBs, we don't care which CPUs they run on, we care more that they are not scheduled on CPUs that have existing tasks.
    This patch adds a global queue of jobs which secondary CPUs will look at for work (if idle). This leads to simplified callers, which just need to queue jobs to NULL (no specific CPU) and then call a magic function that will run the CPU job queue if we don't have secondary CPUs.
    Additionally, we add a const char *name to cpu_job just to aid with debugging.
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
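From a caller's point of view, the simplification reads roughly like the sketch below; the PHB-scan job, phb_scan_job() and phb are made-up examples, and the exact helper names are assumptions.

    /* Queue to no particular CPU (NULL): a free secondary will pick it up. */
    struct cpu_job *job = cpu_queue_job(NULL, "phb-scan", phb_scan_job, phb);

    /* If there are no secondary CPUs, this runs the queued job right here. */
    cpu_process_local_jobs();

    cpu_wait_job(job, true);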
* Adjust skiboot_cpu_stacks region size according to real max PIR | Stewart Smith | 2015-04-30 | 1 | -0/+2
    In skiboot, CPU stacks are indexed by PIR. During boot, we have two ideas about what the actual maximum PIR is:
    1) detect CPU type (P7 or P8): we know max PIR is the max for that proc (e.g. 1024, 8192)
    2) start all CPUs (go through device tree for CPUs that exist): we now know the *actual* CPUs we have and the max PIR, e.g. 1, 64, 3319 or whatever.
    Each CPU stack is 16KB. So the max CPU stacks size for P7 is 16MB, for P8 128MB. The *actual* max for the machine we're booting on is based on the max PIR we detect during boot. I have found the following:
        Mambo: 16KB max (one CPU)
        P7: 64, meaning 64*16k = 1MB
        P8: 3320, meaning 3320*16k = 51MB
    So, currently, we were not resetting the size of the skiboot_cpu_stacks memory region correctly before boot (we construct that part of the device tree as the very last thing before booting the payload), even though the comment in mem_region.c would suggest we were. Because code comments are evil and are nothing but filthy, filthy lies.
    With this patch, we now properly adjust the CPU stacks memory region size after we've detected the CPU type and after we've found the real max PIR. This saves between about 77MB and 128MB-16kB of memory from being in a reserved region, and it'll now be available to the OS to use for things such as cat pictures rather than being firmware stack space waiting for a CPU that will never appear.
    You can see the difference in the skiboot log, "Reserved regions:":
    Before:
        ALL:   0x000031a00000..0000399fffff : ibm,firmware-stacks
    After:
        Mambo: 0x000031a00000..000031a1ffff : ibm,firmware-stacks
        P7:    0x000031a00000..000031afffff : ibm,firmware-stacks
        P8:    0x000031a00000..000034ddffff : ibm,firmware-stacks
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* Add Naples chip support | Benjamin Herrenschmidt | 2015-04-09 | 1 | -7/+22
    This adds the PVR and CFAM ID for the Naples chip. Otherwise treated as a Venice. This doesn't add the definitions for the new PHB revision yet.
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* Remove redundant includes of opal-api.h | Michael Ellerman | 2015-04-01 | 1 | -1/+0
    Now that opal.h includes opal-api.h, there are a bunch of files that include both but don't need to.
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* opal: Handle TB residue and HDEC parity HMI errors on split core | Mahesh Salgaonkar | 2015-03-26 | 1 | -0/+5
    In the split-core case, some of the timer facility errors need cleanup before we proceed with error recovery. Certain TB/HDEC errors leave dirty data in the timebase and HDEC registers, which needs to be cleared before we initiate clear_tb_errors through TFMR[24].
    For an un-split core, any one thread can do the cleanup. In split-core mode, the dirty data in the TB/HDEC registers must be cleared by all subcores (active partitions) before we clear TB errors through TFMR[24]; HMI recovery would fail even if a single subcore does not clean up its respective TB/HDEC register. Dirty data can be cleaned by writing zeros to the TB/HDEC registers.
    Errors that require pre-recovery cleanup:
    - SPR_TFMR_TB_RESIDUE_ERR
    - SPR_TFMR_HDEC_PARITY_ERROR
    This patch implements pre-recovery steps to clean dirty data from the TB/HDEC registers for the above timer facility errors.
    Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
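The pre-recovery step amounts to something like the sketch below. The mtspr targets are assumptions; the commit only states that writing zeros to the TB/HDEC registers clears the dirty data before TFMR[24] is used.

    #include <processor.h>

    static void timer_facility_pre_recovery(uint64_t tfmr)
    {
        /* Each subcore must scrub its own registers before TFMR[24]
         * is used to clear the timebase errors. */
        if (tfmr & SPR_TFMR_TB_RESIDUE_ERR)
            mtspr(SPR_TBWL, 0);     /* zero the dirty timebase data */
        if (tfmr & SPR_TFMR_HDEC_PARITY_ERROR)
            mtspr(SPR_HDEC, 0);     /* zero the dirty hypervisor decrementer */
    }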
* sparse: fix "Using plain integer as NULL pointer" warning | Cédric Le Goater | 2015-02-26 | 1 | -1/+1
    Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
* cpu: Don't hold lock while waiting | Benjamin Herrenschmidt | 2015-02-18 | 1 | -3/+7
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>