| Commit message | Author | Age | Files | Lines |
... | |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
If we reserve any memory after mem_region_add_dt_reserved, that
reservation won't appear in the device tree. Ensure that we can't
add new regions after this point.
Also, add a testcase for the finalise, including some basic
reserved-ranges property checks.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
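The guard this commit describes can be sketched as a simple flag flip. The names below (mem_regions_finalised and the *_sketch functions) are illustrative stand-ins, not skiboot's actual API:

```c
#include <assert.h>
#include <stdbool.h>

/* Once the reserved-ranges property has been written to the device
 * tree, any later reservation would be invisible to the OS, so refuse
 * to add new regions from that point on. */
static bool mem_regions_finalised;

static void mem_region_add_dt_reserved_sketch(void)
{
	/* ...write the reserved-ranges property into the device tree... */
	mem_regions_finalised = true;
}

static int mem_region_add_sketch(void)
{
	if (mem_regions_finalised)
		return -1;	/* too late: reservation would never reach the DT */
	/* ...append the region to the global list... */
	return 0;
}
```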
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Comments in the run-mem_region test imply that it uses skiboot's own
malloc implementation, but this isn't true: a malloc() inside the
mem_region code itself will use the glibc malloc.
This change implements the intention of the test, using the skiboot
malloc for the file under test. real_malloc() remains available for
actual glibc allocations.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
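The redirection trick can be sketched as below. In the real harness the macros would point at skiboot's allocator for the #included file under test; here they forward to real_malloc()/real_free() so the sketch stands alone, and all names besides real_malloc() are assumptions:

```c
#include <assert.h>
#include <stdlib.h>

/* real_malloc()/real_free() give the harness access to the actual
 * glibc allocator even after malloc/free are redirected below. */
void *real_malloc(size_t size) { return malloc(size); }
void real_free(void *p) { free(p); }

static void *test_malloc(size_t size);
static void test_free(void *p);

/* From here on, malloc/free in this translation unit resolve to the
 * test allocator (standing in for skiboot's mem_alloc/mem_free). */
#define malloc(size) test_malloc(size)
#define free(p)      test_free(p)

static void *test_malloc(size_t size) { return real_malloc(size); }
static void test_free(void *p) { real_free(p); }
```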
|
| |
| |
| |
| |
| |
| |
| |
| | |
Currently, this test doesn't do locking during region changes or
allocations. This change adds the appropriate locking.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This change adds asserts to the mem_region calls that should have the
per-region lock held.
To keep the tests working, they need the lock_held_by_me() function. The
run-mem_region.c test has a bogus implementation of this, as it doesn't
do any locking at the moment. This will be addressed in a later change.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Currently, we have a single lock for the entire mem_region system; this
protects both the global region list, and the allocations from each
region.
This means we can't allocate memory while traversing the global region
list, as any malloc/realloc/free will try to acquire the mem_region lock
again.
This change separates the locking into different functions. We keep the
mem_region_lock to protect the regions list, and introduce a per-region
lock to protect allocations from the regions' free_lists.
Then we remove the open-coded invocations of mem_alloc, where we'd
avoided malloc() due to the above locking issue.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
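A minimal sketch of the split-lock scheme, using toy single-threaded locks (struct lock, lock_take/lock_release and mem_alloc_sketch are illustrative, not skiboot's implementations):

```c
#include <assert.h>
#include <stdbool.h>

struct lock { bool held; };

static void lock_take(struct lock *l)    { assert(!l->held); l->held = true; }
static void lock_release(struct lock *l) { assert(l->held);  l->held = false; }

static struct lock mem_region_lock;	/* protects the global region list */

struct mem_region {
	struct lock free_list_lock;	/* protects this region's free_list */
};

static void *mem_alloc_sketch(struct mem_region *r)
{
	void *p;

	/* Take the per-region lock, NOT mem_region_lock: a thread
	 * walking the region list can therefore still malloc(). */
	lock_take(&r->free_list_lock);
	p = r;				/* stand-in for carving from r->free_list */
	lock_release(&r->free_list_lock);
	return p;
}
```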
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
core/flash.c:271:14: error: variable 'flash' is used uninitialized whenever 'for' loop exits because its condition is false
[-Werror,-Wsometimes-uninitialized]
for (i = 0; i < ARRAY_SIZE(flashes); i++) {
^~~~~~~~~~~~~~~~~~~~~~~
core/flash.c:284:7: note: uninitialized use occurs here
if (!flash) {
^~~~~
core/flash.c:271:14: note: remove the condition if it is always true
for (i = 0; i < ARRAY_SIZE(flashes); i++) {
^~~~~~~~~~~~~~~~~~~~~~~
core/flash.c:257:21: note: initialize the variable 'flash' to silence this warning
struct flash *flash;
^
= NULL
Signed-off-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
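The fix clang suggests is to initialise the pointer, making the post-loop NULL check well-defined. A minimal reproduction of the pattern, with a hypothetical find_flash() standing in for the code in core/flash.c:

```c
#include <assert.h>
#include <stddef.h>

struct flash { int id; };

static struct flash *find_flash(struct flash *flashes, size_t n, int id)
{
	struct flash *flash = NULL;	/* the fix: = NULL, as clang suggests */
	size_t i;

	for (i = 0; i < n; i++) {
		if (flashes[i].id == id) {
			flash = &flashes[i];
			break;
		}
	}

	if (!flash)	/* now well-defined when the loop exits unmatched */
		return NULL;
	return flash;
}
```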
|
| |
| |
| |
| |
| |
| | |
(missed this from pr_fmt commit, whoops)
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
(13:31:46) benh: stewart: flash_load_resources()
(13:31:53) benh: stewart: you hit the unlock at the bottom of the loop
(13:31:59) benh: stewart: at that point the list may be empty
(13:32:09) benh: stewart: and so another concurrent load can restart the thread
(13:32:15) benh: stewart: you end up with duplicate threads
(13:32:26) benh: stewart: in which case you can hit the assert
<patch goes here>
(13:34:27) benh: ie, never drop the lock with the queue empty
(13:34:29) benh: unless you know you will exit the job
(13:34:32) benh: otherwise you can have a duplicate job
(13:34:41) benh: -> kaboom
(13:36:29) benh: yeah the decision to exit the loop must be atomic with
the popping of the last element in the list
(13:36:43) benh: to match the decision to start the thread which is atomic
with the queuing of the first element
Reported-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
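The invariant from the discussion can be sketched with single-threaded stand-ins (queue_job/run_jobs are illustrative): the worker declares itself stopped at the same moment it observes the queue empty, and a job only starts a worker when the queue goes from empty to non-empty, so no duplicate worker can ever be started.

```c
#include <assert.h>
#include <stdbool.h>

static int queue[8];
static int queue_len;
static bool worker_running;

static void queue_job(int job)
{
	/* start decision is made while we still see the empty queue,
	 * atomic (in the real code, under the lock) with the enqueue */
	bool start = (queue_len == 0 && !worker_running);

	queue[queue_len++] = job;
	if (start)
		worker_running = true;
}

static int run_jobs(void)
{
	int done = 0;

	while (true) {
		if (queue_len == 0) {
			/* exit decision atomic with seeing the queue
			 * empty: never drop the "lock" with the queue
			 * empty and the worker still marked running */
			worker_running = false;
			return done;
		}
		queue_len--;	/* pop, including the last element */
		done++;
	}
}
```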
|
| |
| |
| |
| | |
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
In lock_recursive(), the condition used to check whether the lock is
already held by the current CPU can be replaced with lock_held_by_me()
to simplify the code.
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
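A sketch of the simplification; the lock_val layout (owner PIR in the top word, held bit at the bottom) is an assumption for illustration, not skiboot's actual encoding:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct lock { uint64_t lock_val; };

static uint32_t this_cpu_pir = 7;	/* stand-in for this_cpu()->pir */

static bool lock_held_by_me(struct lock *l)
{
	return l->lock_val == (((uint64_t)this_cpu_pir << 32) | 1);
}

static void lock_recursive(struct lock *l)
{
	/* was: an open-coded comparison of the owner PIR in lock_val */
	if (lock_held_by_me(l))
		return;
	l->lock_val = ((uint64_t)this_cpu_pir << 32) | 1;
}
```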
|
| |
| |
| |
| |
| |
| | |
i.e. currently only mambo
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
| |
| |
| |
| |
| |
| |
| |
| | |
This means VPD LID is already loaded before we start preloading
kernel and initramfs LIDs, thus ensuring VPD doesn't have to wait
for them to finish being read from FSP.
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
When we have multiple subsystems trying to start concurrent jobs on
different CPUs, they typically pick the first available (operating) CPU
to schedule the job on. This works fine when there's only one set of
jobs, or when we want to bind jobs to specific CPUs.
When we have jobs such as asynchronously loading LIDs and scanning PHBs,
we don't care which CPUs they run on, we care more that they are not
scheduled on CPUs that have existing tasks.
This patch adds a global queue of jobs which secondary CPUs will look
at for work (if idle).
This leads to simplified callers, which just need to queue jobs to NULL
(no specific CPU) and then call a magic function that will run the
CPU job queue if we don't have secondary CPUs.
Additionally, we add a const char *name to cpu_job just to aid with
debugging.
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
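The global queue can be sketched as below; the names loosely follow skiboot's cpu_job API, but the signatures and the queue itself are illustrative:

```c
#include <assert.h>
#include <stddef.h>

struct cpu_job {
	const char *name;	/* added purely to aid debugging */
	void (*func)(void *);
	void *data;
};

#define GLOBAL_QUEUE_MAX 4
static struct cpu_job global_queue[GLOBAL_QUEUE_MAX];
static int global_queue_len;

static int jobs_run;
static void count_job(void *data) { (void)data; jobs_run++; }

/* cpu == NULL means "any idle secondary may pick this up". */
static void cpu_queue_job(void *cpu, const char *name,
			  void (*func)(void *), void *data)
{
	(void)cpu;
	global_queue[global_queue_len++] =
		(struct cpu_job){ .name = name, .func = func, .data = data };
}

/* What an idle secondary runs; the boot CPU calls this itself when
 * there are no secondaries (the "magic function" in the message). */
static int cpu_process_jobs(void)
{
	int ran = 0;

	while (global_queue_len > 0) {
		struct cpu_job job = global_queue[--global_queue_len];
		job.func(job.data);
		ran++;
	}
	return ran;
}
```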
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Instead of synchronously waiting for CAPP microcode during PCI probe,
start preload of CAPP microcode early in boot so that it's present
when we need it during PCI probing.
On some platforms (astbmc), flash access is serialized, and prior to
this patch, the async preload of BOOTKERNEL would have to finish before
loading CAPP ucode would start, needlessly slowing boot.
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
| |
| |
| |
| |
| |
| |
| |
| | |
This should help us capture (in skiboot log) how long we spend waiting
for resources to load from flash/FSP.
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
Reviewed-by: Joel Stanley <joel@jms.id.au>
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Implement start_preload_resource and resource_loaded platform functions
for astbmc machines (palmetto, habanero, firestone).
This means we start loading kernel and initramfs from flash much earlier
in boot, doing things like PCI init concurrently so that by the time
we go to boot the payload, it's already loaded.
Implementation is a simple queue with a job running on another CPU doing
the libflash calls.
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
| |
| |
| |
| |
| |
| |
| |
| | |
This means we will load kernel and initramfs LIDs from FSP/flash
as we init PCI, hopefully reducing boot time.
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
Reviewed-by: Joel Stanley <joel@jms.id.au>
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The opal eeh interrupt handlers raise an opal event
(OPAL_EVENT_PCI_ERROR) whenever there is some processing required from
the OS. The OS then needs to call opal_pci_next_error(...) in a loop
passing each phb in turn to clear the event.
However, opal_pci_next_error(...) clears the event unconditionally,
meaning it would be possible for EEH events to be cleared without being
processed, leading to missed events.
This patch fixes the problem by keeping track of eeh events on a
per-phb basis and only clearing the opal event once all phb eeh events
have been cleared.
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
On FSP based machine, attention LED location code is passed to OPAL
via HDAT. We want to populate this information in device tree under
led node, so that LED driver can use this information.
Presently we are creating '/ibm,opal' node after parsing hdata
information. This patch validates '/ibm,opal' node before creating.
So on FSP based machine we can create this node in hdata itself
without breaking.
Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
In skiboot, CPU stacks are indexed by PIR.
During boot, we have two ideas about what the actual maximum PIR is:
1) detect CPU type (P7 or P8): we know max PIR is max for that proc
(e.g. 1024, 8192)
2) start all CPUs (go through device tree for CPUs that exist).
We now know the *actual* CPUs we have and the max PIR,
e.g. 1, 64, 3319, or whatever.
Each CPU stack is 16KB.
So the max CPU stacks size for P7 is 16MB; for P8 it's 128MB.
The *actual* max for the machine we're booting on is based on the max
PIR we detect during boot. I have found the following:
Mambo: 16KB max (one CPU)
P7: 64, meaning 64*16k = 1MB
P8: 3320, meaning 3320*16k = 51MB
So, currently, we were not resetting the size of the skiboot_cpu_stacks
memory region before booting the payload (we construct that part of the
device tree as the very last thing before booting it), even though the
comment in mem_region.c suggests we were. Because code comments are
evil and are nothing but filthy, filthy lies.
With this patch, we now properly adjust the CPU stacks memory region
size after we've detected CPU type and after we've found the real
max PIR.
This saves between about 77MB and 128MB-16kB of memory from being in a
reserved region and it'll now be available to the OS to use for things
such as cat pictures rather than being firmware stack space waiting for
a CPU that will never appear.
You can see the difference in skiboot log, "Reserved regions:":
Before:
ALL: 0x000031a00000..0000399fffff : ibm,firmware-stacks
After:
Mambo: 0x000031a00000..000031a1ffff : ibm,firmware-stacks
P7: 0x000031a00000..000031afffff : ibm,firmware-stacks
P8: 0x000031a00000..000034ddffff : ibm,firmware-stacks
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
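The arithmetic from the message, as a sketch (cpu_stacks_size() is hypothetical): stacks are indexed by PIR, so the region has to hold max_pir + 1 stacks of 16KB each, and shrinking from the per-processor maximum PIR to the detected one is what reclaims the memory.

```c
#include <assert.h>
#include <stdint.h>

#define STACK_SIZE (16 * 1024)	/* each CPU stack is 16KB */

static uint64_t cpu_stacks_size(uint64_t max_pir)
{
	/* stacks are indexed by PIR, so PIRs 0..max_pir all need one */
	return (max_pir + 1) * (uint64_t)STACK_SIZE;
}
```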
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
A performance issue on HPC workloads was identified with some network
adapters, due to the specific DMA access patterns they use hitting a
worst-case scenario in the PHB.
Disabling the write scope group feature in the PHB works around this,
so let's do that when we detect such an adapter in a PCIe direct slot.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
| |
| |
| |
| |
| |
| |
| |
| | |
Reassure the user that boot is still working when the payload load is
slow.
Signed-off-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The presence detect bit in the standard root complex config space is
not properly implemented on some IBM PHBs. Using it during probe is
incorrect.
We already have a workaround using the hotplug override "AB" detect
bits in the PHB3 code but it somewhat relies on the standard presence
detect bit returning false positives, which happened on Venice/Murano
but no longer happens in Naples.
Similarly, all the slot control stuff in the generic pci_enable_bridge()
isn't going to work properly on the PHB root complex and is unnecessary
as this code is only called after the upper layers have verified the
presence of a valid link on the PHB (the slot power control for the PHB
is handled separately).
This fixes it all by removing the AB detect flag, and unconditionally
using those bits in PHB3 presence detect along with making sure the
code in pci_enable_bridge() that manipulates the slot controls is
only executed on downstream ports.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Not all ipmi related functions check for a valid backend before
attempting to use it. Under normal circumstances this should not
happen as the platform should always register an ipmi backend. However
a system should be able to boot without a functional ipmi backend,
which is sometimes the case during system bringup.
This patch adds presence checks for an ipmi backend before attempting
to use it, thus allowing a system with a non-functional backend to
boot without ipmi.
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
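The guard can be sketched like this (struct ipmi_backend and the function names are illustrative, not skiboot's actual IPMI API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct ipmi_backend { int (*queue_msg)(int msg); };

static struct ipmi_backend *ipmi_backend;	/* registered by platform */

static bool ipmi_present(void)
{
	return ipmi_backend != NULL;
}

static int ipmi_queue_msg_sketch(int msg)
{
	if (!ipmi_present())
		return -1;	/* no backend: fail gracefully, keep booting */
	return ipmi_backend->queue_msg(msg);
}

/* A trivial backend for illustration. */
static int dummy_queue_msg(int msg) { return msg; }
static struct ipmi_backend dummy_backend = { .queue_msg = dummy_queue_msg };
```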
|
|/
|
|
|
|
|
|
|
|
| |
This adds the PVR and CFAM ID for the Naples chip. Otherwise treated as
a Venice.
This doesn't add the definitions for the new PHB revision yet.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
|
|
|
|
|
|
|
| |
Now that opal.h includes opal-api.h, there are a bunch of files that
include both but don't need to.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
| |
Currently, when running on mambo, OPAL_CEC_POWER_DOWN doesn't work: the
simulator keeps running.
We can use the magic mambo support instruction with the right opcode to
ask mambo to stop the simulation.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
|
|
|
|
|
|
| |
And print some information about GPR state, backtrace, etc.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
|
|
|
|
|
|
|
| |
Linux no longer calls it; it never worked on LE and, generally
speaking, never really did anything useful anyway.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
|
|
|
|
|
|
| |
Display an assertion and a backtrace
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
|
|
|
|
|
|
|
| |
Some platforms fail to initialize it, and bad things ensue.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In case of split core, some of the timer facility errors need cleanup
before we proceed with the error recovery.
Certain TB/HDEC errors leave dirty data in the timebase and HDEC
registers, which needs to be cleared before we initiate
clear_tb_errors through TFMR[24].
In split-core mode, the dirty data in the TB/HDEC registers must be
cleared by all subcores (active partitions) before we clear TB errors
through TFMR[24]; HMI recovery fails if even one subcore does not clean
up its TB/HDEC registers. Dirty data can be cleaned by writing zeros to
the TB/HDEC registers.
For an un-split core, any one thread can do the cleanup.
For a split core, any one thread from each subcore can do the cleanup.
Errors that require pre-recovery cleanup:
- SPR_TFMR_TB_RESIDUE_ERR
- SPR_TFMR_HDEC_PARITY_ERROR
This patch implements the pre-recovery steps to clean dirty data from
the TB/HDEC registers for the above timer facility errors.
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Handle TFMR parity errors reported through HMER bit 5 and TFMR bit 60,
i.e. tx_tfmr_corrupt. For recovery, write '1' to TFMR bit 60 to clear
it. Once we clear this error, check the timebase machine state in
TFMR[28:31] and clear TB errors if the timebase machine state is in the
error (9) state. Once we've reset the timebase machine state, continue
loading the TOD into the core TB.
To inject TFMR parity error issue:
$ putscom pu.ex 10013281 0001080000000000 -all
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
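The recovery sequence, sketched over a plain 64-bit value (IBM bit numbering, bit 0 = MSB). The bit positions follow the message but should be treated as assumptions, and the real code operates on the TFMR SPR rather than a variable:

```c
#include <assert.h>
#include <stdint.h>

#define PPC_BIT(n)		(0x8000000000000000ULL >> (n))
#define TFMR_TFMR_CORRUPT	PPC_BIT(60)	/* tx_tfmr_corrupt */
#define TFMR_TB_MACHINE_MASK	0xFULL		/* TFMR[28:31] */
#define TFMR_TB_MACHINE_SHIFT	(63 - 31)
#define TB_STATE_ERROR		9

static uint64_t recover_tfmr_parity(uint64_t tfmr)
{
	/* step 1: clear the corrupt bit (write-1-to-clear on hardware) */
	tfmr &= ~TFMR_TFMR_CORRUPT;

	/* step 2: if the timebase machine is in the error (9) state,
	 * clear TB errors (stand-in for the real TFMR[24] sequence) */
	if (((tfmr >> TFMR_TB_MACHINE_SHIFT) & TFMR_TB_MACHINE_MASK) ==
	    TB_STATE_ERROR)
		tfmr &= ~(TFMR_TB_MACHINE_MASK << TFMR_TB_MACHINE_SHIFT);

	return tfmr;	/* step 3: caller continues loading TOD into TB */
}
```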
|
|
|
|
|
| |
Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
|
|
|
|
|
|
|
|
| |
The OPAL_SENSOR_READ call in Linux currently only tests for the
OPAL_ASYNC_COMPLETION value. It is therefore safe to change this return
value to give some extra information on the platform's sensor support.
Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
|
|
|
|
|
|
|
|
|
| |
This patch simply adds sensor nodes for the core temperatures. It uses
the core PIR as a resource identifier to fit into the sensor model.
The device tree nodes use the new layout.
Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch adds a new sensor family for Digital Temperature Sensors
and a new resource class to capture the core temperatures.
Each core has four DTS located in different zones (LSU, ISU, FXU, L3).
The max of the four temperatures is computed and returned for the core
as well as a global trip point value. This is based on the meltbox tool.
Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
|
|
|
|
|
|
|
|
|
| |
This patch introduces an initial framework to define a sensor_read
operation per platform. It also proposes a few helper routines to
work on the sensor 'handler' which identifies a sensor and attribute
in the OPAL_SENSOR_READ call.
Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
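One plausible shape for the handler helpers: pack family, class, and resource id into the 32-bit handler passed to OPAL_SENSOR_READ. The field widths here are assumptions for illustration, not skiboot's actual layout:

```c
#include <assert.h>
#include <stdint.h>

#define SENSOR_FAMILY_SHIFT 24
#define SENSOR_CLASS_SHIFT  16

static uint32_t sensor_handler(uint8_t family, uint8_t cls, uint16_t rid)
{
	return ((uint32_t)family << SENSOR_FAMILY_SHIFT) |
	       ((uint32_t)cls << SENSOR_CLASS_SHIFT) | rid;
}

/* The matching decode helpers a platform sensor_read() would use. */
static uint8_t  sensor_family(uint32_t h) { return h >> SENSOR_FAMILY_SHIFT; }
static uint8_t  sensor_class(uint32_t h)  { return h >> SENSOR_CLASS_SHIFT; }
static uint16_t sensor_rid(uint32_t h)    { return h & 0xFFFF; }
```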
|
|
|
|
|
|
|
|
|
|
|
| |
Call abort if OCC LID preload fails.
Related discussion:
https://lists.ozlabs.org/pipermail/skiboot/2015-March/000636.html
Suggested-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This moves away from using fsp_sync_msg in fsp_fetch_data: instead we
use the platform start_preload_resource() hook to queue up a load, with
the plumbing to check whether a resource has loaded yet.
This gets rid of the "pollers called with locks held" warnings we got
heaps of previously. You can now boot some FSP systems without getting
this warning at all.
This also sets the stage for starting the load of LIDs much earlier
than the point where they're needed, improving boot time.
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
|
|
|
|
|
|
|
|
| |
No functional changes in what happens; we just have two calls, one for
queueing the preload and the other for waiting until it has loaded.
Future patches will introduce platform-specific queueing.
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
|
|
|
|
|
|
| |
Detect recursive opal poller calls from the same CPU.
Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
|
|
|
|
|
|
|
|
|
| |
On a Malfunction Alert, check whether it was driven by the NX: the NX
status register's HMI Active bit will be set if the HMI was caused by
an NX checkstop.
Read all NX FIRs to identify the reason for the checkstop and update
the HMI event with the relevant error information.
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
|
|
|
|
|
|
|
|
|
| |
On a Malfunction Alert, read all core FIRs to identify the reason for
the core checkstop and update the HMI event with the relevant error
information.
This patch changes the HMI event version to 2 (OpalHMIEvt_V2).
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Enhance the HMI event structure to accommodate CORE/NX check stop error
information and bump up the HMI event version to V2.
/* version 2 and later */
union {
	/*
	 * checkstop info (Core/NX).
	 * Valid for OpalHMI_ERROR_MALFUNC_ALERT.
	 */
	struct {
		uint8_t	xstop_type;	/* enum OpalHMI_XstopType */
		uint8_t	reserved_1[3];
		__be32	xstop_reason;
		union {
			__be32 pir;	/* for CHECKSTOP_TYPE_CORE */
			__be32 chip_id;	/* for CHECKSTOP_TYPE_NX */
		} u;
	} xstop_error;
} u;
This patch just adds new fields to HMI event structure. The subsequent
patches will implement the logic to identify reason for CORE/NX checkstop.
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
|
|\ |
|
| |
| |
| |
| |
| |
| | |
We have the logic inverted here.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The boot count sensor is a discrete sensor that is set once the system
is up and running.
On successful boot, the BMC expects the sensor to be set to 2.
Signed-off-by: Joel Stanley <joel@jms.id.au>
|
| |
| |
| |
| |
| |
| |
| |
| | |
We set the IPMI firmware progress sensor to indicate the boot progress
of the system. The x86-centric IPMI names don't fit perfectly into what
skiboot does, but they do give some indication of the system state.
Signed-off-by: Joel Stanley <joel@jms.id.au>
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
BMC based systems contain a PNOR to provide flash storage. The host
normally has exclusive access to the PNOR, however the BMC may use IPMI
to request access to perform functions such as update the firmware.
Indicate to users of the flash that the device is busy by taking the
lock, and setting a per-flash busy flag, which causes flash operations
to return OPAL_BUSY.
Minor changes from Jeremy Kerr
Signed-off-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
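The busy-flag scheme can be sketched as follows (names illustrative; OPAL_BUSY's value matches opal-api.h but is restated here so the sketch stands alone):

```c
#include <assert.h>
#include <stdbool.h>

#define OPAL_SUCCESS 0
#define OPAL_BUSY    (-2)	/* as in opal-api.h */

struct flash { bool busy; };

/* The IPMI handler sets busy when the BMC requests the PNOR, and
 * clears it when access is released; every flash op checks it. */
static int flash_read_sketch(struct flash *f)
{
	if (f->busy)
		return OPAL_BUSY;	/* BMC owns the PNOR right now */
	/* ...perform the read... */
	return OPAL_SUCCESS;
}
```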
|