| Commit message | Author | Age | Files | Lines |
... | |
|
|
|
|
| |
Signed-off-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
A QEMU PowerNV machine does not necessarily have a BT device. It needs
to be defined on the command line with:
-device ipmi-bmc-sim,id=bmc0 -device isa-ipmi-bt,bmc=bmc0,irq=10
When the QEMU platform is initialized by skiboot, we need to check
that such a device is present and if not, skip the AST initialization.
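As a rough sketch of the guard (not the actual patch: the device-tree
compatible string and the placement of the check are assumptions), the idea
is simply:
        /* Skip the AST/BMC setup when QEMU wasn't started with a BT
         * device; "bt" is an assumed compatible string for the IPMI BT
         * node in the QEMU-provided device tree. */
        if (!dt_find_compatible_node(dt_root, NULL, "bt")) {
                prlog(PR_INFO, "PLAT: no BT device, skipping AST init\n");
                return;
        }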
Fixes: 8340a9642bba ("plat/qemu: use the common OpenPOWER routines to initialize")
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If an i2c request cannot go through the first time, because the bus is
found in error and needs a reset, or because it's locked by the OCC for
example, the underlying i2c implementation uses timers to manage the
request. However, during opal init, opal pollers may not be called; it
depends on the context in which the i2c request is made. If the
pollers are not called, the timers are not checked and we can end up
with an i2c request which will not move forward, and skiboot hangs.
Fix it by explicitly checking the timers if we are waiting for an i2c
request to complete and it seems to be taking a while.
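The idea, as a hedged sketch rather than the actual patch (it assumes
skiboot's check_timers() and time_wait_ms() helpers; the variable names and
threshold are placeholders):
        /* While waiting synchronously for the i2c request, run the timer
         * checks ourselves in case the opal pollers aren't called in this
         * boot context. */
        while (!req_completed) {
                time_wait_ms(10);
                waited_ms += 10;
                if (waited_ms > 500)            /* "taking a while" */
                        check_timers(false);
        }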
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Tested-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch enables HBRT to use the HYP special wakeup register on
OpenBMC systems; until now it was only used on FSP-based machines.
This patch also adds a capability check for opal-prd so that HBRT can
decide if the host special wakeup register can be used.
Fixes: 49999302251b ("opal-prd: Add support for runtime OCC reset in ZZ")
Signed-off-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We do this by assuming filenames with '.ecc' in them are already ECC
protected.
This solves a practical problem in transitioning op-build to use ffspart
for pnor assembly rather than three perl scripts and a lot of XML.
We also update the ffspart tests to take into account ECC requirements.
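A standalone illustration of that heuristic (the names here are made up;
this is not ffspart's actual code):
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Treat any input whose filename contains ".ecc" as already carrying
 * ECC bytes, so the tool doesn't apply ECC protection a second time. */
static bool file_is_ecc(const char *filename)
{
        return strstr(filename, ".ecc") != NULL;
}

int main(void)
{
        assert(file_is_ecc("bootkernel.ecc"));
        assert(!file_is_ecc("bootkernel.bin"));
        return 0;
}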
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
Reviewed-by: Samuel Mendoza-Jonas <sam@mendozajonas.com>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
| |
Otherwise we saw failures in CI due to the ~221 character paths Jenkins
likes to have.
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
Reviewed-by: Samuel Mendoza-Jonas <sam@mendozajonas.com>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
| |
Do 4096-byte chunks, not 8-byte chunks. An ffspart invocation constructing
a 64MB PNOR goes from a couple of seconds to ~0.1 seconds with this
patch.
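A standalone sketch of the buffering idea (not ffspart's actual code): pad
with one fwrite() per 4096-byte chunk instead of millions of tiny writes.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative only: pad 'len' bytes of 0xff to 'out' in 4096-byte
 * chunks, i.e. one buffered fwrite() per chunk instead of per 8 bytes. */
static int write_pad(FILE *out, size_t len)
{
        char chunk[4096];

        memset(chunk, 0xff, sizeof(chunk));
        while (len) {
                size_t n = len < sizeof(chunk) ? len : sizeof(chunk);

                if (fwrite(chunk, 1, n, out) != n)
                        return -1;
                len -= n;
        }
        return 0;
}

int main(void)
{
        FILE *out = fopen("blank.img", "wb");
        int rc;

        if (!out)
                return EXIT_FAILURE;
        /* 64MB of padding: ~16k writes instead of ~8 million. */
        rc = write_pad(out, 64 * 1024 * 1024);
        fclose(out);
        return rc ? EXIT_FAILURE : EXIT_SUCCESS;
}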
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
Reviewed-by: Samuel Mendoza-Jonas <sam@mendozajonas.com>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
| |
This reverts commit d8b161f4b361f70a7bb43be47d4a32b8f937287a.
As discussed on list, a bit premature to merge, removing for now.
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
| |
Okay, so maybe the static analysis warning is all useless, and maybe
having the ifdef around a call is actually useful. I'll take the reduced
noise in my CI static analysis runs.
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
| |
Caught by static analysis. The previous if() condition was ensuring lxr
was not null, so we don't need this additional check.
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
| |
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
| |
Basically to shut up static analysis complaining about using a boolean
in a non-boolean (bitwise) context.
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
| |
Again, this makes things look slightly different so I don't keep seeing
the static analysis warning.
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
| |
The static analysis tool is arguably wrong and should go away.
But... I'm sick of keeping coming back to it and reviewing the false
positives enough to make a slight change to where ifdefs are.
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is a dumb warning from a certain static analysis tool: that a
function has no effect when the ifdef that would make it have an effect
isn't defined and we replace it with a no-op implementation.
Put the #ifdef around the call just so I don't have to discount this
damn static analysis false positive every time I go and look at the
results.
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
| |
A certain finicky static analysis tool did point out that we were
operating on a value that could be null (and since first_cpu() calls
next_cpu(NULL) to get the first one, it also gets to be complained about
as next_cpu() could act on that NULL pointer).
So, rework things to shut the static analysis tool up, when in fact this
was never a problem.
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If a GPU is passed through to a guest and the guest unexpectedly terminates,
there can be cache lines in CPUs that belong to the GPU. So purge the caches
as part of the reset sequence. L1 is write-through, so it doesn't need to be purged.
The sequence to purge the L2 and L3 caches from the hw team:
"L2 purge:
(1) initiate purge
putspy pu.ex EXP.L2.L2MISC.L2CERRS.PRD_PURGE_CMD_TYPE L2CAC_FLUSH -all
putspy pu.ex EXP.L2.L2MISC.L2CERRS.PRD_PURGE_CMD_TRIGGER ON -all
(2) check this is off in all caches to know purge completed
getspy pu.ex EXP.L2.L2MISC.L2CERRS.PRD_PURGE_CMD_REG_BUSY -all
(3) putspy pu.ex EXP.L2.L2MISC.L2CERRS.PRD_PURGE_CMD_TRIGGER OFF -all
L3 purge:
1) Start the purge:
putspy pu.ex EXP.L3.L3_MISC.L3CERRS.L3_PRD_PURGE_TTYPE FULL_PURGE -all
putspy pu.ex EXP.L3.L3_MISC.L3CERRS.L3_PRD_PURGE_REQ ON -all
2) Ensure that the purge has completed by checking the status bit:
getspy pu.ex EXP.L3.L3_MISC.L3CERRS.L3_PRD_PURGE_REQ -all
You should see it say OFF if it's done:
p9n.ex k0:n0:s0:p00:c0
EXP.L3.L3_MISC.L3CERRS.L3_PRD_PURGE_REQ
OFF"
Suggested-by: Alistair Popple <alistair@popple.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Rashmica Gupta <rashmica.g@gmail.com>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
| |
Back in 2016, we did not have broad support for the PowerNV devices
under QEMU and we were using our own custom ones. This has changed, and
we can now use all the common init routines of the OpenPOWER
platforms.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Each XTS MMIO ATSD# register is accompanied by another register -
XTS MMIO ATSD0 LPARID# - which controls LPID filtering for ATSD
transactions.
When a host system passes a GPU through to a guest, we need to enable
some ATSD for an LPAR. At the moment the host assigns one ATSD to
an NVLink bridge and this maps it to an LPAR when the GPU is assigned to
the LPAR. The link number is used as the ATSD index.
ATSD6 & 7 stay mapped to the host (LPAR=0) all the time, which seems an
acceptable price for the simplicity.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The current kernel calls OPAL_PCI_EEH_FREEZE_STATUS with an uninitialized
@pci_error_type parameter and then analyzes it even if the OPAL call
returned OPAL_SUCCESS. This results in unexpected EEH events and NPU
freezes.
This initializes @pci_error_type and @severity to known safe values.
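A minimal sketch of that defensive initialisation (the surrounding function
is skiboot's freeze-status handler; OPAL_EEH_NO_ERROR and
OPAL_EEH_SEV_NO_ERROR are the opal-api "no error" tokens, and the null
checks are an assumption about the call site):
        /* Set the out parameters to safe defaults up front, so a caller
         * that inspects them despite an early return doesn't read garbage. */
        if (pci_error_type)
                *pci_error_type = OPAL_EEH_NO_ERROR;
        if (severity)
                *severity = OPAL_EEH_SEV_NO_ERROR;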
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The P9 NPU workbook says that only 4K/64K/16M/256M page sizes are supported
and in fact npu2_map_pe_dma_window() supports just these, but in the absence
of the "ibm,supported-tce-sizes" property Linux assumes the default P9 PHB4
page sizes - 4K/64K/2M/1G - so when Linux tries 2M/1G TCEs, we get lots of
"Unexpected TCE size" from npu2_tce_kill().
This advertises the TCE page sizes so that Linux can handle them correctly,
i.e. fall back to 4K/64K TCEs.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
| |
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
| |
[figlet ASCII art of the word "security" in scare quotes]
Signed-off-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
| |
This updates the Ubuntu 'latest' image to use ubuntu:rolling, which is
the most recent release. It turns out that ubuntu:latest is actually the
latest LTS (18.04).
Signed-off-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This test checks that the partitions are correctly laid out when the
eraseblock size is greater than the start of the first partition.
Currently ffspart fails to create a valid image in this case.
There are two tests. The second is expected to fail but it is marked as
passing for now.
This test requires pflash to work. Currently we leave that as an
exercise for the user.
Signed-off-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
| |
This test specifies a toc in the configuration file.
There are no tests or documentation for the toc syntax, so this exists
to describe how to specify a toc.
Signed-off-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
| |
If the link trains in degraded mode, log the ODL endpoint information
register for debug. Its content is specific to the DLx and TLx
implementation, so this information is really only useful to the
hardware team.
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
| |
There's no status readily available to tell the effective link
width. Instead, we have to look at the individual status of each lane,
in the transmit and receive directions. All relevant information is in
the ODL status register.
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
| |
Log the link training status register in case of failure to train.
It can have useful information for the hardware team.
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
| |
On P8 this is called when we exit fastsleep, and we shouldn't measure
the "time" spent in the call for what (in retrospect) is an obvious
reason.
Fixes: 50ea35c2d07874755c03e6ae2bdf7a33ad2c768a
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We also had some rogue "IBM Confidential" strings that we failed to
remove with the original change of Copyright headers for open sourcing.
Do this by synchronising with the hostboot copy of the code, which
removed the Confidential string when their copyright headers changed for
initial open sourcing of the code back in 2014. See hostboot commit
3bcf5b7982bb8a2d9227dbff7be4ff2ce5fec05c where the HWP copyright headers
were updated.
We likely missed this as we did a similar process inside the skiboot
repository, but only on the (C) headers themselves.
The libpore changes that we were missing *look* minor, but we need to
throw some testing at them at least, as there *are* changes that we had
been missing.
We also have to make a minor modification (being sent upstream) to avoid
a compiler warning about an always-false comparison (< 0 on an unsigned int).
Reported-by: Dawn Sylvia <ddzubak@us.ibm.com>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
| |
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
(cherry picked from commit f4afd85a84ab090ddda7aea18c5153755777f103)
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In dtc v1.4.5 through at least v1.4.7, a few bugs have been introduced
that change the layout of what's produced in the dts. In order to be
immune from them, we should use the (provided) dtdiff utility, but we
also need to run the dts we're diffing against through a dtb cycle in
order to ensure we get the same format as what the hdat_to_dt to dts
conversion will produce.
This fixes a bunch of unit test failures with the version of dtc shipped
with recent Linux distros such as Fedora 29.
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
| |
When the bus alloc and free methods were removed we missed a case in the
Firenze platform slot code that relied on the bus-specific method to set
the bus pointer in the request structure. This results in a
branch-to-null during boot and a crash. This patch fixes it by
initialising the pointer manually here.
Fixes: 801462feb7d6 ("core/i2c: Remove bus specific alloc and free callbacks")
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Libflash currently merges contiguous ECC-protected ranges, but doesn't
check that the ECC bytes at the end of the first and start of the second
range actually match sanely. More importantly, if blocklevel_read() is
called with a position at the start of a partition that is contained
somewhere within a region that has been merged, it will update the
position assuming ECC wasn't being accounted for. This results in the
position being somewhere well after the actual start of the partition,
which is incorrect.
For now, remove the code merging ranges. This means more ranges must be
held and checked; however, it prevents incorrectly reading ECC-protected
regions like the one below:
[ 174.334119453,7] FLASH: CAPP partition has ECC
[ 174.437349574,3] ECC: uncorrectable error: ffffffffffffffff ff
[ 174.437426306,3] FLASH: failed to read the first 0x1000 from CAPP partition, rc 14
[ 174.439919343,3] CAPP: Error loading ucode lid. index=201d1
Signed-off-by: Samuel Mendoza-Jonas <sam@mendozajonas.com>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
| |
This fell out in f58be46 "libflash/test: Rewrite Makefile.check to
improve scalability". Add it back in as test-blocklevel.
Signed-off-by: Samuel Mendoza-Jonas <sam@mendozajonas.com>
Acked-by: Andrew Jeffery <andrew@aj.id.au>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
| |
Measure entry/exit time for OPAL calls and warn appropriately if the
calls take too long (>100ms gets us a DEBUG log, > 1000ms gets us a
warning).
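The pattern is simple; a standalone sketch for illustration (skiboot itself
reads the timebase register and uses its own logging macros, so
clock_gettime() and printf() here are just stand-ins):
#define _POSIX_C_SOURCE 199309L
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static uint64_t now_ms(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

int main(void)
{
        struct timespec delay = { 0, 150 * 1000 * 1000 };  /* ~150ms "call" */
        uint64_t start, took;

        start = now_ms();
        nanosleep(&delay, NULL);                /* stands in for the OPAL call */
        took = now_ms() - start;

        if (took > 1000)
                printf("WARNING: call took %llums\n", (unsigned long long)took);
        else if (took > 100)
                printf("DEBUG: call took %llums\n", (unsigned long long)took);
        return 0;
}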
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
| |
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
On a plain boot, this reduces the time spent in OPAL by ~170ms on
p9dsu. This is due to hiomap (currently) using synchronous IPMI
messages.
It will also *significantly* reduce latency on runtime flash
operations, as we'll typically spend 10-20ms in OPAL rather than
100-200ms. It's not an ideal solution to the problem, but it's a quick
and obvious win for jitter.
Cc: stable
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
| |
When an opencapi device is used via the Acorn adapter, the link used
is connected to the "middle" group of lanes of the obus. We were using
the wrong set of lanes. The link was somehow still training, likely
because the default settings at power-on were good enough, but it's
still wrong.
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
| |
The I2C read to find out if a device on the GPU slot is an opencapi
adapter or nvidia card reports an "arbitration loss" error if no
device is connected on the GPU slot. That I2C read is actually useless
if we already know there's no device connected, so let's skip it. It
will avoid logging a harmless error.
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
| |
Suggested-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Rashmica Gupta <rashmica.g@gmail.com>
Reviewed-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
| |
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
(cherry picked from commit e550528a74af7e632c359cd29e4ba295743bdb84)
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This isn't *necessarily* an error that we should complain loudly about.
If, for example, the BMC enforces the Read Only flag on an FFS partition,
opening a write window *should* fail, and we do indeed test this in
op-test.
Thus we deal with the error in a well-known path: return an error
code, and eventually it's a userspace problem.
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
| |
It seems that newer toolchains get us multiple ctors sections to link in
rather than just one. If we discard them (as we were doing), then we
don't have a working gcov build (and we get the "doesn't look sane"
warning on boot).
So, include ctors* and all is well.
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
| |
Just pass the container structure rather than bus_id and xscom_base to
tpm_i2c_request_send(). Rename xscom_base to i2c_addr while we're here,
since the old name was just plain wrong.
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fix the fix of ORing in the BMC state: we only want to retain state
covered by the ack mask, as this is something we still need to handle.
Critically, we must not retain state not covered by the ack mask as this
may lead to host firmware attempting to communicate with a dead daemon
or attempting to access the PNOR whilst the daemon is not in control of
the flash.
Further, add unit tests to capture the desired (and now implemented)
behaviour.
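The retained-state rule is easy to show in isolation. In this hedged,
standalone sketch the bit names and values are loosely modelled on the
hiomap events but are assumptions, not the real protocol definitions:
#include <assert.h>
#include <stdint.h>

/* Bits the host must acknowledge (placeholder values, loosely modelled
 * on the hiomap protocol-reset/window-reset events). */
#define ACK_MASK        0x03u
#define DAEMON_READY    0x80u   /* not covered by the ack mask */

/* Keep only un-acked state from the old value; everything else comes
 * from the daemon's freshly reported state. */
static uint8_t update_state(uint8_t old, uint8_t cur)
{
        return (old & ACK_MASK) | cur;
}

int main(void)
{
        /* An unacknowledged window-reset (bit 1) must survive an update... */
        assert(update_state(0x02, 0x00) == 0x02);
        /* ...but a stale daemon-ready bit from a dead daemon must not. */
        assert(update_state(DAEMON_READY | 0x02, 0x00) == 0x02);
        return 0;
}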
Fixes: 34cffed2ccf3 ("libflash/ipmi-hiomap: Improve event handling")
Signed-off-by: Andrew Jeffery <andrew@aj.id.au>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
| |
Lay the ground work for unit testing the ipmi-hiomap implementation. The
design hooks a subset of the IPMI interface to move through a
data-driven "scenario" of IPMI message exchanges. Two basic tests are
added exercising the initialisation path of the protocol implementation.
Signed-off-by: Andrew Jeffery <andrew@aj.id.au>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
libflash/ipmi-hiomap.c: In function ‘hiomap_window_move’:
libflash/ipmi-hiomap.c:17:21: error: format ‘%llu’ expects argument of type ‘long long unsigned int’, but argument 3 has type ‘uint64_t’ {aka ‘long unsigned int’} [-Werror=format=]
#define pr_fmt(fmt) "HIOMAP: " fmt
^~~~~~~~~~
include/skiboot.h:93:41: note: in expansion of macro ‘pr_fmt’
#define prlog(l, f, ...) do { _prlog(l, pr_fmt(f), ##__VA_ARGS__); } while(0)
^~~~~~
include/skiboot.h:94:30: note: in expansion of macro ‘prlog’
#define prerror(fmt...) do { prlog(PR_ERR, fmt); } while(0)
^~~~~
libflash/ipmi-hiomap.c:291:3: note: in expansion of macro ‘prerror’
prerror("Invalid window properties: len: %llu, size: %llu\n",
^~~~~~~
libflash/ipmi-hiomap.c:291:47: note: format string is defined here
prerror("Invalid window properties: len: %llu, size: %llu\n",
~~~^
%lu
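For reference, the portable way to print a uint64_t that silences this class
of warning is the inttypes.h macro, as in this standalone example (not the
actual patch):
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t len = 0x1000, size = 0x100000;

        /* %llu assumes unsigned long long; on 64-bit Linux uint64_t is
         * unsigned long, hence the -Wformat error. PRIu64 expands to the
         * right conversion specifier for uint64_t on any ABI. */
        printf("Invalid window properties: len: %" PRIu64 ", size: %" PRIu64 "\n",
               len, size);
        return 0;
}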
Signed-off-by: Andrew Jeffery <andrew@aj.id.au>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The current implementation makes it hard to expand the list of tests if
we want to build anything that doesn't link to mbox-server. This is a
consequence of embedding the $(LIBFLASH_TEST_EXTRA) variable inside the
recipes for building test executables, which makes the makefile a bit of
a maze to navigate.
To address this we could go the route of duplicating the
$(LIBFLASH_TEST), $(LIBFLASH_TEST_EXTRA) and the corresponding make
directives (targets/prerequisites/recipes) each time we want to link a
binary against a new set of objects, but that seems ham-fisted.
Further, $(LIBFLASH_TEST_EXTRA) is defined in terms of the relevant
object (.o) files, but the recipes it is used in otherwise use source
(.c) paths for compilation. These other paths are typically to non-test
code that needs to be compiled into the test executable, but we can't
use object files at the usual path because we will typically have a
conflict of architectures (PPC64 for the skiboot object, x86_64 for the
test object). This in turn means that we will compile source files
multiple times (once for each test binary it is required in) rather than
re-using an existing object file.
Further, the current structure of the Makefile requires that we #include the
.c file under test directly into the test source if we want it in a
specific test case due to the relationship of the prerequisites to the
build (only the first source prerequisite is included in the build). The
include-the-c-file approach can have some annoying side-effects with
respect to macros, typically errors regarding redefinition. While it is
useful for testing static functions in the source under test, it would
be nice if this approach was optional rather than required.
This change attempts to address all of these issues. The outcome is we
have precise control of which objects get linked into each test binary,
we avoid the architecture clash problem, we re-use existing compiled
objects (avoiding recompilation), and we make the include-the-c-file
approach optional.
The general approach is to generate a new directory hierarchy of object
files under a `$(HOSTCC) -dumpmachine` directory in the repository root
and use these for linking the test cases. Objects that land in this
segregated tree are described by a _SOURCES variable for each test,
similar in structure and behaviour to automake's _SOURCES variables.
Again similar to automake, a check_PROGRAMS variable is used that
describes the path of each test binary to be built.
The test binary paths are mapped to the corresponding _SOURCES variable
by some secondary-evaluation wizardry that no-one has to pay any
attention to once it is written. Whilst the implementation is perhaps
slightly tricky, it allows us to avoid the recipe headache of
unconditionally linking in objects defined in variables that don't
directly participate in the target's prerequisites, and so prevents the
explosion of variables as we implement tests that require disjoint sets
of dependencies.
This is initially intended as an isolated experiment with the libflash
test makefile, but it's feasible that the scope of the concept could be
expanded to other test Makefiles.
Signed-off-by: Andrew Jeffery <andrew@aj.id.au>
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
|