Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
Also add a few definitions that were missing.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
According to the discussion [1] and the follow-up documentation, the DMA controller
driver shouldn't start any DMA operations when dmaengine_submit() is called.
This patch fixes the workflow in the dw_dmac driver to follow the documentation.
[1] http://www.spinics.net/lists/arm-kernel/msg125987.html
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
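For reference, this is the client-side pattern the dmaengine documentation describes: dmaengine_submit() only queues the descriptor, and the hardware is kicked off by dma_async_issue_pending(). A minimal sketch against the generic API; the wrapper function and variable names are illustrative, not taken from the patch:

#include <linux/dmaengine.h>

/* Sketch: a dmaengine client queues work with submit() and starts it with
 * issue_pending(); nothing runs on the hardware before that. */
static int queue_one_transfer_sketch(struct dma_chan *chan,
                                     dma_addr_t buf, size_t len)
{
        struct dma_async_tx_descriptor *desc;
        dma_cookie_t cookie;

        desc = dmaengine_prep_slave_single(chan, buf, len, DMA_MEM_TO_DEV,
                                           DMA_PREP_INTERRUPT);
        if (!desc)
                return -ENOMEM;

        cookie = dmaengine_submit(desc);        /* only adds to the pending queue */
        if (dma_submit_error(cookie))
                return -EIO;

        dma_async_issue_pending(chan);          /* this actually starts the transfer */
        return 0;
}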
It would be useful to know when the first descriptor in the queue is started
along with its cookie.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We have duplicated code that starts the first descriptor in the queue. Let's turn
it into a separate helper that can be used in the future as well.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
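A hedged sketch of what such a helper could look like, using the dw driver's usual dwc naming; the list and field names are assumptions and the real helper may differ:

/* Sketch: start the first descriptor on the queue, if any (lock assumed held). */
static void dwc_dostart_first_queued(struct dw_dma_chan *dwc)
{
        struct dw_desc *desc;

        if (list_empty(&dwc->queue))
                return;

        desc = list_first_entry(&dwc->queue, struct dw_desc, desc_node);
        dwc_dostart(dwc, desc);
}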
The pl330_chan_ctrl() function has three internal code paths which, except for the
locking, do not share any code outside of their sections. One code path is never
exercised and can be removed. The other two are mostly just forwards to the
_start() and _stop() calls. This patch modifies the code to call _start() and
_stop() directly instead of going via pl330_chan_ctrl(), which allows
pl330_chan_ctrl() to be removed completely.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Instead of storing a special instruction in the command buffer to mark a request
as currently unused, just set the descriptor field to NULL.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The pl330_req struct is embedded in the dma_pl330_desc struct, but half of the
pl330_req struct consists of pointers to other fields of the dma_pl330_desc struct
it is embedded in. By directly embedding the fields from the pl330_req struct into
the dma_pl330_desc struct and reworking the code to work with the dma_pl330_desc
struct, those pointers can be eliminated. This slightly simplifies the code.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Both the dma_pl330_dmac and the pl330_dmac struct have the same lifetime, and the
separation between them is a relic of this having been two different drivers in
the past. Merging them into one struct makes the code a bit simpler, as it for
example allows removing the pointers going back and forth between the two structs.
While we are at it, also directly embed the pl330_info struct into the pl330_dmac
struct, as this allows removing some more redundant fields.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Since we keep a pointer to the manager thread it is fairly easy to check if a
thread is the manager thread.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
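A minimal sketch of such a check, assuming the manager pointer lives in the dmac structure the thread points back to (the field names are assumptions):

static inline bool is_manager(struct pl330_thread *thrd)
{
        /* The thread is the manager iff the dmac's manager pointer is it. */
        return thrd->dmac->manager == thrd;
}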
We know that we do not create invalid ccr settings in this driver. There is no
need to validate them.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The pl330_chid field of the dma_pl330_chan struct always holds a pointer to the
thread that is associated with the channel. Changing its type from void * to
struct pl330_thread * makes things more type safe and removes the need for
unnecessary typecasts. While we are at it, also rename the field from the cryptic
pl330_chid to thread.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The xfer_cb callback of the pl330_req struct is always set to the same function.
This adds an unnecessary step of indirection. Instead just call the callback
function directly.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The mc_len is initialized but its value is never read again, so remove it.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The next field is always NULL, so we can remove it.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The field is completely unused, remove it.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The dmac_reset() callback of the pl330_info struct is always set to NULL, so
remove it.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The pl330_chanstatus struct is completely unused, so remove it.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The settings for destination and source cache control are exactly the same. This
patch removes the duplicated enum and uses a single enum for both destination and
source cache control.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The pl330 driver has the custom pl330_reqtype enum which has the same possible
settings as the generic dma_transfer_direction enum. Switching over to the
generic enum internally makes it possible to directly initialize it from the
transfer request direction.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
To be able to see debug messages during boot, enable the debug settings
from Kconfig also for drivers in subdirectories.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Use the zeroing function instead of dma_alloc_coherent & memset(,0,)
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
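A hedged before/after sketch of the pattern; dma_zalloc_coherent() is the zeroing helper that existed at the time of this series, and the ring/size names are illustrative:

#include <linux/dma-mapping.h>

/* Sketch: allocate a zeroed coherent buffer in one call. */
static void *alloc_ring_sketch(struct device *dev, size_t size, dma_addr_t *dma)
{
        /*
         * Before:
         *      ring = dma_alloc_coherent(dev, size, dma, GFP_KERNEL);
         *      if (ring)
         *              memset(ring, 0, size);
         */
        return dma_zalloc_coherent(dev, size, dma, GFP_KERNEL);
}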
This patch adds support for the end of transaction (EOT) and notify when done
(NWD) hardware descriptor flags.
The EOT flag requests that the peripheral assert an end of transaction interrupt
when that descriptor is complete. It also results in a special signaling protocol
being used between the attached peripheral and the core using the DMA controller.
Clients specify DMA_PREP_INTERRUPT to enable this flag.
The NWD flag requests that the peripheral wait until the data has been fully
processed by the peripheral before moving on to the next descriptor. Clients
specify DMA_PREP_FENCE to enable this flag.
Signed-off-by: Andy Gross <agross@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
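From the client side these behaviours are requested through the generic dmaengine preparation flags; a minimal sketch (the wrapper function itself is illustrative):

#include <linux/dmaengine.h>

/* Sketch: request the EOT and NWD behaviour described above. */
static struct dma_async_tx_descriptor *
prep_with_eot_nwd_sketch(struct dma_chan *chan, struct scatterlist *sgl,
                         unsigned int sg_len)
{
        unsigned long flags = 0;

        flags |= DMA_PREP_INTERRUPT;    /* EOT: interrupt when this descriptor completes */
        flags |= DMA_PREP_FENCE;        /* NWD: data fully processed before the next descriptor */

        return dmaengine_prep_slave_sg(chan, sgl, sg_len, DMA_MEM_TO_DEV, flags);
}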
Fix a potential risk when both NET_DMA and ASYNC_TX are enabled. The current
descriptor release process lacks async_tx support: all descriptors are released
regardless of whether they are acked or not acked by async_tx, so there is a
potential race condition when the DMA engine is used by other clients (e.g. when
NET_DMA is enabled to offload TCP).
In our case, a race condition is raised when both talitos and the dmaengine are
used to offload xor, because the NAPI scheduler syncs all pending requests in the
DMA channels; this affects the processing of RAID operations because the ack flag
is not checked in fsl-dma. A descriptor that was just submitted but not yet acked
is freed, and, as a dependent tx, this freed descriptor triggers
BUG_ON(async_tx_test_ack(depend_tx)) in async_tx_submit():
TASK = ee1a94a0[1390] 'md0_raid5' THREAD: ecf40000 CPU: 0
GPR00: 00000001 ecf41ca0 ee1a94a0 0000003f 00000001 c00593e4 00000000 00000001
GPR08: 00000000 a7a7a7a7 00000001 00000002 42028042 100a38d4 ed576d98 00000000
GPR16: ed5a11b0 00000000 2b162000 00000200 00000000 2d555000 ed3015e8 c15a7aa0
GPR24: 00000000 c155fc40 00000000 ecb63220 ecf41d28 ef640bb0 ef640c30 ecf41ca0
NIP [c02b048c] async_tx_submit+0x6c/0x2b4
LR [c02b068c] async_tx_submit+0x26c/0x2b4
Call Trace:
[ecf41ca0] [c02b068c] async_tx_submit+0x26c/0x2b4 (unreliable)
[ecf41cd0] [c02b0a4c] async_memcpy+0x240/0x25c
[ecf41d20] [c0421064] async_copy_data+0xa0/0x17c
[ecf41d70] [c0421cf4] __raid_run_ops+0x874/0xe10
[ecf41df0] [c0426ee4] handle_stripe+0x820/0x25e8
[ecf41e90] [c0429080] raid5d+0x3d4/0x5b4
[ecf41f40] [c04329b8] md_thread+0x138/0x16c
[ecf41f90] [c008277c] kthread+0x8c/0x90
[ecf41ff0] [c0011630] kernel_thread+0x4c/0x68
Another modification in this patch is the handling of completed descriptors.
There is a potential risk caused by exception interrupts: all descriptors in the
ld_running list are considered completed when an interrupt is raised. This works
fine under normal conditions, but if an exception occurs it does not work as
expected. Completion should not be inferred from the s/w list alone; the right way
is to read the current descriptor address register to find the last completed
descriptor. If an interrupt is raised by an error, the descriptors in ld_running
should not all be considered finished, or the unfinished descriptors in ld_running
will be released wrongly.
A simple way to reproduce:
Enable dmatest first, then insert some bad descriptors that trigger Programming
Error interrupts before the good descriptors. The good descriptors will then be
freed before they are processed, because of the exception interrupt.
Note: the bad descriptors are only for simulating an exception interrupt. This
case illustrates the potential risk in the current fsl-dma driver very well.
Signed-off-by: Hongbo Zhang <hongbo.zhang@freescale.com>
Signed-off-by: Qiang Liu <qiang.liu@freescale.com>
Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This patch adds suspend and resume functions for Freescale DMA driver.
Signed-off-by: Hongbo Zhang <hongbo.zhang@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The usage of spin_lock_irqsave() is a stronger locking mechanism than is required
throughout the driver; the minimum locking required should be used instead.
spin_lock_irqsave() turns interrupts off and saves the interrupt state, which is
unnecessary here.
This patch changes all instances of spin_lock_irqsave() to spin_lock_bh(). All
manipulation of the protected fields is done in tasklet context or weaker, which
makes spin_lock_bh() the correct choice.
Signed-off-by: Hongbo Zhang <hongbo.zhang@freescale.com>
Signed-off-by: Qiang Liu <qiang.liu@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
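A hedged sketch of the mechanical change; the stand-in channel struct below only mirrors the field names mentioned in this log and is not the driver's real definition:

#include <linux/list.h>
#include <linux/spinlock.h>

struct chan_sketch {                    /* stand-in for the driver's channel struct */
        spinlock_t desc_lock;
        struct list_head ld_pending;
        struct list_head ld_running;
};

static void append_pending_sketch(struct chan_sketch *chan)
{
        /* Before: spin_lock_irqsave(&chan->desc_lock, flags); */
        spin_lock_bh(&chan->desc_lock); /* disabling bottom halves is enough here */
        list_splice_tail_init(&chan->ld_pending, &chan->ld_running);
        spin_unlock_bh(&chan->desc_lock);
        /* Before: spin_unlock_irqrestore(&chan->desc_lock, flags); */
}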
I received a report this morning from one of the Novena developers that
the behaviour of the iMX6 ASoC codec driver (using imx-pcm-dma.c) was
sub-optimal under high system load.
While there are remaining issues relating to system load, upon reviewing the ASoC
imx-pcm-dma.c driver it was noticed that it is not using residue support, because
SDMA does not provide it. This has the effect that SDMA has to make multiple
calls into the ASoC and ALSA code, one for each period.
Since ALSA's snd_pcm_period_elapsed() does not need to be called multiple times,
and it is entirely sufficient to call it once to update ALSA with the current
buffer position via the pointer method, we can do better here.
We can also avoid stopping the DMA entirely, just like real cyclic DMA
implementations behave. While this means that we replay some old samples,
this is a nicer behaviour than having audio stop and restart.
The changes to achieve this are relatively minor - imx-sdma.c can track
where the DMA is to the nearest descriptor boundary - it does this
already when deciding how many callbacks to issue. In doing this,
buf_tail always points at the descriptor which will complete next.
The residue is defined by the bytes remaining to the end of the buffer,
when the buffer is viewed as a single block of memory [start...end].
So, when we start out, there's a full buffer worth of residue, and this
counts down as we approach the end of the buffer, eventually becoming
zero at the end, before returning to the full buffer worth when we
wrap back to the start.
Moving the walking of the descriptors into the interrupt handler means
that we can update the BD_DONE flag at interrupt time, thus avoiding
a delayed tasklet stopping the cyclic DMA.
This means that the residue can be calculated from (total descriptors -
buf_tail) * descriptor size. This is what the change below does. We
update imx-pcm-dma.c to remove the NO_RESIDUE flag since we now provide
the residue.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
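A hedged sketch of the residue computation described above; the parameter names are illustrative stand-ins for the channel's descriptor count, current position and period size:

/* Sketch: bytes still outstanding, counting down from a full buffer to zero. */
static size_t cyclic_residue_sketch(unsigned int num_bd, unsigned int buf_tail,
                                    size_t period_len)
{
        /* buf_tail points at the descriptor that will complete next. */
        return (num_bd - buf_tail) * period_len;
}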
When a 0-length packet is received on the bus, desc->pd0 yields 1,
which confuses the driver's users. This information is clearly wrong
and not in accordance to the datasheet, but it's been observed on an
AM335x board, very reproducible.
Fix this by looking at bit 19 in PD2 of the completed packet. This bit
will tell us if a zero-length packet was received on a queue. If it's
set, ignore the value in PD0 and report a total length of 0 instead.
Signed-off-by: Daniel Mack <zonque@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
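A hedged sketch of the check; only the "bit 19 in PD2 marks a zero-length packet" fact comes from the text above, while the macro names and the PD0 length mask are assumptions for illustration:

#include <linux/bitops.h>
#include <linux/types.h>

#define PD2_ZERO_LENGTH         BIT(19)         /* assumed name for the bit described above */
#define PD0_LENGTH_MASK         0x3fffff        /* assumed payload-length field in PD0 */

static u32 packet_len_sketch(u32 pd0, u32 pd2)
{
        if (pd2 & PD2_ZERO_LENGTH)
                return 0;                       /* PD0 reads as 1 here; report 0 instead */
        return pd0 & PD0_LENGTH_MASK;
}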
Pull slave-dmaengine updates from Vinod Koul:
- new Xilinx VDMA driver from Srikanth
- bunch of updates for edma driver by Thomas, Joel and Peter
- fixes and updates on dw, ste_dma, freescale, mpc512x, sudmac etc
* 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma: (45 commits)
dmaengine: sh: don't use dynamic static allocation
dmaengine: sh: fix print specifier warnings
dmaengine: sh: make shdma_prep_dma_cyclic static
dmaengine: Kconfig: Update MXS_DMA help text to include MX6Q/MX6DL
of: dma: Grammar s/requests/request/, s/used required/required/
dmaengine: shdma: Enable driver compilation with COMPILE_TEST
dmaengine: rcar-hpbdma: Include linux/err.h
dmaengine: sudmac: Include linux/err.h
dmaengine: sudmac: Keep #include sorted alphabetically
dmaengine: shdmac: Include linux/err.h
dmaengine: shdmac: Keep #include sorted alphabetically
dmaengine: s3c24xx-dma: Add cyclic transfer support
dmaengine: s3c24xx-dma: Process whole SG chain
dmaengine: imx: correct sdmac->status for cyclic dma tx
dmaengine: pch: fix compilation for alpha target
dmaengine: dw: check return code of dma_async_device_register()
dmaengine: dw: fix regression in dw_probe() function
dmaengine: dw: enable clock before access
dma: pch_dma: Fix Kconfig dependencies
dmaengine: mpc512x: add support for peripheral transfers
...
This is the driver for the AXI Video Direct Memory Access (AXI
VDMA) core, which is a soft Xilinx IP core that provides high-
bandwidth direct memory access between memory and AXI4-Stream
type video target peripherals. The core provides efficient two
dimensional DMA operations with independent asynchronous read
and write channel operation.
This module works on Zynq (ARM Based SoC) and Microblaze platforms.
Signed-off-by: Srikanth Thokala <sthokal@xilinx.com>
Acked-by: Jassi Brar <jassisinghbrar@gmail.com>
Reviewed-by: Levente Kurusa <levex@linux.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
dma_async_device_register() may return a non-zero error code. In such a case we
have to follow the error path.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The commit dbde5c29 "dw_dmac: use devm_* functions to simplify code" converted the
probe function to use devm_* helpers and simultaneously introduced a regression.
We have to 1) call clk_disable_unprepare() on the error path, and 2) check the
error code of clk_prepare_enable(). The first part was done in the original code;
the second one is an update.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
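A hedged sketch of the corrected ordering, combining the two points above with the registration step; the helper itself is illustrative, not the driver's actual probe code:

#include <linux/clk.h>
#include <linux/dmaengine.h>

/* Sketch: enable the bus clock first, and undo it if registration fails. */
static int enable_clk_then_register_sketch(struct clk *hclk,
                                           struct dma_device *dma)
{
        int err;

        err = clk_prepare_enable(hclk);         /* 2) check the return code */
        if (err)
                return err;

        err = dma_async_device_register(dma);
        if (err)
                clk_disable_unprepare(hclk);    /* 1) disable on the error path */

        return err;
}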
The hclk signal is a bus clock, which means it has to be enabled during any access
to the DMA controller. This patch makes sure that we enable the clock before
accessing the device, even though it currently happens to work on Intel hardware.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Dynamic stack allocation in the kernel is considered bad practice, as the kernel
stack is small, and we get warnings on a few architectures, as reported by the
kbuild test robot:
>> drivers/dma/sh/shdma-base.c:671:32: sparse: Variable length array is used.
>> drivers/dma/sh/shdma-base.c:701:1: warning: 'shdma_prep_dma_cyclic' uses
>> dynamic stack allocation [enabled by default]
Fix this by using a static array of 32 entries, which should be sufficient for
shdma_prep_dma_cyclic; its only in-kernel user is audio, and 32 periods seems
quite sufficient for audio at the moment.
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
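A hedged sketch of the shape of the fix; the function, the array name and the error handling are illustrative, only the bound of 32 comes from the text above:

#include <linux/scatterlist.h>

#define SKETCH_MAX_PERIODS      32              /* bound discussed above */

static int prep_cyclic_sketch(unsigned int periods)
{
        /* Before: "struct scatterlist sgl[periods];" -- dynamic stack allocation. */
        struct scatterlist sgl[SKETCH_MAX_PERIODS];     /* after: fixed size */

        if (periods > SKETCH_MAX_PERIODS)
                return -EINVAL;                 /* refuse oversized requests */

        sg_init_table(sgl, periods);
        /* ... build and submit the cyclic descriptor from sgl ... */
        return 0;
}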
As documented in Documentation/printk-formats.txt, we should use the %zu/%zx
specifiers for size_t variables so that the code compiles on different
architectures. This was uncovered because COMPILE_TEST was recently enabled for
this driver:
drivers/dma/sh/shdma-base.c: In function 'shdma_prep_dma_cyclic':
>> drivers/dma/sh/shdma-base.c:683:4: warning: format '%d' expects argument of
>> type 'int', but argument 4 has type 'size_t' [-Wformat=]
__func__, buf_len, period_len, slave_id);
>> drivers/dma/sh/shdma-base.c:683:4: warning: format '%d' expects argument of
>> type 'int', but argument 5 has type 'size_t' [-Wformat=]
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
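A minimal illustration of what the warning asks for; the function and message are illustrative:

#include <linux/printk.h>

static void log_sizes_sketch(size_t buf_len, size_t period_len)
{
        /* %d would trigger the warning above; %zu is the specifier for size_t. */
        pr_debug("%s: buf_len=%zu period_len=%zu\n",
                 __func__, buf_len, period_len);
}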
The kbuild test robot reports that shdma_prep_dma_cyclic should be static, since
the symbol is not declared elsewhere; a quick check reveals that this is the case:
>> drivers/dma/sh/shdma-base.c:660:32: sparse: symbol 'shdma_prep_dma_cyclic'
>> was not declared. Should it be static?
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The APBX-DMA block is also found on MX6Q/MX6DL chips.
Update the help text accordingly.
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This helps increase build test coverage.
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Acked-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
linux/err.h isn't implicitly included by the current headers on all
platforms, resulting in compilation failures due to implicit
declarations of IS_ERR and PTR_ERR. Fix this by including linux/err.h.
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
linux/err.h isn't implicitly included by the current headers on all
platforms, resulting in compilation failures due to implicit
declarations of IS_ERR and PTR_ERR. Fix this by including linux/err.h.
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This helps detecting duplicate includes.
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
linux/err.h isn't implicitly included by the current headers on all
platforms, resulting in compilation failures due to implicit
declarations of IS_ERR and PTR_ERR. Fix this by including linux/err.h.
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This helps detecting duplicate includes.
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Many audio interface drivers require support for cyclic transfers to work
correctly, for example the Samsung ASoC DMA driver. This patch adds support for
cyclic transfers to the s3c24xx-dma driver.
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
Reviewed-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
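For reference, this is roughly how a client (such as an ASoC platform driver) requests a cyclic transfer through the generic dmaengine API; the wrapper and its parameters are illustrative:

#include <linux/dmaengine.h>

/* Sketch: set up a cyclic (audio-style) transfer on a slave channel. */
static int start_cyclic_sketch(struct dma_chan *chan, dma_addr_t buf,
                               size_t buf_len, size_t period_len,
                               dma_async_tx_callback cb, void *cb_param)
{
        struct dma_async_tx_descriptor *desc;

        desc = dmaengine_prep_dma_cyclic(chan, buf, buf_len, period_len,
                                         DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
        if (!desc)
                return -EINVAL;

        desc->callback = cb;            /* typically called once per completed period */
        desc->callback_param = cb_param;

        dmaengine_submit(desc);
        dma_async_issue_pending(chan);
        return 0;
}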
Due to a redundant 'break' in the loop, the driver processed only the first chunk.
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
Reviewed-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In the cyclic DMA tx handler sdma_handle_channel_loop(), the SDMA channel status
is set to either DMA_ERROR or DMA_IN_PROGRESS based on each period's status.
This has the following issues:
1) If one period's status is BD_RROR, the channel status will be set to DMA_ERROR,
but it will be overwritten with DMA_IN_PROGRESS if the following periods are OK.
2) A DMA client may call sdma_control(DMA_TERMINATE_ALL) to stop the cyclic DMA
operation, which sets the SDMA channel status to DMA_ERROR; but if this handler is
called afterwards, the channel status will again be overwritten with
DMA_IN_PROGRESS. The following dmaengine_prep_dma_cyclic() will then always fail,
as the channel status is DMA_IN_PROGRESS.
Since in cyclic DMA tx the channel status is initially set to DMA_IN_PROGRESS, the
driver only needs to change it to DMA_ERROR when something goes wrong (one
period's status is wrong, or the channel is stopped explicitly by the client).
Signed-off-by: Jiada Wang <jiada_wang@mentor.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Commit 4828b493 introduced COMPILE_TEST for this driver, and this causes a compile
failure on alpha, as kzalloc wasn't available for this arch from the included
headers; so explicitly include slab.h.
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>