| Commit message | Author | Age | Files | Lines |
...
|
At least on my i830M it reliably results in hard system hangs these
days. This is much worse than falling back to software rendering,
so I think we should simply rip this out.
After all, we don't have GPU reset for gen3 either, and there are a
lot more of those still around.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
In truly crazy circumstances, shmem might give us the wrong type of
page. So be a bit paranoid and double-check this.
Reviewer: Damien Lespiau <damien.lespiau@intel.com>
Cc: Rob Clark <robdclark@gmail.com>
References: http://lkml.org/lkml/2011/7/11/238
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
The PIPEA quirk is specifically for the issue with the PIPEB PLL on
830gm being slaved to the PIPEA PLL, so using PIPEB requires PIPEA
to be running. i845 doesn't even have the second PLL or pipe, and
enabling the quirk results in a blank DVO LVDS.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
The policy's max frequency is not equal to the CPU's max frequency. The
ring frequency is derived from the CPU frequency, not the policy
frequency.
One example of how the two may differ is through sysfs: if the sysfs max
frequency (/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq) is
modified, that value will be used for the max ring frequency
calculation. As far as I know, no current governor uses anything but max
as the default, but in theory they could. Similarly, distributions might
set the policy as part of their init process.
It's better to use the real frequency because when we're scaled up on
the GPU we likely want to race to idle, and using a less-than-max ring
frequency is suboptimal in that situation.
AFAIK, this patch should have no impact on the majority of people.
This behavior hasn't changed since it was first introduced:
commit 23b2f8bb92feb83127679c53633def32d3108e70
Author: Jesse Barnes <jbarnes@virtuousgeek.org>
Date: Tue Jun 28 13:04:16 2011 -0700
drm/i915: load a ring frequency scaling table v3
CC: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
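To illustrate the distinction: a minimal sketch, not the actual patch; it only shows the two cpufreq fields involved (policy->max vs policy->cpuinfo.max_freq), and the helper name is illustrative.

#include <linux/cpufreq.h>

/* Sketch: policy->max is the current policy/sysfs limit
 * (scaling_max_freq), while policy->cpuinfo.max_freq is the hardware
 * maximum that the ring frequency should be derived from.
 */
static unsigned int example_max_cpu_freq_khz(unsigned int cpu)
{
        struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
        unsigned int max_khz = 0;

        if (policy) {
                max_khz = policy->cpuinfo.max_freq;
                cpufreq_cpu_put(policy);
        }
        return max_khz;
}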
|
Flush the primary plane changes when enabling/disabling the primary
plane in response to sprite visibility.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
disable}_primary_plane
The new names make it clearer which plane we're talking about.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Resolve small conflict with the haswell_crtc_disable_planes
extraction.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
The intel_flush_primary_plane name actually tells us which plane
we're talking about.
Also reorganize the internals a bit and add a missing POSTING_READ()
to make sure the hardware has seen the changes by the time we
return from the function.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
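For context, the posting-read idiom simply reads a register back after the writes so they are flushed to the hardware before the function returns; a generic MMIO sketch (not the i915 code itself):

#include <linux/io.h>

/* Sketch: MMIO writes can be posted; reading a register on the same
 * device back forces them out before we return.
 */
static void flush_plane_regs(void __iomem *mmio, unsigned long surf_reg, u32 addr)
{
        writel(addr, mmio + surf_reg);
        (void)readl(mmio + surf_reg);   /* posting read */
}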
|
IPS should be OK as long as one plane is enabled on the pipe, but
it does seem to cause problems when going between primary only and
sprite only.
This needs more investigation, but for now just disable IPS whenever
the primary plane is disabled.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
Disable fbc before disabling the primary plane, and enable fbc after
the primary plane has been enabled again.
Also use intel_disable_fbc() to disable FBC to avoid the pointless
overhead of intel_update_fbc(), and especially avoid having to clean
up and set up the stolen mem compressed buffer again.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
If the setplane operation fails, we shouldn't save the user's requested
plane coordinates. Since we adjust the coordinates during the clipping
process, make a copy of the originals, and once the operation has
succeeded save them for later reuse when the plane gets re-enabled.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
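The shape of the change, as a minimal self-contained sketch (all names here are illustrative, not the i915 structures):

/* Illustrative types/helpers only. */
struct plane_coords { int x, y, w, h; };
struct plane_state { struct plane_coords saved; };

static int clip_and_program_hw(struct plane_coords *c)
{
        /* stand-in for clipping + hardware programming; mutates the copy */
        if (c->x < 0) {
                c->w += c->x;
                c->x = 0;
        }
        return (c->w > 0 && c->h > 0) ? 0 : -1;
}

static int example_update_plane(struct plane_state *state,
                                const struct plane_coords *req)
{
        struct plane_coords clipped = *req;     /* clip a working copy */
        int ret = clip_and_program_hw(&clipped);

        if (ret)
                return ret;             /* failure: saved coords untouched */

        state->saved = *req;            /* success: keep the user's originals */
        return 0;
}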
|
Move the variable initialization to where the variables are declared,
and kill a pointless to_intel_crtc() cast when we already have the
casted pointer.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
Let's not use goto when a simple if suffices. This is not error handling
code or anything, so the goto looks out of place.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
We used to call the entire intel specific update_plane hook while
holding struct_mutex. Actually we only need to hold struct_mutex while
pinning/unpinning the obj. The plane state itself is protected by the
kms locks, and as the object is pinned we can dig out the offset and
tiling information from it without fearing that it would change
underneath us.
So now we don't need to drop and reacquire the lock around the
wait_for_vblank. Also we will need another wait_for_vblank in the IVB
specific update_plane hook, and this way we don't need to worry about
struct_mutex there either.
Also move the intel_plane->obj = NULL assignment outside struct_mutex in
disable_plane to make it clear that it's not protected by struct_mutex.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
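A sketch of the narrowed locking described above (pin_obj(), program_plane() and wait_for_vblank() are illustrative stand-ins, not the real i915 functions):

#include <linux/mutex.h>

extern int pin_obj(void);               /* stand-in: needs struct_mutex */
extern void program_plane(void);        /* stand-in: protected by KMS locks */
extern void wait_for_vblank(void);      /* stand-in */

static int example_update_plane(struct mutex *struct_mutex)
{
        int ret;

        mutex_lock(struct_mutex);
        ret = pin_obj();                /* only the pin/unpin needs struct_mutex */
        mutex_unlock(struct_mutex);
        if (ret)
                return ret;

        program_plane();                /* plane state guarded by KMS locks */
        wait_for_vblank();              /* no need to drop/reacquire anything */
        return 0;
}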
|
We allow cursors to be set up when the pipe is disabled. Do the same for
sprites as well.
We need to be somewhat careful with the primary disable logic as we
don't want to accidentally enable the primary plane on a disabled pipe.
v2: Skip primary enable/disable and plane register writes on a
disabled pipe
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
If the primary gets marked as disabled while the pipe is off for
instance, we should still re-enable it when the pipe is turned on,
unless the sprite covers it fully also in that configuration.
Unfortunately we do the plane visibility checks only in the sprite code,
which is executed after the primary enabling when turning the pipe off.
Ideally we should compute the plane visibility before touching the
hardware at all, but for now just set the primary_disabled flag
in intel_{enable,disable}_plane.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
If channel equalization succeeds, there's no indication something went
wrong in clock recovery (unless debug is enabled). We should shout about
the failures and fix them instead of hiding them under the carpet.
This has allowed bugs like [1] to stay dormant for a long time.
[1] https://bugs.freedesktop.org/show_bug.cgi?id=70117
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
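As a sketch of the logging change being argued for (try_clock_recovery() is an illustrative stand-in; the DRM logging macro is the real one):

#include <drm/drmP.h>

extern bool try_clock_recovery(void);   /* stand-in for the training step */

static void example_link_train(void)
{
        if (!try_clock_recovery()) {
                DRM_ERROR("DP clock recovery failed\n");  /* was debug-only */
                return;
        }
        /* ... proceed to channel equalization ... */
}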
|
The VGACNTRL register contains a bunch of other stuff besides
the VGA_DISP_DISABLE bit. When we write the register we always set those
other bits to zero, so normally the current check would work.
However on HSW disabling and re-enabling the power well will reset the
VGACNTRL register to its default value, which has several of the other
bits set as well.
So only look at the VGA_DISP_DISABLE bit when checking whether the VGA
plane needs re-disabling.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
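Schematically, the fix is to test only the one bit rather than comparing the whole register value; a minimal sketch (the bit position matches the i915_reg.h definition of the time, but treat it as illustrative):

#include <linux/types.h>

#define VGA_DISP_DISABLE (1u << 31)     /* bit of interest in VGACNTRL */

/* Other VGACNTRL bits may come back non-zero after a power well reset,
 * so mask instead of comparing the full register value.
 */
static bool vga_needs_redisable(u32 vgacntrl)
{
        return (vgacntrl & VGA_DISP_DISABLE) == 0;
}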
|
Everyone else uses intel_PLL_is_valid(), so make VLV use it as well.
We don't have any special p and m limits on VLV, so skip those tests,
and we also need to skip the m1<=m2 test, as on PNV.
Reorganize the function a bit to move the n check alongside the rest of
the test for the non-derived dividers, and check the derived values
afterwards.
Note that this changes vlv_find_best_dpll() in two ways:
- The .vco comparison is now >max instead of >=max, and since we round
down when calculating that stuff, we may now allow frequencies slightly
above the max as we do on other platforms. The previous method
disallowed exactly max and anything above it.
- We now check the .dot frequency against the data rate limits, which we
didn't do before.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
If vlv_find_best_dpll() couldn't find suitable PLL settings,
just say so instead of lying to the caller.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
After aligning the p1 divider limits, and removing the unused p and m
limits, intel_limits_vlv_dac and intel_limits_vlv_hdmi are identical.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
We don't use .dot_limit for anything on VLV, so don't populate it.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
We never check the p and m limits (which according to comments are
based on someone's guesswork), so just remove them.
VLV2_DPLL_mphy_hsdpll_frequency_table_ww6_rev1p1.xlsm has no p and m
limits listed.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
VLV2_DPLL_mphy_hsdpll_frequency_table_ww6_rev1p1.xlsm tells us that the
minimum p2 divider is 2. Use that limit in the code.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
According to VLV2_DPLL_mphy_hsdpll_frequency_table_ww6_rev1p1.xlsm,
p1 can always be 2-3.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
For some reason there's a sort of off-by-one issue with the p1 divider.
The actual p1 limits according to
VLV2_DPLL_mphy_hsdpll_frequency_table_ww6_rev1p1.xlsm are 2-3, so we should
just say that instead of saying 1-3 and avoiding the 1 via the choice of
comparison operator.
I don't know why we're using different p1 limits for intel_limits_vlv_dac
and intel_limits_vlv_hdmi, but let's preserve that for now.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
We limit the maximum n divider value in order to make sure the PLL's
reference input is at least 19.2 MHz. I assume that is done to satisfy
some hardware requirement.
However, we never check whether that calculated limit is below the
maximum supported N divider value (7). In practice that is always true
since we only support a 100 MHz reference clock, but making the code
safe against higher reference clocks seems like a reasonable thing to
do.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
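The clamp being added amounts to the following arithmetic; a standalone sketch (the 19.2 MHz floor and the maximum N of 7 are the values quoted above, the helper name is illustrative):

/* Cap N so refclk / N stays at or above 19.2 MHz, but never beyond the
 * hardware maximum divider. E.g. 100000 kHz / 19200 kHz = 5, which is
 * already below 7; a higher refclk would need the extra clamp.
 */
static unsigned int max_n_divider(unsigned int refclk_khz, unsigned int hw_max_n)
{
        unsigned int n = refclk_khz / 19200;

        return n < hw_max_n ? n : hw_max_n;
}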
|
The p2 divider on VLV needs to be even when it's > 10. The current code
to make that happen is rather weird. Just make the step size adjustment
in the for loop's decrement step.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
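The decrement-step form described above, as a standalone demonstration (the 20..2 range is just for illustration):

#include <stdio.h>

int main(void)
{
        /* Above 10 the divider must stay even, so step by 2 there and
         * by 1 once the odd-capable range is reached.
         */
        for (int p2 = 20; p2 >= 2; p2 -= (p2 > 10 ? 2 : 1))
                printf("%d ", p2);      /* 20 18 16 14 12 10 9 8 ... 2 */
        printf("\n");
        return 0;
}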
|
Rewrite vlv_find_best_dpll() to use intel_clock_t rather than
an army of local variables.
Also extract the code to calculate the derived values into
vlv_clock().
v2: Split up the earlier fixes, extract vlv_clock()
v3: Initialize best_clock
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
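For reference, the derived values follow the usual PLL relationships; a hedged sketch whose fields mirror intel_clock_t (the exact rounding in the driver may differ):

struct example_clock {
        unsigned int n, m1, m2, p1, p2; /* raw dividers */
        unsigned int m, p, vco, dot;    /* derived values */
};

/* m and p are products of the raw dividers; VCO = refclk * m / n and
 * the dot clock is VCO / p (all in kHz, integer math).
 */
static void example_vlv_clock(unsigned int refclk_khz, struct example_clock *c)
{
        c->m = c->m1 * c->m2;
        c->p = c->p1 * c->p2;
        c->vco = refclk_khz * c->m / c->n;
        c->dot = c->vco / c->p;
}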
|
We do 'bestppm - 10' in vlv_find_best_dpll() but never check whether
that might underflow. Add such a check.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
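Schematically, the guard prevents an unsigned wrap-around; the comparison below only illustrates the underflow check, the actual selection logic in the driver is more involved:

/* With an unsigned bestppm, "bestppm - 10" wraps to a huge value when
 * bestppm < 10, so check the magnitude before subtracting.
 */
static int clearly_better(unsigned int ppm, unsigned int bestppm)
{
        return bestppm >= 10 && ppm < bestppm - 10;
}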
|
Use div_u64() to make the ppm calculation in vlv_find_best_dpll() safe
against integer overflows.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
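div_u64() from <linux/math64.h> divides a 64-bit dividend by a 32-bit divisor, so the intermediate product can be widened; a minimal sketch of a ppm-style calculation (not the exact expression used in the driver):

#include <linux/math64.h>

static unsigned int clock_error_ppm(unsigned int actual_khz, unsigned int target_khz)
{
        u64 diff = actual_khz > target_khz ? actual_khz - target_khz
                                           : target_khz - actual_khz;

        /* diff * 1000000 is computed in 64 bits, so it cannot overflow
         * the way a 32-bit multiplication would.
         */
        return div_u64(diff * 1000000, target_khz);
}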
|
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
The interface uses an unsigned long, and we can use the unsigned counter
throughout our code, so do so. In the process, we notice one instance
where the shrink count is based on a heuristic rather than the result,
and another where we ask for too many pages to be purged.
v2: nr_to_scan needs to be promoted to a long as well, so just use
sc->nr_to_scan directly.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
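For context, the shrinker callbacks deal in unsigned long, so keeping the internal counters in the same type avoids truncation; a hedged sketch of the count side (the actual list walk is elided):

#include <linux/shrinker.h>

static unsigned long example_count_objects(struct shrinker *shrinker,
                                           struct shrink_control *sc)
{
        unsigned long count = 0;

        /* a real implementation would walk the object lists here and
         * add up the purgeable pages into the unsigned long counter
         */
        return count;
}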
|
Since we are waiting upon IO completion, inform the kernel through use
of the io_schedule() call rather than the regular schedule(). This
should allow the kernel to make better decisions regarding scheduling
and power management.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
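The difference is just which sleep primitive the wait loop uses; a sketch (request_done() is an illustrative stand-in):

#include <linux/sched.h>
#include <linux/wait.h>

extern bool request_done(void);         /* stand-in for the completion check */

static void example_wait_for_request(wait_queue_head_t *wq)
{
        DEFINE_WAIT(wait);

        for (;;) {
                prepare_to_wait(wq, &wait, TASK_UNINTERRUPTIBLE);
                if (request_done())
                        break;
                io_schedule();          /* accounted as iowait, unlike schedule() */
        }
        finish_wait(wq, &wait);
}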
|
To disable a monitor, a Spice client sends a monitor config with the
monitor resolution set to 0x0.
However, before qxl_crtc_disable() is reached after the hotplug event,
it can happen that another monitor is reconfigured and
qxl_send_monitors_config() is called with the old config, which will
re-enable the monitor on the client.
Reset the config if the monitor is found to be disconnected during
drm_helper_hpd_irq_event().
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
All hard-coded resolutions pass this check.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
By default, 1024x768 is the preferred resolution. However, when a
monitor config is given, it should be the only preferred resolution.
Note that the monitor config resolution is passed to
qxl_add_common_modes() to avoid adding a duplicate mode without the
preferred resolution. That would discard the previous monitor config
preferred bit.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
In commit 38d5487db7f289be1d56ac7df704ee49ed3213b9, Keith explained:
This patch simply merges the two mode type bits together; that seems
reasonable to me, but perhaps only a subset of the bits should be
used? None of these can be user defined as they all come from
looking at just the hardware.
However, merging the bits means that a flag becomes sticky. It is not
possible, for example, to update the mode type to remove the
DRM_MODE_TYPE_PREFERRED bit.
After a brief discussion with Dave Airlie on IRC, it was agreed to
propose that change instead of introducing another function to remove a
bit from an existing mode's type.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
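The "sticky" behaviour comes down to OR-merging versus overwriting the type when a duplicate probed mode is merged; a schematic sketch with illustrative names (the real field is drm_display_mode.type and the real flag is DRM_MODE_TYPE_PREFERRED):

#define EXAMPLE_TYPE_PREFERRED (1u << 3)

struct example_mode { unsigned int type; };

static void merge_mode_type(struct example_mode *existing,
                            const struct example_mode *probed, int sticky)
{
        if (sticky)
                existing->type |= probed->type; /* bits accumulate: PREFERRED can never be cleared */
        else
                existing->type = probed->type;  /* latest probe wins: PREFERRED can be dropped */
}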
|
drm_helper_hpd_irq_event() only notifies when the connector status
changed. However, Spice monitor config can change while the connector is
connected, to support arbitrary resolutions. Send a hotplug event if it
wasn't done by drm_helper_hpd_irq_event().
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
The caller may want to know whether the configuration was changed, and
if a hotplug event was sent.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
Fix a small spelling mistake in the drm_crtc_convert_umode() comment.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
git://ftp.arm.linux.org.uk/~rmk/linux-cubox into drm-next
Fix build on non-ARM
* 'drm-tda998x-3.12-fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-cubox:
DRM: Armada: depend on ARM
|
Armada DRM uses relaxed accessors which are not available on other
platforms. Limit it to just ARM.
Acked-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
git://ftp.arm.linux.org.uk/~rmk/linux-cubox into drm-next
This adds support for the Armada 510 display subsystem found on the
Marvell Dove devices. This IP is re-used across several different Marvell
SoCs with various tweaks, and this driver has been structured to allow
the other IPs to re-use the bulk of this code; further work in this area
is expected from interested parties.
This has been extensively tested on the SolidRun Cubox platform and
appears to work well there.
[airlied: update for api changes merged previous to this]
|
Add support for TDA998x output via the slave driver in the kernel.
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
Tested-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
This patch adds ARGB hardware cursor support to the DRM driver for the
Marvell Armada SoCs. ARGB cursors are supported at either 32x64 or
64x32 resolutions.
Tested-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
This patch adds support for the pair of LCD controllers on the Marvell
Armada 510 SoCs. This driver supports:
- multiple contiguous scanout buffers for video and graphics
- shm backed cacheable buffer objects for X pixmaps for Vivante GPU
acceleration
- dual lcd0 and lcd1 crt operation
- video overlay on each LCD crt via DRM planes
- page flipping of the main scanout buffers
- DRM prime for buffer export/import
This driver is trivial to extend to other Armada SoCs.
Included in this commit is the core driver with no output support; output
support is platform and encoder driver dependent.
Tested-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|