git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6
Merge branch 'tty-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty-2.6

* 'tty-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty-2.6:
  n_gsm: gsm_data_alloc buffer allocation could fail and it is not being checked
  n_gsm: Fix message length handling when building header
gsm_data_alloc() buffer allocation could fail and it is not being checked.
Add a check for the allocated buffer and return if the allocation fails.
Signed-off-by: Ken Mills <ken.k.mills@intel.com>
Signed-off-by: Alan Cox <alan@linux.intel.com>
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
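A minimal sketch of the kind of check described above; the call is modelled on the n_gsm code, but the surrounding context is abbreviated and illustrative:

    msg = gsm_data_alloc(gsm, 0, len + 2, gsm->ftype);
    if (msg == NULL)
            return;         /* allocation failed, drop the request */
    /* ... fill in msg->data and queue it as before ... */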
Fix message length handling when building header
When the message length is greater than 127, the length field in the header
is built incorrectly. According to the spec, when the length is less than 128
the length field is a single byte formatted as: bbbbbbb1. When it is greater
than 127 then the field is two bytes of the format: bbbbbbb0 bbbbbbbb.
Signed-off-by: Ken Mills <ken.k.mills@intel.com>
Signed-off-by: Alan Cox <alan@linux.intel.com>
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
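For illustration, a standalone helper that encodes the length field exactly as described above (a sketch of the rule, not the driver's actual header-building code):

    #include <stdint.h>
    #include <stddef.h>

    /* GSM 07.10 length field: lengths below 128 use one byte (bbbbbbb1,
     * EA bit set); longer lengths use two bytes (bbbbbbb0 bbbbbbbb). */
    static size_t gsm_encode_length(uint8_t *out, unsigned int len)
    {
            if (len < 128) {
                    out[0] = (uint8_t)((len << 1) | 1);
                    return 1;
            }
            out[0] = (uint8_t)((len & 0x7f) << 1);  /* low 7 bits, EA = 0 */
            out[1] = (uint8_t)(len >> 7);           /* remaining bits */
            return 2;
    }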
Merge branch 'usb-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb-2.6

* 'usb-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb-2.6:
  Revert "USB: gadget: Allow function access to device ID data during bind()"
  USB: misc: uss720.c: add another vendor/product ID
  USB: usb-storage: unusual_devs entry for the Samsung YP-CP3
  USB: gadget: Remove suspended sysfs file before freeing cdev
  USB: core: Add input prompt and help text for USB_OTG config
  USB: ftdi_sio: Add D.O.Tec PID
  xhci: Fix issue with port array setup and buggy hosts.
This reverts commit 1ab83238740ff1e1773d5c13ecac43c60cf4aec4.
It turns out this does not allow the device IDs to be overridden
properly, so we need to revert it.
Reported-by: Jef Driesen <jefdriesen@telenet.be>
Cc: Robert Lukassen <Robert.Lukassen@tomtom.com>
Cc: stable <stable@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Fabio Battaglia reports that he has another cable that works with this
driver, so this patch adds its vendor/product ID.
Signed-off-by: Thomas Sailer <t.sailer@alumni.ethz.ch>
Cc: stable <stable@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Add an unusual_devs entry for the Samsung YP-CP3 MP4 player.
User was getting the following errors in dmesg:
usb 2-6: reset high speed USB device using ehci_hcd and address 2
usb 2-6: reset high speed USB device using ehci_hcd and address 2
usb 2-6: reset high speed USB device using ehci_hcd and address 2
usb 2-6: USB disconnect, address 2
sd 3:0:0:0: [sdb] Assuming drive cache: write through
sdb:<2>ldm_validate_partition_table(): Disk read failed.
Dev sdb: unable to read RDB block 0
unable to read partition table
Signed-off-by: Vitaly Kuznetsov <vitty@altlinux.ru>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
CC: Matthew Dharm <mdharm-usb@one-eyed-alien.net>
CC: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The cdev struct is accessed in the "suspended" sysfs show function, so
remove the sysfs file before freeing the cdev in composite_unbind.
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Commit bd6882 (usb: gadget: fix Kconfig warning) removes the duplicate
USB_OTG config from gadget/Kconfig, but does not copy the input prompt
and help text to the original config defined in core/Kconfig. Add them
now.
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Add FTDI PID to identify D.O.Tec devices correctly.
Signed-off-by: Florian Faber <faberman@linuxproaudio.org>
Cc: stable <stable@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Fix two bugs with the port array setup.
The first bug will only show up with broken xHCI hosts with Extended
Capabilities registers that have duplicate port speed entries for the same
port. The idea with the original code was to set the port_array entry to
-1 if the duplicate port speed entry said the port was a different speed
than the original port speed entry. That would mean that later, the port
would not be exposed to the USB core. Unfortunately, I forgot a continue
statement, and the port_array entry would just be overwritten in the next
line.
The second bug would happen if there are conflicting port speed registers
(so that some entry in port_array is -1), or one of the hardware port
registers was not described in the port speed registers (so that some
entry in port_array is 0). The code that sets up the usb2_ports array
would accidentally claim those ports. That wouldn't really cause any
user-visible issues, but it is a bug.
This patch should go into the stable trees that have the port array and
USB 3.0 port disabling prevention patches.
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Cc: stable@kernel.org
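A simplified sketch of the first fix, with hypothetical array names; the point is the missing `continue`, which kept the conflict marker from surviving to the next assignment:

    for (i = 0; i < num_speed_entries; i++) {
            port = speed_entry[i].port_offset;
            if (port_array[port] != 0 &&
                port_array[port] != speed_entry[i].speed) {
                    /* A duplicate entry disagrees about the speed: mark the
                     * port invalid and skip it instead of overwriting the
                     * marker on the next line. */
                    port_array[port] = -1;
                    continue;
            }
            port_array[port] = speed_entry[i].speed;
    }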
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client:
  ceph: handle partial result from get_user_pages
  ceph: mark user pages dirty on direct-io reads
  ceph: fix null pointer dereference in ceph_init_dentry for nfs reexport
  ceph: fix direct-io on non-page-aligned buffers
  ceph: fix msgr_init error path
The get_user_pages() helper can return fewer than the requested pages.
Error out in that case, and clean up the partial result.
Signed-off-by: Henry C Chang <henry_c_chang@tcloudcomputing.com>
Signed-off-by: Sage Weil <sage@newdream.net>
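A hedged sketch of handling a short result from get_user_pages(); the call matches the 2.6.37-era signature, while the cleanup and return convention are illustrative:

    down_read(&current->mm->mmap_sem);
    got = get_user_pages(current, current->mm, off, num_pages,
                         want_write, 0, pages, NULL);
    up_read(&current->mm->mmap_sem);
    if (got < num_pages) {
            /* Partial result: drop whatever was pinned and error out. */
            for (i = 0; i < got; i++)
                    put_page(pages[i]);
            return got < 0 ? got : -EFAULT;
    }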
For a read operation, we have to set the _write_ argument of
get_user_pages() to 1 since we will write data to the pages. Also, we
need to SetPageDirty before releasing these pages.
Signed-off-by: Henry C Chang <henry_c_chang@tcloudcomputing.com>
Signed-off-by: Sage Weil <sage@newdream.net>
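A minimal sketch of the two points above (pin with write = 1, dirty before release); the surrounding variables are illustrative:

    /* The pages receive data read from the OSDs, so pin them writable. */
    got = get_user_pages(current, current->mm, off, num_pages,
                         1 /* write */, 0, pages, NULL);

    /* ... after the read completes, dirty the pages before unpinning. */
    for (i = 0; i < num_pages; i++) {
            SetPageDirty(pages[i]);
            put_page(pages[i]);
    }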
The fh_to_dentry etc. methods use ceph_init_dentry(), which assumes that
d_parent is defined. It isn't for those callers, so check!
Signed-off-by: Sage Weil <sage@newdream.net>
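A minimal illustration of the missing check, not the exact ceph code; NFS export callers hand in dentries whose parent is not attached yet:

    struct inode *parent_inode = NULL;

    /* fh_to_dentry() callers may pass a dentry without a parent. */
    if (dentry->d_parent)
            parent_inode = dentry->d_parent->d_inode;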
The user buffer may be 512-byte aligned, not page-aligned. We were
assuming the buffer was page-aligned and only accounting for
non-page-aligned io offsets.
Signed-off-by: Henry C Chang <henry_c_chang@tcloudcomputing.com>
Signed-off-by: Sage Weil <sage@newdream.net>
create_workqueue() returns NULL on failure.
Signed-off-by: Sage Weil <sage@newdream.net>
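A sketch of the corrected error check (create_workqueue() returns a pointer or NULL, not an ERR_PTR); the workqueue name and return value here are assumptions:

    msgr_wq = create_workqueue("ceph-msgr");
    if (msgr_wq == NULL) {
            pr_err("msgr_init failed to create workqueue\n");
            return -ENOMEM;
    }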
.. caused by a missing semi-colon, introduced in commit 0fc13c8995cd
("cciss: fix cciss_revalidate panic").
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Reported-by: Thiago Farina <tfransosi@gmail.com>
Cc: Jens Axboe <jaxboe@fusionio.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge branch 'v4l_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-2.6

* 'v4l_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-2.6:
  [media] gspca - sonixj: Better handling of the bridge registers 0x01 and 0x17
  [media] gspca - sonixj: Add the bit definitions of the bridge reg 0x01 and 0x17
  [media] gspca - sonixj: Set the flag for some devices
  [media] gspca - sonixj: Add a flag in the driver_info table
  [media] gspca - sonixj: Fix a bad probe exchange
  [media] gspca - sonixj: Move bridge init to sd start
  [media] bttv: remove unneeded locking comments
  [media] bttv: fix mutex use before init (BZ#24602)
  [media] Don't export format_by_forcc on two different drivers
The initial values of the registers 0x01 and 0x17 are taken from the sensor
table at capture start and updated according to the flag PDN_INV.
Their values are updated at each step of the capture initialization and
memorized for reuse at capture stop.
This patch also automatically fixes some bad hardcoded values of these
registers.
Signed-off-by: Jean-François Moine <moinejf@free.fr>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Jean-François Moine <moinejf@free.fr>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
The flag PDN_INV indicates that the sensor pin S_PWR_DN does not have the
same value as on other webcams with the same sensor. For now, only two such
webcams have been detected: Microsoft's VX1000 and VX3000.
Signed-off-by: Jean-François Moine <moinejf@free.fr>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Jean-François Moine <moinejf@free.fr>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Jean-François Moine <moinejf@free.fr>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Jean-François Moine <moinejf@free.fr>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
After Mauro's "bttv: Fix locking issues due to BKL removal code" there
are a number of comments about lock ordering that are no longer needed.
Remove them.
Signed-off-by: Brandon Philips <bphilips@suse.de>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Fix a regression where the bttv driver causes oopses when loading, since it
was using some non-initialized mutexes. While it would be possible to fix
that issue alone, there are some other lock troubles, such as the locking
code at free_btres_lock().
It is possible to fix those too, but the better option is to just use the
core-assisted locking scheme. This way, the V4L2 core will serialize access
to all ioctl/open/close/mmap/read/poll operations, preventing two processes
from accessing the hardware at the same time. Also, as there is just one
lock instead of three, there is no risk of deadlocks.
The net result is cleaner code, with just one lock.
Reported-by: Dan Carpenter <error27@gmail.com>
Reported-by: Brandon Philips <brandon@ifup.org>
Reported-by: Chris Clayton <chris2553@googlemail.com>
Reported-by: Torsten Kaiser <just.for.lkml@googlemail.com>
Tested-by: Chris Clayton <chris2553@googlemail.com>
Tested-by: Torsten Kaiser <just.for.lkml@googlemail.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
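The core-assisted locking referred to above hangs one mutex off the video_device and lets the V4L2 core take it around every file operation. A minimal sketch, with illustrative names for the mutex and the video_device pointer:

    static DEFINE_MUTEX(btv_core_lock);

    /* Let the V4L2 core serialize open/close/ioctl/mmap/read/poll. */
    vfd->lock = &btv_core_lock;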
Drivers should add their name to exported symbols, to avoid conflicts
with allyesconfig:
drivers/staging/built-in.o: In function `format_by_fourcc':
/home/v4l/work_trees/linus/drivers/staging/cx25821/cx25821-video.c:96: multiple definition of `format_by_fourcc'
drivers/media/built-in.o:/home/v4l/work_trees/linus/drivers/media/common/saa7146_video.c:88: first defined here
Let's rename both occurrences with a small shell script:
for i in drivers/staging/cx25821/*.[ch]; do sed s,format_by_fourcc,cx25821_format_by_fourcc,g <$i >a && mv a $i; done
for i in drivers/media/common/saa7146*.[ch]; do sed s,format_by_fourcc,saa7146_format_by_fourcc,g <$i >a && mv a $i; done
for i in include/media/saa7146*.[ch]; do sed s,format_by_fourcc,saa7146_format_by_fourcc,g <$i >a && mv a $i; done
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Merge branch 'rmobile-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/lethal/sh-2.6

* 'rmobile-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/lethal/sh-2.6:
  ARM: mach-shmobile: INTC interrupt priority level demux fix
  ARM: mach-shmobile: fix compile warning in mm/init.c
Fix interrupt priority level handling on SH-Mobile ARM.
SH-Mobile ARM platforms using multiple interrupt priority
levels need this patch to fix a potential deadlock that
may occur if multiple interrupts with different levels
are pending simultaneously.
The default INTC configuration is to use the same priority
level for all interrupts, so this issue does not trigger by
default. It is however common for board code to override the
interrupt priority for certain interrupt sources depending
on the application. Without this fix such boards may lock up.
In detail, this patch updates the INTC code in entry-macro.S
to make sure that the INTLVLA register gets set as expected.
To trigger this bug, modify the board-specific code to adjust
the interrupt priority level for the ethernet chip. After
changing the priority level simply use flood ping to drown
the board in interrupts.
This patch applies to INTCA-based processors such as sh7372
and sh7377. GIC-based processors are not affected.
Suitable for v2.6.37-rc and stable from v2.6.34 to v2.6.36.
Cc: stable@kernel.org
Signed-off-by: Magnus Damm <damm@opensource.se>
Tested-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Turn down the warning noise from the compiler; this is basically an
SH-Mobile specific version of the patch in the RMK patch tracker,
6484/1: "fix compile warning in mm/init.c".
Without this patch the following warning triggers:
CC arch/arm/kernel/sys_arm.o
arch/arm/mm/init.c: In function 'mem_init':
arch/arm/mm/init.c:606: warning: format '%08lx' expects type 'long unsigned int', but argument 12 has type 'unsigned int'
CC arch/arm/kernel/traps.o
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
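The warning above is a printk() format/argument type mismatch. A generic illustration of one way such a warning is silenced (the actual patch may instead change the format string; the variable here is hypothetical):

    /* The argument's type varies by configuration, so cast it to match
     * the %08lx conversion. */
    printk(KERN_NOTICE "virtual kernel memory layout: 0x%08lx bytes\n",
           (unsigned long)region_size_bytes);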
Merge branch 'sh-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/lethal/sh-2.6

* 'sh-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/lethal/sh-2.6:
  clocksource: sh_cmt: Remove nested spinlock fix
There are control flows in which sh_cmt_set_next() takes the spinlock
twice. The callers sh_cmt_{start,stop}() already hold the lock, but the
other callers, sh_cmt_clock_event_{start,next}(), do not.
Now sh_cmt_set_next() does not take the lock by itself; all callers
must hold the spinlock before calling it.
[damm@opensource.se: use __sh_cmt_set_next() to simplify code]
[damm@opensource.se: added stable, suitable for v2.6.35 + v2.6.36]
Cc: stable@kernel.org
Signed-off-by: Takashi YOSHII <takashi.yoshii.zj@renesas.com>
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
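A sketch of the split suggested by the bracketed note above: a helper that assumes the lock is held, plus a locking wrapper for callers that do not hold it (the bodies and the lock field are illustrative):

    /* Caller must hold p->lock. */
    static void __sh_cmt_set_next(struct sh_cmt_priv *p, unsigned long delta)
    {
            /* program the next match value ... */
    }

    static void sh_cmt_set_next(struct sh_cmt_priv *p, unsigned long delta)
    {
            unsigned long flags;

            spin_lock_irqsave(&p->lock, flags);
            __sh_cmt_set_next(p, delta);
            spin_unlock_irqrestore(&p->lock, flags);
    }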
Merge branch 'fbdev-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/lethal/fbdev-2.6

* 'fbdev-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/lethal/fbdev-2.6:
  OMAP: OMAPFB: disable old omapfb for OMAP4 builds
  OMAP: DSS: VRAM: Align start & size of vram to 2M
Merge branch 'for-paul-rc' of git://gitorious.org/linux-omap-dss2/linux into fbdev-fixes-for-linus

* 'for-paul-rc' of git://gitorious.org/linux-omap-dss2/linux:
  OMAP: OMAPFB: disable old omapfb for OMAP4 builds
  OMAP: DSS: VRAM: Align start & size of vram to 2M
Build fails when OMAP4 and FB_OMAP are defined:
drivers/built-in.o: In function `omapfb_do_probe':
drivers/video/omap/omapfb_main.c:1773: undefined reference to `omap2_int_ctrl'
Old omapfb does not work on OMAP4, and never will. Change the omapfb
build dependency so that old omapfb depends on OMAP1/2/3, fixing the
build for plain OMAP4 builds.
Reported-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@nokia.com>
Align the start address and size of VRAM area to 2M as per comments from
Russell King:
> > So, why SZ_2M?
>
> Firstly, that's the granularity which we allocate page tables - one
> Linux page table covers 2MB of memory. We want to avoid creating page
> tables for the main memory mapping as that increases TLB pressure through
> the use of additional TLB entries, and more page table walks.
>
> Plus, we never used to allow the kernel's direct memory mapping to be
> mapped at anything less than section size - this restriction has since
> been lifted due to OMAP SRAM problems, but I'd rather we stuck with it
> to ensure that we have proper behaviour from all parts of the system.
>
> Secondly, we don't want to end up with lots of fragmentation at the end
> of the memory mapping as that'll reduce performance, not only by making
> the pfn_valid() search more expensive.
>
> Ensuring a minimum allocation size and alignment makes sure that the
> regions can be coalesced together into one block, and minimises run-time
> expenses.
>
> So please, 2MB, or if you object, at the _very_ _least_ 1MB. But
> definitely not PAGE_SIZE.
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@nokia.com>
Acked-by: Tony Lindgren <tony@atomide.com>
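A small sketch of the alignment being applied: round the start down and the size up to a 2 MiB boundary so the requested region stays covered (macro usage only; the real omap_vram code may differ in detail):

    paddr = round_down(paddr, SZ_2M);   /* start of the VRAM region */
    size  = ALIGN(size, SZ_2M);         /* size rounded up to 2 MiB */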
Merge branch 's5p-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/kgene/linux-samsung

* 's5p-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/kgene/linux-samsung:
  ARM: S5PV210: update MAX8998 platform data to get rid of WARN()
  ARM S3C24XX: Fix compilation of PM code for S3C2416
  ARM: S3C24XX: Fix CONFIG_S3C_DEV_NAND Kconfig entry
This patch adds the new entries required by the new version of the MAX8998
driver. Without them, the driver fails to initialize. See commit 50f19a4596.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
The S3C2416 PM code uses low-level sleep routines from the S3C2412 code,
but these routines are compiled only for the S3C2412 SoC.
Split S3C2412_PM into two parts, S3C2412_PM and S3C2412_PM_SLEEP, and
select the latter in S3C2416's Kconfig.
Signed-off-by: Yauhen Kharuzhy <jekhor@gmail.com>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
Should be CONFIG_S3C_DEV_NAND instead of CONFIG_S3C_DEVICE_NAND.
Cc: Ben Dooks <ben-linux@fluff.org>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
Merge branch 'for-linus' of git://git.kernel.dk/linux-2.6-block

* 'for-linus' of git://git.kernel.dk/linux-2.6-block:
  cciss: fix cciss_revalidate panic
  block: max hardware sectors limit wrapper
  block: Deprecate QUEUE_FLAG_CLUSTER and use queue_limits instead
  blk-throttle: Correct the placement of smp_rmb()
  blk-throttle: Trim/adjust slice_end once a bio has been dispatched
  block: check for proper length of iov entries earlier in blk_rq_map_user_iov()
  drbd: fix for spin_lock_irqsave in endio callback
  drbd: don't recvmsg with zero length
If you delete a logical drive, and then run BLKRRPART (e.g. via fdisk)
on a logical drive which is "after" the deleted logical drive in the h->drv[]
array, then cciss_revalidate panics because it will access the null pointer
h->drv[x] when x hits the deleted drive.
Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Cc: stable@kernel.org
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
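A minimal sketch of the guard this fix describes; the loop context is abbreviated:

    for (logvol = 0; logvol < CISS_MAX_LUN; logvol++) {
            drv = h->drv[logvol];
            if (drv == NULL)        /* hole left by a deleted logical drive */
                    continue;
            /* ... revalidate block size / capacity as before ... */
    }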
Implement blk_limits_max_hw_sectors() and make
blk_queue_max_hw_sectors() a wrapper around it.
DM needs this to avoid setting queue_limits' max_hw_sectors and
max_sectors directly. dm_set_device_limits() now leverages
blk_limits_max_hw_sectors() logic to establish the appropriate
max_hw_sectors minimum (PAGE_SIZE). Fixes issue where DM was
incorrectly setting max_sectors rather than max_hw_sectors (which
caused dm_merge_bvec()'s max_hw_sectors check to be ineffective).
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@kernel.org
Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
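A sketch of the wrapper arrangement described above, simplified; the real functions also handle warnings and default-sector clamping:

    /* Operate on bare queue_limits so stacking drivers such as DM can
     * use it before a request_queue exists. */
    void blk_limits_max_hw_sectors(struct queue_limits *limits,
                                   unsigned int max_hw_sectors)
    {
            if (max_hw_sectors < (PAGE_SIZE >> 9))  /* enforce the minimum */
                    max_hw_sectors = PAGE_SIZE >> 9;
            limits->max_hw_sectors = max_hw_sectors;
    }

    /* The request_queue variant becomes a thin wrapper. */
    void blk_queue_max_hw_sectors(struct request_queue *q,
                                  unsigned int max_hw_sectors)
    {
            blk_limits_max_hw_sectors(&q->limits, max_hw_sectors);
    }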
When stacking devices, a request_queue is not always available. This
forced us to have a no_cluster flag in the queue_limits that could be
used as a carrier until the request_queue had been set up for a
metadevice.
There were several problems with that approach. First of all it was up
to the stacking device to remember to set queue flag after stacking had
completed. Also, the queue flag and the queue limits had to be kept in
sync at all times. We got that wrong, which could lead to us issuing
commands that went beyond the max scatterlist limit set by the driver.
The proper fix is to avoid having two flags for tracking the same thing.
We deprecate QUEUE_FLAG_CLUSTER and use the queue limit directly in the
block layer merging functions. The queue_limit 'no_cluster' is turned
into 'cluster' to avoid double negatives and to ease stacking.
Clustering defaults to being enabled as before. The queue flag logic is
removed from the stacking function, and explicitly setting the cluster
flag is no longer necessary in DM and MD.
Reported-by: Ed Lin <ed.lin@promise.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Acked-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@kernel.org
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
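A sketch of what "use the queue limit directly" looks like for the merging code; the accessor follows the direction described above and is shown only as an illustration:

    /* 'cluster' lives in queue_limits rather than in a queue flag. */
    static inline int blk_queue_cluster(struct request_queue *q)
    {
            return q->limits.cluster;
    }

    /* The merge path then tests the limit directly: */
    if (!blk_queue_cluster(q))
            return 0;       /* segments may not be coalesced */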
o I was discussing which variables are being updated without a spin lock and
why we need barriers, and Oleg pointed out that the smp_rmb() should sit
between the read of td->limits_changed and the read of tg->limits_changed.
This patch fixes it.
o Following is one possible sequence of events. Say cpu0 is executing
throtl_update_blkio_group_read_bps() and cpu1 is executing
throtl_process_limit_change().

    cpu0                                    cpu1
    tg->limits_changed = true;
    smp_mb__before_atomic_inc();
    atomic_inc(&td->limits_changed);
                                            if (!atomic_read(&td->limits_changed))
                                                    return;
                                            if (tg->limits_changed)
                                                    do_something;

If cpu0 has updated tg->limits_changed and td->limits_changed, we want to
make sure that if the update to td->limits_changed is visible on cpu1, then
the update to tg->limits_changed is also visible.
Oleg pointed out that to ensure this, we need to insert an smp_rmb() between
the td->limits_changed read and the tg->limits_changed read.
o I had erroneously put smp_rmb() before atomic_read(&td->limits_changed).
This patch fixes it.
Reported-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
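A sketch of the consumer-side ordering described above; the function and field names follow the commit text, but the body is illustrative:

    static void process_limit_change_sketch(struct throtl_data *td,
                                            struct throtl_grp *tg)
    {
            if (!atomic_read(&td->limits_changed))
                    return;

            /* Pairs with smp_mb__before_atomic_inc()/atomic_inc() on the
             * update side: order the td->limits_changed read before the
             * tg->limits_changed read. */
            smp_rmb();

            if (tg->limits_changed) {
                    /* pick up the new limits and reprogram the group */
            }
    }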
o During some testing I did the following and noticed that throttling stops
working.
  - Put a very low limit on a cgroup, say 1 byte per second.
  - Start some reads; this will set slice_end to a very high value.
  - Change the limit to a higher value, say 1MB/s.
  - Now IO unthrottles and finishes as expected.
  - Try to do the read again, but IO is not limited to 1MB/s as expected.
o What is happening.
  - Initially the low value of the limit sets slice_end to a very high value.
  - When the limit is updated, slice_end is not being truncated.
  - The very high value of slice_end keeps the existing slice valid for a
    very long time and a new slice does not start.
  - tg_may_dispatch() is called in blk_throtl_bio(), and trim_slice() is not
    called in this path. So slice_start is some old value and practically we
    are able to do a huge amount of IO.
o There are many ways this can be fixed. I have fixed it by adjusting/cleaning
up slice_end in trim_slice(). Generally we extend slices if a bio is big and
can't be dispatched in one slice. After dispatching a bio, readjust slice_end
to make sure we don't end up with huge values.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
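A sketch of the trim being described: after a dispatch, pull slice_end back so it never runs far ahead of now (the field and constant names follow blk-throttle, the snippet is simplified):

    /* Do not let the current slice extend more than one throtl_slice
     * beyond now once the queued bio has been dispatched. */
    if (time_after(tg->slice_end[rw], jiffies + throtl_slice))
            tg->slice_end[rw] = jiffies + throtl_slice;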
Commit 9284bcf checks for proper length of iov entries in
blk_rq_map_user_iov(), but if the map is unaligned, the kernel will break
out of the loop without checking the length. So we need to check the
length before the alignment check.
Signed-off-by: Xiaotian Feng <dfeng@redhat.com>
Cc: stable@kernel.org
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
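A sketch of the loop ordering the fix calls for: reject a zero-length entry before the unaligned-buffer check breaks out of the loop (variable names are illustrative):

    for (i = 0; i < iov_count; i++) {
            unsigned long uaddr = (unsigned long)iov[i].iov_base;

            if (!iov[i].iov_len)
                    return -EINVAL;         /* length check comes first */

            if (uaddr & queue_dma_alignment(q)) {
                    unaligned = 1;
                    break;                  /* ... then the alignment check */
            }
    }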