author	Greg KH <gregkh@suse.de>	2005-09-12 12:45:04 -0700
committer	Greg Kroah-Hartman <gregkh@suse.de>	2005-09-12 12:45:04 -0700
commit	d58dde0f552a5c5c4485b962d8b6e9dd54fefb30 (patch)
tree	d9a7e35eb88fea6265d5aadcc3d4ed39122b052a /Documentation
parent	877599fdef5ea4a7dd1956e22fa9d6923add97f8 (diff)
parent	2ade81473636b33aaac64495f89a7dc572c529f0 (diff)
Merge ../torvalds-2.6/
Diffstat (limited to 'Documentation')
-rw-r--r--  Documentation/00-INDEX | 4
-rw-r--r--  Documentation/CodingStyle | 3
-rw-r--r--  Documentation/DMA-API.txt | 2
-rw-r--r--  Documentation/DMA-ISA-LPC.txt | 151
-rw-r--r--  Documentation/DocBook/journal-api.tmpl | 4
-rw-r--r--  Documentation/DocBook/kernel-hacking.tmpl | 310
-rw-r--r--  Documentation/DocBook/usb.tmpl | 2
-rw-r--r--  Documentation/MSI-HOWTO.txt | 2
-rw-r--r--  Documentation/RCU/RTFP.txt | 36
-rw-r--r--  Documentation/RCU/UP.txt | 79
-rw-r--r--  Documentation/RCU/checklist.txt | 23
-rw-r--r--  Documentation/RCU/rcu.txt | 48
-rw-r--r--  Documentation/RCU/rcuref.txt | 74
-rw-r--r--  Documentation/RCU/whatisRCU.txt | 902
-rw-r--r--  Documentation/applying-patches.txt | 439
-rw-r--r--  Documentation/cpu-freq/cpufreq-stats.txt | 2
-rw-r--r--  Documentation/cpusets.txt | 2
-rw-r--r--  Documentation/crypto/descore-readme.txt | 2
-rw-r--r--  Documentation/dvb/bt8xx.txt | 89
-rw-r--r--  Documentation/dvb/ci.txt | 9
-rw-r--r--  Documentation/fb/cyblafb/bugs | 14
-rw-r--r--  Documentation/fb/cyblafb/credits | 7
-rw-r--r--  Documentation/fb/cyblafb/documentation | 17
-rw-r--r--  Documentation/fb/cyblafb/fb.modes | 155
-rw-r--r--  Documentation/fb/cyblafb/performance | 80
-rw-r--r--  Documentation/fb/cyblafb/todo | 32
-rw-r--r--  Documentation/fb/cyblafb/usage | 206
-rw-r--r--  Documentation/fb/cyblafb/whycyblafb | 85
-rw-r--r--  Documentation/fb/intel810.txt | 56
-rw-r--r--  Documentation/fb/modedb.txt | 73
-rw-r--r--  Documentation/feature-removal-schedule.txt | 8
-rw-r--r--  Documentation/filesystems/files.txt | 123
-rw-r--r--  Documentation/filesystems/fuse.txt | 315
-rw-r--r--  Documentation/filesystems/proc.txt | 42
-rw-r--r--  Documentation/filesystems/v9fs.txt | 95
-rw-r--r--  Documentation/filesystems/vfs.txt | 435
-rw-r--r--  Documentation/ioctl/cdrom.txt | 2
-rw-r--r--  Documentation/kbuild/makefiles.txt | 14
-rw-r--r--  Documentation/kdump/kdump.txt | 16
-rw-r--r--  Documentation/kernel-parameters.txt | 10
-rw-r--r--  Documentation/mono.txt | 2
-rw-r--r--  Documentation/networking/bonding.txt | 4
-rw-r--r--  Documentation/networking/wan-router.txt | 4
-rw-r--r--  Documentation/pci.txt | 2
-rw-r--r--  Documentation/powerpc/eeh-pci-error-recovery.txt | 2
-rw-r--r--  Documentation/s390/s390dbf.txt | 2
-rw-r--r--  Documentation/scsi/ibmmca.txt | 2
-rw-r--r--  Documentation/sound/alsa/ALSA-Configuration.txt | 2
-rw-r--r--  Documentation/sparse.txt | 2
-rw-r--r--  Documentation/sysrq.txt | 2
-rw-r--r--  Documentation/uml/UserModeLinux-HOWTO.txt | 2
-rw-r--r--  Documentation/usb/gadget_serial.txt | 2
-rw-r--r--  Documentation/video4linux/CARDLIST.bttv | 4
-rw-r--r--  Documentation/video4linux/CARDLIST.saa7134 | 3
-rw-r--r--  Documentation/video4linux/CARDLIST.tuner | 1
-rw-r--r--  Documentation/video4linux/Zoran | 2
-rw-r--r--  Documentation/x86_64/boot-options.txt | 5
57 files changed, 3580 insertions, 431 deletions
diff --git a/Documentation/00-INDEX b/Documentation/00-INDEX
index f28a24e0279b..433cf5e9ae04 100644
--- a/Documentation/00-INDEX
+++ b/Documentation/00-INDEX
@@ -46,6 +46,8 @@ SubmittingPatches
- procedure to get a source patch included into the kernel tree.
VGA-softcursor.txt
- how to change your VGA cursor from a blinking underscore.
+applying-patches.txt
+ - description of various trees and how to apply their patches.
arm/
- directory with info about Linux on the ARM architecture.
basic_profiling.txt
@@ -275,7 +277,7 @@ tty.txt
unicode.txt
- info on the Unicode character/font mapping used in Linux.
uml/
- - directory with infomation about User Mode Linux.
+ - directory with information about User Mode Linux.
usb/
- directory with info regarding the Universal Serial Bus.
video4linux/
diff --git a/Documentation/CodingStyle b/Documentation/CodingStyle
index f25b3953f513..22e5f9036f3c 100644
--- a/Documentation/CodingStyle
+++ b/Documentation/CodingStyle
@@ -236,6 +236,9 @@ ugly), but try to avoid excess. Instead, put the comments at the head
of the function, telling people what it does, and possibly WHY it does
it.
+When commenting the kernel API functions, please use the kerneldoc format.
+See the files Documentation/kernel-doc-nano-HOWTO.txt and scripts/kernel-doc
+for details.
Chapter 8: You've made a mess of it
diff --git a/Documentation/DMA-API.txt b/Documentation/DMA-API.txt
index 6ee3cd6134df..1af0f2d50220 100644
--- a/Documentation/DMA-API.txt
+++ b/Documentation/DMA-API.txt
@@ -121,7 +121,7 @@ pool's device.
dma_addr_t addr);
This puts memory back into the pool. The pool is what was passed to
-the the pool allocation routine; the cpu and dma addresses are what
+the pool allocation routine; the cpu and dma addresses are what
were returned when that routine allocated the memory being freed.
diff --git a/Documentation/DMA-ISA-LPC.txt b/Documentation/DMA-ISA-LPC.txt
new file mode 100644
index 000000000000..705f6be92bdb
--- /dev/null
+++ b/Documentation/DMA-ISA-LPC.txt
@@ -0,0 +1,151 @@
+ DMA with ISA and LPC devices
+ ============================
+
+ Pierre Ossman <drzeus@drzeus.cx>
+
+This document describes how to do DMA transfers using the old ISA DMA
+controller. Even though ISA is more or less dead today the LPC bus
+uses the same DMA system so it will be around for quite some time.
+
+Part I - Headers and dependencies
+---------------------------------
+
+To do ISA style DMA you need to include two headers:
+
+#include <linux/dma-mapping.h>
+#include <asm/dma.h>
+
+The first is the generic DMA API used to convert virtual addresses to
+physical addresses (see Documentation/DMA-API.txt for details).
+
+The second contains the routines specific to ISA DMA transfers. Since
+this is not present on all platforms make sure you construct your
+Kconfig to be dependent on ISA_DMA_API (not ISA) so that nobody tries
+to build your driver on unsupported platforms.
+
+Part II - Buffer allocation
+---------------------------
+
+The ISA DMA controller has some very strict requirements on which
+memory it can access so extra care must be taken when allocating
+buffers.
+
+(You usually need a special buffer for DMA transfers instead of
+transferring directly to and from your normal data structures.)
+
+The DMA-able address space is the lowest 16 MB of _physical_ memory.
+Also the transfer block may not cross page boundaries (which are 64
+or 128 KiB depending on which channel you use).
+
+In order to allocate a piece of memory that satisfies all these
+requirements you pass the flag GFP_DMA to kmalloc.
+
+Unfortunately the memory available for ISA DMA is scarce so unless you
+allocate the memory during boot-up it's a good idea to also pass
+__GFP_REPEAT and __GFP_NOWARN to make the allocator try a bit harder.
+
+(This scarcity also means that you should allocate the buffer as
+early as possible and not release it until the driver is unloaded.)
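+
+As a minimal illustration (the 4 KiB size and the error handling here
+are made up, not taken from any particular driver), the allocation
+might look like:
+
+char *buf;
+
+buf = kmalloc(4096, GFP_DMA | __GFP_REPEAT | __GFP_NOWARN);
+if (!buf)
+	return -ENOMEM;	/* no suitable low memory available */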
+
+Part III - Address translation
+------------------------------
+
+To translate the virtual address to a physical address, use the normal DMA
+API. Do _not_ use isa_virt_to_phys() even though it does the same
+thing. The reason for this is that the function isa_virt_to_phys()
+will require a Kconfig dependency to ISA, not just ISA_DMA_API which
+is really all you need. Remember that even though the DMA controller
+has its origins in ISA it is used elsewhere.
+
+Note: x86_64 had a broken DMA API when it came to ISA but has since
+been fixed. If your arch has problems then fix the DMA API instead of
+reverting to the ISA functions.
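+
+For example, mapping the buffer allocated above might look like the
+following sketch ("dev" is assumed to be your struct device, and the
+size and direction are illustrative only):
+
+dma_addr_t dma_handle;
+
+dma_handle = dma_map_single(dev, buf, 4096, DMA_TO_DEVICE);
+/* ... hand dma_handle to set_dma_addr() as described below ... */
+dma_unmap_single(dev, dma_handle, 4096, DMA_TO_DEVICE);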
+
+Part IV - Channels
+------------------
+
+A normal ISA DMA controller has 8 channels. The lower four are for
+8-bit transfers and the upper four are for 16-bit transfers.
+
+(Actually the DMA controller is really two separate controllers where
+channel 4 is used to cascade the first controller (channels 0-3) into
+the second. This means that of the four 16-bit channels only three
+are usable.)
+
+You allocate these in a similar fashion as all basic resources:
+
+extern int request_dma(unsigned int dmanr, const char * device_id);
+extern void free_dma(unsigned int dmanr);
+
+The ability to use 16-bit or 8-bit transfers is _not_ up to you as a
+driver author but depends on what the hardware supports. Check your
+specs or test different channels.
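+
+As a sketch (the channel variable and driver name are placeholders),
+claiming and later releasing a channel might look like:
+
+if (request_dma(channel, "mydriver")) {
+	printk(KERN_ERR "mydriver: DMA channel %d busy\n", channel);
+	return -EBUSY;
+}
+
+/* ... and in the cleanup path: */
+free_dma(channel);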
+
+Part V - Transfer data
+----------------------
+
+Now for the good stuff, the actual DMA transfer. :)
+
+Before you use any ISA DMA routines you need to claim the DMA lock
+using claim_dma_lock(). The reason is that some DMA operations are
+not atomic so only one driver may fiddle with the registers at a
+time.
+
+The first time you use the DMA controller you should call
+clear_dma_ff(). This clears an internal register in the DMA
+controller that is used for the non-atomic operations. As long as you
+(and everyone else) use the locking functions, you only need to
+reset this once.
+
+Next, you tell the controller in which direction you intend to do the
+transfer using set_dma_mode(). Currently you have the options
+DMA_MODE_READ and DMA_MODE_WRITE.
+
+Set the address from where the transfer should start (this needs to
+be 16-bit aligned for 16-bit transfers) and how many bytes to
+transfer. Note that it's _bytes_. The DMA routines will do all the
+required translation to values that the DMA controller understands.
+
+The final step is enabling the DMA channel and releasing the DMA
+lock.
+
+Once the DMA transfer is finished (or timed out) you should disable
+the channel again. You should also check get_dma_residue() to make
+sure that all data has been transfered.
+
+Example:
+
+unsigned long flags;
+int residue;
+
+flags = claim_dma_lock();
+
+clear_dma_ff(channel);
+
+set_dma_mode(channel, DMA_MODE_WRITE);
+set_dma_addr(channel, phys_addr);
+set_dma_count(channel, num_bytes);
+
+enable_dma(channel);
+
+release_dma_lock(flags);
+
+while (!device_done());
+
+flags = claim_dma_lock();
+
+disable_dma(channel);
+
+residue = get_dma_residue(channel);
+if (residue != 0)
+ printk(KERN_ERR "driver: Incomplete DMA transfer!"
+ " %d bytes left!\n", residue);
+
+release_dma_lock(flags);
+
+Part VI - Suspend/resume
+------------------------
+
+It is the driver's responsibility to make sure that the machine isn't
+suspended while a DMA transfer is in progress. Also, all DMA settings
+are lost when the system suspends so if your driver relies on the DMA
+controller being in a certain state then you have to restore these
+registers upon resume.
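+
+One possible shape for such a restore path, assuming the driver keeps
+its last mode, address and count in a private structure (everything
+named "my_dev" here is hypothetical):
+
+unsigned long flags;
+
+flags = claim_dma_lock();
+
+clear_dma_ff(my_dev->channel);
+set_dma_mode(my_dev->channel, my_dev->mode);
+set_dma_addr(my_dev->channel, my_dev->addr);
+set_dma_count(my_dev->channel, my_dev->count);
+
+release_dma_lock(flags);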
diff --git a/Documentation/DocBook/journal-api.tmpl b/Documentation/DocBook/journal-api.tmpl
index 1ef6f43c6d8f..341aaa4ce481 100644
--- a/Documentation/DocBook/journal-api.tmpl
+++ b/Documentation/DocBook/journal-api.tmpl
@@ -116,7 +116,7 @@ filesystem. Almost.
You still need to actually journal your filesystem changes, this
is done by wrapping them into transactions. Additionally you
-also need to wrap the modification of each of the the buffers
+also need to wrap the modification of each of the buffers
with calls to the journal layer, so it knows what the modifications
you are actually making are. To do this use journal_start() which
returns a transaction handle.
@@ -128,7 +128,7 @@ and its counterpart journal_stop(), which indicates the end of a transaction
are nestable calls, so you can reenter a transaction if necessary,
but remember you must call journal_stop() the same number of times as
journal_start() before the transaction is completed (or more accurately
-leaves the the update phase). Ext3/VFS makes use of this feature to simplify
+leaves the update phase). Ext3/VFS makes use of this feature to simplify
quota support.
</para>
diff --git a/Documentation/DocBook/kernel-hacking.tmpl b/Documentation/DocBook/kernel-hacking.tmpl
index 49a9ef82d575..6367bba32d22 100644
--- a/Documentation/DocBook/kernel-hacking.tmpl
+++ b/Documentation/DocBook/kernel-hacking.tmpl
@@ -8,8 +8,7 @@
<authorgroup>
<author>
- <firstname>Paul</firstname>
- <othername>Rusty</othername>
+ <firstname>Rusty</firstname>
<surname>Russell</surname>
<affiliation>
<address>
@@ -20,7 +19,7 @@
</authorgroup>
<copyright>
- <year>2001</year>
+ <year>2005</year>
<holder>Rusty Russell</holder>
</copyright>
@@ -64,7 +63,7 @@
<chapter id="introduction">
<title>Introduction</title>
<para>
- Welcome, gentle reader, to Rusty's Unreliable Guide to Linux
+ Welcome, gentle reader, to Rusty's Remarkably Unreliable Guide to Linux
Kernel Hacking. This document describes the common routines and
general requirements for kernel code: its goal is to serve as a
primer for Linux kernel development for experienced C
@@ -96,13 +95,13 @@
<listitem>
<para>
- not associated with any process, serving a softirq, tasklet or bh;
+ not associated with any process, serving a softirq or tasklet;
</para>
</listitem>
<listitem>
<para>
- running in kernel space, associated with a process;
+ running in kernel space, associated with a process (user context);
</para>
</listitem>
@@ -114,11 +113,12 @@
</itemizedlist>
<para>
- There is a strict ordering between these: other than the last
- category (userspace) each can only be pre-empted by those above.
- For example, while a softirq is running on a CPU, no other
- softirq will pre-empt it, but a hardware interrupt can. However,
- any other CPUs in the system execute independently.
+ There is an ordering between these. The bottom two can preempt
+ each other, but above that is a strict hierarchy: each can only be
+ preempted by the ones above it. For example, while a softirq is
+ running on a CPU, no other softirq will preempt it, but a hardware
+ interrupt can. However, any other CPUs in the system execute
+ independently.
</para>
<para>
@@ -130,10 +130,10 @@
<title>User Context</title>
<para>
- User context is when you are coming in from a system call or
- other trap: you can sleep, and you own the CPU (except for
- interrupts) until you call <function>schedule()</function>.
- In other words, user context (unlike userspace) is not pre-emptable.
+ User context is when you are coming in from a system call or other
+ trap: like userspace, you can be preempted by more important tasks
+ and by interrupts. You can sleep, by calling
+ <function>schedule()</function>.
</para>
<note>
@@ -153,7 +153,7 @@
<caution>
<para>
- Beware that if you have interrupts or bottom halves disabled
+ Beware that if you have preemption or softirqs disabled
(see below), <function>in_interrupt()</function> will return a
false positive.
</para>
@@ -168,10 +168,10 @@
<hardware>keyboard</hardware> are examples of real
hardware which produce interrupts at any time. The kernel runs
interrupt handlers, which service the hardware. The kernel
- guarantees that this handler is never re-entered: if another
+ guarantees that this handler is never re-entered: if the same
interrupt arrives, it is queued (or dropped). Because it
disables interrupts, this handler has to be fast: frequently it
- simply acknowledges the interrupt, marks a `software interrupt'
+ simply acknowledges the interrupt, marks a 'software interrupt'
for execution and exits.
</para>
@@ -188,60 +188,52 @@
</sect1>
<sect1 id="basics-softirqs">
- <title>Software Interrupt Context: Bottom Halves, Tasklets, softirqs</title>
+ <title>Software Interrupt Context: Softirqs and Tasklets</title>
<para>
Whenever a system call is about to return to userspace, or a
- hardware interrupt handler exits, any `software interrupts'
+ hardware interrupt handler exits, any 'software interrupts'
which are marked pending (usually by hardware interrupts) are
run (<filename>kernel/softirq.c</filename>).
</para>
<para>
Much of the real interrupt handling work is done here. Early in
- the transition to <acronym>SMP</acronym>, there were only `bottom
+ the transition to <acronym>SMP</acronym>, there were only 'bottom
halves' (BHs), which didn't take advantage of multiple CPUs. Shortly
after we switched from wind-up computers made of match-sticks and snot,
- we abandoned this limitation.
+ we abandoned this limitation and switched to 'softirqs'.
</para>
<para>
<filename class="headerfile">include/linux/interrupt.h</filename> lists the
- different BH's. No matter how many CPUs you have, no two BHs will run at
- the same time. This made the transition to SMP simpler, but sucks hard for
- scalable performance. A very important bottom half is the timer
- BH (<filename class="headerfile">include/linux/timer.h</filename>): you
- can register to have it call functions for you in a given length of time.
+ different softirqs. A very important softirq is the
+ timer softirq (<filename
+ class="headerfile">include/linux/timer.h</filename>): you can
+ register to have it call functions for you in a given length of
+ time.
</para>
<para>
- 2.3.43 introduced softirqs, and re-implemented the (now
- deprecated) BHs underneath them. Softirqs are fully-SMP
- versions of BHs: they can run on as many CPUs at once as
- required. This means they need to deal with any races in shared
- data using their own locks. A bitmask is used to keep track of
- which are enabled, so the 32 available softirqs should not be
- used up lightly. (<emphasis>Yes</emphasis>, people will
- notice).
- </para>
-
- <para>
- tasklets (<filename class="headerfile">include/linux/interrupt.h</filename>)
- are like softirqs, except they are dynamically-registrable (meaning you
- can have as many as you want), and they also guarantee that any tasklet
- will only run on one CPU at any time, although different tasklets can
- run simultaneously (unlike different BHs).
+ Softirqs are often a pain to deal with, since the same softirq
+ will run simultaneously on more than one CPU. For this reason,
+ tasklets (<filename
+ class="headerfile">include/linux/interrupt.h</filename>) are more
+ often used: they are dynamically-registrable (meaning you can have
+ as many as you want), and they also guarantee that any tasklet
+ will only run on one CPU at any time, although different tasklets
+ can run simultaneously.
</para>
<caution>
<para>
- The name `tasklet' is misleading: they have nothing to do with `tasks',
+ The name 'tasklet' is misleading: they have nothing to do with 'tasks',
and probably more to do with some bad vodka Alexey Kuznetsov had at the
time.
</para>
</caution>
<para>
- You can tell you are in a softirq (or bottom half, or tasklet)
+ You can tell you are in a softirq (or tasklet)
using the <function>in_softirq()</function> macro
(<filename class="headerfile">include/linux/interrupt.h</filename>).
</para>
@@ -288,11 +280,10 @@
<term>A rigid stack limit</term>
<listitem>
<para>
- The kernel stack is about 6K in 2.2 (for most
- architectures: it's about 14K on the Alpha), and shared
- with interrupts so you can't use it all. Avoid deep
- recursion and huge local arrays on the stack (allocate
- them dynamically instead).
+ Depending on configuration options, the kernel stack is about 3K to
+ 6K for most 32-bit architectures; it's about 14K on most 64-bit
+ archs, and often shared with interrupts so you can't use it all.
+ Avoid deep recursion and huge local arrays on the stack (allocate
+ them dynamically instead).
</para>
</listitem>
</varlistentry>
@@ -339,7 +330,7 @@ asmlinkage long sys_mycall(int arg)
<para>
If all your routine does is read or write some parameter, consider
- implementing a <function>sysctl</function> interface instead.
+ implementing a <function>sysfs</function> interface instead.
</para>
<para>
@@ -417,7 +408,10 @@ cond_resched(); /* Will sleep */
</para>
<para>
- You will eventually lock up your box if you break these rules.
+ You should always compile your kernel with
+ <symbol>CONFIG_DEBUG_SPINLOCK_SLEEP</symbol> on, and it will warn
+ you if you break these rules. If you <emphasis>do</emphasis> break
+ the rules, you will eventually lock up your box.
</para>
<para>
@@ -515,8 +509,7 @@ printk(KERN_INFO "my ip: %d.%d.%d.%d\n", NIPQUAD(ipaddress));
success).
</para>
</caution>
- [Yes, this moronic interface makes me cringe. Please submit a
- patch and become my hero --RR.]
+ [Yes, this moronic interface makes me cringe. The flamewar comes up every year or so. --RR.]
</para>
<para>
The functions may sleep implicitly. This should never be called
@@ -587,10 +580,11 @@ printk(KERN_INFO "my ip: %d.%d.%d.%d\n", NIPQUAD(ipaddress));
</variablelist>
<para>
- If you see a <errorname>kmem_grow: Called nonatomically from int
- </errorname> warning message you called a memory allocation function
- from interrupt context without <constant>GFP_ATOMIC</constant>.
- You should really fix that. Run, don't walk.
+ If you see a <errorname>sleeping function called from invalid
+ context</errorname> warning message, then maybe you called a
+ sleeping allocation function from interrupt context without
+ <constant>GFP_ATOMIC</constant>. You should really fix that.
+ Run, don't walk.
</para>
<para>
@@ -639,16 +633,16 @@ printk(KERN_INFO "my ip: %d.%d.%d.%d\n", NIPQUAD(ipaddress));
</sect1>
<sect1 id="routines-udelay">
- <title><function>udelay()</function>/<function>mdelay()</function>
+ <title><function>mdelay()</function>/<function>udelay()</function>
<filename class="headerfile">include/asm/delay.h</filename>
<filename class="headerfile">include/linux/delay.h</filename>
</title>
<para>
- The <function>udelay()</function> function can be used for small pauses.
- Do not use large values with <function>udelay()</function> as you risk
+ The <function>udelay()</function> and <function>ndelay()</function> functions can be used for small pauses.
+ Do not use large values with them as you risk
overflow - the helper function <function>mdelay()</function> is useful
- here, or even consider <function>schedule_timeout()</function>.
+ here, or consider <function>msleep()</function>.
</para>
</sect1>
@@ -698,8 +692,8 @@ printk(KERN_INFO "my ip: %d.%d.%d.%d\n", NIPQUAD(ipaddress));
These routines disable soft interrupts on the local CPU, and
restore them. They are reentrant; if soft interrupts were
disabled before, they will still be disabled after this pair
- of functions has been called. They prevent softirqs, tasklets
- and bottom halves from running on the current CPU.
+ of functions has been called. They prevent softirqs and tasklets
+ from running on the current CPU.
</para>
</sect1>
@@ -708,10 +702,16 @@ printk(KERN_INFO "my ip: %d.%d.%d.%d\n", NIPQUAD(ipaddress));
<filename class="headerfile">include/asm/smp.h</filename></title>
<para>
- <function>smp_processor_id()</function> returns the current
- processor number, between 0 and <symbol>NR_CPUS</symbol> (the
- maximum number of CPUs supported by Linux, currently 32). These
- values are not necessarily continuous.
+ <function>get_cpu()</function> disables preemption (so you won't
+ suddenly get moved to another CPU) and returns the current
+ processor number, between 0 and <symbol>NR_CPUS</symbol>. Note
+ that the CPU numbers are not necessarily continuous. You return
+ it again with <function>put_cpu()</function> when you are done.
+ </para>
+ <para>
+ If you know you cannot be preempted by another task (ie. you are
+ in interrupt context, or have preemption disabled) you can use
+ smp_processor_id().
</para>
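+ <para>
+ A minimal sketch of the first form (what you do while preemption is
+ disabled is of course up to your driver):
+ </para>
+<programlisting>
+int cpu;
+
+cpu = get_cpu();
+/* we cannot be migrated off this CPU until put_cpu() */
+printk(KERN_INFO "running on CPU %d\n", cpu);
+put_cpu();
+</programlisting>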
</sect1>
@@ -722,19 +722,14 @@ printk(KERN_INFO "my ip: %d.%d.%d.%d\n", NIPQUAD(ipaddress));
<para>
After boot, the kernel frees up a special section; functions
marked with <type>__init</type> and data structures marked with
- <type>__initdata</type> are dropped after boot is complete (within
- modules this directive is currently ignored). <type>__exit</type>
+ <type>__initdata</type> are dropped after boot is complete: similarly
+ modules discard this memory after initialization. <type>__exit</type>
is used to declare a function which is only required on exit: the
function will be dropped if this file is not compiled as a module.
See the header file for use. Note that it makes no sense for a function
marked with <type>__init</type> to be exported to modules with
<function>EXPORT_SYMBOL()</function> - this will break.
</para>
- <para>
- Static data structures marked as <type>__initdata</type> must be initialised
- (as opposed to ordinary static data which is zeroed BSS) and cannot be
- <type>const</type>.
- </para>
</sect1>
@@ -762,9 +757,8 @@ printk(KERN_INFO "my ip: %d.%d.%d.%d\n", NIPQUAD(ipaddress));
<para>
The function can return a negative error number to cause
module loading to fail (unfortunately, this has no effect if
- the module is compiled into the kernel). For modules, this is
- called in user context, with interrupts enabled, and the
- kernel lock held, so it can sleep.
+ the module is compiled into the kernel). This function is
+ called in user context with interrupts enabled, so it can sleep.
</para>
</sect1>
@@ -779,6 +773,34 @@ printk(KERN_INFO "my ip: %d.%d.%d.%d\n", NIPQUAD(ipaddress));
reached zero. This function can also sleep, but cannot fail:
everything must be cleaned up by the time it returns.
</para>
+
+ <para>
+ Note that this macro is optional: if it is not present, your
+ module will not be removable (except for 'rmmod -f').
+ </para>
+ </sect1>
+
+ <sect1 id="routines-module-use-counters">
+ <title> <function>try_module_get()</function>/<function>module_put()</function>
+ <filename class="headerfile">include/linux/module.h</filename></title>
+
+ <para>
+ These manipulate the module usage count, to protect against
+ removal (a module also can't be removed if another module uses one
+ of its exported symbols: see below). Before calling into module
+ code, you should call <function>try_module_get()</function> on
+ that module: if it fails, then the module is being removed and you
+ should act as if it wasn't there. Otherwise, you can safely enter
+ the module, and call <function>module_put()</function> when you're
+ finished.
+ </para>
+
+ <para>
+ Most registerable structures have an
+ <structfield>owner</structfield> field, such as in the
+ <structname>file_operations</structname> structure. Set this field
+ to the macro <symbol>THIS_MODULE</symbol>.
+ </para>
</sect1>
<!-- add info on new-style module refcounting here -->
@@ -821,7 +843,7 @@ printk(KERN_INFO "my ip: %d.%d.%d.%d\n", NIPQUAD(ipaddress));
There is a macro to do this:
<function>wait_event_interruptible()</function>
- <filename class="headerfile">include/linux/sched.h</filename> The
+ <filename class="headerfile">include/linux/wait.h</filename> The
first argument is the wait queue head, and the second is an
expression which is evaluated; the macro returns
<returnvalue>0</returnvalue> when this expression is true, or
@@ -847,10 +869,11 @@ printk(KERN_INFO "my ip: %d.%d.%d.%d\n", NIPQUAD(ipaddress));
<para>
Call <function>wake_up()</function>
- <filename class="headerfile">include/linux/sched.h</filename>;,
+ <filename class="headerfile">include/linux/wait.h</filename>;,
which will wake up every process in the queue. The exception is
if one has <constant>TASK_EXCLUSIVE</constant> set, in which case
- the remainder of the queue will not be woken.
+ the remainder of the queue will not be woken. There are other variants
+ of this basic function available in the same header.
</para>
</sect1>
</chapter>
@@ -863,7 +886,7 @@ printk(KERN_INFO "my ip: %d.%d.%d.%d\n", NIPQUAD(ipaddress));
first class of operations work on <type>atomic_t</type>
<filename class="headerfile">include/asm/atomic.h</filename>; this
- contains a signed integer (at least 24 bits long), and you must use
+ contains a signed integer (at least 32 bits long), and you must use
these functions to manipulate or read atomic_t variables.
<function>atomic_read()</function> and
<function>atomic_set()</function> get and set the counter,
@@ -882,13 +905,12 @@ printk(KERN_INFO "my ip: %d.%d.%d.%d\n", NIPQUAD(ipaddress));
<para>
Note that these functions are slower than normal arithmetic, and
- so should not be used unnecessarily. On some platforms they
- are much slower, like 32-bit Sparc where they use a spinlock.
+ so should not be used unnecessarily.
</para>
<para>
- The second class of atomic operations is atomic bit operations on a
- <type>long</type>, defined in
+ The second class of atomic operations is atomic bit operations on an
+ <type>unsigned long</type>, defined in
<filename class="headerfile">include/linux/bitops.h</filename>. These
operations generally take a pointer to the bit pattern, and a bit
@@ -899,7 +921,7 @@ printk(KERN_INFO "my ip: %d.%d.%d.%d\n", NIPQUAD(ipaddress));
<function>test_and_clear_bit()</function> and
<function>test_and_change_bit()</function> do the same thing,
except return true if the bit was previously set; these are
- particularly useful for very simple locking.
+ particularly useful for atomically setting flags.
</para>
<para>
@@ -907,12 +929,6 @@ printk(KERN_INFO "my ip: %d.%d.%d.%d\n", NIPQUAD(ipaddress));
than BITS_PER_LONG. The resulting behavior is strange on big-endian
platforms though so it is a good idea not to do this.
</para>
-
- <para>
- Note that the order of bits depends on the architecture, and in
- particular, the bitfield passed to these operations must be at
- least as large as a <type>long</type>.
- </para>
</chapter>
<chapter id="symbols">
@@ -932,11 +948,8 @@ printk(KERN_INFO "my ip: %d.%d.%d.%d\n", NIPQUAD(ipaddress));
<filename class="headerfile">include/linux/module.h</filename></title>
<para>
- This is the classic method of exporting a symbol, and it works
- for both modules and non-modules. In the kernel all these
- declarations are often bundled into a single file to help
- genksyms (which searches source files for these declarations).
- See the comment on genksyms and Makefiles below.
+ This is the classic method of exporting a symbol: dynamically
+ loaded modules will be able to use the symbol as normal.
</para>
</sect1>
@@ -949,7 +962,8 @@ printk(KERN_INFO "my ip: %d.%d.%d.%d\n", NIPQUAD(ipaddress));
symbols exported by <function>EXPORT_SYMBOL_GPL()</function> can
only be seen by modules with a
<function>MODULE_LICENSE()</function> that specifies a GPL
- compatible license.
+ compatible license. It implies that the function is considered
+ an internal implementation issue, and not really an interface.
</para>
</sect1>
</chapter>
@@ -962,12 +976,13 @@ printk(KERN_INFO "my ip: %d.%d.%d.%d\n", NIPQUAD(ipaddress));
<filename class="headerfile">include/linux/list.h</filename></title>
<para>
- There are three sets of linked-list routines in the kernel
- headers, but this one seems to be winning out (and Linus has
- used it). If you don't have some particular pressing need for
- a single list, it's a good choice. In fact, I don't care
- whether it's a good choice or not, just use it so we can get
- rid of the others.
+ There used to be three sets of linked-list routines in the kernel
+ headers, but this one is the winner. If you don't have some
+ particular pressing need for a single list, it's a good choice.
+ </para>
+
+ <para>
+ In particular, <function>list_for_each_entry</function> is useful.
</para>
</sect1>
@@ -979,14 +994,13 @@ printk(KERN_INFO "my ip: %d.%d.%d.%d\n", NIPQUAD(ipaddress));
convention, and return <returnvalue>0</returnvalue> for success,
and a negative error number
(eg. <returnvalue>-EFAULT</returnvalue>) for failure. This can be
- unintuitive at first, but it's fairly widespread in the networking
- code, for example.
+ unintuitive at first, but it's fairly widespread in the kernel.
</para>
<para>
- The filesystem code uses <function>ERR_PTR()</function>
+ Using <function>ERR_PTR()</function>
- <filename class="headerfile">include/linux/fs.h</filename>; to
+ <filename class="headerfile">include/linux/err.h</filename>; to
encode a negative error number into a pointer, and
<function>IS_ERR()</function> and <function>PTR_ERR()</function>
to get it back out again: avoids a separate pointer parameter for
@@ -1040,7 +1054,7 @@ static struct block_device_operations opt_fops = {
supported, due to lack of general use, but the following are
considered standard (see the GCC info page section "C
Extensions" for more details - Yes, really the info page, the
- man page is only a short summary of the stuff in info):
+ man page is only a short summary of the stuff in info).
</para>
<itemizedlist>
<listitem>
@@ -1091,7 +1105,7 @@ static struct block_device_operations opt_fops = {
</listitem>
<listitem>
<para>
- Function names as strings (__FUNCTION__)
+ Function names as strings (__func__).
</para>
</listitem>
<listitem>
@@ -1164,63 +1178,35 @@ static struct block_device_operations opt_fops = {
<listitem>
<para>
Usually you want a configuration option for your kernel hack.
- Edit <filename>Config.in</filename> in the appropriate directory
- (but under <filename>arch/</filename> it's called
- <filename>config.in</filename>). The Config Language used is not
- bash, even though it looks like bash; the safe way is to use only
- the constructs that you already see in
- <filename>Config.in</filename> files (see
- <filename>Documentation/kbuild/kconfig-language.txt</filename>).
- It's good to run "make xconfig" at least once to test (because
- it's the only one with a static parser).
- </para>
-
- <para>
- Variables which can be Y or N use <type>bool</type> followed by a
- tagline and the config define name (which must start with
- CONFIG_). The <type>tristate</type> function is the same, but
- allows the answer M (which defines
- <symbol>CONFIG_foo_MODULE</symbol> in your source, instead of
- <symbol>CONFIG_FOO</symbol>) if <symbol>CONFIG_MODULES</symbol>
- is enabled.
+ Edit <filename>Kconfig</filename> in the appropriate directory.
+ The Config language is simple to use by cut and paste, and there's
+ complete documentation in
+ <filename>Documentation/kbuild/kconfig-language.txt</filename>.
</para>
<para>
You may well want to make your CONFIG option only visible if
<symbol>CONFIG_EXPERIMENTAL</symbol> is enabled: this serves as a
warning to users. There are many other fancy things you can do: see
- the various <filename>Config.in</filename> files for ideas.
+ the various <filename>Kconfig</filename> files for ideas.
</para>
- </listitem>
- <listitem>
<para>
- Edit the <filename>Makefile</filename>: the CONFIG variables are
- exported here so you can conditionalize compilation with `ifeq'.
- If your file exports symbols then add the names to
- <varname>export-objs</varname> so that genksyms will find them.
- <caution>
- <para>
- There is a restriction on the kernel build system that objects
- which export symbols must have globally unique names.
- If your object does not have a globally unique name then the
- standard fix is to move the
- <function>EXPORT_SYMBOL()</function> statements to their own
- object with a unique name.
- This is why several systems have separate exporting objects,
- usually suffixed with ksyms.
- </para>
- </caution>
+ In your description of the option, make sure you address both the
+ expert user and the user who knows nothing about your feature. Mention
+ incompatibilities and issues here. <emphasis> Definitely
+ </emphasis> end your description with <quote> if in doubt, say N
+ </quote> (or, occasionally, `Y'); this is for people who have no
+ idea what you are talking about.
</para>
</listitem>
<listitem>
<para>
- Document your option in Documentation/Configure.help. Mention
- incompatibilities and issues here. <emphasis> Definitely
- </emphasis> end your description with <quote> if in doubt, say N
- </quote> (or, occasionally, `Y'); this is for people who have no
- idea what you are talking about.
+ Edit the <filename>Makefile</filename>: the CONFIG variables are
+ exported here so you can usually just add an "obj-$(CONFIG_xxx) +=
+ xxx.o" line. The syntax is documented in
+ <filename>Documentation/kbuild/makefiles.txt</filename>.
</para>
</listitem>
@@ -1253,20 +1239,12 @@ static struct block_device_operations opt_fops = {
</para>
<para>
- <filename>include/linux/brlock.h:</filename>
+ <filename>include/asm-i386/delay.h:</filename>
</para>
<programlisting>
-extern inline void br_read_lock (enum brlock_indices idx)
-{
- /*
- * This causes a link-time bug message if an
- * invalid index is used:
- */
- if (idx >= __BR_END)
- __br_lock_usage_bug();
-
- read_lock(&amp;__brlock_array[smp_processor_id()][idx]);
-}
+#define ndelay(n) (__builtin_constant_p(n) ? \
+ ((n) > 20000 ? __bad_ndelay() : __const_udelay((n) * 5ul)) : \
+ __ndelay(n))
</programlisting>
<para>
diff --git a/Documentation/DocBook/usb.tmpl b/Documentation/DocBook/usb.tmpl
index f3ef0bf435e9..705c442c7bf4 100644
--- a/Documentation/DocBook/usb.tmpl
+++ b/Documentation/DocBook/usb.tmpl
@@ -841,7 +841,7 @@ usbdev_ioctl (int fd, int ifno, unsigned request, void *param)
File modification time is not updated by this request.
</para><para>
Those struct members are from some interface descriptor
- applying to the the current configuration.
+ applying to the current configuration.
The interface number is the bInterfaceNumber value, and
the altsetting number is the bAlternateSetting value.
(This resets each endpoint in the interface.)
diff --git a/Documentation/MSI-HOWTO.txt b/Documentation/MSI-HOWTO.txt
index d5032eb480aa..63edc5f847c4 100644
--- a/Documentation/MSI-HOWTO.txt
+++ b/Documentation/MSI-HOWTO.txt
@@ -430,7 +430,7 @@ which may result in system hang. The software driver of specific
MSI-capable hardware is responsible for whether calling
pci_enable_msi or not. A return of zero indicates the kernel
successfully initializes the MSI/MSI-X capability structure of the
-device funtion. The device function is now running on MSI/MSI-X mode.
+device function. The device function is now running on MSI/MSI-X mode.
5.6 How to tell whether MSI/MSI-X is enabled on device function
diff --git a/Documentation/RCU/RTFP.txt b/Documentation/RCU/RTFP.txt
index 9c6d450138ea..fcbcbc35b122 100644
--- a/Documentation/RCU/RTFP.txt
+++ b/Documentation/RCU/RTFP.txt
@@ -2,7 +2,8 @@ Read the F-ing Papers!
This document describes RCU-related publications, and is followed by
-the corresponding bibtex entries.
+the corresponding bibtex entries. A number of the publications may
+be found at http://www.rdrop.com/users/paulmck/RCU/.
The first thing resembling RCU was published in 1980, when Kung and Lehman
[Kung80] recommended use of a garbage collector to defer destruction
@@ -113,6 +114,10 @@ describing how to make RCU safe for soft-realtime applications [Sarma04c],
and a paper describing SELinux performance with RCU [JamesMorris04b].
+2005 has seen further adaptation of RCU to realtime use, permitting
+preemption of RCU realtime critical sections [PaulMcKenney05a,
+PaulMcKenney05b].
+
Bibtex Entries
@article{Kung80
@@ -410,3 +415,32 @@ Oregon Health and Sciences University"
\url{http://www.livejournal.com/users/james_morris/2153.html}
[Viewed December 10, 2004]"
}
+
+@unpublished{PaulMcKenney05a
+,Author="Paul E. McKenney"
+,Title="{[RFC]} {RCU} and {CONFIG\_PREEMPT\_RT} progress"
+,month="May"
+,year="2005"
+,note="Available:
+\url{http://lkml.org/lkml/2005/5/9/185}
+[Viewed May 13, 2005]"
+,annotation="
+ First publication of working lock-based deferred free patches
+ for the CONFIG_PREEMPT_RT environment.
+"
+}
+
+@conference{PaulMcKenney05b
+,Author="Paul E. McKenney and Dipankar Sarma"
+,Title="Towards Hard Realtime Response from the Linux Kernel on SMP Hardware"
+,Booktitle="linux.conf.au 2005"
+,month="April"
+,year="2005"
+,address="Canberra, Australia"
+,note="Available:
+\url{http://www.rdrop.com/users/paulmck/RCU/realtimeRCU.2005.04.23a.pdf}
+[Viewed May 13, 2005]"
+,annotation="
+ Realtime turns into making RCU yet more realtime friendly.
+"
+}
diff --git a/Documentation/RCU/UP.txt b/Documentation/RCU/UP.txt
index 3bfb84b3b7db..aab4a9ec3931 100644
--- a/Documentation/RCU/UP.txt
+++ b/Documentation/RCU/UP.txt
@@ -8,7 +8,7 @@ is that since there is only one CPU, it should not be necessary to
wait for anything else to get done, since there are no other CPUs for
anything else to be happening on. Although this approach will -sort- -of-
work a surprising amount of the time, it is a very bad idea in general.
-This document presents two examples that demonstrate exactly how bad an
+This document presents three examples that demonstrate exactly how bad an
idea this is.
@@ -26,6 +26,9 @@ from softirq, the list scan would find itself referencing a newly freed
element B. This situation can greatly decrease the life expectancy of
your kernel.
+This same problem can occur if call_rcu() is invoked from a hardware
+interrupt handler.
+
Example 2: Function-Call Fatality
@@ -44,8 +47,37 @@ its arguments would cause it to fail to make the fundamental guarantee
underlying RCU, namely that call_rcu() defers invoking its arguments until
all RCU read-side critical sections currently executing have completed.
-Quick Quiz: why is it -not- legal to invoke synchronize_rcu() in
-this case?
+Quick Quiz #1: why is it -not- legal to invoke synchronize_rcu() in
+ this case?
+
+
+Example 3: Death by Deadlock
+
+Suppose that call_rcu() is invoked while holding a lock, and that the
+callback function must acquire this same lock. In this case, if
+call_rcu() were to directly invoke the callback, the result would
+be self-deadlock.
+
+In some cases, it would be possible to restructure the code so that
+the call_rcu() is delayed until after the lock is released. However,
+there are cases where this can be quite ugly:
+
+1. If a number of items need to be passed to call_rcu() within
+ the same critical section, then the code would need to create
+ a list of them, then traverse the list once the lock was
+ released.
+
+2. In some cases, the lock will be held across some kernel API,
+ so that delaying the call_rcu() until the lock is released
+ requires that the data item be passed up via a common API.
+ It is far better to guarantee that callbacks are invoked
+ with no locks held than to have to modify such APIs to allow
+ arbitrary data items to be passed back up through them.
+
+If call_rcu() directly invokes the callback, painful locking restrictions
+or API changes would be required.
+
+Quick Quiz #2: What locking restriction must RCU callbacks respect?
Summary
@@ -53,12 +85,35 @@ Summary
Permitting call_rcu() to immediately invoke its arguments or permitting
synchronize_rcu() to immediately return breaks RCU, even on a UP system.
So do not do it! Even on a UP system, the RCU infrastructure -must-
-respect grace periods.
-
-
-Answer to Quick Quiz
-
-The calling function is scanning an RCU-protected linked list, and
-is therefore within an RCU read-side critical section. Therefore,
-the called function has been invoked within an RCU read-side critical
-section, and is not permitted to block.
+respect grace periods, and -must- invoke callbacks from a known environment
+in which no locks are held.
+
+
+Answer to Quick Quiz #1:
+ Why is it -not- legal to invoke synchronize_rcu() in this case?
+
+ Because the calling function is scanning an RCU-protected linked
+ list, and is therefore within an RCU read-side critical section.
+ Therefore, the called function has been invoked within an RCU
+ read-side critical section, and is not permitted to block.
+
+Answer to Quick Quiz #2:
+ What locking restriction must RCU callbacks respect?
+
+ Any lock that is acquired within an RCU callback must be
+ acquired elsewhere using an _irq variant of the spinlock
+ primitive. For example, if "mylock" is acquired by an
+ RCU callback, then a process-context acquisition of this
+ lock must use something like spin_lock_irqsave() to
+ acquire the lock.
+
+ If the process-context code were to simply use spin_lock(),
+ then, since RCU callbacks can be invoked from softirq context,
+ the callback might be called from a softirq that interrupted
+ the process-context critical section. This would result in
+ self-deadlock.
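+
+	For example (the lock name and the data it protects are made
+	up for illustration):
+
+	unsigned long flags;
+
+	spin_lock_irqsave(&mylock, flags);  /* keeps out the softirq
+					       that runs RCU callbacks */
+	/* ... update data also touched by the RCU callback ... */
+	spin_unlock_irqrestore(&mylock, flags);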
+
+ This restriction might seem gratuitous, since very few RCU
+ callbacks acquire locks directly. However, a great many RCU
+ callbacks do acquire locks -indirectly-, for example, via
+ the kfree() primitive.
diff --git a/Documentation/RCU/checklist.txt b/Documentation/RCU/checklist.txt
index 8f3fb77c9cd3..e118a7c1a092 100644
--- a/Documentation/RCU/checklist.txt
+++ b/Documentation/RCU/checklist.txt
@@ -43,6 +43,10 @@ over a rather long period of time, but improvements are always welcome!
rcu_read_lock_bh()) in the read-side critical sections,
and are also an excellent aid to readability.
+ As a rough rule of thumb, any dereference of an RCU-protected
+ pointer must be covered by rcu_read_lock() or rcu_read_lock_bh()
+ or by the appropriate update-side lock.
+
3. Does the update code tolerate concurrent accesses?
The whole point of RCU is to permit readers to run without
@@ -90,7 +94,11 @@ over a rather long period of time, but improvements are always welcome!
The rcu_dereference() primitive is used by the various
"_rcu()" list-traversal primitives, such as the
- list_for_each_entry_rcu().
+ list_for_each_entry_rcu(). Note that it is perfectly
+ legal (if redundant) for update-side code to use
+ rcu_dereference() and the "_rcu()" list-traversal
+ primitives. This is particularly useful in code
+ that is common to readers and updaters.
b. If the list macros are being used, the list_add_tail_rcu()
and list_add_rcu() primitives must be used in order
@@ -150,16 +158,9 @@ over a rather long period of time, but improvements are always welcome!
Use of the _rcu() list-traversal primitives outside of an
RCU read-side critical section causes no harm other than
- a slight performance degradation on Alpha CPUs and some
- confusion on the part of people trying to read the code.
-
- Another way of thinking of this is "If you are holding the
- lock that prevents the data structure from changing, why do
- you also need RCU-based protection?" That said, there may
- well be situations where use of the _rcu() list-traversal
- primitives while the update-side lock is held results in
- simpler and more maintainable code. The jury is still out
- on this question.
+ a slight performance degradation on Alpha CPUs. It can
+ also be quite helpful in reducing code bloat when common
+ code is shared between readers and updaters.
10. Conversely, if you are in an RCU read-side critical section,
you -must- use the "_rcu()" variants of the list macros.
diff --git a/Documentation/RCU/rcu.txt b/Documentation/RCU/rcu.txt
index eb444006683e..6fa092251586 100644
--- a/Documentation/RCU/rcu.txt
+++ b/Documentation/RCU/rcu.txt
@@ -64,6 +64,54 @@ o I hear that RCU is patented? What is with that?
Of these, one was allowed to lapse by the assignee, and the
others have been contributed to the Linux kernel under GPL.
+o I hear that RCU needs work in order to support realtime kernels?
+
+ Yes, work in progress.
+
o Where can I find more information on RCU?
See the RTFP.txt file in this directory.
+ Or point your browser at http://www.rdrop.com/users/paulmck/RCU/.
+
+o What are all these files in this directory?
+
+
+ NMI-RCU.txt
+
+ Describes how to use RCU to implement dynamic
+ NMI handlers, which can be revectored on the fly,
+ without rebooting.
+
+ RTFP.txt
+
+ List of RCU-related publications and web sites.
+
+ UP.txt
+
+ Discussion of RCU usage in UP kernels.
+
+ arrayRCU.txt
+
+ Describes how to use RCU to protect arrays, with
+ resizeable arrays whose elements reference other
+ data structures being of the most interest.
+
+ checklist.txt
+
+ Lists things to check for when inspecting code that
+ uses RCU.
+
+ listRCU.txt
+
+ Describes how to use RCU to protect linked lists.
+ This is the simplest and most common use of RCU
+ in the Linux kernel.
+
+ rcu.txt
+
+ You are reading it!
+
+ whatisRCU.txt
+
+ Overview of how the RCU implementation works. Along
+ the way, presents a conceptual view of RCU.
diff --git a/Documentation/RCU/rcuref.txt b/Documentation/RCU/rcuref.txt
new file mode 100644
index 000000000000..a23fee66064d
--- /dev/null
+++ b/Documentation/RCU/rcuref.txt
@@ -0,0 +1,74 @@
+Refcounter framework for elements of lists/arrays protected by
+RCU.
+
+Refcounting on elements of lists which are protected by traditional
+reader/writer spinlocks or semaphores is straightforward, as in:
+
+1. 2.
+add() search_and_reference()
+{ {
+ alloc_object read_lock(&list_lock);
+ ... search_for_element
+ atomic_set(&el->rc, 1); atomic_inc(&el->rc);
+ write_lock(&list_lock); ...
+ add_element read_unlock(&list_lock);
+ ... ...
+ write_unlock(&list_lock); }
+}
+
+3. 4.
+release_referenced() delete()
+{ {
+ ... write_lock(&list_lock);
+ atomic_dec(&el->rc, relfunc) ...
+ ... delete_element
+} write_unlock(&list_lock);
+ ...
+ if (atomic_dec_and_test(&el->rc))
+ kfree(el);
+ ...
+ }
+
+If this list/array is made lock-free using RCU, as in changing the
+write_lock() in add() and delete() to spin_lock() and changing the
+read_lock() in search_and_reference() to rcu_read_lock(), the
+atomic_inc() in search_and_reference() could potentially hold a
+reference to an element which has already been deleted from the
+list/array. Using rcuref_inc_lf() instead takes care of this
+scenario. search_and_reference() should then look as follows:
+
+1. 2.
+add() search_and_reference()
+{ {
+ alloc_object rcu_read_lock();
+ ... search_for_element
+ atomic_set(&el->rc, 1); if (rcuref_inc_lf(&el->rc)) {
+ write_lock(&list_lock); rcu_read_unlock();
+ return FAIL;
+ add_element }
+ ... ...
+ write_unlock(&list_lock); rcu_read_unlock();
+} }
+3. 4.
+release_referenced() delete()
+{ {
+ ... write_lock(&list_lock);
+ rcuref_dec(&el->rc, relfunc) ...
+ ... delete_element
+} write_unlock(&list_lock);
+ ...
+ if (rcuref_dec_and_test(&el->rc))
+ call_rcu(&el->head, el_free);
+ ...
+ }
+
+Sometimes a reference to the element needs to be obtained in the
+update (write) stream. In such cases, rcuref_inc_lf might be overkill,
+since the spinlock serialising list updates is already held; rcuref_inc
+is to be used in such cases.
+
+For arches which do not have cmpxchg, the rcuref_inc_lf API uses a
+hashed spinlock implementation, and the same hashed spinlock is
+acquired in all rcuref_xxx primitives to preserve atomicity.
+
+Note: Use the rcuref_inc API only if you need to use rcuref_inc_lf on
+the refcounter at least at one place. Mixing rcuref_inc and atomic_xxx
+APIs might lead to races. rcuref_inc_lf() must be used in lock-free
+RCU critical sections only.
diff --git a/Documentation/RCU/whatisRCU.txt b/Documentation/RCU/whatisRCU.txt
new file mode 100644
index 000000000000..354d89c78377
--- /dev/null
+++ b/Documentation/RCU/whatisRCU.txt
@@ -0,0 +1,902 @@
+What is RCU?
+
+RCU is a synchronization mechanism that was added to the Linux kernel
+during the 2.5 development effort that is optimized for read-mostly
+situations. Although RCU is actually quite simple once you understand it,
+getting there can sometimes be a challenge. Part of the problem is that
+most of the past descriptions of RCU have been written with the mistaken
+assumption that there is "one true way" to describe RCU. Instead,
+the experience has been that different people must take different paths
+to arrive at an understanding of RCU. This document provides several
+different paths, as follows:
+
+1. RCU OVERVIEW
+2. WHAT IS RCU'S CORE API?
+3. WHAT ARE SOME EXAMPLE USES OF CORE RCU API?
+4. WHAT IF MY UPDATING THREAD CANNOT BLOCK?
+5. WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?
+6. ANALOGY WITH READER-WRITER LOCKING
+7. FULL LIST OF RCU APIs
+8. ANSWERS TO QUICK QUIZZES
+
+People who prefer starting with a conceptual overview should focus on
+Section 1, though most readers will profit by reading this section at
+some point. People who prefer to start with an API that they can then
+experiment with should focus on Section 2. People who prefer to start
+with example uses should focus on Sections 3 and 4. People who need to
+understand the RCU implementation should focus on Section 5, then dive
+into the kernel source code. People who reason best by analogy should
+focus on Section 6. Section 7 serves as an index to the docbook API
+documentation, and Section 8 is the traditional answer key.
+
+So, start with the section that makes the most sense to you and your
+preferred method of learning. If you need to know everything about
+everything, feel free to read the whole thing -- but if you are really
+that type of person, you have perused the source code and will therefore
+never need this document anyway. ;-)
+
+
+1. RCU OVERVIEW
+
+The basic idea behind RCU is to split updates into "removal" and
+"reclamation" phases. The removal phase removes references to data items
+within a data structure (possibly by replacing them with references to
+new versions of these data items), and can run concurrently with readers.
+The reason that it is safe to run the removal phase concurrently with
+readers is the semantics of modern CPUs guarantee that readers will see
+either the old or the new version of the data structure rather than a
+partially updated reference. The reclamation phase does the work of reclaiming
+(e.g., freeing) the data items removed from the data structure during the
+removal phase. Because reclaiming data items can disrupt any readers
+concurrently referencing those data items, the reclamation phase must
+not start until readers no longer hold references to those data items.
+
+Splitting the update into removal and reclamation phases permits the
+updater to perform the removal phase immediately, and to defer the
+reclamation phase until all readers active during the removal phase have
+completed, either by blocking until they finish or by registering a
+callback that is invoked after they finish. Only readers that are active
+during the removal phase need be considered, because any reader starting
+after the removal phase will be unable to gain a reference to the removed
+data items, and therefore cannot be disrupted by the reclamation phase.
+
+So the typical RCU update sequence goes something like the following:
+
+a. Remove pointers to a data structure, so that subsequent
+ readers cannot gain a reference to it.
+
+b. Wait for all previous readers to complete their RCU read-side
+ critical sections.
+
+c. At this point, there cannot be any readers who hold references
+ to the data structure, so it now may safely be reclaimed
+ (e.g., kfree()d).
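+
+In code, steps (a) through (c) might look like the following sketch
+(the lock, list, and element names are invented for illustration):
+
+	spin_lock(&mylock);		/* exclude other updaters */
+	list_del_rcu(&p->list);		/* step (a) */
+	spin_unlock(&mylock);
+	synchronize_rcu();		/* step (b) */
+	kfree(p);			/* step (c) */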
+
+Step (b) above is the key idea underlying RCU's deferred destruction.
+The ability to wait until all readers are done allows RCU readers to
+use much lighter-weight synchronization, in some cases, absolutely no
+synchronization at all. In contrast, in more conventional lock-based
+schemes, readers must use heavy-weight synchronization in order to
+prevent an updater from deleting the data structure out from under them.
+This is because lock-based updaters typically update data items in place,
+and must therefore exclude readers. In contrast, RCU-based updaters
+typically take advantage of the fact that writes to single aligned
+pointers are atomic on modern CPUs, allowing atomic insertion, removal,
+and replacement of data items in a linked structure without disrupting
+readers. Concurrent RCU readers can then continue accessing the old
+versions, and can dispense with the atomic operations, memory barriers,
+and communications cache misses that are so expensive on present-day
+SMP computer systems, even in the absence of lock contention.
+
+In the three-step procedure shown above, the updater is performing both
+the removal and the reclamation step, but it is often helpful for an
+entirely different thread to do the reclamation, as is in fact the case
+in the Linux kernel's directory-entry cache (dcache). Even if the same
+thread performs both the update step (step (a) above) and the reclamation
+step (step (c) above), it is often helpful to think of them separately.
+For example, RCU readers and updaters need not communicate at all,
+but RCU provides implicit low-overhead communication between readers
+and reclaimers, namely, in step (b) above.
+
+So how the heck can a reclaimer tell when a reader is done, given
+that readers are not doing any sort of synchronization operations???
+Read on to learn about how RCU's API makes this easy.
+
+
+2. WHAT IS RCU'S CORE API?
+
+The core RCU API is quite small:
+
+a. rcu_read_lock()
+b. rcu_read_unlock()
+c. synchronize_rcu() / call_rcu()
+d. rcu_assign_pointer()
+e. rcu_dereference()
+
+There are many other members of the RCU API, but the rest can be
+expressed in terms of these five, though most implementations instead
+express synchronize_rcu() in terms of the call_rcu() callback API.
+
+The five core RCU APIs are described below, the other 18 will be enumerated
+later. See the kernel docbook documentation for more info, or look directly
+at the function header comments.
+
+rcu_read_lock()
+
+ void rcu_read_lock(void);
+
+ Used by a reader to inform the reclaimer that the reader is
+ entering an RCU read-side critical section. It is illegal
+ to block while in an RCU read-side critical section, though
+ kernels built with CONFIG_PREEMPT_RCU can preempt RCU read-side
+ critical sections. Any RCU-protected data structure accessed
+ during an RCU read-side critical section is guaranteed to remain
+ unreclaimed for the full duration of that critical section.
+ Reference counts may be used in conjunction with RCU to maintain
+ longer-term references to data structures.
+
+rcu_read_unlock()
+
+ void rcu_read_unlock(void);
+
+ Used by a reader to inform the reclaimer that the reader is
+ exiting an RCU read-side critical section. Note that RCU
+ read-side critical sections may be nested and/or overlapping.
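+
+	As an illustration (a complete example appears in Section 3),
+	a reader might bracket its accesses as follows, where "gp" is a
+	hypothetical RCU-protected global pointer, do_something_with()
+	is a placeholder, and rcu_dereference() is described below:
+
+		rcu_read_lock();
+		p = rcu_dereference(gp);
+		if (p != NULL)
+			do_something_with(p->a);
+		rcu_read_unlock();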
+
+synchronize_rcu()
+
+ void synchronize_rcu(void);
+
+ Marks the end of updater code and the beginning of reclaimer
+ code. It does this by blocking until all pre-existing RCU
+ read-side critical sections on all CPUs have completed.
+ Note that synchronize_rcu() will -not- necessarily wait for
+ any subsequent RCU read-side critical sections to complete.
+ For example, consider the following sequence of events:
+
+ CPU 0 CPU 1 CPU 2
+ ----------------- ------------------------- ---------------
+ 1. rcu_read_lock()
+ 2. enters synchronize_rcu()
+ 3. rcu_read_lock()
+ 4. rcu_read_unlock()
+ 5. exits synchronize_rcu()
+ 6. rcu_read_unlock()
+
+ To reiterate, synchronize_rcu() waits only for ongoing RCU
+ read-side critical sections to complete, not necessarily for
+ any that begin after synchronize_rcu() is invoked.
+
+ Of course, synchronize_rcu() does not necessarily return
+ -immediately- after the last pre-existing RCU read-side critical
+ section completes. For one thing, there might well be scheduling
+ delays. For another thing, many RCU implementations process
+ requests in batches in order to improve efficiencies, which can
+ further delay synchronize_rcu().
+
+ Since synchronize_rcu() is the API that must figure out when
+ readers are done, its implementation is key to RCU. For RCU
+ to be useful in all but the most read-intensive situations,
+ synchronize_rcu()'s overhead must also be quite small.
+
+ The call_rcu() API is a callback form of synchronize_rcu(),
+ and is described in more detail in a later section. Instead of
+ blocking, it registers a function and argument which are invoked
+ after all ongoing RCU read-side critical sections have completed.
+ This callback variant is particularly useful in situations where
+ it is illegal to block.
+
+rcu_assign_pointer()
+
+ typeof(p) rcu_assign_pointer(p, typeof(p) v);
+
+ Yes, rcu_assign_pointer() -is- implemented as a macro, though it
+ would be cool to be able to declare a function in this manner.
+ (Compiler experts will no doubt disagree.)
+
+ The updater uses this function to assign a new value to an
+ RCU-protected pointer, in order to safely communicate the change
+ in value from the updater to the reader. This function returns
+ the new value, and also executes any memory-barrier instructions
+ required for a given CPU architecture.
+
+ Perhaps more important, it serves to document which pointers
+ are protected by RCU. That said, rcu_assign_pointer() is most
+ frequently used indirectly, via the _rcu list-manipulation
+ primitives such as list_add_rcu().
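+
+	For instance, publishing a new element onto an RCU-protected list
+	might look like the following sketch, where the list "mylist" and
+	the lock "mylist_lock" are hypothetical; list_add_rcu() supplies
+	the rcu_assign_pointer() semantics internally:
+
+		spin_lock(&mylist_lock);		/* exclude other updaters */
+		list_add_rcu(&p->list, &mylist);	/* publish to readers     */
+		spin_unlock(&mylist_lock);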
+
+rcu_dereference()
+
+ typeof(p) rcu_dereference(p);
+
+ Like rcu_assign_pointer(), rcu_dereference() must be implemented
+ as a macro.
+
+ The reader uses rcu_dereference() to fetch an RCU-protected
+ pointer, which returns a value that may then be safely
+	dereferenced. Note that rcu_dereference() does not actually
+	dereference the pointer; instead, it protects the pointer for
+ later dereferencing. It also executes any needed memory-barrier
+ instructions for a given CPU architecture. Currently, only Alpha
+ needs memory barriers within rcu_dereference() -- on other CPUs,
+ it compiles to nothing, not even a compiler directive.
+
+ Common coding practice uses rcu_dereference() to copy an
+ RCU-protected pointer to a local variable, then dereferences
+ this local variable, for example as follows:
+
+ p = rcu_dereference(head.next);
+ return p->data;
+
+ However, in this case, one could just as easily combine these
+ into one statement:
+
+ return rcu_dereference(head.next)->data;
+
+ If you are going to be fetching multiple fields from the
+ RCU-protected structure, using the local variable is of
+ course preferred. Repeated rcu_dereference() calls look
+ ugly and incur unnecessary overhead on Alpha CPUs.
+
+ Note that the value returned by rcu_dereference() is valid
+ only within the enclosing RCU read-side critical section.
+ For example, the following is -not- legal:
+
+ rcu_read_lock();
+ p = rcu_dereference(head.next);
+ rcu_read_unlock();
+ x = p->address;
+ rcu_read_lock();
+ y = p->data;
+ rcu_read_unlock();
+
+ Holding a reference from one RCU read-side critical section
+ to another is just as illegal as holding a reference from
+ one lock-based critical section to another! Similarly,
+ using a reference outside of the critical section in which
+ it was acquired is just as illegal as doing so with normal
+ locking.
+
+ As with rcu_assign_pointer(), an important function of
+ rcu_dereference() is to document which pointers are protected
+ by RCU. And, again like rcu_assign_pointer(), rcu_dereference()
+ is typically used indirectly, via the _rcu list-manipulation
+ primitives, such as list_for_each_entry_rcu().
+
+The following diagram shows how each API communicates among the
+reader, updater, and reclaimer.
+
+
+ rcu_assign_pointer()
+ +--------+
+ +---------------------->| reader |---------+
+ | +--------+ |
+ | | |
+ | | | Protect:
+ | | | rcu_read_lock()
+ | | | rcu_read_unlock()
+ | rcu_dereference() | |
+ +---------+ | |
+ | updater |<---------------------+ |
+ +---------+ V
+ | +-----------+
+ +----------------------------------->| reclaimer |
+ +-----------+
+ Defer:
+ synchronize_rcu() & call_rcu()
+
+
+The RCU infrastructure observes the time sequence of rcu_read_lock(),
+rcu_read_unlock(), synchronize_rcu(), and call_rcu() invocations in
+order to determine when (1) synchronize_rcu() invocations may return
+to their callers and (2) call_rcu() callbacks may be invoked. Efficient
+implementations of the RCU infrastructure make heavy use of batching in
+order to amortize their overhead over many uses of the corresponding APIs.
+
+There are no fewer than three RCU mechanisms in the Linux kernel; the
+diagram above shows the first one, which is by far the most commonly used.
+The rcu_dereference() and rcu_assign_pointer() primitives are used for
+all three mechanisms, but different defer and protect primitives are
+used as follows:
+
+ Defer Protect
+
+a. synchronize_rcu() rcu_read_lock() / rcu_read_unlock()
+ call_rcu()
+
+b. call_rcu_bh() rcu_read_lock_bh() / rcu_read_unlock_bh()
+
+c. synchronize_sched() preempt_disable() / preempt_enable()
+ local_irq_save() / local_irq_restore()
+ hardirq enter / hardirq exit
+ NMI enter / NMI exit
+
+These three mechanisms are used as follows:
+
+a. RCU applied to normal data structures.
+
+b. RCU applied to networking data structures that may be subjected
+ to remote denial-of-service attacks.
+
+c. RCU applied to scheduler and interrupt/NMI-handler tasks.
+
+Again, most uses will be of (a). The (b) and (c) cases are important
+for specialized uses, but are relatively uncommon.
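+
+As an illustration of case (b), a reader running in softirq context might
+use the _bh variants around an RCU-protected list traversal, where the
+list "mylist" and the handler handle_entry() are hypothetical:
+
+	rcu_read_lock_bh();
+	list_for_each_entry_rcu(p, &mylist, list)
+		handle_entry(p);
+	rcu_read_unlock_bh();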
+
+
+3. WHAT ARE SOME EXAMPLE USES OF CORE RCU API?
+
+This section shows a simple use of the core RCU API to protect a
+global pointer to a dynamically allocated structure. More typical
+uses of RCU may be found in listRCU.txt, arrayRCU.txt, and NMI-RCU.txt.
+
+ struct foo {
+ int a;
+ char b;
+ long c;
+ };
+ DEFINE_SPINLOCK(foo_mutex);
+
+ struct foo *gbl_foo;
+
+ /*
+ * Create a new struct foo that is the same as the one currently
+ * pointed to by gbl_foo, except that field "a" is replaced
+ * with "new_a". Points gbl_foo to the new structure, and
+ * frees up the old structure after a grace period.
+ *
+ * Uses rcu_assign_pointer() to ensure that concurrent readers
+ * see the initialized version of the new structure.
+ *
+ * Uses synchronize_rcu() to ensure that any readers that might
+ * have references to the old structure complete before freeing
+ * the old structure.
+ */
+ void foo_update_a(int new_a)
+ {
+ struct foo *new_fp;
+ struct foo *old_fp;
+
+	new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
+ spin_lock(&foo_mutex);
+ old_fp = gbl_foo;
+ *new_fp = *old_fp;
+ new_fp->a = new_a;
+ rcu_assign_pointer(gbl_foo, new_fp);
+ spin_unlock(&foo_mutex);
+ synchronize_rcu();
+ kfree(old_fp);
+ }
+
+ /*
+ * Return the value of field "a" of the current gbl_foo
+ * structure. Use rcu_read_lock() and rcu_read_unlock()
+ * to ensure that the structure does not get deleted out
+ * from under us, and use rcu_dereference() to ensure that
+ * we see the initialized version of the structure (important
+ * for DEC Alpha and for people reading the code).
+ */
+ int foo_get_a(void)
+ {
+ int retval;
+
+ rcu_read_lock();
+ retval = rcu_dereference(gbl_foo)->a;
+ rcu_read_unlock();
+ return retval;
+ }
+
+So, to sum up:
+
+o Use rcu_read_lock() and rcu_read_unlock() to guard RCU
+ read-side critical sections.
+
+o Within an RCU read-side critical section, use rcu_dereference()
+ to dereference RCU-protected pointers.
+
+o Use some solid scheme (such as locks or semaphores) to
+ keep concurrent updates from interfering with each other.
+
+o Use rcu_assign_pointer() to update an RCU-protected pointer.
+ This primitive protects concurrent readers from the updater,
+ -not- concurrent updates from each other! You therefore still
+ need to use locking (or something similar) to keep concurrent
+ rcu_assign_pointer() primitives from interfering with each other.
+
+o Use synchronize_rcu() -after- removing a data element from an
+ RCU-protected data structure, but -before- reclaiming/freeing
+ the data element, in order to wait for the completion of all
+ RCU read-side critical sections that might be referencing that
+ data item.
+
+See checklist.txt for additional rules to follow when using RCU.
+
+
+4. WHAT IF MY UPDATING THREAD CANNOT BLOCK?
+
+In the example above, foo_update_a() blocks until a grace period elapses.
+This is quite simple, but in some cases one cannot afford to wait so
+long -- there might be other high-priority work to be done.
+
+In such cases, one uses call_rcu() rather than synchronize_rcu().
+The call_rcu() API is as follows:
+
+ void call_rcu(struct rcu_head * head,
+ void (*func)(struct rcu_head *head));
+
+This function invokes func(head) after a grace period has elapsed.
+This invocation might happen from either softirq or process context,
+so the function is not permitted to block. The foo struct needs to
+have an rcu_head structure added, perhaps as follows:
+
+ struct foo {
+ int a;
+ char b;
+ long c;
+ struct rcu_head rcu;
+ };
+
+The foo_update_a() function might then be written as follows:
+
+ /*
+ * Create a new struct foo that is the same as the one currently
+ * pointed to by gbl_foo, except that field "a" is replaced
+ * with "new_a". Points gbl_foo to the new structure, and
+ * frees up the old structure after a grace period.
+ *
+ * Uses rcu_assign_pointer() to ensure that concurrent readers
+ * see the initialized version of the new structure.
+ *
+ * Uses call_rcu() to ensure that any readers that might have
+ * references to the old structure complete before freeing the
+ * old structure.
+ */
+ void foo_update_a(int new_a)
+ {
+ struct foo *new_fp;
+ struct foo *old_fp;
+
+	new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
+ spin_lock(&foo_mutex);
+ old_fp = gbl_foo;
+ *new_fp = *old_fp;
+ new_fp->a = new_a;
+ rcu_assign_pointer(gbl_foo, new_fp);
+ spin_unlock(&foo_mutex);
+ call_rcu(&old_fp->rcu, foo_reclaim);
+ }
+
+The foo_reclaim() function might appear as follows:
+
+ void foo_reclaim(struct rcu_head *rp)
+ {
+ struct foo *fp = container_of(rp, struct foo, rcu);
+
+ kfree(fp);
+ }
+
+The container_of() primitive is a macro that, given a pointer to a field
+within a struct, the type of the struct, and the name of that field,
+returns a pointer to the beginning of the enclosing struct.
+
+The use of call_rcu() permits the caller of foo_update_a() to
+immediately regain control, without needing to worry further about the
+old version of the newly updated element. It also clearly shows the
+RCU distinction between updater, namely foo_update_a(), and reclaimer,
+namely foo_reclaim().
+
+The summary of advice is the same as for the previous section, except
+that we are now using call_rcu() rather than synchronize_rcu():
+
+o Use call_rcu() -after- removing a data element from an
+ RCU-protected data structure in order to register a callback
+ function that will be invoked after the completion of all RCU
+ read-side critical sections that might be referencing that
+ data item.
+
+Again, see checklist.txt for additional rules governing the use of RCU.
+
+
+5. WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?
+
+One of the nice things about RCU is that it has extremely simple "toy"
+implementations that are a good first step towards understanding the
+production-quality implementations in the Linux kernel. This section
+presents two such "toy" implementations of RCU, one that is implemented
+in terms of familiar locking primitives, and another that more closely
+resembles "classic" RCU. Both are way too simple for real-world use,
+lacking both functionality and performance. However, they are useful
+in getting a feel for how RCU works. See kernel/rcupdate.c for a
+production-quality implementation, and see:
+
+ http://www.rdrop.com/users/paulmck/RCU
+
+for papers describing the Linux kernel RCU implementation. The OLS'01
+and OLS'02 papers are a good introduction, and the dissertation provides
+more details on the current implementation.
+
+
+5A. "TOY" IMPLEMENTATION #1: LOCKING
+
+This section presents a "toy" RCU implementation that is based on
+familiar locking primitives. Its overhead makes it a non-starter for
+real-life use, as does its lack of scalability. It is also unsuitable
+for realtime use, since it allows scheduling latency to "bleed" from
+one read-side critical section to another.
+
+However, it is probably the easiest implementation to relate to, so is
+a good starting point.
+
+It is extremely simple:
+
+ static DEFINE_RWLOCK(rcu_gp_mutex);
+
+ void rcu_read_lock(void)
+ {
+ read_lock(&rcu_gp_mutex);
+ }
+
+ void rcu_read_unlock(void)
+ {
+ read_unlock(&rcu_gp_mutex);
+ }
+
+ void synchronize_rcu(void)
+ {
+ write_lock(&rcu_gp_mutex);
+ write_unlock(&rcu_gp_mutex);
+ }
+
+[You can ignore rcu_assign_pointer() and rcu_dereference() without
+missing much. But here they are anyway. And whatever you do, don't
+forget about them when submitting patches making use of RCU!]
+
+ #define rcu_assign_pointer(p, v) ({ \
+ smp_wmb(); \
+ (p) = (v); \
+ })
+
+ #define rcu_dereference(p) ({ \
+ typeof(p) _________p1 = p; \
+ smp_read_barrier_depends(); \
+ (_________p1); \
+ })
+
+
+The rcu_read_lock() and rcu_read_unlock() primitives read-acquire
+and release a global reader-writer lock. The synchronize_rcu()
+primitive write-acquires this same lock, then immediately releases
+it. This means that once synchronize_rcu() exits, all RCU read-side
+critical sections that were in progress before synchronize_rcu() was
+called are guaranteed to have completed -- there is no way that
+synchronize_rcu() would have been able to write-acquire the lock
+otherwise.
+
+It is possible to nest rcu_read_lock(), since reader-writer locks may
+be recursively acquired. Note also that rcu_read_lock() is immune
+from deadlock (an important property of RCU). The reason for this is
+that the only thing that can block rcu_read_lock() is a synchronize_rcu().
+But synchronize_rcu() does not acquire any locks while holding rcu_gp_mutex,
+so there can be no deadlock cycle.
+
+Quick Quiz #1: Why is this argument naive? How could a deadlock
+ occur when using this algorithm in a real-world Linux
+ kernel? How could this deadlock be avoided?
+
+
+5B. "TOY" EXAMPLE #2: CLASSIC RCU
+
+This section presents a "toy" RCU implementation that is based on
+"classic RCU". It is also short on performance (but only for updates) and
+on features such as hotplug CPU and the ability to run in CONFIG_PREEMPT
+kernels. The definitions of rcu_dereference() and rcu_assign_pointer()
+are the same as those shown in the preceding section, so they are omitted.
+
+ void rcu_read_lock(void) { }
+
+ void rcu_read_unlock(void) { }
+
+ void synchronize_rcu(void)
+ {
+ int cpu;
+
+ for_each_cpu(cpu)
+ run_on(cpu);
+ }
+
+Note that rcu_read_lock() and rcu_read_unlock() do absolutely nothing.
+This is the great strength of classic RCU in a non-preemptive kernel:
+read-side overhead is precisely zero, at least on non-Alpha CPUs.
+And there is absolutely no way that rcu_read_lock() can possibly
+participate in a deadlock cycle!
+
+The implementation of synchronize_rcu() simply schedules itself on each
+CPU in turn. The run_on() primitive can be implemented straightforwardly
+in terms of the sched_setaffinity() primitive. Of course, a somewhat less
+"toy" implementation would restore the affinity upon completion rather
+than just leaving all tasks running on the last CPU, but when I said
+"toy", I meant -toy-!
+
+So how the heck is this supposed to work???
+
+Remember that it is illegal to block while in an RCU read-side critical
+section. Therefore, if a given CPU executes a context switch, we know
+that it must have completed all preceding RCU read-side critical sections.
+Once -all- CPUs have executed a context switch, then -all- preceding
+RCU read-side critical sections will have completed.
+
+So, suppose that we remove a data item from its structure and then invoke
+synchronize_rcu(). Once synchronize_rcu() returns, we are guaranteed
+that there are no RCU read-side critical sections holding a reference
+to that data item, so we can safely reclaim it.
+
+Quick Quiz #2: Give an example where Classic RCU's read-side
+ overhead is -negative-.
+
+Quick Quiz #3: If it is illegal to block in an RCU read-side
+ critical section, what the heck do you do in
+ PREEMPT_RT, where normal spinlocks can block???
+
+
+6. ANALOGY WITH READER-WRITER LOCKING
+
+Although RCU can be used in many different ways, a very common use of
+RCU is analogous to reader-writer locking. The following unified
+diff shows how closely related RCU and reader-writer locking can be.
+
+ @@ -13,15 +14,15 @@
+ struct list_head *lp;
+ struct el *p;
+
+ - read_lock();
+ - list_for_each_entry(p, head, lp) {
+ + rcu_read_lock();
+ + list_for_each_entry_rcu(p, head, lp) {
+ if (p->key == key) {
+ *result = p->data;
+ - read_unlock();
+ + rcu_read_unlock();
+ return 1;
+ }
+ }
+ - read_unlock();
+ + rcu_read_unlock();
+ return 0;
+ }
+
+ @@ -29,15 +30,16 @@
+ {
+ struct el *p;
+
+ - write_lock(&listmutex);
+ + spin_lock(&listmutex);
+ list_for_each_entry(p, head, lp) {
+ if (p->key == key) {
+ list_del(&p->list);
+ - write_unlock(&listmutex);
+ + spin_unlock(&listmutex);
+ + synchronize_rcu();
+ kfree(p);
+ return 1;
+ }
+ }
+ - write_unlock(&listmutex);
+ + spin_unlock(&listmutex);
+ return 0;
+ }
+
+Or, for those who prefer a side-by-side listing:
+
+ 1 struct el { 1 struct el {
+ 2 struct list_head list; 2 struct list_head list;
+ 3 long key; 3 long key;
+ 4 spinlock_t mutex; 4 spinlock_t mutex;
+ 5 int data; 5 int data;
+ 6 /* Other data fields */ 6 /* Other data fields */
+ 7 }; 7 };
+ 8 spinlock_t listmutex; 8 spinlock_t listmutex;
+ 9 struct el head; 9 struct el head;
+
+ 1 int search(long key, int *result) 1 int search(long key, int *result)
+ 2 { 2 {
+ 3 struct list_head *lp; 3 struct list_head *lp;
+ 4 struct el *p; 4 struct el *p;
+ 5 5
+ 6 read_lock(); 6 rcu_read_lock();
+ 7 list_for_each_entry(p, head, lp) { 7 list_for_each_entry_rcu(p, head, lp) {
+ 8 if (p->key == key) { 8 if (p->key == key) {
+ 9 *result = p->data; 9 *result = p->data;
+10 read_unlock(); 10 rcu_read_unlock();
+11 return 1; 11 return 1;
+12 } 12 }
+13 } 13 }
+14 read_unlock(); 14 rcu_read_unlock();
+15 return 0; 15 return 0;
+16 } 16 }
+
+ 1 int delete(long key) 1 int delete(long key)
+ 2 { 2 {
+ 3 struct el *p; 3 struct el *p;
+ 4 4
+ 5 write_lock(&listmutex); 5 spin_lock(&listmutex);
+ 6 list_for_each_entry(p, head, lp) { 6 list_for_each_entry(p, head, lp) {
+ 7 if (p->key == key) { 7 if (p->key == key) {
+ 8 list_del(&p->list); 8 list_del(&p->list);
+ 9 write_unlock(&listmutex); 9 spin_unlock(&listmutex);
+ 10 synchronize_rcu();
+10 kfree(p); 11 kfree(p);
+11 return 1; 12 return 1;
+12 } 13 }
+13 } 14 }
+14 write_unlock(&listmutex); 15 spin_unlock(&listmutex);
+15 return 0; 16 return 0;
+16 } 17 }
+
+Either way, the differences are quite small. Read-side locking moves
+to rcu_read_lock() and rcu_read_unlock(), update-side locking moves
+from a reader-writer lock to a simple spinlock, and a synchronize_rcu()
+precedes the kfree().
+
+However, there is one potential catch: the read-side and update-side
+critical sections can now run concurrently. In many cases, this will
+not be a problem, but it is necessary to check carefully regardless.
+For example, if multiple independent list updates must be seen as
+a single atomic update, converting to RCU will require special care.
+
+Also, the presence of synchronize_rcu() means that the RCU version of
+delete() can now block. If this is a problem, there is a callback-based
+mechanism that never blocks, namely call_rcu(), that can be used in
+place of synchronize_rcu().
+
+
+7. FULL LIST OF RCU APIs
+
+The RCU APIs are documented in docbook-format header comments in the
+Linux-kernel source code, but it helps to have a full list of the
+APIs, since there does not appear to be a way to categorize them
+in docbook. Here is the list, by category.
+
+Markers for RCU read-side critical sections:
+
+ rcu_read_lock
+ rcu_read_unlock
+ rcu_read_lock_bh
+ rcu_read_unlock_bh
+
+RCU pointer/list traversal:
+
+ rcu_dereference
+ list_for_each_rcu (to be deprecated in favor of
+ list_for_each_entry_rcu)
+ list_for_each_safe_rcu (deprecated, not used)
+ list_for_each_entry_rcu
+ list_for_each_continue_rcu (to be deprecated in favor of new
+ list_for_each_entry_continue_rcu)
+ hlist_for_each_rcu (to be deprecated in favor of
+ hlist_for_each_entry_rcu)
+ hlist_for_each_entry_rcu
+
+RCU pointer update:
+
+ rcu_assign_pointer
+ list_add_rcu
+ list_add_tail_rcu
+ list_del_rcu
+ list_replace_rcu
+ hlist_del_rcu
+ hlist_add_head_rcu
+
+RCU grace period:
+
+ synchronize_kernel (deprecated)
+ synchronize_net
+ synchronize_sched
+ synchronize_rcu
+ call_rcu
+ call_rcu_bh
+
+See the comment headers in the source code (or the docbook generated
+from them) for more information.
+
+
+8. ANSWERS TO QUICK QUIZZES
+
+Quick Quiz #1: Why is this argument naive? How could a deadlock
+ occur when using this algorithm in a real-world Linux
+ kernel? [Referring to the lock-based "toy" RCU
+ algorithm.]
+
+Answer: Consider the following sequence of events:
+
+ 1. CPU 0 acquires some unrelated lock, call it
+ "problematic_lock".
+
+ 2. CPU 1 enters synchronize_rcu(), write-acquiring
+ rcu_gp_mutex.
+
+ 3. CPU 0 enters rcu_read_lock(), but must wait
+ because CPU 1 holds rcu_gp_mutex.
+
+ 4. CPU 1 is interrupted, and the irq handler
+ attempts to acquire problematic_lock.
+
+ The system is now deadlocked.
+
+ One way to avoid this deadlock is to use an approach like
+ that of CONFIG_PREEMPT_RT, where all normal spinlocks
+ become blocking locks, and all irq handlers execute in
+ the context of special tasks. In this case, in step 4
+ above, the irq handler would block, allowing CPU 1 to
+ release rcu_gp_mutex, avoiding the deadlock.
+
+ Even in the absence of deadlock, this RCU implementation
+ allows latency to "bleed" from readers to other
+ readers through synchronize_rcu(). To see this,
+ consider task A in an RCU read-side critical section
+ (thus read-holding rcu_gp_mutex), task B blocked
+ attempting to write-acquire rcu_gp_mutex, and
+ task C blocked in rcu_read_lock() attempting to
+	read-acquire rcu_gp_mutex. Task A's RCU read-side
+ latency is holding up task C, albeit indirectly via
+ task B.
+
+ Realtime RCU implementations therefore use a counter-based
+ approach where tasks in RCU read-side critical sections
+ cannot be blocked by tasks executing synchronize_rcu().
+
+Quick Quiz #2: Give an example where Classic RCU's read-side
+ overhead is -negative-.
+
+Answer: Imagine a single-CPU system with a non-CONFIG_PREEMPT
+ kernel where a routing table is used by process-context
+ code, but can be updated by irq-context code (for example,
+ by an "ICMP REDIRECT" packet). The usual way of handling
+ this would be to have the process-context code disable
+ interrupts while searching the routing table. Use of
+ RCU allows such interrupt-disabling to be dispensed with.
+ Thus, without RCU, you pay the cost of disabling interrupts,
+ and with RCU you don't.
+
+ One can argue that the overhead of RCU in this
+ case is negative with respect to the single-CPU
+ interrupt-disabling approach. Others might argue that
+ the overhead of RCU is merely zero, and that replacing
+ the positive overhead of the interrupt-disabling scheme
+ with the zero-overhead RCU scheme does not constitute
+ negative overhead.
+
+ In real life, of course, things are more complex. But
+ even the theoretical possibility of negative overhead for
+ a synchronization primitive is a bit unexpected. ;-)
+
+Quick Quiz #3: If it is illegal to block in an RCU read-side
+ critical section, what the heck do you do in
+ PREEMPT_RT, where normal spinlocks can block???
+
+Answer: Just as PREEMPT_RT permits preemption of spinlock
+ critical sections, it permits preemption of RCU
+ read-side critical sections. It also permits
+ spinlocks blocking while in RCU read-side critical
+ sections.
+
+	Why the apparent inconsistency? Because it is
+ possible to use priority boosting to keep the RCU
+ grace periods short if need be (for example, if running
+ short of memory). In contrast, if blocking waiting
+ for (say) network reception, there is no way to know
+ what should be boosted. Especially given that the
+ process we need to boost might well be a human being
+ who just went out for a pizza or something. And although
+ a computer-operated cattle prod might arouse serious
+ interest, it might also provoke serious objections.
+ Besides, how does the computer know what pizza parlor
+ the human being went to???
+
+
+ACKNOWLEDGEMENTS
+
+My thanks to the people who helped make this human-readable, including
+Jon Walpole, Josh Triplett, Serge Hallyn, and Suzanne Wood.
+
+
+For more information, see http://www.rdrop.com/users/paulmck/RCU.
diff --git a/Documentation/applying-patches.txt b/Documentation/applying-patches.txt
new file mode 100644
index 000000000000..681e426e2482
--- /dev/null
+++ b/Documentation/applying-patches.txt
@@ -0,0 +1,439 @@
+
+ Applying Patches To The Linux Kernel
+ ------------------------------------
+
+ (Written by Jesper Juhl, August 2005)
+
+
+
+A frequently asked question on the Linux Kernel Mailing List is how to apply
+a patch to the kernel or, more specifically, what base kernel a patch for
+one of the many trees/branches should be applied to. Hopefully this document
+will explain this to you.
+
+In addition to explaining how to apply and revert patches, a brief
+description of the different kernel trees (and examples of how to apply
+their specific patches) is also provided.
+
+
+What is a patch?
+---
+ A patch is a small text document containing a delta of changes between two
+different versions of a source tree. Patches are created with the `diff'
+program.
+To correctly apply a patch you need to know what base it was generated from
+and what new version the patch will change the source tree into. These
+should both be present in the patch file metadata or be possible to deduce
+from the filename.
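+
+For example, a patch between two kernel trees is typically created with
+a command along these lines (the directory names here are just an
+illustration):
+
+ diff -uprN linux-2.6.11 linux-2.6.11-modified > my-patch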
+
+
+How do I apply or revert a patch?
+---
+ You apply a patch with the `patch' program. The patch program reads a diff
+(or patch) file and makes the changes to the source tree described in it.
+
+Patches for the Linux kernel are generated relative to the parent directory
+holding the kernel source dir.
+
+This means that paths to files inside the patch file contain the name of the
+kernel source directories it was generated against (or some other directory
+names like "a/" and "b/").
+Since this is unlikely to match the name of the kernel source dir on your
+local machine (but is often useful info to see what version an otherwise
+unlabeled patch was generated against) you should change into your kernel
+source directory and then strip the first element of the path from filenames
+in the patch file when applying it (the -p1 argument to `patch' does this).
+
+To revert a previously applied patch, use the -R argument to patch.
+So, if you applied a patch like this:
+ patch -p1 < ../patch-x.y.z
+
+You can revert (undo) it like this:
+ patch -R -p1 < ../patch-x.y.z
+
+
+How do I feed a patch/diff file to `patch'?
+---
+ This (as usual with Linux and other UNIX-like operating systems) can be
+done in several different ways.
+In all the examples below I feed the file (in uncompressed form) to patch
+via stdin using the following syntax:
+ patch -p1 < path/to/patch-x.y.z
+
+If you just want to be able to follow the examples below and don't want to
+know of more than one way to use patch, then you can stop reading this
+section here.
+
+Patch can also get the name of the file to use via the -i argument, like
+this:
+ patch -p1 -i path/to/patch-x.y.z
+
+If your patch file is compressed with gzip or bzip2 and you don't want to
+uncompress it before applying it, then you can feed it to patch like this
+instead:
+ zcat path/to/patch-x.y.z.gz | patch -p1
+ bzcat path/to/patch-x.y.z.bz2 | patch -p1
+
+If you wish to uncompress the patch file by hand first before applying it
+(what I assume you've done in the examples below), then you simply run
+gunzip or bunzip2 on the file - like this:
+ gunzip patch-x.y.z.gz
+ bunzip2 patch-x.y.z.bz2
+
+This will leave you with a plain text patch-x.y.z file that you can feed to
+patch via stdin or the -i argument, as you prefer.
+
+A few other nice arguments for patch are -s, which causes patch to be silent
+except for errors (nice for keeping messages from scrolling off the screen
+too fast), and --dry-run, which causes patch to just print a listing of what
+would happen without actually making any changes. Finally, --verbose tells
+patch to print more information about the work being done.
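+
+For example (purely illustrative):
+
+ patch -p1 --dry-run < ../patch-x.y.z
+ patch -p1 -s < ../patch-x.y.z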
+
+
+Common errors when patching
+---
+ When patch applies a patch file it attempts to verify the sanity of the
+file in different ways.
+Checking that the file looks like a valid patch file and checking that the
+code around the bits being modified matches the context provided in the
+patch are just two of the basic sanity checks that patch does.
+
+If patch encounters something that doesn't look quite right it has two
+options. It can either refuse to apply the changes and abort or it can try
+to find a way to make the patch apply with a few minor changes.
+
+One example of something that's not 'quite right' that patch will attempt to
+fix up is if all the context matches, the lines being changed match, but the
+line numbers are different. This can happen, for example, if the patch makes
+a change in the middle of the file but for some reason a few lines have
+been added or removed near the beginning of the file. In that case
+everything looks good, it has just moved up or down a bit, and patch will
+usually adjust the line numbers and apply the patch.
+
+Whenever patch applies a patch that it had to modify a bit to make it fit,
+it'll tell you about it by saying that the patch applied with 'fuzz'.
+You should be wary of such changes since, even though patch probably got it
+right, it doesn't /always/ get it right, and the result will sometimes be
+wrong.
+
+When patch encounters a change that it can't fix up with fuzz it rejects it
+outright and leaves a file with a .rej extension (a reject file). You can
+read this file to see exactly what change couldn't be applied, so you can
+go fix it up by hand if you wish.
+
+If you don't have any third party patches applied to your kernel source, but
+only patches from kernel.org and you apply the patches in the correct order,
+and have made no modifications yourself to the source files, then you should
+never see a fuzz or reject message from patch. If you do see such messages
+anyway, then there's a high risk that either your local source tree or the
+patch file is corrupted in some way. In that case you should probably try
+redownloading the patch and if things are still not OK then you'd be advised
+to start with a fresh tree downloaded in full from kernel.org.
+
+Let's look a bit more at some of the messages patch can produce.
+
+If patch stops and presents a "File to patch:" prompt, then patch could not
+find a file to be patched. Most likely you forgot to specify -p1 or you are
+in the wrong directory. Less often, you'll find patches that need to be
+applied with -p0 instead of -p1 (reading the patch file should reveal if
+this is the case - if so, then this is an error by the person who created
+the patch but is not fatal).
+
+If you get "Hunk #2 succeeded at 1887 with fuzz 2 (offset 7 lines)." or a
+message similar to that, then it means that patch had to adjust the location
+of the change (in this example it needed to move 7 lines from where it
+expected to make the change to make it fit).
+The resulting file may or may not be OK, depending on the reason the file
+was different than expected.
+This often happens if you try to apply a patch that was generated against a
+different kernel version than the one you are trying to patch.
+
+If you get a message like "Hunk #3 FAILED at 2387.", then it means that the
+patch could not be applied correctly and the patch program was unable to
+fuzz its way through. This will generate a .rej file with the change that
+caused the patch to fail and also a .orig file showing you the original
+content that couldn't be changed.
+
+If you get "Reversed (or previously applied) patch detected! Assume -R? [n]"
+then patch detected that the change contained in the patch seems to have
+already been made.
+If you actually did apply this patch previously and you just re-applied it
+in error, then just say [n]o and abort this patch. If you applied this patch
+previously and actually intended to revert it, but forgot to specify -R,
+then you can say [y]es here to make patch revert it for you.
+This can also happen if the creator of the patch reversed the source and
+destination directories when creating the patch, and in that case reverting
+the patch will in fact apply it.
+
+A message similar to "patch: **** unexpected end of file in patch" or "patch
+unexpectedly ends in middle of line" means that patch could make no sense of
+the file you fed to it. Either your download is broken or you tried to feed
+patch a compressed patch file without uncompressing it first.
+
+As I already mentioned above, these errors should never happen if you apply
+a patch from kernel.org to the correct version of an unmodified source tree.
+So if you get these errors with kernel.org patches then you should probably
+assume that either your patch file or your tree is broken and I'd advise you
+to start over with a fresh download of a full kernel tree and the patch you
+wish to apply.
+
+
+Are there any alternatives to `patch'?
+---
+ Yes there are alternatives. You can use the `interdiff' program
+(http://cyberelk.net/tim/patchutils/) to generate a patch representing the
+differences between two patches and then apply the result.
+This will let you move from something like 2.6.12.2 to 2.6.12.3 in a single
+step. The -z flag to interdiff will even let you feed it patches in gzip or
+bzip2 compressed form directly without the use of zcat or bzcat or manual
+decompression.
+
+Here's how you'd go from 2.6.12.2 to 2.6.12.3 in a single step:
+ interdiff -z ../patch-2.6.12.2.bz2 ../patch-2.6.12.3.gz | patch -p1
+
+Although interdiff may save you a step or two you are generally advised to
+do the additional steps since interdiff can get things wrong in some cases.
+
+ Another alternative is `ketchup', which is a python script for automatic
+downloading and applying of patches (http://www.selenic.com/ketchup/).
+
+Other nice tools are diffstat, which shows a summary of changes made by a
+patch; lsdiff, which displays a short listing of affected files in a patch
+file, along with (optionally) the line numbers of the start of each patch;
+and grepdiff, which displays a list of the files modified by a patch where
+the patch contains a given regular expression.
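+
+For example (illustrative invocations; see the respective man pages for
+details):
+
+ diffstat patch-2.6.12.3
+ lsdiff -n patch-2.6.12.3
+ grepdiff schedule patch-2.6.12.3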
+
+
+Where can I download the patches?
+---
+ The patches are available at http://kernel.org/
+Most recent patches are linked from the front page, but they also have
+specific homes.
+
+The 2.6.x.y (-stable) and 2.6.x patches live at
+ ftp://ftp.kernel.org/pub/linux/kernel/v2.6/
+
+The -rc patches live at
+ ftp://ftp.kernel.org/pub/linux/kernel/v2.6/testing/
+
+The -git patches live at
+ ftp://ftp.kernel.org/pub/linux/kernel/v2.6/snapshots/
+
+The -mm kernels live at
+ ftp://ftp.kernel.org/pub/linux/kernel/people/akpm/patches/2.6/
+
+In place of ftp.kernel.org you can use ftp.cc.kernel.org, where cc is a
+country code. This way you'll be downloading from a mirror site that's most
+likely geographically closer to you, resulting in faster downloads for you,
+less bandwidth used globally and less load on the main kernel.org servers -
+these are good things, do use mirrors when possible.
+
+
+The 2.6.x kernels
+---
+ These are the base stable releases released by Linus. The highest numbered
+release is the most recent.
+
+If regressions or other serious flaws are found then a -stable fix patch
+will be released (see below) on top of this base. Once a new 2.6.x base
+kernel is released, a patch is made available that is a delta between the
+previous 2.6.x kernel and the new one.
+
+To apply a patch moving from 2.6.11 to 2.6.12 you'd do the following (note
+that such patches do *NOT* apply on top of 2.6.x.y kernels but on top of the
+base 2.6.x kernel - if you need to move from 2.6.x.y to 2.6.x+1 you need to
+first revert the 2.6.x.y patch).
+
+Here are some examples:
+
+# moving from 2.6.11 to 2.6.12
+$ cd ~/linux-2.6.11 # change to kernel source dir
+$ patch -p1 < ../patch-2.6.12 # apply the 2.6.12 patch
+$ cd ..
+$ mv linux-2.6.11 linux-2.6.12 # rename source dir
+
+# moving from 2.6.11.1 to 2.6.12
+$ cd ~/linux-2.6.11.1 # change to kernel source dir
+$ patch -p1 -R < ../patch-2.6.11.1 # revert the 2.6.11.1 patch
+ # source dir is now 2.6.11
+$ patch -p1 < ../patch-2.6.12 # apply new 2.6.12 patch
+$ cd ..
+$ mv linux-2.6.11.1 linux-2.6.12           # rename source dir
+
+
+The 2.6.x.y kernels
+---
+ Kernels with 4 digit versions are -stable kernels. They contain small(ish)
+critical fixes for security problems or significant regressions discovered
+in a given 2.6.x kernel.
+
+This is the recommended branch for users who want the most recent stable
+kernel and are not interested in helping test development/experimental
+versions.
+
+If no 2.6.x.y kernel is available, then the highest numbered 2.6.x kernel is
+the current stable kernel.
+
+These patches are not incremental, meaning that for example the 2.6.12.3
+patch does not apply on top of the 2.6.12.2 kernel source, but rather on top
+of the base 2.6.12 kernel source.
+So, in order to apply the 2.6.12.3 patch to your existing 2.6.12.2 kernel
+source you have to first back out the 2.6.12.2 patch (so you are left with a
+base 2.6.12 kernel source) and then apply the new 2.6.12.3 patch.
+
+Here's a small example:
+
+$ cd ~/linux-2.6.12.2 # change into the kernel source dir
+$ patch -p1 -R < ../patch-2.6.12.2 # revert the 2.6.12.2 patch
+$ patch -p1 < ../patch-2.6.12.3 # apply the new 2.6.12.3 patch
+$ cd ..
+$ mv linux-2.6.12.2 linux-2.6.12.3 # rename the kernel source dir
+
+
+The -rc kernels
+---
+ These are release-candidate kernels. These are development kernels released
+by Linus whenever he deems the current git (the kernel's source management
+tool) tree to be in a reasonably sane state adequate for testing.
+
+These kernels are not stable and you should expect occasional breakage if
+you intend to run them. This is however the most stable of the main
+development branches and is also what will eventually turn into the next
+stable kernel, so it is important that it be tested by as many people as
+possible.
+
+This is a good branch to run for people who want to help out testing
+development kernels but do not want to run some of the really experimental
+stuff (such people should see the sections about -git and -mm kernels below).
+
+The -rc patches are not incremental, they apply to a base 2.6.x kernel, just
+like the 2.6.x.y patches described above. The kernel version before the -rcN
+suffix denotes the version of the kernel that this -rc kernel will eventually
+turn into.
+So, 2.6.13-rc5 means that this is the fifth release candidate for the 2.6.13
+kernel and the patch should be applied on top of the 2.6.12 kernel source.
+
+Here are 3 examples of how to apply these patches:
+
+# first an example of moving from 2.6.12 to 2.6.13-rc3
+$ cd ~/linux-2.6.12 # change into the 2.6.12 source dir
+$ patch -p1 < ../patch-2.6.13-rc3 # apply the 2.6.13-rc3 patch
+$ cd ..
+$ mv linux-2.6.12 linux-2.6.13-rc3 # rename the source dir
+
+# now let's move from 2.6.13-rc3 to 2.6.13-rc5
+$ cd ~/linux-2.6.13-rc3 # change into the 2.6.13-rc3 dir
+$ patch -p1 -R < ../patch-2.6.13-rc3 # revert the 2.6.13-rc3 patch
+$ patch -p1 < ../patch-2.6.13-rc5 # apply the new 2.6.13-rc5 patch
+$ cd ..
+$ mv linux-2.6.13-rc3 linux-2.6.13-rc5 # rename the source dir
+
+# finally let's try and move from 2.6.12.3 to 2.6.13-rc5
+$ cd ~/linux-2.6.12.3 # change to the kernel source dir
+$ patch -p1 -R < ../patch-2.6.12.3 # revert the 2.6.12.3 patch
+$ patch -p1 < ../patch-2.6.13-rc5 # apply new 2.6.13-rc5 patch
+$ cd ..
+$ mv linux-2.6.12.3 linux-2.6.13-rc5 # rename the kernel source dir
+
+
+The -git kernels
+---
+ These are daily snapshots of Linus' kernel tree (managed in a git
+repository, hence the name).
+
+These patches are usually released daily and represent the current state of
+Linus' tree. They are more experimental than -rc kernels since they are
+generated automatically without even a cursory glance to see if they are
+sane.
+
+-git patches are not incremental and apply either to a base 2.6.x kernel or
+a base 2.6.x-rc kernel - you can see which from their name.
+A patch named 2.6.12-git1 applies to the 2.6.12 kernel source and a patch
+named 2.6.13-rc3-git2 applies to the source of the 2.6.13-rc3 kernel.
+
+Here are some examples of how to apply these patches:
+
+# moving from 2.6.12 to 2.6.12-git1
+$ cd ~/linux-2.6.12 # change to the kernel source dir
+$ patch -p1 < ../patch-2.6.12-git1 # apply the 2.6.12-git1 patch
+$ cd ..
+$ mv linux-2.6.12 linux-2.6.12-git1 # rename the kernel source dir
+
+# moving from 2.6.12-git1 to 2.6.13-rc2-git3
+$ cd ~/linux-2.6.12-git1 # change to the kernel source dir
+$ patch -p1 -R < ../patch-2.6.12-git1 # revert the 2.6.12-git1 patch
+ # we now have a 2.6.12 kernel
+$ patch -p1 < ../patch-2.6.13-rc2 # apply the 2.6.13-rc2 patch
+ # the kernel is now 2.6.13-rc2
+$ patch -p1 < ../patch-2.6.13-rc2-git3 # apply the 2.6.13-rc2-git3 patch
+ # the kernel is now 2.6.13-rc2-git3
+$ cd ..
+$ mv linux-2.6.12-git1 linux-2.6.13-rc2-git3 # rename source dir
+
+
+The -mm kernels
+---
+ These are experimental kernels released by Andrew Morton.
+
+The -mm tree serves as a sort of proving ground for new features and other
+experimental patches.
+Once a patch has proved its worth in -mm for a while Andrew pushes it on to
+Linus for inclusion in mainline.
+
+Although it's encouraged that patches flow to Linus via the -mm tree, this
+is not always enforced.
+Subsystem maintainers (or individuals) sometimes push their patches directly
+to Linus, even though (or after) they have been merged and tested in -mm (or
+sometimes even without prior testing in -mm).
+
+You should generally strive to get your patches into mainline via -mm to
+ensure maximum testing.
+
+This branch is in constant flux and contains many experimental features, a
+lot of debugging patches not appropriate for mainline, etc., and is the most
+experimental of the branches described in this document.
+
+These kernels are not appropriate for use on systems that are supposed to be
+stable and they are more risky to run than any of the other branches (make
+sure you have up-to-date backups - that goes for any experimental kernel but
+even more so for -mm kernels).
+
+In addition to all the other experimental patches they contain, these kernels
+usually also include any changes from the mainline -git kernels available at
+the time of release.
+
+Testing of -mm kernels is greatly appreciated since the whole point of the
+tree is to weed out regressions, crashes, data corruption bugs, build
+breakage (and any other bug in general) before changes are merged into the
+more stable mainline Linus tree.
+But testers of -mm should be aware that breakage in this tree is more common
+than in any other tree.
+
+The -mm kernels are not released on a fixed schedule, but usually a few -mm
+kernels are released in between each -rc kernel (1 to 3 is common).
+The -mm kernels apply to either a base 2.6.x kernel (when no -rc kernels
+have been released yet) or to a Linus -rc kernel.
+
+Here are some examples of applying the -mm patches:
+
+# moving from 2.6.12 to 2.6.12-mm1
+$ cd ~/linux-2.6.12 # change to the 2.6.12 source dir
+$ patch -p1 < ../2.6.12-mm1 # apply the 2.6.12-mm1 patch
+$ cd ..
+$ mv linux-2.6.12 linux-2.6.12-mm1 # rename the source appropriately
+
+# moving from 2.6.12-mm1 to 2.6.13-rc3-mm3
+$ cd ~/linux-2.6.12-mm1
+$ patch -p1 -R < ../2.6.12-mm1 # revert the 2.6.12-mm1 patch
+ # we now have a 2.6.12 source
+$ patch -p1 < ../patch-2.6.13-rc3 # apply the 2.6.13-rc3 patch
+ # we now have a 2.6.13-rc3 source
+$ patch -p1 < ../2.6.13-rc3-mm3 # apply the 2.6.13-rc3-mm3 patch
+$ cd ..
+$ mv linux-2.6.12-mm1 linux-2.6.13-rc3-mm3 # rename the source dir
+
+
+This concludes the list of explanations of the various kernel trees, and I
+hope you are now crystal clear on how to apply the various patches and can
+help test the kernel.
+
diff --git a/Documentation/cpu-freq/cpufreq-stats.txt b/Documentation/cpu-freq/cpufreq-stats.txt
index e2d1e760b4ba..6a82948ff4bd 100644
--- a/Documentation/cpu-freq/cpufreq-stats.txt
+++ b/Documentation/cpu-freq/cpufreq-stats.txt
@@ -36,7 +36,7 @@ cpufreq stats provides following statistics (explained in detail below).
All the statistics will be from the time the stats driver has been inserted
to the time when a read of a particular statistic is done. Obviously, stats
-driver will not have any information about the the frequcny transitions before
+driver will not have any information about the frequency transitions before
the stats driver insertion.
--------------------------------------------------------------------------------
diff --git a/Documentation/cpusets.txt b/Documentation/cpusets.txt
index 47f4114fbf54..d17b7d2dd771 100644
--- a/Documentation/cpusets.txt
+++ b/Documentation/cpusets.txt
@@ -277,7 +277,7 @@ rewritten to the 'tasks' file of its cpuset. This is done to avoid
impacting the scheduler code in the kernel with a check for changes
in a tasks processor placement.
-There is an exception to the above. If hotplug funtionality is used
+There is an exception to the above. If hotplug functionality is used
to remove all the CPUs that are currently assigned to a cpuset,
then the kernel will automatically update the cpus_allowed of all
tasks attached to CPUs in that cpuset to allow all CPUs. When memory
diff --git a/Documentation/crypto/descore-readme.txt b/Documentation/crypto/descore-readme.txt
index 166474c2ee0b..16e9e6350755 100644
--- a/Documentation/crypto/descore-readme.txt
+++ b/Documentation/crypto/descore-readme.txt
@@ -1,4 +1,4 @@
-Below is the orginal README file from the descore.shar package.
+Below is the original README file from the descore.shar package.
------------------------------------------------------------------------------
des - fast & portable DES encryption & decryption.
diff --git a/Documentation/dvb/bt8xx.txt b/Documentation/dvb/bt8xx.txt
index 4b8c326c6aac..cb63b7a93c82 100644
--- a/Documentation/dvb/bt8xx.txt
+++ b/Documentation/dvb/bt8xx.txt
@@ -1,55 +1,74 @@
-How to get the Nebula Electronics DigiTV, Pinnacle PCTV Sat, Twinhan DST + clones working
-=========================================================================================
+How to get the Nebula, PCTV and Twinhan DST cards working
+=========================================================
-1) General information
-======================
+This class of cards has a bt878a as the PCI interface, and
+requires the bttv driver.
-This class of cards has a bt878a chip as the PCI interface.
-The different card drivers require the bttv driver to provide the means
-to access the i2c bus and the gpio pins of the bt8xx chipset.
+Please pay close attention to the warning about the bttv module
+options below for the DST card.
-2) Compilation rules for Kernel >= 2.6.12
-=========================================
+1) General information
+======================
-Enable the following options:
+These drivers require the bttv driver to provide the means to access
+the i2c bus and the gpio pins of the bt8xx chipset.
+Because of this, you need to enable
"Device drivers" => "Multimedia devices"
- => "Video For Linux" => "BT848 Video For Linux"
+ => "Video For Linux" => "BT848 Video For Linux"
+
+Furthermore you need to enable
"Device drivers" => "Multimedia devices" => "Digital Video Broadcasting Devices"
- => "DVB for Linux" "DVB Core Support" "BT8xx based PCI cards"
+ => "DVB for Linux" "DVB Core Support" "BT8xx based PCI cards"
-3) Loading Modules, described by two approaches
-===============================================
+2) Loading Modules
+==================
In general you need to load the bttv driver, which will handle the gpio and
-i2c communication for us, plus the common dvb-bt8xx device driver,
-which is called the backend.
-The frontends for Nebula DigiTV (nxt6000), Pinnacle PCTV Sat (cx24110),
-TwinHan DST + clones (dst and dst-ca) are loaded automatically by the backend.
-For further details about TwinHan DST + clones see /Documentation/dvb/ci.txt.
+i2c communication for us, plus the common dvb-bt8xx device driver.
+The frontends for Nebula (nxt6000), Pinnacle PCTV (cx24110) and
+TwinHan (dst) are loaded automatically by the dvb-bt8xx device driver.
-3a) The manual approach
------------------------
+3a) Nebula / Pinnacle PCTV
+--------------------------
-Loading modules:
-modprobe bttv
-modprobe dvb-bt8xx
+ $ modprobe bttv (normally bttv is loaded automatically by kmod)
+ $ modprobe dvb-bt8xx (or just place dvb-bt8xx in /etc/modules for automatic loading)
-Unloading modules:
-modprobe -r dvb-bt8xx
-modprobe -r bttv
-3b) The automatic approach
+3b) TwinHan and Clones
--------------------------
-If not already done by installation, place a line either in
-/etc/modules.conf or in /etc/modprobe.conf containing this text:
-alias char-major-81 bttv
+ $ modprobe bttv i2c_hw=1 card=0x71
+ $ modprobe dvb-bt8xx
+ $ modprobe dst
+
+The value 0x71 will override the PCI type detection for dvb-bt8xx,
+which is necessary for TwinHan cards.
+
+If you have an older card (blue color circuit) and card=0x71 locks
+your machine, try using 0x68 instead. If that does not work, ask on the
+mailing list.
+
+The DST module takes a couple of useful parameters.
+
+verbose takes values 0 to 4. These values control the verbosity level,
+and can also be used for debugging.
+
+verbose=0 means complete disabling of messages
+ 1 only error messages are displayed
+ 2 notifications are also displayed
+ 3 informational messages are also displayed
+ 4 debug setting
+
+dst_addons takes values 0 and 0x20. A value of 0 means it is an FTA card.
+0x20 means it has a Conditional Access slot.
+
+The autodetected values are determined by the card's 'response string',
+which you can see in your logs, e.g.:
-Then place a line in /etc/modules containing this text:
-dvb-bt8xx
+dst_get_device_id: Recognise [DSTMCI]
-Reboot your system and have fun!
--
-Authors: Richard Walker, Jamie Honan, Michael Hunold, Manu Abraham, Uwe Bugla
+Authors: Richard Walker, Jamie Honan, Michael Hunold, Manu Abraham
diff --git a/Documentation/dvb/ci.txt b/Documentation/dvb/ci.txt
index 62e0701b542a..95f0e73b2135 100644
--- a/Documentation/dvb/ci.txt
+++ b/Documentation/dvb/ci.txt
@@ -23,7 +23,6 @@ This application requires the following to function properly as of now.
eg: $ szap -c channels.conf -r "TMC" -x
(b) a channels.conf containing a valid PMT PID
-
eg: TMC:11996:h:0:27500:278:512:650:321
here 278 is a valid PMT PID. the rest of the values are the
@@ -31,13 +30,7 @@ This application requires the following to function properly as of now.
(c) after running a szap, you have to run ca_zap, for the
descrambler to function,
-
- eg: $ ca_zap patched_channels.conf "TMC"
-
- The patched means a patch to apply to scan, such that scan can
- generate a channels.conf_with pmt, which has this PMT PID info
- (NOTE: szap cannot use this channels.conf with the PMT_PID)
-
+ eg: $ ca_zap channels.conf "TMC"
(d) Hopeflly Enjoy your favourite subscribed channel as you do with
a FTA card.
diff --git a/Documentation/fb/cyblafb/bugs b/Documentation/fb/cyblafb/bugs
new file mode 100644
index 000000000000..f90cc66ea919
--- /dev/null
+++ b/Documentation/fb/cyblafb/bugs
@@ -0,0 +1,14 @@
+Bugs
+====
+
+I currently don't know of any bugs. Please do send reports to:
+ - linux-fbdev-devel@lists.sourceforge.net
+ - Knut_Petersen@t-online.de.
+
+
+Untested features
+=================
+
+All LCD stuff is untested. If it worked in tridentfb, it should work in
+cyblafb. Please test and report the results to Knut_Petersen@t-online.de.
+
diff --git a/Documentation/fb/cyblafb/credits b/Documentation/fb/cyblafb/credits
new file mode 100644
index 000000000000..0eb3b443dc2b
--- /dev/null
+++ b/Documentation/fb/cyblafb/credits
@@ -0,0 +1,7 @@
+Thanks to
+=========
+ * Alan Hourihane, for writing the X trident driver
+ * Jani Monoses, for writing the tridentfb driver
+ * Antonino A. Daplas, for review of the first published
+ version of cyblafb and some code
+ * Jochen Hein, for testing and a helpful bug report
diff --git a/Documentation/fb/cyblafb/documentation b/Documentation/fb/cyblafb/documentation
new file mode 100644
index 000000000000..bb1aac048425
--- /dev/null
+++ b/Documentation/fb/cyblafb/documentation
@@ -0,0 +1,17 @@
+Available Documentation
+=======================
+
+Apollo PLE 133 Chipset VT8601A North Bridge Datasheet, Rev. 1.82, October 22,
+2001, available from VIA:
+
+ http://www.viavpsd.com/product/6/15/DS8601A182.pdf
+
+The datasheet is incomplete, some registers that need to be programmed are not
+explained at all and important bits are listed as "reserved". But you really
+need the datasheet to understand the code. "p. xxx" comments refer to page
+numbers of this document.
+
+XFree/XOrg drivers are available and of good quality; looking at their
+code is a good idea if the datasheet does not provide enough information
+or if the datasheet seems to be wrong.
+
diff --git a/Documentation/fb/cyblafb/fb.modes b/Documentation/fb/cyblafb/fb.modes
new file mode 100644
index 000000000000..cf4351fc32ff
--- /dev/null
+++ b/Documentation/fb/cyblafb/fb.modes
@@ -0,0 +1,155 @@
+#
+# Sample fb.modes file
+#
+# Provides an incomplete list of working modes for
+# the cyberblade/i1 graphics core.
+#
+# The value 4294967256 is used instead of -40. Of course, -40 is not
+# a really reasonable value, but chip design does not always follow
+# logic. Believe me, it's ok, and it's the way the BIOS does it.
+#
+# fbset requires 4294967256 in fb.modes and -40 as an argument to
+# the -t parameter. That's also not too reasonable, and it might change
+# in the future or might even be different for your current version.
+#
+
+mode "640x480-50"
+ geometry 640 480 640 3756 8
+ timings 47619 4294967256 24 17 0 216 3
+endmode
+
+mode "640x480-60"
+ geometry 640 480 640 3756 8
+ timings 39682 4294967256 24 17 0 216 3
+endmode
+
+mode "640x480-70"
+ geometry 640 480 640 3756 8
+ timings 34013 4294967256 24 17 0 216 3
+endmode
+
+mode "640x480-72"
+ geometry 640 480 640 3756 8
+ timings 33068 4294967256 24 17 0 216 3
+endmode
+
+mode "640x480-75"
+ geometry 640 480 640 3756 8
+ timings 31746 4294967256 24 17 0 216 3
+endmode
+
+mode "640x480-80"
+ geometry 640 480 640 3756 8
+ timings 29761 4294967256 24 17 0 216 3
+endmode
+
+mode "640x480-85"
+ geometry 640 480 640 3756 8
+ timings 28011 4294967256 24 17 0 216 3
+endmode
+
+mode "800x600-50"
+ geometry 800 600 800 3221 8
+ timings 30303 96 24 14 0 136 11
+endmode
+
+mode "800x600-60"
+ geometry 800 600 800 3221 8
+ timings 25252 96 24 14 0 136 11
+endmode
+
+mode "800x600-70"
+ geometry 800 600 800 3221 8
+ timings 21645 96 24 14 0 136 11
+endmode
+
+mode "800x600-72"
+ geometry 800 600 800 3221 8
+ timings 21043 96 24 14 0 136 11
+endmode
+
+mode "800x600-75"
+ geometry 800 600 800 3221 8
+ timings 20202 96 24 14 0 136 11
+endmode
+
+mode "800x600-80"
+ geometry 800 600 800 3221 8
+ timings 18939 96 24 14 0 136 11
+endmode
+
+mode "800x600-85"
+ geometry 800 600 800 3221 8
+ timings 17825 96 24 14 0 136 11
+endmode
+
+mode "1024x768-50"
+ geometry 1024 768 1024 2815 8
+ timings 19054 144 24 29 0 120 3
+endmode
+
+mode "1024x768-60"
+ geometry 1024 768 1024 2815 8
+ timings 15880 144 24 29 0 120 3
+endmode
+
+mode "1024x768-70"
+ geometry 1024 768 1024 2815 8
+ timings 13610 144 24 29 0 120 3
+endmode
+
+mode "1024x768-72"
+ geometry 1024 768 1024 2815 8
+ timings 13232 144 24 29 0 120 3
+endmode
+
+mode "1024x768-75"
+ geometry 1024 768 1024 2815 8
+ timings 12703 144 24 29 0 120 3
+endmode
+
+mode "1024x768-80"
+ geometry 1024 768 1024 2815 8
+ timings 11910 144 24 29 0 120 3
+endmode
+
+mode "1024x768-85"
+ geometry 1024 768 1024 2815 8
+ timings 11209 144 24 29 0 120 3
+endmode
+
+mode "1280x1024-50"
+ geometry 1280 1024 1280 2662 8
+ timings 11114 232 16 39 0 160 3
+endmode
+
+mode "1280x1024-60"
+ geometry 1280 1024 1280 2662 8
+ timings 9262 232 16 39 0 160 3
+endmode
+
+mode "1280x1024-70"
+ geometry 1280 1024 1280 2662 8
+ timings 7939 232 16 39 0 160 3
+endmode
+
+mode "1280x1024-72"
+ geometry 1280 1024 1280 2662 8
+ timings 7719 232 16 39 0 160 3
+endmode
+
+mode "1280x1024-75"
+ geometry 1280 1024 1280 2662 8
+ timings 7410 232 16 39 0 160 3
+endmode
+
+mode "1280x1024-80"
+ geometry 1280 1024 1280 2662 8
+ timings 6946 232 16 39 0 160 3
+endmode
+
+mode "1280x1024-85"
+ geometry 1280 1024 1280 2662 8
+ timings 6538 232 16 39 0 160 3
+endmode
+
diff --git a/Documentation/fb/cyblafb/performance b/Documentation/fb/cyblafb/performance
new file mode 100644
index 000000000000..eb4e47a9cea6
--- /dev/null
+++ b/Documentation/fb/cyblafb/performance
@@ -0,0 +1,80 @@
+Speed
+=====
+
+CyBlaFB is much faster than tridentfb and vesafb. Compare the performance data
+for mode 1280x1024-[8,16,32]@61 Hz.
+
+Test 1: Cat a file with 2000 lines of 0 characters.
+Test 2: Cat a file with 2000 lines of 80 characters.
+Test 3: Cat a file with 2000 lines of 160 characters.
+
+All values show system time use in seconds, kernel 2.6.12 was used for
+the measurements. 2.6.13 is a bit slower, 2.6.14 hopefully will include a
+patch that speeds up kernel bitblitting a lot ( > 20%).
+
++-----------+-----------------------------------------------------+
+| | not accelerated |
+| TRIDENTFB +-----------------+-----------------+-----------------+
+| of 2.6.12 | 8 bpp | 16 bpp | 32 bpp |
+| | noypan | ypan | noypan | ypan | noypan | ypan |
++-----------+--------+--------+--------+--------+--------+--------+
+| Test 1 | 4.31 | 4.33 | 6.05 | 12.81 | ---- | ---- |
+| Test 2 | 67.94 | 5.44 | 123.16 | 14.79 | ---- | ---- |
+| Test 3 | 131.36 | 6.55 | 240.12 | 16.76 | ---- | ---- |
++-----------+--------+--------+--------+--------+--------+--------+
+| Comments | | | completely bro- |
+| | | | ken, monitor |
+| | | | switches off |
++-----------+-----------------+-----------------+-----------------+
+
+
++-----------+-----------------------------------------------------+
+| | accelerated |
+| TRIDENTFB +-----------------+-----------------+-----------------+
+| of 2.6.12 | 8 bpp | 16 bpp | 32 bpp |
+| | noypan | ypan | noypan | ypan | noypan | ypan |
++-----------+--------+--------+--------+--------+--------+--------+
+| Test 1 | ---- | ---- | 20.62 | 1.22 | ---- | ---- |
+| Test 2 | ---- | ---- | 22.61 | 3.19 | ---- | ---- |
+| Test 3 | ---- | ---- | 24.59 | 5.16 | ---- | ---- |
++-----------+--------+--------+--------+--------+--------+--------+
+| Comments | broken, writing | broken, ok only | completely bro- |
+| | to wrong places | if bgcolor is | ken, monitor |
+| | on screen + bug | black, bug in | switches off |
+| | in fillrect() | fillrect() | |
++-----------+-----------------+-----------------+-----------------+
+
+
++-----------+-----------------------------------------------------+
+| | not accelerated |
+| VESAFB +-----------------+-----------------+-----------------+
+| of 2.6.12 | 8 bpp | 16 bpp | 32 bpp |
+| | noypan | ypan | noypan | ypan | noypan | ypan |
++-----------+--------+--------+--------+--------+--------+--------+
+| Test 1 | 4.26 | 3.76 | 5.99 | 7.23 | ---- | ---- |
+| Test 2 | 65.65 | 4.89 | 120.88 | 9.08 | ---- | ---- |
+| Test 3 | 126.91 | 5.94 | 235.77 | 11.03 | ---- | ---- |
++-----------+--------+--------+--------+--------+--------+--------+
+| Comments | vga=0x307 | vga=0x31a | vga=0x31b not |
+| | fh=80kHz | fh=80kHz | supported by |
+| | fv=75kHz | fv=75kHz | video BIOS and |
+| | | | hardware |
++-----------+-----------------+-----------------+-----------------+
+
+
++-----------+-----------------------------------------------------+
+| | accelerated |
+| CYBLAFB +-----------------+-----------------+-----------------+
+| | 8 bpp | 16 bpp | 32 bpp |
+| | noypan | ypan | noypan | ypan | noypan | ypan |
++-----------+--------+--------+--------+--------+--------+--------+
+| Test 1 | 8.02 | 0.23 | 19.04 | 0.61 | 57.12 | 2.74 |
+| Test 2 | 8.38 | 0.55 | 19.39 | 0.92 | 57.54 | 3.13 |
+| Test 3 | 8.73 | 0.86 | 19.74 | 1.24 | 57.95 | 3.51 |
++-----------+--------+--------+--------+--------+--------+--------+
+| Comments | | | |
+| | | | |
+| | | | |
+| | | | |
++-----------+-----------------+-----------------+-----------------+
+
diff --git a/Documentation/fb/cyblafb/todo b/Documentation/fb/cyblafb/todo
new file mode 100644
index 000000000000..80fb2f89b6c1
--- /dev/null
+++ b/Documentation/fb/cyblafb/todo
@@ -0,0 +1,32 @@
+TODO / Missing features
+=======================
+
+Verify LCD stuff "stretch" and "center" options are
+ completely untested ... this code needs to be
+ verified. As I don't have access to such
+ hardware, please contact me if you are
+                           willing to run some tests.
+
+Interlaced video modes     The reason that interlaced
+ modes are disabled is that I do not know
+ the meaning of the vertical interlace
+ parameter. Also the datasheet mentions a
+ bit d8 of a horizontal interlace parameter,
+ but nowhere the lower 8 bits. Please help
+ if you can.
+
+low-res double scan modes Who needs it?
+
+accelerated color blitting Who needs it? The console driver does use color
+                           blitting for nothing but drawing the penguin,
+ everything else is done using color expanding
+ blitting of 1bpp character bitmaps.
+
+xpanning Who needs it?
+
+ioctls Who needs it?
+
+TV-out Will be done later
+
+??? Feel free to contact me if you have any
+ feature requests
diff --git a/Documentation/fb/cyblafb/usage b/Documentation/fb/cyblafb/usage
new file mode 100644
index 000000000000..e627c8f54211
--- /dev/null
+++ b/Documentation/fb/cyblafb/usage
@@ -0,0 +1,206 @@
+CyBlaFB is a framebuffer driver for the Cyberblade/i1 graphics core integrated
+into the VIA Apollo PLE133 (aka vt8601) north bridge. It is developed and
+tested using a VIA EPIA 5000 board.
+
+Cyblafb - compiled into the kernel or as a module?
+==================================================
+
+You might compile cyblafb either as a module or compile it permanently into the
+kernel.
+
+Unless you have a real reason to do so you should not compile both vesafb and
+cyblafb permanently into the kernel. It's possible and it helps during the
+development cycle, but it's useless and will at least block some otherwise
+useful memory for ordinary users.
+
+Selecting Modes
+===============
+
+ Startup Mode
+ ============
+
+ First of all, you might use the "vga=???" boot parameter as it is
+ documented in vesafb.txt and svga.txt. Cyblafb will detect the video
+ mode selected and will use the geometry and timings found by
+ inspecting the hardware registers.
+
+ video=cyblafb vga=0x317
+
+ Alternatively you might use a combination of the mode, ref and bpp
+ parameters. If you compiled the driver into the kernel, add something
+ like this to the kernel command line:
+
+ video=cyblafb:1280x1024,bpp=16,ref=50 ...
+
+ If you compiled the driver as a module, the same mode would be
+ selected by the following command:
+
+ modprobe cyblafb mode=1280x1024 bpp=16 ref=50 ...
+
+ None of the modes possible to select as startup modes are affected by
+ the problems described at the end of the next subsection.
+
+ Mode changes using fbset
+ ========================
+
+ You might use fbset to change the video mode, see "man fbset". Cyblafb
+ generally does assume that you know what you are doing. But it does
+ some checks, especially those that are needed to prevent you from
+ damaging your hardware.
+
+ - only 8, 16, 24 and 32 bpp video modes are accepted
+ - interlaced video modes are not accepted
+ - double scan video modes are not accepted
+ - if a flat panel is found, cyblafb does not allow you
+ to program a resolution higher than the physical
+ resolution of the flat panel monitor
+ - cyblafb does not allow xres to differ from xres_virtual
+ - cyblafb does not allow vclk to exceed 230 MHz. As 32 bpp
+ and (currently) 24 bit modes use a doubled vclk internally,
+ the dotclock limit as seen by fbset is 115 MHz for those
+ modes and 230 MHz for 8 and 16 bpp modes.
+
+ Any request that violates the rules given above will be ignored and
+ fbset will return an error.
+
+ If you program a virtual y resolution higher than the hardware limit,
+ cyblafb will silently decrease that value to the highest possible
+ value.
+
+ Attempts to disable acceleration are ignored.
+
+ Some video modes that should work do not work as expected. If you use
+ the standard fb.modes, fbset 640x480-60 will program that mode, but
+ you will see a vertical area, about two characters wide, with only
+        you will see a vertical area, about two characters wide, in which
+        the characters are much darker than the rest of the screen.
+ official specifications. It would need a lot of code to reliably sort
+ out all invalid modes, playing around with the margin values will
+        out all invalid modes; playing around with the margin values will
+        give a valid mode quickly. And if cyblafb did detect such an invalid
+ report an error? Both options have some pros and cons. As stated
+ above, none of the startup modes are affected, and if you set
+ verbosity to 1 or higher, cyblafb will print the fbset command that
+ would be needed to program that mode using fbset.
+
+
+Other Parameters
+================
+
+
+crt don't autodetect, assume monitor connected to
+ standard VGA connector
+
+fp don't autodetect, assume flat panel display
+ connected to flat panel monitor interface
+
+nativex inform driver about native x resolution of
+ flat panel monitor connected to special
+ interface (should be autodetected)
+
+stretch stretch image to adapt low resolution modes to
+                higher resolutions of flat panel monitors
+ connected to special interface
+
+center center image to adapt low resolution modes to
+                higher resolutions of flat panel monitors
+ connected to special interface
+
+memsize use if autodetected memsize is wrong ...
+ should never be necessary
+
+nopcirr disable PCI read retry
+nopciwr disable PCI write retry
+nopcirb disable PCI read bursts
+nopciwb disable PCI write bursts
+
+bpp bpp for specified modes
+ valid values: 8 || 16 || 24 || 32
+
+ref refresh rate for specified mode
+ valid values: 50 <= ref <= 85
+
+mode 640x480 or 800x600 or 1024x768 or 1280x1024
+ if not specified, the startup mode will be detected
+ and used, so you might also use the vga=??? parameter
+ described in vesafb.txt. If you do not specify a mode,
+ bpp and ref parameters are ignored.
+
+verbosity 0 is the default, increase to at least 2 for every
+ bug report!
+
+vesafb allows cyblafb to be loaded after vesafb has been
+ loaded. See sections "Module unloading ...".
+
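+Example: to load cyblafb as a module at 1024x768, 16 bpp and a 70 Hz
+refresh rate, with messages verbose enough for a bug report, a combination
+like the following should work (the values are illustrative only):
+
+    modprobe cyblafb mode=1024x768 bpp=16 ref=70 verbosity=2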
+
+Development hints
+=================
+
+It's much faster to compile a module and load the new version after
+unloading the old module than to compile a new kernel and reboot. So if you
+try to work on cyblafb, it might be a good idea to use cyblafb as a module.
+In real life, fast often means dangerous, and that's also the case here. If
+you introduce a serious bug when cyblafb is compiled into the kernel, the
+kernel will lock or oops with a high probability before the file system is
+mounted, and the danger for your data is low. If you load your own broken
+version of cyblafb on a running system, the danger to the integrity of the
+file system is much higher, as you might need a hard reset afterwards. Decide
+yourself.
+
+Module unloading, the vfb method
+================================
+
+If you want to unload/reload cyblafb using the virtual framebuffer, you need
+to enable vfb support in the kernel first. After that, load the modules as
+shown below:
+
+ modprobe vfb vfb_enable=1
+ modprobe fbcon
+ modprobe cyblafb
+ fbset -fb /dev/fb1 1280x1024-60 -vyres 2662
+ con2fb /dev/fb1 /dev/tty1
+ ...
+
+If you have made some changes to cyblafb and want to reload it, you might do
+it as shown below:
+
+ con2fb /dev/fb0 /dev/tty1
+ ...
+ rmmod cyblafb
+ modprobe cyblafb
+ con2fb /dev/fb1 /dev/tty1
+ ...
+
+Of course, you might choose another mode, and most certainly you also want to
+map some other /dev/tty* to the real framebuffer device. You might also choose
+to compile fbcon as a kernel module or place it permanently in the kernel.
+
+I do not know of any way to unload fbcon, and fbcon will prevent the
+framebuffer device loaded first from unloading. [If there is a way, then
+please add a description here!]
+
+Module unloading, the vesafb method
+===================================
+
+Configure the kernel:
+
+ <*> Support for frame buffer devices
+ [*] VESA VGA graphics support
+ <M> Cyberblade/i1 support
+
+Add e.g. "video=vesafb:ypan vga=0x307" to the kernel parameters. The ypan
+parameter is important, choose any vga parameter you like as long as it is
+a graphics mode.
+
+After booting, load cyblafb without any mode and bpp parameter and assign
+cyblafb to individual ttys using con2fb, e.g.:
+
+ modprobe cyblafb vesafb=1
+ con2fb /dev/fb1 /dev/tty1
+
+Unloading cyblafb works without problems after you assign vesafb to all
+ttys again, e.g.:
+
+ con2fb /dev/fb0 /dev/tty1
+ rmmod cyblafb
+
diff --git a/Documentation/fb/cyblafb/whycyblafb b/Documentation/fb/cyblafb/whycyblafb
new file mode 100644
index 000000000000..a123bc11e698
--- /dev/null
+++ b/Documentation/fb/cyblafb/whycyblafb
@@ -0,0 +1,85 @@
+I tried the following framebuffer drivers:
+
+ - TRIDENTFB is full of bugs. Acceleration is broken for Blade3D
+ graphics cores like the cyberblade/i1. It claims to support a great
+ number of devices, but documentation for most of these devices is
+ unfortunately not available. There is _no_ reason to use tridentfb
+ for cyberblade/i1 + CRT users. VESAFB is faster, and the one
+ advantage, mode switching, is broken in tridentfb.
+
+ - VESAFB is used by many distributions as a standard. Vesafb does
+ not support mode switching. VESAFB is a bit faster than the working
+ configurations of TRIDENTFB, but it is still too slow, even if you
+ use ypan.
+
+ - EPIAFB (you'll find it on sourceforge) supports the Cyberblade/i1
+   graphics core, but it still has serious bugs and development seems
+ to have stopped. This is the one driver with TV-out support. If you
+ do need this feature, try epiafb.
+
+None of these drivers was a real option for me.
+
+I believe that it is unreasonable to change code that claims to support 20
+devices when I have more or less sufficient documentation for exactly one
+of them. The risk of breaking device foo while fixing device bar is too high.
+
+So I decided to start CyBlaFB as a stripped down tridentfb.
+
+All code specific to other Trident chips has been removed. After that there
+were a lot of cosmetic changes to increase the readability of the code. All
+register names were changed to those mnemonics used in the datasheet. Function
+and macro names were changed if they hindered easy understanding of the code.
+
+After that I debugged the code and implemented some new features. I'll try to
+give a little summary of the main changes:
+
+ - calculation of vertical and horizontal timings was fixed
+
+ - video signal quality has been improved dramatically
+
+ - acceleration:
+
+ - fillrect and copyarea were fixed and reenabled
+
+ - color expanding imageblit was newly implemented, color
+          imageblit (only used to draw the penguin) still uses the
+ generic code.
+
+ - init of the acceleration engine was improved and moved to a
+ place where it really works ...
+
+ - sync function has a timeout now and tries to reset and
+ reinit the accel engine if necessary
+
+ - fewer slow copyarea calls when doing ypan scrolling by using
+ undocumented bit d21 of screen start address stored in
+ CR2B[5]. BIOS does use it also, so this should be safe.
+
+ - cyblafb rejects any attempt to set modes that would cause vclk
+ values above reasonable 230 MHz. 32bit modes use a clock
+ multiplicator of 2, so fbset does show the correct values for
+ pixclock but not for vclk in this case. The fbset limit is 115 MHz
+ for 32 bpp modes.
+
+ - cyblafb rejects modes known to be broken or unimplemented (all
+ interlaced modes, all doublescan modes for now)
+
+ - cyblafb now works independently of the video mode in effect at startup
+ time (tridentfb does not init all needed registers to reasonable
+ values)
+
+ - switching between video modes does work reliably now
+
+ - the first video mode now is the one selected on startup using the
+ vga=???? mechanism or any of
+ - 640x480, 800x600, 1024x768, 1280x1024
+ - 8, 16, 24 or 32 bpp
+ - refresh between 50 Hz and 85 Hz, 1 Hz steps (1280x1024-32
+ is limited to 63Hz)
+
+ - pci retry and pci burst mode are settable (try to disable if you
+ experience latency problems)
+
+ - built as a module, cyblafb might be unloaded and reloaded using
+   the vfb module and con2fb, or might be used together with vesafb
+
diff --git a/Documentation/fb/intel810.txt b/Documentation/fb/intel810.txt
index fd68b162e4a1..4f0d6bc789ef 100644
--- a/Documentation/fb/intel810.txt
+++ b/Documentation/fb/intel810.txt
@@ -5,6 +5,7 @@ Intel 810/815 Framebuffer driver
March 17, 2002
First Released: July 2001
+ Last Update: September 12, 2005
================================================================
A. Introduction
@@ -44,6 +45,8 @@ B. Features
- Hardware Cursor Support
+ - Supports EDID probing either by DDC/I2C or through the BIOS
+
C. List of available options
a. "video=i810fb"
@@ -52,14 +55,17 @@ C. List of available options
Recommendation: required
b. "xres:<value>"
- select horizontal resolution in pixels
+ select horizontal resolution in pixels. (This parameter will be
+ ignored if 'mode_option' is specified. See 'o' below).
Recommendation: user preference
(default = 640)
c. "yres:<value>"
select vertical resolution in scanlines. If Discrete Video Timings
- is enabled, this will be ignored and computed as 3*xres/4.
+ is enabled, this will be ignored and computed as 3*xres/4. (This
+ parameter will be ignored if 'mode_option' is specified. See 'o'
+ below)
Recommendation: user preference
(default = 480)
@@ -86,7 +92,8 @@ C. List of available options
g. "hsync1/hsync2:<value>"
select the minimum and maximum Horizontal Sync Frequency of the
monitor in KHz. If a using a fixed frequency monitor, hsync1 must
- be equal to hsync2.
+ be equal to hsync2. If EDID probing is successful, these will be
+ ignored and values will be taken from the EDID block.
Recommendation: check monitor manual for correct values
default (29/30)
@@ -94,7 +101,8 @@ C. List of available options
h. "vsync1/vsync2:<value>"
select the minimum and maximum Vertical Sync Frequency of the monitor
in Hz. You can also use this option to lock your monitor's refresh
- rate.
+ rate. If EDID probing is successful, these will be ignored and values
+ will be taken from the EDID block.
Recommendation: check monitor manual for correct values
(default = 60/60)
@@ -154,7 +162,11 @@ C. List of available options
Recommendation: do not set
(default = not set)
-
+ o. <xres>x<yres>[-<bpp>][@<refresh>]
+ The driver will now accept specification of boot mode option. If this
+ is specified, the options 'xres' and 'yres' will be ignored. See
+ Documentation/fb/modedb.txt for usage.
+
D. Kernel booting
Separate each option/option-pair by commas (,) and the option from its value
@@ -176,7 +188,10 @@ will be computed based on the hsync1/hsync2 and vsync1/vsync2 values.
IMPORTANT:
You must include hsync1, hsync2, vsync1 and vsync2 to enable video modes
-better than 640x480 at 60Hz.
+better than 640x480 at 60Hz. HOWEVER, if your chipset/display combination
+supports I2C and has an EDID block, you can safely exclude hsync1, hsync2,
+vsync1 and vsync2 parameters. These parameters will be taken from the EDID
+block.
E. Module options
@@ -217,32 +232,21 @@ F. Setup
This is required. The option is under "Character Devices"
d. Under "Graphics Support", select "Intel 810/815" either statically
- or as a module. Choose "use VESA GTF for video timings" if you
- need to maximize the capability of your display. To be on the
+ or as a module. Choose "use VESA Generalized Timing Formula" if
+ you need to maximize the capability of your display. To be on the
safe side, you can leave this unselected.
- e. If you want a framebuffer console, enable it under "Console
+ e. If you want support for DDC/I2C probing (Plug and Play Displays),
+ set 'Enable DDC Support' to 'y'. To make this option appear, set
+ 'use VESA Generalized Timing Formula' to 'y'.
+
+ f. If you want a framebuffer console, enable it under "Console
Drivers"
- f. Compile your kernel.
+ g. Compile your kernel.
- g. Load the driver as described in section D and E.
+ h. Load the driver as described in section D and E.
- Optional:
- h. If you are going to run XFree86 with its native drivers, the
- standard XFree86 4.1.0 and 4.2.0 drivers should work as is.
- However, there's a bug in the XFree86 i810 drivers. It attempts
- to use XAA even when switched to the console. This will crash
- your server. I have a fix at this site:
-
- http://i810fb.sourceforge.net.
-
- You can either use the patch, or just replace
-
- /usr/X11R6/lib/modules/drivers/i810_drv.o
-
- with the one provided at the website.
-
i. Try the DirectFB (http://www.directfb.org) + the i810 gfxdriver
patch to see the chipset in action (or inaction :-).
diff --git a/Documentation/fb/modedb.txt b/Documentation/fb/modedb.txt
index e04458b319d5..4fcdb4cf4cca 100644
--- a/Documentation/fb/modedb.txt
+++ b/Documentation/fb/modedb.txt
@@ -20,12 +20,83 @@ in a video= option, fbmem considers that to be a global video mode option.
Valid mode specifiers (mode_option argument):
- <xres>x<yres>[-<bpp>][@<refresh>]
+ <xres>x<yres>[M][R][-<bpp>][@<refresh>][i][m]
<name>[-<bpp>][@<refresh>]
with <xres>, <yres>, <bpp> and <refresh> decimal numbers and <name> a string.
Things between square brackets are optional.
+If 'M' is specified in the mode_option argument (after <yres> and before
+<bpp> and <refresh>, if specified) the timings will be calculated using
+VESA(TM) Coordinated Video Timings instead of looking up the mode from a table.
+If 'R' is specified, do a 'reduced blanking' calculation for digital displays.
+If 'i' is specified, calculate for an interlaced mode. And if 'm' is
+specified, add margins to the calculation (1.8% of xres rounded down to 8
+pixels and 1.8% of yres).
+
+ Sample usage: 1024x768M@60m - CVT timing with margins
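+    Sample usage: 1280x1024MR-8@60 - CVT timing with reduced blanking
+                  (illustrative; reduced blanking requires a 60 Hz refresh)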
+
+***** oOo ***** oOo ***** oOo ***** oOo ***** oOo ***** oOo ***** oOo *****
+
+What is the VESA(TM) Coordinated Video Timings (CVT)?
+
+From the VESA(TM) Website:
+
+ "The purpose of CVT is to provide a method for generating a consistent
+ and coordinated set of standard formats, display refresh rates, and
+ timing specifications for computer display products, both those
+ employing CRTs, and those using other display technologies. The
+ intention of CVT is to give both source and display manufacturers a
+ common set of tools to enable new timings to be developed in a
+ consistent manner that ensures greater compatibility."
+
+This is the third standard approved by VESA(TM) concerning video timings. The
+first was the Discrete Video Timings (DVT) which is a collection of
+pre-defined modes approved by VESA(TM). The second is the Generalized Timing
+Formula (GTF) which is an algorithm to calculate the timings, given the
+pixelclock, the horizontal sync frequency, or the vertical refresh rate.
+
+The GTF is limited by the fact that it is designed mainly for CRT displays.
+It artificially increases the pixelclock because of its high blanking
+requirement. This is inappropriate for digital display interface with its high
+data rate which requires that it conserves the pixelclock as much as possible.
+Also, GTF does not take into account the aspect ratio of the display.
+
+The CVT addresses these limitations. If used with CRTs, the formula used
+is a derivation of GTF with a few modifications. If used with digital
+displays, the "reduced blanking" calculation can be used.
+
+From the framebuffer subsystem perspective, new formats need not be added
+to the global mode database whenever a new mode is released by display
+manufacturers. Specifying for CVT will work for most, if not all, relatively
+new CRT displays and probably with most flatpanels, if 'reduced blanking'
+calculation is specified. (The CVT compatibility of the display can be
+determined from its EDID. Version 1.3 of the EDID has extra 128-byte
+blocks where additional timing information is placed. As of this time, there
+is no support yet in the layer to parse these additional blocks.)
+
+CVT also introduced a new naming convention (should be seen from dmesg output):
+
+ <pix>M<a>[-R]
+
+ where: pix = total amount of pixels in MB (xres x yres)
+ M = always present
+ a = aspect ratio (3 - 4:3; 4 - 5:4; 9 - 15:9, 16:9; A - 16:10)
+ -R = reduced blanking
+
+ example: .48M3-R - 800x600 with reduced blanking
+
+Note: VESA(TM) has restrictions on what is a standard CVT timing:
+
+ - aspect ratio can only be one of the above values
+ - acceptable refresh rates are 50, 60, 70 or 85 Hz only
+ - if reduced blanking, the refresh rate must be at 60Hz
+
+If one of the above is not satisfied, the kernel will print a warning but the
+timings will still be calculated.
+
+***** oOo ***** oOo ***** oOo ***** oOo ***** oOo ***** oOo ***** oOo *****
+
To find a suitable video mode, you just call
int __init fb_find_mode(struct fb_var_screeninfo *var,
diff --git a/Documentation/feature-removal-schedule.txt b/Documentation/feature-removal-schedule.txt
index 5f95d4b3cab1..784e08c1c80a 100644
--- a/Documentation/feature-removal-schedule.txt
+++ b/Documentation/feature-removal-schedule.txt
@@ -17,14 +17,6 @@ Who: Greg Kroah-Hartman <greg@kroah.com>
---------------------------
-What: ACPI S4bios support
-When: May 2005
-Why: Noone uses it, and it probably does not work, anyway. swsusp is
- faster, more reliable, and people are actually using it.
-Who: Pavel Machek <pavel@suse.cz>
-
----------------------------
-
What: io_remap_page_range() (macro or function)
When: September 2005
Why: Replaced by io_remap_pfn_range() which allows more memory space
diff --git a/Documentation/filesystems/files.txt b/Documentation/filesystems/files.txt
new file mode 100644
index 000000000000..8c206f4e0250
--- /dev/null
+++ b/Documentation/filesystems/files.txt
@@ -0,0 +1,123 @@
+File management in the Linux kernel
+-----------------------------------
+
+This document describes how locking for files (struct file)
+and file descriptor table (struct files) works.
+
+Up until 2.6.12, the file descriptor table was protected
+with a lock (files->file_lock) and a reference count (files->count).
+->file_lock protected accesses to all the file-related fields
+of the table. ->count was used for sharing the file descriptor
+table between tasks cloned with the CLONE_FILES flag. Typically
+this would be the case for POSIX threads. As with the common
+refcounting model in the kernel, the last task doing
+a put_files_struct() frees the file descriptor (fd) table.
+The files (struct file) themselves are protected using
+a reference count (->f_count).
+
+In the new lock-free model of file descriptor management,
+the reference counting is similar, but the locking is
+based on RCU. The file descriptor table contains multiple
+elements - the fd sets (open_fds and close_on_exec), the
+array of file pointers, the sizes of the sets and the array,
+etc. In order for the updates to appear atomic to
+a lock-free reader, all the elements of the file descriptor
+table are in a separate structure - struct fdtable.
+files_struct contains a pointer to struct fdtable through
+which the actual fd table is accessed. Initially the
+fdtable is embedded in files_struct itself. On a subsequent
+expansion of fdtable, a new fdtable structure is allocated
+and files->fdtab points to the new structure. The fdtable
+structure is freed with RCU and lock-free readers either
+see the old fdtable or the new fdtable making the update
+appear atomic. Here are the locking rules for
+the fdtable structure -
+
+1. All references to the fdtable must be done through
+ the files_fdtable() macro :
+
+ struct fdtable *fdt;
+
+ rcu_read_lock();
+
+ fdt = files_fdtable(files);
+ ....
+ if (n <= fdt->max_fds)
+ ....
+ ...
+ rcu_read_unlock();
+
+ files_fdtable() uses rcu_dereference() macro which takes care of
+ the memory barrier requirements for lock-free dereference.
+ The fdtable pointer must be read within the read-side
+ critical section.
+
+2. Reading of the fdtable as described above must be protected
+ by rcu_read_lock()/rcu_read_unlock().
+
+3. For any update to the fd table, files->file_lock must
+ be held.
+
+4. To look up the file structure given an fd, a reader
+ must use either fcheck() or fcheck_files() APIs. These
+ take care of barrier requirements due to lock-free lookup.
+ An example :
+
+ struct file *file;
+
+ rcu_read_lock();
+ file = fcheck(fd);
+ if (file) {
+ ...
+ }
+ ....
+ rcu_read_unlock();
+
+5. Handling of the file structures is special. Since fd look-up
+   (fget()/fget_light()) is lock-free, it is possible
+   that the look-up may race with the last put() operation on the
+ file structure. This is avoided using the rcuref APIs
+ on ->f_count :
+
+ rcu_read_lock();
+ file = fcheck_files(files, fd);
+ if (file) {
+ if (rcuref_inc_lf(&file->f_count))
+ *fput_needed = 1;
+ else
+ /* Didn't get the reference, someone's freed */
+ file = NULL;
+ }
+ rcu_read_unlock();
+ ....
+ return file;
+
+   rcuref_inc_lf() detects if the refcount is already zero or
+   goes to zero during the increment. If it does, we fail
+ fget()/fget_light().
+
+6. Since both fdtable and file structures can be looked up
+ lock-free, they must be installed using rcu_assign_pointer()
+ API. If they are looked up lock-free, rcu_dereference()
+ must be used. However it is advisable to use files_fdtable()
+ and fcheck()/fcheck_files() which take care of these issues.
+
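+   As an illustration, installing a file pointer so that lock-free readers
+   always see a fully initialized object looks roughly like what
+   fd_install() does (a sketch, not a copy of the kernel code):
+
+       struct fdtable *fdt;
+
+       spin_lock(&files->file_lock);
+       fdt = files_fdtable(files);
+       /* publish the file for lock-free readers */
+       rcu_assign_pointer(fdt->fd[fd], file);
+       spin_unlock(&files->file_lock);
+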
+7. While updating, the fdtable pointer must be looked up while
+ holding files->file_lock. If ->file_lock is dropped, then
+   another thread may expand the fd table, thereby creating a new
+   fdtable and making the earlier fdtable pointer stale.
+ For example :
+
+ spin_lock(&files->file_lock);
+ fd = locate_fd(files, file, start);
+ if (fd >= 0) {
+ /* locate_fd() may have expanded fdtable, load the ptr */
+ fdt = files_fdtable(files);
+ FD_SET(fd, fdt->open_fds);
+ FD_CLR(fd, fdt->close_on_exec);
+ spin_unlock(&files->file_lock);
+ .....
+
+ Since locate_fd() can drop ->file_lock (and reacquire ->file_lock),
+ the fdtable pointer (fdt) must be loaded after locate_fd().
+
diff --git a/Documentation/filesystems/fuse.txt b/Documentation/filesystems/fuse.txt
new file mode 100644
index 000000000000..6b5741e651a2
--- /dev/null
+++ b/Documentation/filesystems/fuse.txt
@@ -0,0 +1,315 @@
+Definitions
+~~~~~~~~~~~
+
+Userspace filesystem:
+
+ A filesystem in which data and metadata are provided by an ordinary
+ userspace process. The filesystem can be accessed normally through
+ the kernel interface.
+
+Filesystem daemon:
+
+ The process(es) providing the data and metadata of the filesystem.
+
+Non-privileged mount (or user mount):
+
+ A userspace filesystem mounted by a non-privileged (non-root) user.
+ The filesystem daemon is running with the privileges of the mounting
+ user. NOTE: this is not the same as mounts allowed with the "user"
+ option in /etc/fstab, which is not discussed here.
+
+Mount owner:
+
+ The user who does the mounting.
+
+User:
+
+ The user who is performing filesystem operations.
+
+What is FUSE?
+~~~~~~~~~~~~~
+
+FUSE is a userspace filesystem framework. It consists of a kernel
+module (fuse.ko), a userspace library (libfuse.*) and a mount utility
+(fusermount).
+
+One of the most important features of FUSE is allowing secure,
+non-privileged mounts. This opens up new possibilities for the use of
+filesystems. A good example is sshfs: a secure network filesystem
+using the sftp protocol.
+
+The userspace library and utilities are available from the FUSE
+homepage:
+
+ http://fuse.sourceforge.net/
+
+Mount options
+~~~~~~~~~~~~~
+
+'fd=N'
+
+ The file descriptor to use for communication between the userspace
+ filesystem and the kernel. The file descriptor must have been
+ obtained by opening the FUSE device ('/dev/fuse').
+
+'rootmode=M'
+
+ The file mode of the filesystem's root in octal representation.
+
+'user_id=N'
+
+ The numeric user id of the mount owner.
+
+'group_id=N'
+
+ The numeric group id of the mount owner.
+
+'default_permissions'
+
+  By default FUSE doesn't check file access permissions; the
+  filesystem is free to implement its access policy or leave it to
+  the underlying file access mechanism (e.g. in case of network
+  filesystems). This option enables permission checking, restricting
+  access based on file mode. This option is usually useful
+  together with the 'allow_other' mount option.
+
+'allow_other'
+
+ This option overrides the security measure restricting file access
+ to the user mounting the filesystem. This option is by default only
+ allowed to root, but this restriction can be removed with a
+ (userspace) configuration option.
+
+'max_read=N'
+
+ With this option the maximum size of read operations can be set.
+ The default is infinite. Note that the size of read requests is
+ limited anyway to 32 pages (which is 128kbyte on i386).
+
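+As an example, the option string that fusermount passes to the mount(2)
+system call might look similar to the following (the values are
+illustrative only):
+
+  fd=4,rootmode=40755,user_id=1000,group_id=1000
+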
+How do non-privileged mounts work?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Since the mount() system call is a privileged operation, a helper
+program (fusermount) is needed, which is installed setuid root.
+
+The implication of providing non-privileged mounts is that the mount
+owner must not be able to use this capability to compromise the
+system. Obvious requirements arising from this are:
+
+ A) mount owner should not be able to get elevated privileges with the
+ help of the mounted filesystem
+
+ B) mount owner should not get illegitimate access to information from
+ other users' and the super user's processes
+
+ C) mount owner should not be able to induce undesired behavior in
+ other users' or the super user's processes
+
+How are requirements fulfilled?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ A) The mount owner could gain elevated privileges by either:
+
+ 1) creating a filesystem containing a device file, then opening
+ this device
+
+ 2) creating a filesystem containing a suid or sgid application,
+ then executing this application
+
+     The solution is to disallow opening device files and to ignore
+     setuid and setgid bits when executing programs. To ensure this,
+ fusermount always adds "nosuid" and "nodev" to the mount options
+ for non-privileged mounts.
+
+ B) If another user is accessing files or directories in the
+ filesystem, the filesystem daemon serving requests can record the
+ exact sequence and timing of operations performed. This
+ information is otherwise inaccessible to the mount owner, so this
+ counts as an information leak.
+
+ The solution to this problem will be presented in point 2) of C).
+
+ C) There are several ways in which the mount owner can induce
+ undesired behavior in other users' processes, such as:
+
+    1) mounting a filesystem over a file or directory which the mount
+       owner would otherwise not be able to modify (or could only
+       make limited modifications to).
+
+ This is solved in fusermount, by checking the access
+ permissions on the mountpoint and only allowing the mount if
+ the mount owner can do unlimited modification (has write
+ access to the mountpoint, and mountpoint is not a "sticky"
+ directory)
+
+ 2) Even if 1) is solved the mount owner can change the behavior
+ of other users' processes.
+
+     i) It can slow down or indefinitely delay the execution of a
+        filesystem operation, creating a DoS against the user or the
+        whole system. For example, a suid application locking a
+        system file and then accessing a file on the mount owner's
+        filesystem could be stopped, causing the system file to be
+        locked forever.
+
+ ii) It can present files or directories of unlimited length, or
+ directory structures of unlimited depth, possibly causing a
+ system process to eat up diskspace, memory or other
+ resources, again causing DoS.
+
+        The solution to this, as well as to B), is to deny access to
+        the filesystem to processes that the mount owner could not
+        otherwise monitor or manipulate. Since if the
+ mount owner can ptrace a process, it can do all of the above
+ without using a FUSE mount, the same criteria as used in
+ ptrace can be used to check if a process is allowed to access
+ the filesystem or not.
+
+        Note that the ptrace check is not strictly necessary to
+        prevent B/2/i; it is enough to check if the mount owner has
+        enough privilege to send a signal to the process accessing the
+        filesystem, since SIGSTOP can be used to get a similar effect.
+
+I think these limitations are unacceptable?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If a sysadmin trusts the users enough, or can ensure through other
+measures, that system processes will never enter non-privileged
+mounts, they can relax the last limitation with a "user_allow_other"
+config option. If this config option is set, the mounting user can
+add the "allow_other" mount option which disables the check for other
+users' processes.
+
+Kernel - userspace interface
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The following diagram shows how a filesystem operation (in this
+example unlink) is performed in FUSE.
+
+NOTE: everything in this description is greatly simplified
+
+ | "rm /mnt/fuse/file" | FUSE filesystem daemon
+ | |
+ | | >sys_read()
+ | | >fuse_dev_read()
+ | | >request_wait()
+ | | [sleep on fc->waitq]
+ | |
+ | >sys_unlink() |
+ | >fuse_unlink() |
+ | [get request from |
+ | fc->unused_list] |
+ | >request_send() |
+ | [queue req on fc->pending] |
+ | [wake up fc->waitq] | [woken up]
+ | >request_wait_answer() |
+ | [sleep on req->waitq] |
+ | | <request_wait()
+ | | [remove req from fc->pending]
+ | | [copy req to read buffer]
+ | | [add req to fc->processing]
+ | | <fuse_dev_read()
+ | | <sys_read()
+ | |
+ | | [perform unlink]
+ | |
+ | | >sys_write()
+ | | >fuse_dev_write()
+ | | [look up req in fc->processing]
+ | | [remove from fc->processing]
+ | | [copy write buffer to req]
+ | [woken up] | [wake up req->waitq]
+ | | <fuse_dev_write()
+ | | <sys_write()
+ | <request_wait_answer() |
+ | <request_send() |
+ | [add request to |
+ | fc->unused_list] |
+ | <fuse_unlink() |
+ | <sys_unlink() |
+
+There are a couple of ways in which to deadlock a FUSE filesystem.
+Since we are talking about unprivileged userspace programs,
+something must be done about these.
+
+Scenario 1 - Simple deadlock
+-----------------------------
+
+ | "rm /mnt/fuse/file" | FUSE filesystem daemon
+ | |
+ | >sys_unlink("/mnt/fuse/file") |
+ | [acquire inode semaphore |
+ | for "file"] |
+ | >fuse_unlink() |
+ | [sleep on req->waitq] |
+ | | <sys_read()
+ | | >sys_unlink("/mnt/fuse/file")
+ | | [acquire inode semaphore
+ | | for "file"]
+ | | *DEADLOCK*
+
+The solution for this is to allow requests to be interrupted while
+they are in userspace:
+
+ | [interrupted by signal] |
+ | <fuse_unlink() |
+ | [release semaphore] | [semaphore acquired]
+ | <sys_unlink() |
+ | | >fuse_unlink()
+ | | [queue req on fc->pending]
+ | | [wake up fc->waitq]
+ | | [sleep on req->waitq]
+
+If the filesystem daemon is single threaded, this will stop here,
+since there's no other thread to dequeue and execute the request.
+In this case the solution is to kill the FUSE daemon as well. If
+there are multiple serving threads, you just have to keep killing
+them as long as any remain.
+
+Moral: a filesystem which deadlocks can soon find itself dead.
+
+Scenario 2 - Tricky deadlock
+----------------------------
+
+This one needs a carefully crafted filesystem. It's a variation on
+the above, only the call back to the filesystem is not explicit,
+but is caused by a pagefault.
+
+ | Kamikaze filesystem thread 1 | Kamikaze filesystem thread 2
+ | |
+ | [fd = open("/mnt/fuse/file")] | [request served normally]
+ | [mmap fd to 'addr'] |
+ | [close fd] | [FLUSH triggers 'magic' flag]
+ | [read a byte from addr] |
+ | >do_page_fault() |
+ | [find or create page] |
+ | [lock page] |
+ | >fuse_readpage() |
+ | [queue READ request] |
+ | [sleep on req->waitq] |
+ | | [read request to buffer]
+ | | [create reply header before addr]
+ | | >sys_write(addr - headerlength)
+ | | >fuse_dev_write()
+ | | [look up req in fc->processing]
+ | | [remove from fc->processing]
+ | | [copy write buffer to req]
+ | | >do_page_fault()
+ | | [find or create page]
+ | | [lock page]
+ | | * DEADLOCK *
+
+The solution is again to let the request be interrupted (not
+elaborated further).
+
+An additional problem is that while the write buffer is being
+copied to the request, the request must not be interrupted. This
+is because the destination address of the copy may not be valid
+after the request is interrupted.
+
+This is solved by doing the copy atomically and allowing
+interruption while the page(s) belonging to the write buffer are
+faulted in with get_user_pages(). The 'req->locked' flag indicates
+when the copy is taking place, and interruption is delayed until
+this flag is unset.
+
diff --git a/Documentation/filesystems/proc.txt b/Documentation/filesystems/proc.txt
index 5024ba7a592c..d4773565ea2f 100644
--- a/Documentation/filesystems/proc.txt
+++ b/Documentation/filesystems/proc.txt
@@ -1241,16 +1241,38 @@ swap-intensive.
overcommit_memory
-----------------
-This file contains one value. The following algorithm is used to decide if
-there's enough memory: if the value of overcommit_memory is positive, then
-there's always enough memory. This is a useful feature, since programs often
-malloc() huge amounts of memory 'just in case', while they only use a small
-part of it. Leaving this value at 0 will lead to the failure of such a huge
-malloc(), when in fact the system has enough memory for the program to run.
-
-On the other hand, enabling this feature can cause you to run out of memory
-and thrash the system to death, so large and/or important servers will want to
-set this value to 0.
+Controls overcommit of system memory, possibly allowing processes
+to allocate (but not use) more memory than is actually available.
+
+
+0 - Heuristic overcommit handling. Obvious overcommits of
+ address space are refused. Used for a typical system. It
+ ensures a seriously wild allocation fails while allowing
+ overcommit to reduce swap usage. root is allowed to
+            allocate slightly more memory in this mode. This is the
+ default.
+
+1 - Always overcommit. Appropriate for some scientific
+ applications.
+
+2 - Don't overcommit. The total address space commit
+ for the system is not permitted to exceed swap plus a
+ configurable percentage (default is 50) of physical RAM.
+ Depending on the percentage you use, in most situations
+ this means a process will not be killed while attempting
+ to use already-allocated memory but will receive errors
+ on memory allocation as appropriate.
+
+overcommit_ratio
+----------------
+
+Percentage of physical memory size to include in overcommit calculations
+(see above.)
+
+Memory allocation limit = swapspace + physmem * (overcommit_ratio / 100)
+
+ swapspace = total size of all swap areas
+ physmem = size of physical memory in system
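+
+For example, with 1 GB of swap, 4 GB of physical memory and the default
+overcommit_ratio of 50, the commit limit in mode 2 would be
+1 GB + 4 GB * 50/100 = 3 GB.
+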
nr_hugepages and hugetlb_shm_group
----------------------------------
diff --git a/Documentation/filesystems/v9fs.txt b/Documentation/filesystems/v9fs.txt
new file mode 100644
index 000000000000..4e92feb6b507
--- /dev/null
+++ b/Documentation/filesystems/v9fs.txt
@@ -0,0 +1,95 @@
+ V9FS: 9P2000 for Linux
+ ======================
+
+ABOUT
+=====
+
+v9fs is a Unix implementation of the Plan 9 9p remote filesystem protocol.
+
+This software was originally developed by Ron Minnich <rminnich@lanl.gov>
+and Maya Gokhale <maya@lanl.gov>. Additional development by Greg Watson
+<gwatson@lanl.gov> and most recently Eric Van Hensbergen
+<ericvh@gmail.com> and Latchesar Ionkov <lucho@ionkov.net>.
+
+USAGE
+=====
+
+For remote file server:
+
+ mount -t 9P 10.10.1.2 /mnt/9
+
+For Plan 9 From User Space applications (http://swtch.com/plan9)
+
+ mount -t 9P `namespace`/acme /mnt/9 -o proto=unix,name=$USER
+
+OPTIONS
+=======
+
+ proto=name select an alternative transport. Valid options are
+ currently:
+ unix - specifying a named pipe mount point
+ tcp - specifying a normal TCP/IP connection
+                  fd   - use passed file descriptors for connection
+ (see rfdno and wfdno)
+
+ name=name user name to attempt mount as on the remote server. The
+ server may override or ignore this value. Certain user
+ names may require authentication.
+
+ aname=name aname specifies the file tree to access when the server is
+ offering several exported file systems.
+
+ debug=n specifies debug level. The debug level is a bitmask.
+ 0x01 = display verbose error messages
+ 0x02 = developer debug (DEBUG_CURRENT)
+ 0x04 = display 9P trace
+ 0x08 = display VFS trace
+ 0x10 = display Marshalling debug
+ 0x20 = display RPC debug
+ 0x40 = display transport debug
+ 0x80 = display allocation debug
+
+ rfdno=n the file descriptor for reading with proto=fd
+
+ wfdno=n the file descriptor for writing with proto=fd
+
+ maxdata=n the number of bytes to use for 9P packet payload (msize)
+
+ port=n port to connect to on the remote server
+
+ timeout=n request timeouts (in ms) (default 60000ms)
+
+ noextend force legacy mode (no 9P2000.u semantics)
+
+ uid attempt to mount as a particular uid
+
+ gid attempt to mount with a particular gid
+
+ afid security channel - used by Plan 9 authentication protocols
+
+ nodevmap do not map special files - represent them as normal files.
+ This can be used to share devices/named pipes/sockets between
+ hosts. This functionality will be expanded in later versions.
+
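+For example, a TCP mount on a non-default port as a particular user might
+look like this (the address, port and user name are illustrative only):
+
+  mount -t 9P 10.10.1.2 /mnt/9 -o proto=tcp,port=5640,name=guest
+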
+RESOURCES
+=========
+
+The Linux version of the 9P server, along with some client-side utilities
+can be found at http://v9fs.sf.net (along with a CVS repository of the
+development branch of this module). There are user and developer mailing
+lists here, as well as a bug-tracker.
+
+For more information on the Plan 9 Operating System check out
+http://plan9.bell-labs.com/plan9
+
+For information on Plan 9 from User Space (Plan 9 applications and libraries
+ported to Linux/BSD/OSX/etc) check out http://swtch.com/plan9
+
+
+STATUS
+======
+
+The 2.6 kernel support is working on PPC and x86.
+
+PLEASE USE THE SOURCEFORGE BUG-TRACKER TO REPORT PROBLEMS.
+
diff --git a/Documentation/filesystems/vfs.txt b/Documentation/filesystems/vfs.txt
index 3f318dd44c77..f042c12e0ed2 100644
--- a/Documentation/filesystems/vfs.txt
+++ b/Documentation/filesystems/vfs.txt
@@ -1,35 +1,27 @@
-/* -*- auto-fill -*- */
- Overview of the Virtual File System
+ Overview of the Linux Virtual File System
- Richard Gooch <rgooch@atnf.csiro.au>
+ Original author: Richard Gooch <rgooch@atnf.csiro.au>
- 5-JUL-1999
+ Last updated on August 25, 2005
+ Copyright (C) 1999 Richard Gooch
+ Copyright (C) 2005 Pekka Enberg
-Conventions used in this document <section>
-=================================
+ This file is released under the GPLv2.
-Each section in this document will have the string "<section>" at the
-right-hand side of the section title. Each subsection will have
-"<subsection>" at the right-hand side. These strings are meant to make
-it easier to search through the document.
-NOTE that the master copy of this document is available online at:
-http://www.atnf.csiro.au/~rgooch/linux/docs/vfs.txt
-
-
-What is it? <section>
+What is it?
===========
The Virtual File System (otherwise known as the Virtual Filesystem
Switch) is the software layer in the kernel that provides the
filesystem interface to userspace programs. It also provides an
abstraction within the kernel which allows different filesystem
-implementations to co-exist.
+implementations to coexist.
-A Quick Look At How It Works <section>
+A Quick Look At How It Works
============================
In this section I'll briefly describe how things work, before
@@ -38,7 +30,8 @@ when user programs open and manipulate files, and then look from the
other view which is how a filesystem is supported and subsequently
mounted.
-Opening a File <subsection>
+
+Opening a File
--------------
The VFS implements the open(2), stat(2), chmod(2) and similar system
@@ -77,7 +70,7 @@ back to userspace.
Opening a file requires another operation: allocation of a file
structure (this is the kernel-side implementation of file
-descriptors). The freshly allocated file structure is initialised with
+descriptors). The freshly allocated file structure is initialized with
a pointer to the dentry and a set of file operation member functions.
These are taken from the inode data. The open() file method is then
called so the specific filesystem implementation can do it's work. You
@@ -102,7 +95,8 @@ filesystem or driver code at the same time, on different
processors. You should ensure that access to shared resources is
protected by appropriate locks.
-Registering and Mounting a Filesystem <subsection>
+
+Registering and Mounting a Filesystem
-------------------------------------
If you want to support a new kind of filesystem in the kernel, all you
@@ -123,17 +117,21 @@ updated to point to the root inode for the new filesystem.
It's now time to look at things in more detail.
-struct file_system_type <section>
+struct file_system_type
=======================
-This describes the filesystem. As of kernel 2.1.99, the following
+This describes the filesystem. As of kernel 2.6.13, the following
members are defined:
struct file_system_type {
const char *name;
int fs_flags;
- struct super_block *(*read_super) (struct super_block *, void *, int);
- struct file_system_type * next;
+ struct super_block *(*get_sb) (struct file_system_type *, int,
+ const char *, void *);
+ void (*kill_sb) (struct super_block *);
+ struct module *owner;
+ struct file_system_type * next;
+ struct list_head fs_supers;
};
name: the name of the filesystem type, such as "ext2", "iso9660",
@@ -141,51 +139,97 @@ struct file_system_type {
fs_flags: various flags (i.e. FS_REQUIRES_DEV, FS_NO_DCACHE, etc.)
- read_super: the method to call when a new instance of this
+ get_sb: the method to call when a new instance of this
filesystem should be mounted
- next: for internal VFS use: you should initialise this to NULL
+ kill_sb: the method to call when an instance of this filesystem
+ should be unmounted
+
+ owner: for internal VFS use: you should initialize this to THIS_MODULE in
+ most cases.
-The read_super() method has the following arguments:
+ next: for internal VFS use: you should initialize this to NULL
+
+The get_sb() method has the following arguments:
struct super_block *sb: the superblock structure. This is partially
- initialised by the VFS and the rest must be initialised by the
- read_super() method
+ initialized by the VFS and the rest must be initialized by the
+ get_sb() method
+
+ int flags: mount flags
+
+ const char *dev_name: the device name we are mounting.
void *data: arbitrary mount options, usually comes as an ASCII
string
int silent: whether or not to be silent on error
-The read_super() method must determine if the block device specified
+The get_sb() method must determine if the block device specified
in the superblock contains a filesystem of the type the method
supports. On success the method returns the superblock pointer, on
failure it returns NULL.
The most interesting member of the superblock structure that the
-read_super() method fills in is the "s_op" field. This is a pointer to
+get_sb() method fills in is the "s_op" field. This is a pointer to
a "struct super_operations" which describes the next level of the
filesystem implementation.
+Usually, a filesystem uses one of the generic get_sb()
+implementations and provides a fill_super() method instead. The
+generic methods are:
+
+ get_sb_bdev: mount a filesystem residing on a block device
-struct super_operations <section>
+ get_sb_nodev: mount a filesystem that is not backed by a device
+
+ get_sb_single: mount a filesystem which shares the instance between
+ all mounts
+
+A fill_super() method implementation has the following arguments:
+
+ struct super_block *sb: the superblock structure. The method fill_super()
+ must initialize this properly.
+
+ void *data: arbitrary mount options, usually comes as an ASCII
+ string
+
+ int silent: whether or not to be silent on error
+
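+A minimal sketch of how these pieces fit together, using invented
+"examplefs" names and the prototypes described above (get_sb_nodev() and
+kill_anon_super() are the generic helpers for a filesystem without a
+backing device):
+
+    static int examplefs_fill_super(struct super_block *sb, void *data,
+                                    int silent)
+    {
+            sb->s_op = &examplefs_super_ops;    /* see the next section */
+            /* allocate the root inode and set sb->s_root here */
+            return 0;
+    }
+
+    static struct super_block *examplefs_get_sb(struct file_system_type *fst,
+            int flags, const char *dev_name, void *data)
+    {
+            return get_sb_nodev(fst, flags, data, examplefs_fill_super);
+    }
+
+    static struct file_system_type examplefs_type = {
+            .owner   = THIS_MODULE,
+            .name    = "examplefs",
+            .get_sb  = examplefs_get_sb,
+            .kill_sb = kill_anon_super,
+    };
+
+The filesystem would then be registered with
+register_filesystem(&examplefs_type).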
+
+struct super_operations
=======================
This describes how the VFS can manipulate the superblock of your
-filesystem. As of kernel 2.1.99, the following members are defined:
+filesystem. As of kernel 2.6.13, the following members are defined:
struct super_operations {
- void (*read_inode) (struct inode *);
- int (*write_inode) (struct inode *, int);
- void (*put_inode) (struct inode *);
- void (*drop_inode) (struct inode *);
- void (*delete_inode) (struct inode *);
- int (*notify_change) (struct dentry *, struct iattr *);
- void (*put_super) (struct super_block *);
- void (*write_super) (struct super_block *);
- int (*statfs) (struct super_block *, struct statfs *, int);
- int (*remount_fs) (struct super_block *, int *, char *);
- void (*clear_inode) (struct inode *);
+ struct inode *(*alloc_inode)(struct super_block *sb);
+ void (*destroy_inode)(struct inode *);
+
+ void (*read_inode) (struct inode *);
+
+ void (*dirty_inode) (struct inode *);
+ int (*write_inode) (struct inode *, int);
+ void (*put_inode) (struct inode *);
+ void (*drop_inode) (struct inode *);
+ void (*delete_inode) (struct inode *);
+ void (*put_super) (struct super_block *);
+ void (*write_super) (struct super_block *);
+ int (*sync_fs)(struct super_block *sb, int wait);
+ void (*write_super_lockfs) (struct super_block *);
+ void (*unlockfs) (struct super_block *);
+ int (*statfs) (struct super_block *, struct kstatfs *);
+ int (*remount_fs) (struct super_block *, int *, char *);
+ void (*clear_inode) (struct inode *);
+ void (*umount_begin) (struct super_block *);
+
+ void (*sync_inodes) (struct super_block *sb,
+ struct writeback_control *wbc);
+ int (*show_options)(struct seq_file *, struct vfsmount *);
+
+ ssize_t (*quota_read)(struct super_block *, int, char *, size_t, loff_t);
+ ssize_t (*quota_write)(struct super_block *, int, const char *, size_t, loff_t);
};
All methods are called without any locks being held, unless otherwise
@@ -193,43 +237,62 @@ noted. This means that most methods can block safely. All methods are
only called from a process context (i.e. not from an interrupt handler
or bottom half).
+  alloc_inode: this method is called by alloc_inode() to allocate memory
+ for struct inode and initialize it.
+
+ destroy_inode: this method is called by destroy_inode() to release
+ resources allocated for struct inode.
+
read_inode: this method is called to read a specific inode from the
- mounted filesystem. The "i_ino" member in the "struct inode"
- will be initialised by the VFS to indicate which inode to
- read. Other members are filled in by this method
+ mounted filesystem. The i_ino member in the struct inode is
+ initialized by the VFS to indicate which inode to read. Other
+ members are filled in by this method.
+
+ You can set this to NULL and use iget5_locked() instead of iget()
+ to read inodes. This is necessary for filesystems for which the
+ inode number is not sufficient to identify an inode.
+
+ dirty_inode: this method is called by the VFS to mark an inode dirty.
write_inode: this method is called when the VFS needs to write an
inode to disc. The second parameter indicates whether the write
should be synchronous or not, not all filesystems check this flag.
put_inode: called when the VFS inode is removed from the inode
- cache. This method is optional
+ cache.
drop_inode: called when the last access to the inode is dropped,
with the inode_lock spinlock held.
- This method should be either NULL (normal unix filesystem
+ This method should be either NULL (normal UNIX filesystem
semantics) or "generic_delete_inode" (for filesystems that do not
want to cache inodes - causing "delete_inode" to always be
called regardless of the value of i_nlink)
- The "generic_delete_inode()" behaviour is equivalent to the
+ The "generic_delete_inode()" behavior is equivalent to the
old practice of using "force_delete" in the put_inode() case,
but does not have the races that the "force_delete()" approach
had.
delete_inode: called when the VFS wants to delete an inode
- notify_change: called when VFS inode attributes are changed. If this
- is NULL the VFS falls back to the write_inode() method. This
- is called with the kernel lock held
-
put_super: called when the VFS wishes to free the superblock
(i.e. unmount). This is called with the superblock lock held
write_super: called when the VFS superblock needs to be written to
disc. This method is optional
+ sync_fs: called when VFS is writing out all dirty data associated with
+ a superblock. The second parameter indicates whether the method
+ should wait until the write out has been completed. Optional.
+
+ write_super_lockfs: called when VFS is locking a filesystem and forcing
+ it into a consistent state. This function is currently used by the
+ Logical Volume Manager (LVM).
+
+ unlockfs: called when VFS is unlocking a filesystem and making it writable
+ again.
+
statfs: called when the VFS needs to get filesystem statistics. This
is called with the kernel lock held
@@ -238,21 +301,31 @@ or bottom half).
clear_inode: called then the VFS clears the inode. Optional
+ umount_begin: called when the VFS is unmounting a filesystem.
+
+ sync_inodes: called when the VFS is writing out dirty data associated with
+ a superblock.
+
+ show_options: called by the VFS to show mount options for /proc/<pid>/mounts.
+
+ quota_read: called by the VFS to read from filesystem quota file.
+
+ quota_write: called by the VFS to write to filesystem quota file.
+
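+Purely as an illustration, a simple filesystem that only needs a handful of
+these methods might set up its table as below; the myfs_* functions are
+hypothetical, and methods that are not needed can simply be left out:
+
+	static struct super_operations myfs_super_ops = {
+		.alloc_inode	= myfs_alloc_inode,
+		.destroy_inode	= myfs_destroy_inode,
+		.read_inode	= myfs_read_inode,
+		.write_inode	= myfs_write_inode,
+		.put_super	= myfs_put_super,
+		.write_super	= myfs_write_super,
+		.statfs		= myfs_statfs,
+		.remount_fs	= myfs_remount,
+		.show_options	= myfs_show_options,
+	};
+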
The read_inode() method is responsible for filling in the "i_op"
field. This is a pointer to a "struct inode_operations" which
describes the methods that can be performed on individual inodes.
-struct inode_operations <section>
+struct inode_operations
=======================
This describes how the VFS can manipulate an inode in your
-filesystem. As of kernel 2.1.99, the following members are defined:
+filesystem. As of kernel 2.6.13, the following members are defined:
struct inode_operations {
- struct file_operations * default_file_ops;
- int (*create) (struct inode *,struct dentry *,int);
- int (*lookup) (struct inode *,struct dentry *);
+ int (*create) (struct inode *,struct dentry *,int, struct nameidata *);
+ struct dentry * (*lookup) (struct inode *,struct dentry *, struct nameidata *);
int (*link) (struct dentry *,struct inode *,struct dentry *);
int (*unlink) (struct inode *,struct dentry *);
int (*symlink) (struct inode *,struct dentry *,const char *);
@@ -261,25 +334,22 @@ struct inode_operations {
int (*mknod) (struct inode *,struct dentry *,int,dev_t);
int (*rename) (struct inode *, struct dentry *,
struct inode *, struct dentry *);
- int (*readlink) (struct dentry *, char *,int);
- struct dentry * (*follow_link) (struct dentry *, struct dentry *);
- int (*readpage) (struct file *, struct page *);
- int (*writepage) (struct page *page, struct writeback_control *wbc);
- int (*bmap) (struct inode *,int);
+ int (*readlink) (struct dentry *, char __user *,int);
+ void * (*follow_link) (struct dentry *, struct nameidata *);
+ void (*put_link) (struct dentry *, struct nameidata *, void *);
void (*truncate) (struct inode *);
- int (*permission) (struct inode *, int);
- int (*smap) (struct inode *,int);
- int (*updatepage) (struct file *, struct page *, const char *,
- unsigned long, unsigned int, int);
- int (*revalidate) (struct dentry *);
+ int (*permission) (struct inode *, int, struct nameidata *);
+ int (*setattr) (struct dentry *, struct iattr *);
+ int (*getattr) (struct vfsmount *mnt, struct dentry *, struct kstat *);
+ int (*setxattr) (struct dentry *, const char *,const void *,size_t,int);
+ ssize_t (*getxattr) (struct dentry *, const char *, void *, size_t);
+ ssize_t (*listxattr) (struct dentry *, char *, size_t);
+ int (*removexattr) (struct dentry *, const char *);
};
Again, all methods are called without any locks being held, unless
otherwise noted.
- default_file_ops: this is a pointer to a "struct file_operations"
- which describes how to open and then manipulate open files
-
create: called by the open(2) and creat(2) system calls. Only
required if you want to support regular files. The dentry you
get should not have an inode (i.e. it should be a negative
@@ -328,31 +398,143 @@ otherwise noted.
you want to support reading symbolic links
follow_link: called by the VFS to follow a symbolic link to the
- inode it points to. Only required if you want to support
- symbolic links
+ inode it points to. Only required if you want to support
+ symbolic links. This function returns a void pointer cookie
+ that is passed to put_link().
+
+ put_link: called by the VFS to release resources allocated by
+ follow_link(). The cookie returned by follow_link() is passed to
+ this function as the last parameter. It is used by filesystems
+ such as NFS where the page cache is not stable (i.e. the page that
+ was installed when the symbolic link walk started might not be in
+ the page cache at the end of the walk).
+
+ truncate: called by the VFS to change the size of a file. The i_size
+ field of the inode is set to the desired size by the VFS before
+ this function is called. This function is called by the truncate(2)
+ system call and related functionality.
+
+ permission: called by the VFS to check for access rights on a POSIX-like
+ filesystem.
+
+ setattr: called by the VFS to set attributes for a file. This function is
+ called by chmod(2) and related system calls.
+
+ getattr: called by the VFS to get attributes of a file. This function is
+ called by stat(2) and related system calls.
+
+ setxattr: called by the VFS to set an extended attribute for a file.
+ An extended attribute is a name:value pair associated with an inode.
+ This function is called by the setxattr(2) system call.
+
+ getxattr: called by the VFS to retrieve the value of an extended attribute
+ name. This function is called by the getxattr(2) system call.
+
+ listxattr: called by the VFS to list all extended attributes for a given
+ file. This function is called by the listxattr(2) system call.
+
+ removexattr: called by the VFS to remove an extended attribute from a file.
+ This function is called by the removexattr(2) system call.
+
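+As a rough example of the setattr method described above, a simple
+filesystem can delegate most of the work to the stock helpers
+inode_change_ok() and inode_setattr(); the myfs_* names are hypothetical:
+
+	static int myfs_setattr(struct dentry *dentry, struct iattr *attr)
+	{
+		struct inode *inode = dentry->d_inode;
+		int error;
+
+		error = inode_change_ok(inode, attr);	/* validate the request */
+		if (error)
+			return error;
+		/* update any on-disk copy of the attributes here */
+		return inode_setattr(inode, attr);	/* apply to the in-core inode */
+	}
+
+	static struct inode_operations myfs_file_inode_operations = {
+		.truncate	= myfs_truncate,
+		.setattr	= myfs_setattr,
+	};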
+
+struct address_space_operations
+===============================
+
+This describes how the VFS can manipulate mapping of a file to page cache in
+your filesystem. As of kernel 2.6.13, the following members are defined:
+
+struct address_space_operations {
+ int (*writepage)(struct page *page, struct writeback_control *wbc);
+ int (*readpage)(struct file *, struct page *);
+ int (*sync_page)(struct page *);
+ int (*writepages)(struct address_space *, struct writeback_control *);
+ int (*set_page_dirty)(struct page *page);
+ int (*readpages)(struct file *filp, struct address_space *mapping,
+ struct list_head *pages, unsigned nr_pages);
+ int (*prepare_write)(struct file *, struct page *, unsigned, unsigned);
+ int (*commit_write)(struct file *, struct page *, unsigned, unsigned);
+ sector_t (*bmap)(struct address_space *, sector_t);
+ int (*invalidatepage) (struct page *, unsigned long);
+ int (*releasepage) (struct page *, int);
+ ssize_t (*direct_IO)(int, struct kiocb *, const struct iovec *iov,
+ loff_t offset, unsigned long nr_segs);
+ struct page* (*get_xip_page)(struct address_space *, sector_t,
+ int);
+};
+
+ writepage: called by the VM to write a dirty page to backing store.
+
+ readpage: called by the VM to read a page from backing store.
+
+ sync_page: called by the VM to notify the backing store to perform all
+ queued I/O operations for a page. I/O operations for other pages
+ associated with this address_space object may also be performed.
+
+ writepages: called by the VM to write out pages associated with the
+ address_space object.
+
+ set_page_dirty: called by the VM to set a page dirty.
+
+ readpages: called by the VM to read pages associated with the address_space
+ object.
+
+ prepare_write: called by the generic write path in the VM to set up a
+ write request for a page.
+
+ commit_write: called by the generic write path in the VM to write a page
+ to its backing store.
+
+ bmap: called by the VFS to map a logical block offset within object to
+ physical block number. This method is used by the legacy FIBMAP
+ ioctl. Other uses are discouraged.
+
+ invalidatepage: called by the VM on truncate to disassociate a page from its
+ address_space mapping.
+
+ releasepage: called by the VFS to release filesystem specific metadata from
+ a page.
+
+ direct_IO: called by the VM for direct I/O writes and reads.
+
+ get_xip_page: called by the VM to translate a block number to a page.
+ The page is valid until the corresponding filesystem is unmounted.
+ Filesystems that want to use execute-in-place (XIP) need to implement
+ it. An example implementation can be found in fs/ext2/xip.c.
+
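+To give a feel for how these methods are usually wired up, a block-based
+filesystem can often lean on the generic buffer-layer helpers. The sketch
+below is illustrative only; myfs_get_block is an assumed get_block_t
+callback that maps a block in the file to a block on the device:
+
+	static int myfs_readpage(struct file *file, struct page *page)
+	{
+		return block_read_full_page(page, myfs_get_block);
+	}
+
+	static int myfs_writepage(struct page *page, struct writeback_control *wbc)
+	{
+		return block_write_full_page(page, myfs_get_block, wbc);
+	}
+
+	static int myfs_prepare_write(struct file *file, struct page *page,
+				      unsigned from, unsigned to)
+	{
+		return block_prepare_write(page, from, to, myfs_get_block);
+	}
+
+	static sector_t myfs_bmap(struct address_space *mapping, sector_t block)
+	{
+		return generic_block_bmap(mapping, block, myfs_get_block);
+	}
+
+	static struct address_space_operations myfs_aops = {
+		.readpage	= myfs_readpage,
+		.writepage	= myfs_writepage,
+		.sync_page	= block_sync_page,
+		.prepare_write	= myfs_prepare_write,
+		.commit_write	= generic_commit_write,
+		.bmap		= myfs_bmap,
+	};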
+
-struct file_operations <section>
+struct file_operations
======================
This describes how the VFS can manipulate an open file. As of kernel
-2.1.99, the following members are defined:
+2.6.13, the following members are defined:
struct file_operations {
loff_t (*llseek) (struct file *, loff_t, int);
- ssize_t (*read) (struct file *, char *, size_t, loff_t *);
- ssize_t (*write) (struct file *, const char *, size_t, loff_t *);
+ ssize_t (*read) (struct file *, char __user *, size_t, loff_t *);
+ ssize_t (*aio_read) (struct kiocb *, char __user *, size_t, loff_t);
+ ssize_t (*write) (struct file *, const char __user *, size_t, loff_t *);
+ ssize_t (*aio_write) (struct kiocb *, const char __user *, size_t, loff_t);
int (*readdir) (struct file *, void *, filldir_t);
unsigned int (*poll) (struct file *, struct poll_table_struct *);
int (*ioctl) (struct inode *, struct file *, unsigned int, unsigned long);
+ long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long);
+ long (*compat_ioctl) (struct file *, unsigned int, unsigned long);
int (*mmap) (struct file *, struct vm_area_struct *);
int (*open) (struct inode *, struct file *);
+ int (*flush) (struct file *);
int (*release) (struct inode *, struct file *);
- int (*fsync) (struct file *, struct dentry *);
- int (*fasync) (struct file *, int);
- int (*check_media_change) (kdev_t dev);
- int (*revalidate) (kdev_t dev);
+ int (*fsync) (struct file *, struct dentry *, int datasync);
+ int (*aio_fsync) (struct kiocb *, int datasync);
+ int (*fasync) (int, struct file *, int);
int (*lock) (struct file *, int, struct file_lock *);
+ ssize_t (*readv) (struct file *, const struct iovec *, unsigned long, loff_t *);
+ ssize_t (*writev) (struct file *, const struct iovec *, unsigned long, loff_t *);
+ ssize_t (*sendfile) (struct file *, loff_t *, size_t, read_actor_t, void *);
+ ssize_t (*sendpage) (struct file *, struct page *, int, size_t, loff_t *, int);
+ unsigned long (*get_unmapped_area)(struct file *, unsigned long, unsigned long, unsigned long, unsigned long);
+ int (*check_flags)(int);
+ int (*dir_notify)(struct file *filp, unsigned long arg);
+ int (*flock) (struct file *, int, struct file_lock *);
};
Again, all methods are called without any locks being held, unless
@@ -362,8 +544,12 @@ otherwise noted.
read: called by read(2) and related system calls
+ aio_read: called by io_submit(2) and other asynchronous I/O operations
+
write: called by write(2) and related system calls
+ aio_write: called by io_submit(2) and other asynchronous I/O operations
+
readdir: called when the VFS needs to read the directory contents
poll: called by the VFS when a process wants to check if there is
@@ -372,18 +558,25 @@ otherwise noted.
ioctl: called by the ioctl(2) system call
+ unlocked_ioctl: called by the ioctl(2) system call. Filesystems that do not
+ require the BKL should use this method instead of the ioctl() above.
+
+ compat_ioctl: called by the ioctl(2) system call when 32 bit system calls
+ are used on 64 bit kernels.
+
mmap: called by the mmap(2) system call
open: called by the VFS when an inode should be opened. When the VFS
- opens a file, it creates a new "struct file" and initialises
- the "f_op" file operations member with the "default_file_ops"
- field in the inode structure. It then calls the open method
- for the newly allocated file structure. You might think that
- the open method really belongs in "struct inode_operations",
- and you may be right. I think it's done the way it is because
- it makes filesystems simpler to implement. The open() method
- is a good place to initialise the "private_data" member in the
- file structure if you want to point to a device structure
+ opens a file, it creates a new "struct file". It then calls the
+ open method for the newly allocated file structure. You might
+ think that the open method really belongs in
+ "struct inode_operations", and you may be right. I think it's
+ done the way it is because it makes filesystems simpler to
+ implement. The open() method is a good place to initialize the
+ "private_data" member in the file structure if you want to point
+ to a device structure
+
+ flush: called by the close(2) system call to flush a file
release: called when the last reference to an open file is closed
@@ -392,6 +585,23 @@ otherwise noted.
fasync: called by the fcntl(2) system call when asynchronous
(non-blocking) mode is enabled for a file
+ lock: called by the fcntl(2) system call for F_GETLK, F_SETLK, and F_SETLKW
+ commands
+
+ readv: called by the readv(2) system call
+
+ writev: called by the writev(2) system call
+
+ sendfile: called by the sendfile(2) system call
+
+ get_unmapped_area: called by the mmap(2) system call
+
+ check_flags: called by the fcntl(2) system call for F_SETFL command
+
+ dir_notify: called by the fcntl(2) system call for F_NOTIFY command
+
+ flock: called by the flock(2) system call
+
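+A purely illustrative sketch of a driver-flavoured file_operations table
+follows; all mydrv_* names are invented. It shows open() stashing per-device
+state in file->private_data, as suggested above, so that the other methods
+can retrieve it later:
+
+	static int mydrv_open(struct inode *inode, struct file *file)
+	{
+		/* mydrv_find_device() is a hypothetical lookup helper */
+		file->private_data = mydrv_find_device(inode);
+		if (!file->private_data)
+			return -ENODEV;
+		return 0;
+	}
+
+	static struct file_operations mydrv_fops = {
+		.llseek		= no_llseek,
+		.read		= mydrv_read,
+		.write		= mydrv_write,
+		.ioctl		= mydrv_ioctl,
+		.open		= mydrv_open,
+		.release	= mydrv_release,
+	};
+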
Note that the file operations are implemented by the specific
filesystem in which the inode resides. When opening a device node
(character or block special) most filesystems will call special
@@ -400,29 +610,28 @@ driver information. These support routines replace the filesystem file
operations with those for the device driver, and then proceed to call
the new open() method for the file. This is how opening a device file
in the filesystem eventually ends up calling the device driver open()
-method. Note the devfs (the Device FileSystem) has a more direct path
-from device node to device driver (this is an unofficial kernel
-patch).
+method.
-Directory Entry Cache (dcache) <section>
-------------------------------
+Directory Entry Cache (dcache)
+==============================
+
struct dentry_operations
-========================
+------------------------
This describes how a filesystem can overload the standard dentry
operations. Dentries and the dcache are the domain of the VFS and the
individual filesystem implementations. Device drivers have no business
here. These methods may be set to NULL, as they are either optional or
-the VFS uses a default. As of kernel 2.1.99, the following members are
+the VFS uses a default. As of kernel 2.6.13, the following members are
defined:
struct dentry_operations {
- int (*d_revalidate)(struct dentry *);
+ int (*d_revalidate)(struct dentry *, struct nameidata *);
int (*d_hash) (struct dentry *, struct qstr *);
int (*d_compare) (struct dentry *, struct qstr *, struct qstr *);
- void (*d_delete)(struct dentry *);
+ int (*d_delete)(struct dentry *);
void (*d_release)(struct dentry *);
void (*d_iput)(struct dentry *, struct inode *);
};
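+
+As a small illustration (myfs_* is a made-up name), a filesystem that does
+not want unused dentries to linger in the dcache can have d_delete return 1,
+so a dentry is discarded as soon as its reference count drops to zero:
+
+	static int myfs_d_delete(struct dentry *dentry)
+	{
+		return 1;	/* never cache this dentry once it is unused */
+	}
+
+	static struct dentry_operations myfs_dentry_operations = {
+		.d_delete	= myfs_d_delete,
+	};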
@@ -451,6 +660,7 @@ Each dentry has a pointer to its parent dentry, as well as a hash list
of child dentries. Child dentries are basically like files in a
directory.
+
Directory Entry Cache APIs
--------------------------
@@ -471,7 +681,7 @@ manipulate dentries:
"d_delete" method is called
d_drop: this unhashes a dentry from its parents hash list. A
- subsequent call to dput() will dellocate the dentry if its
+ subsequent call to dput() will deallocate the dentry if its
usage count drops to 0
d_delete: delete a dentry. If there are no other open references to
@@ -507,16 +717,16 @@ up by walking the tree starting with the first component
of the pathname and using that dentry along with the next
component to look up the next level and so on. Since it
is a frequent operation for workloads like multiuser
-environments and webservers, it is important to optimize
+environments and web servers, it is important to optimize
this path.
Prior to 2.5.10, dcache_lock was acquired in d_lookup and thus
in every component during path look-up. Since 2.5.10 onwards,
-fastwalk algorithm changed this by holding the dcache_lock
+the fast-walk algorithm changed this by holding the dcache_lock
at the beginning and walking as many cached path component
-dentries as possible. This signficantly decreases the number
+dentries as possible. This significantly decreases the number
of acquisition of dcache_lock. However it also increases the
-lock hold time signficantly and affects performance in large
+lock hold time significantly and affects performance in large
SMP machines. Since 2.5.62 kernel, dcache has been using
a new locking model that uses RCU to make dcache look-up
lock-free.
@@ -527,7 +737,7 @@ protected the hash chain, d_child, d_alias, d_lru lists as well
as d_inode and several other things like mount look-up. RCU-based
changes affect only the way the hash chain is protected. For everything
else the dcache_lock must be taken for both traversing as well as
-updating. The hash chain updations too take the dcache_lock.
+updating. The hash chain updates too take the dcache_lock.
The significant change is the way d_lookup traverses the hash chain,
it doesn't acquire the dcache_lock for this and rely on RCU to
ensure that the dentry has not been *freed*.
@@ -535,14 +745,15 @@ ensure that the dentry has not been *freed*.
Dcache locking details
----------------------
+
For many multi-user workloads, open() and stat() on files are
very frequently occurring operations. Both involve walking
of path names to find the dentry corresponding to the
concerned file. In 2.4 kernel, dcache_lock was held
during look-up of each path component. Contention and
-cacheline bouncing of this global lock caused significant
+cache-line bouncing of this global lock caused significant
scalability problems. With the introduction of RCU
-in linux kernel, this was worked around by making
+in the Linux kernel, this was worked around by making
the look-up of path components during path walking lock-free.
@@ -562,7 +773,7 @@ Some of the important changes are :
2. Insertion of a dentry into the hash table is done using
hlist_add_head_rcu() which take care of ordering the writes -
the writes to the dentry must be visible before the dentry
- is inserted. This works in conjuction with hlist_for_each_rcu()
+ is inserted. This works in conjunction with hlist_for_each_rcu()
while walking the hash chain. The only requirement is that
all initialization to the dentry must be done before hlist_add_head_rcu()
since we don't have dcache_lock protection while traversing
@@ -584,7 +795,7 @@ Some of the important changes are :
the same. In some sense, dcache_rcu path walking looks like
the pre-2.5.10 version.
-5. All dentry hash chain updations must take the dcache_lock as well as
+5. All dentry hash chain updates must take the dcache_lock as well as
the per-dentry lock in that order. dput() does this to ensure
that a dentry that has just been looked up in another CPU
doesn't get deleted before dget() can be done on it.
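+
+   A minimal sketch of that ordering, for illustration only:
+
+	spin_lock(&dcache_lock);
+	spin_lock(&dentry->d_lock);
+	/* unhash or rehash the dentry here */
+	spin_unlock(&dentry->d_lock);
+	spin_unlock(&dcache_lock);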
@@ -640,10 +851,10 @@ handled as described below :
Since we redo the d_parent check and compare name while holding
d_lock, lock-free look-up will not race against d_move().
-4. There can be a theoritical race when a dentry keeps coming back
+4. There can be a theoretical race when a dentry keeps coming back
to original bucket due to double moves. Due to this look-up may
consider that it has never moved and can end up in a infinite loop.
- But this is not any worse that theoritical livelocks we already
+   But this is not any worse than theoretical livelocks we already
have in the kernel.
diff --git a/Documentation/ioctl/cdrom.txt b/Documentation/ioctl/cdrom.txt
index 4ccdcc6fe364..8ec32cc49eb1 100644
--- a/Documentation/ioctl/cdrom.txt
+++ b/Documentation/ioctl/cdrom.txt
@@ -878,7 +878,7 @@ DVD_READ_STRUCT Read structure
error returns:
EINVAL physical.layer_num exceeds number of layers
- EIO Recieved invalid response from drive
+ EIO Received invalid response from drive
diff --git a/Documentation/kbuild/makefiles.txt b/Documentation/kbuild/makefiles.txt
index 9a1586590d82..d802ce88bedc 100644
--- a/Documentation/kbuild/makefiles.txt
+++ b/Documentation/kbuild/makefiles.txt
@@ -31,7 +31,7 @@ This document describes the Linux kernel Makefiles.
=== 6 Architecture Makefiles
--- 6.1 Set variables to tweak the build to the architecture
- --- 6.2 Add prerequisites to prepare:
+ --- 6.2 Add prerequisites to archprepare:
--- 6.3 List directories to visit when descending
--- 6.4 Architecture specific boot images
--- 6.5 Building non-kbuild targets
@@ -734,18 +734,18 @@ When kbuild executes the following steps are followed (roughly):
for loadable kernel modules.
---- 6.2 Add prerequisites to prepare:
+--- 6.2 Add prerequisites to archprepare:
- The prepare: rule is used to list prerequisites that needs to be
+	The archprepare: rule is used to list prerequisites that need to be
built before starting to descend down in the subdirectories.
This is usual header files containing assembler constants.
Example:
- #arch/s390/Makefile
- prepare: include/asm-$(ARCH)/offsets.h
+ #arch/arm/Makefile
+ archprepare: maketools
- In this example the file include/asm-$(ARCH)/offsets.h will
- be built before descending down in the subdirectories.
+	In this example the target maketools will be processed
+ before descending down in the subdirectories.
See also chapter XXX-TODO that describe how kbuild supports
generating offset header files.
diff --git a/Documentation/kdump/kdump.txt b/Documentation/kdump/kdump.txt
index 7ff213f4becd..1f5f7d28c9e6 100644
--- a/Documentation/kdump/kdump.txt
+++ b/Documentation/kdump/kdump.txt
@@ -39,8 +39,7 @@ SETUP
and apply http://lse.sourceforge.net/kdump/patches/kexec-tools-1.101-kdump.patch
and after that build the source.
-2) Download and build the appropriate (latest) kexec/kdump (-mm) kernel
- patchset and apply it to the vanilla kernel tree.
+2) Download and build the appropriate (2.6.13-rc1 onwards) vanilla kernel.
Two kernels need to be built in order to get this feature working.
@@ -84,15 +83,16 @@ SETUP
4) Load the second kernel to be booted using:
- kexec -p <second-kernel> --crash-dump --args-linux --append="root=<root-dev>
- init 1 irqpoll"
+ kexec -p <second-kernel> --args-linux --elf32-core-headers
+ --append="root=<root-dev> init 1 irqpoll"
Note: i) <second-kernel> has to be a vmlinux image. bzImage will not work,
as of now.
- ii) By default ELF headers are stored in ELF32 format (for i386). This
- is sufficient to represent the physical memory up to 4GB. To store
- headers in ELF64 format, specifiy "--elf64-core-headers" on the
- kexec command line additionally.
+	     ii) By default ELF headers are stored in ELF64 format. The option
+	      --elf32-core-headers forces generation of ELF32 headers. gdb cannot
+	      open ELF64 headers on 32 bit systems, so creating ELF32 headers
+	      comes in handy for users who have non-PAE systems and hence less
+	      than 4GB of memory.
iii) Specify "irqpoll" as command line parameter. This reduces driver
initialization failures in second kernel due to shared interrupts.
diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index d2f0c67ba1fb..7086f0a90d14 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -164,6 +164,15 @@ running once the system is up.
over-ride platform specific driver.
See also Documentation/acpi-hotkey.txt.
+ enable_timer_pin_1 [i386,x86-64]
+ Enable PIN 1 of APIC timer
+ Can be useful to work around chipset bugs (in particular on some ATI chipsets)
+ The kernel tries to set a reasonable default.
+
+ disable_timer_pin_1 [i386,x86-64]
+ Disable PIN 1 of APIC timer
+ Can be useful to work around chipset bugs.
+
ad1816= [HW,OSS]
Format: <io>,<irq>,<dma>,<dma2>
See also Documentation/sound/oss/AD1816.
@@ -549,6 +558,7 @@ running once the system is up.
keyboard and can not control its state
(Don't attempt to blink the leds)
i8042.noaux [HW] Don't check for auxiliary (== mouse) port
+ i8042.nokbd [HW] Don't check/create keyboard port
i8042.nomux [HW] Don't check presence of an active multiplexing
controller
i8042.nopnp [HW] Don't use ACPIPnP / PnPBIOS to discover KBD/AUX
diff --git a/Documentation/mono.txt b/Documentation/mono.txt
index 6739ab9615ef..807a0c7b4737 100644
--- a/Documentation/mono.txt
+++ b/Documentation/mono.txt
@@ -30,7 +30,7 @@ other program after you have done the following:
Read the file 'binfmt_misc.txt' in this directory to know
more about the configuration process.
-3) Add the following enries to /etc/rc.local or similar script
+3) Add the following entries to /etc/rc.local or similar script
to be run at system startup:
# Insert BINFMT_MISC module into the kernel
diff --git a/Documentation/networking/bonding.txt b/Documentation/networking/bonding.txt
index 24d029455baa..a55f0f95b171 100644
--- a/Documentation/networking/bonding.txt
+++ b/Documentation/networking/bonding.txt
@@ -1241,7 +1241,7 @@ traffic while still maintaining carrier on.
If running SNMP agents, the bonding driver should be loaded
before any network drivers participating in a bond. This requirement
-is due to the the interface index (ipAdEntIfIndex) being associated to
+is due to the interface index (ipAdEntIfIndex) being associated to
the first interface found with a given IP address. That is, there is
only one ipAdEntIfIndex for each IP address. For example, if eth0 and
eth1 are slaves of bond0 and the driver for eth0 is loaded before the
@@ -1937,7 +1937,7 @@ switches currently available support 802.3ad.
If not explicitly configured (with ifconfig or ip link), the
MAC address of the bonding device is taken from its first slave
device. This MAC address is then passed to all following slaves and
-remains persistent (even if the the first slave is removed) until the
+remains persistent (even if the first slave is removed) until the
bonding device is brought down or reconfigured.
If you wish to change the MAC address, you can set it with
diff --git a/Documentation/networking/wan-router.txt b/Documentation/networking/wan-router.txt
index aea20cd2a56e..c96897aa08b6 100644
--- a/Documentation/networking/wan-router.txt
+++ b/Documentation/networking/wan-router.txt
@@ -355,7 +355,7 @@ REVISION HISTORY
There is no functional difference between the two packages
2.0.7 Aug 26, 1999 o Merged X25API code into WANPIPE.
- o Fixed a memeory leak for X25API
+ o Fixed a memory leak for X25API
o Updated the X25API code for 2.2.X kernels.
o Improved NEM handling.
@@ -514,7 +514,7 @@ beta2-2.2.0 Jan 8 2001
o Patches for 2.4.0 kernel
o Patches for 2.2.18 kernel
o Minor updates to PPP and CHLDC drivers.
- Note: No functinal difference.
+ Note: No functional difference.
beta3-2.2.9 Jan 10 2001
o I missed the 2.2.18 kernel patches in beta2-2.2.0
diff --git a/Documentation/pci.txt b/Documentation/pci.txt
index 76d28d033657..711210b38f5f 100644
--- a/Documentation/pci.txt
+++ b/Documentation/pci.txt
@@ -84,7 +84,7 @@ Each entry consists of:
Most drivers don't need to use the driver_data field. Best practice
for use of driver_data is to use it as an index into a static list of
-equivalant device types, not to use it as a pointer.
+equivalent device types, not to use it as a pointer.
Have a table entry {PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID}
to have probe() called for every PCI device known to the system.
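+
+As an illustration of the index approach (the vendor/device IDs and all
+mydrv_* names below are made up, not a real device):
+
+	enum mydrv_board { MYDRV_BOARD_A, MYDRV_BOARD_B };
+
+	static struct mydrv_board_info {
+		const char	*name;
+		int		has_quirk;
+	} mydrv_boards[] = {
+		[MYDRV_BOARD_A] = { "Board A", 0 },
+		[MYDRV_BOARD_B] = { "Board B", 1 },
+	};
+
+	static struct pci_device_id mydrv_ids[] = {
+		{ PCI_DEVICE(0x1234, 0x0001), .driver_data = MYDRV_BOARD_A },
+		{ PCI_DEVICE(0x1234, 0x0002), .driver_data = MYDRV_BOARD_B },
+		{ 0, }
+	};
+
+	static int __devinit mydrv_probe(struct pci_dev *pdev,
+					 const struct pci_device_id *id)
+	{
+		struct mydrv_board_info *info = &mydrv_boards[id->driver_data];
+
+		/* use info->name and info->has_quirk to tweak initialization */
+		return 0;
+	}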
diff --git a/Documentation/powerpc/eeh-pci-error-recovery.txt b/Documentation/powerpc/eeh-pci-error-recovery.txt
index 2bfe71beec5b..e75d7474322c 100644
--- a/Documentation/powerpc/eeh-pci-error-recovery.txt
+++ b/Documentation/powerpc/eeh-pci-error-recovery.txt
@@ -134,7 +134,7 @@ pci_get_device_by_addr() will find the pci device associated
with that address (if any).
The default include/asm-ppc64/io.h macros readb(), inb(), insb(),
-etc. include a check to see if the the i/o read returned all-0xff's.
+etc. include a check to see if the i/o read returned all-0xff's.
If so, these make a call to eeh_dn_check_failure(), which in turn
asks the firmware if the all-ff's value is the sign of a true EEH
error. If it is not, processing continues as normal. The grand
diff --git a/Documentation/s390/s390dbf.txt b/Documentation/s390/s390dbf.txt
index e24fdeada970..e321a8ed2a2d 100644
--- a/Documentation/s390/s390dbf.txt
+++ b/Documentation/s390/s390dbf.txt
@@ -468,7 +468,7 @@ The hex_ascii view shows the data field in hex and ascii representation
The raw view returns a bytestream as the debug areas are stored in memory.
The sprintf view formats the debug entries in the same way as the sprintf
-function would do. The sprintf event/expection fuctions write to the
+function would do. The sprintf event/exception functions write to the
debug entry a pointer to the format string (size = sizeof(long))
and for each vararg a long value. So e.g. for a debug entry with a format
string plus two varargs one would need to allocate a (3 * sizeof(long))
diff --git a/Documentation/scsi/ibmmca.txt b/Documentation/scsi/ibmmca.txt
index 2814491600ff..2ffb3ae0ef4d 100644
--- a/Documentation/scsi/ibmmca.txt
+++ b/Documentation/scsi/ibmmca.txt
@@ -344,7 +344,7 @@
/proc/scsi/ibmmca/<host_no>. ibmmca_proc_info() provides this information.
This table is quite informative for interested users. It shows the load
- of commands on the subsystem and wether you are running the bypassed
+ of commands on the subsystem and whether you are running the bypassed
(software) or integrated (hardware) SCSI-command set (see below). The
amount of accesses is shown. Read, write, modeselect is shown separately
in order to help debugging problems with CD-ROMs or tapedrives.
diff --git a/Documentation/sound/alsa/ALSA-Configuration.txt b/Documentation/sound/alsa/ALSA-Configuration.txt
index 5c49ba07e709..ebfcdf28485f 100644
--- a/Documentation/sound/alsa/ALSA-Configuration.txt
+++ b/Documentation/sound/alsa/ALSA-Configuration.txt
@@ -1459,7 +1459,7 @@ devices where %i is sound card number from zero to seven.
To auto-load an ALSA driver for OSS services, define the string
'sound-slot-%i' where %i means the slot number for OSS, which
corresponds to the card index of ALSA. Usually, define this
-as the the same card module.
+as the same card module.
An example configuration for a single emu10k1 card is like below:
----- /etc/modprobe.conf
diff --git a/Documentation/sparse.txt b/Documentation/sparse.txt
index f97841478459..5df44dc894e5 100644
--- a/Documentation/sparse.txt
+++ b/Documentation/sparse.txt
@@ -57,7 +57,7 @@ With BK, you can just get it from
and DaveJ has tar-balls at
- http://www.codemonkey.org.uk/projects/bitkeeper/sparse/
+ http://www.codemonkey.org.uk/projects/git-snapshots/sparse/
Once you have it, just do
diff --git a/Documentation/sysrq.txt b/Documentation/sysrq.txt
index 136d817c01ba..baf17b381588 100644
--- a/Documentation/sysrq.txt
+++ b/Documentation/sysrq.txt
@@ -171,7 +171,7 @@ the header 'include/linux/sysrq.h', this will define everything else you need.
Next, you must create a sysrq_key_op struct, and populate it with A) the key
handler function you will use, B) a help_msg string, that will print when SysRQ
prints help, and C) an action_msg string, that will print right before your
-handler is called. Your handler must conform to the protoype in 'sysrq.h'.
+handler is called. Your handler must conform to the prototype in 'sysrq.h'.
After the sysrq_key_op is created, you can call the macro
register_sysrq_key(int key, struct sysrq_key_op *op_p) that is defined in
diff --git a/Documentation/uml/UserModeLinux-HOWTO.txt b/Documentation/uml/UserModeLinux-HOWTO.txt
index 0c7b654fec99..544430e39980 100644
--- a/Documentation/uml/UserModeLinux-HOWTO.txt
+++ b/Documentation/uml/UserModeLinux-HOWTO.txt
@@ -2176,7 +2176,7 @@
If you want to access files on the host machine from inside UML, you
can treat it as a separate machine and either nfs mount directories
from the host or copy files into the virtual machine with scp or rcp.
- However, since UML is running on the the host, it can access those
+ However, since UML is running on the host, it can access those
files just like any other process and make them available inside the
virtual machine without needing to use the network.
diff --git a/Documentation/usb/gadget_serial.txt b/Documentation/usb/gadget_serial.txt
index a938c3dd13d6..815f5c2301ff 100644
--- a/Documentation/usb/gadget_serial.txt
+++ b/Documentation/usb/gadget_serial.txt
@@ -20,7 +20,7 @@ License along with this program; if not, write to the Free
Software Foundation, Inc., 59 Temple Place, Suite 330, Boston,
MA 02111-1307 USA.
-This document and the the gadget serial driver itself are
+This document and the gadget serial driver itself are
Copyright (C) 2004 by Al Borchers (alborchers@steinerpoint.com).
If you have questions, problems, or suggestions for this driver
diff --git a/Documentation/video4linux/CARDLIST.bttv b/Documentation/video4linux/CARDLIST.bttv
index 62a12a08e2ac..ec785f9f15a3 100644
--- a/Documentation/video4linux/CARDLIST.bttv
+++ b/Documentation/video4linux/CARDLIST.bttv
@@ -126,10 +126,12 @@ card=124 - AverMedia AverTV DVB-T 761
card=125 - MATRIX Vision Sigma-SQ
card=126 - MATRIX Vision Sigma-SLC
card=127 - APAC Viewcomp 878(AMAX)
-card=128 - DVICO FusionHDTV DVB-T Lite
+card=128 - DViCO FusionHDTV DVB-T Lite
card=129 - V-Gear MyVCD
card=130 - Super TV Tuner
card=131 - Tibet Systems 'Progress DVR' CS16
card=132 - Kodicom 4400R (master)
card=133 - Kodicom 4400R (slave)
card=134 - Adlink RTV24
+card=135 - DViCO FusionHDTV 5 Lite
+card=136 - Acorp Y878F
diff --git a/Documentation/video4linux/CARDLIST.saa7134 b/Documentation/video4linux/CARDLIST.saa7134
index 1b5a3a9ffbe2..dc57225f39be 100644
--- a/Documentation/video4linux/CARDLIST.saa7134
+++ b/Documentation/video4linux/CARDLIST.saa7134
@@ -62,3 +62,6 @@
61 -> Philips TOUGH DVB-T reference design [1131:2004]
62 -> Compro VideoMate TV Gold+II
63 -> Kworld Xpert TV PVR7134
+ 64 -> FlyTV mini Asus Digimatrix [1043:0210,1043:0210]
+ 65 -> V-Stream Studio TV Terminator
+ 66 -> Yuan TUN-900 (saa7135)
diff --git a/Documentation/video4linux/CARDLIST.tuner b/Documentation/video4linux/CARDLIST.tuner
index f3302e1b1b9c..f5876be658a6 100644
--- a/Documentation/video4linux/CARDLIST.tuner
+++ b/Documentation/video4linux/CARDLIST.tuner
@@ -64,3 +64,4 @@ tuner=62 - Philips TEA5767HN FM Radio
tuner=63 - Philips FMD1216ME MK3 Hybrid Tuner
tuner=64 - LG TDVS-H062F/TUA6034
tuner=65 - Ymec TVF66T5-B/DFF
+tuner=66 - LG NTSC (TALN mini series)
diff --git a/Documentation/video4linux/Zoran b/Documentation/video4linux/Zoran
index 01425c21986b..52c94bd7dca1 100644
--- a/Documentation/video4linux/Zoran
+++ b/Documentation/video4linux/Zoran
@@ -222,7 +222,7 @@ was introduced in 1991, is used in the DC10 old
can generate: PAL , NTSC , SECAM
The adv717x, should be able to produce PAL N. But you find nothing PAL N
-specific in the the registers. Seem that you have to reuse a other standard
+specific in the registers. It seems that you have to reuse another standard
to generate PAL N, maybe it would work if you use the PAL M settings.
==========================
diff --git a/Documentation/x86_64/boot-options.txt b/Documentation/x86_64/boot-options.txt
index 678e8f192db2..ffe1c062088b 100644
--- a/Documentation/x86_64/boot-options.txt
+++ b/Documentation/x86_64/boot-options.txt
@@ -11,6 +11,11 @@ Machine check
If your BIOS doesn't do that it's a good idea to enable though
to make sure you log even machine check events that result
in a reboot.
+ mce=tolerancelevel (number)
+ 0: always panic, 1: panic if deadlock possible,
+ 2: try to avoid panic, 3: never panic or exit (for testing)
+ default is 1
+	   Can also be set using sysfs, which is preferable.
nomce (for compatibility with i386): same as mce=off