author     Linus Torvalds <torvalds@g5.osdl.org>    2006-06-26 13:33:14 -0700
committer  Linus Torvalds <torvalds@g5.osdl.org>    2006-06-26 13:33:14 -0700
commit     da206c9e68cb93fcab43592d46276c02889c1250 (patch)
tree       21264cc26fa0322d668b398808f10bd93558d25f /Documentation
parent     916d15445f4ad2a9018e5451760734f36083be77 (diff)
parent     2e2d0dcc1bd7ca7c26ea5e29efb7f34bbd564f1c (diff)
Merge git://git.kernel.org/pub/scm/linux/kernel/git/bunk/trivial
* git://git.kernel.org/pub/scm/linux/kernel/git/bunk/trivial:
typo fixes
Clean up 'inline is not at beginning' warnings for usb storage
Storage class should be first
i386: Trivial typo fixes
ixj: make ixj_set_tone_off() static
spelling fixes
fix paniced->panicked typos
Spelling fixes for Documentation/atomic_ops.txt
move acknowledgment for Mark Adler to CREDITS
remove the bouncing email address of David Campbell
Diffstat (limited to 'Documentation')
 -rw-r--r--  Documentation/DocBook/kernel-locking.tmpl  |  2
 -rw-r--r--  Documentation/atomic_ops.txt               | 28
 -rw-r--r--  Documentation/driver-model/overview.txt    |  2
 -rw-r--r--  Documentation/kdump/gdbmacros.txt          |  2
 -rw-r--r--  Documentation/scsi/ppa.txt                 |  2
 5 files changed, 17 insertions(+), 19 deletions(-)
diff --git a/Documentation/DocBook/kernel-locking.tmpl b/Documentation/DocBook/kernel-locking.tmpl
index 158ffe9bfade..644c3884fab9 100644
--- a/Documentation/DocBook/kernel-locking.tmpl
+++ b/Documentation/DocBook/kernel-locking.tmpl
@@ -1590,7 +1590,7 @@ the amount of locking which needs to be done.
     <para>
       Our final dilemma is this: when can we actually destroy the
       removed element?  Remember, a reader might be stepping through
-      this element in the list right now: it we free this element and
+      this element in the list right now: if we free this element and
       the <symbol>next</symbol> pointer changes, the reader will jump
       off into garbage and crash.  We need to wait until we know that
       all the readers who were traversing the list when we deleted the
diff --git a/Documentation/atomic_ops.txt b/Documentation/atomic_ops.txt
index 23a1c2402bcc..2a63d5662a93 100644
--- a/Documentation/atomic_ops.txt
+++ b/Documentation/atomic_ops.txt
@@ -157,13 +157,13 @@ For example, smp_mb__before_atomic_dec() can be used like so:
 	smp_mb__before_atomic_dec();
 	atomic_dec(&obj->ref_count);
 
-It makes sure that all memory operations preceeding the atomic_dec()
+It makes sure that all memory operations preceding the atomic_dec()
 call are strongly ordered with respect to the atomic counter
-operation.  In the above example, it guarentees that the assignment of
+operation.  In the above example, it guarantees that the assignment of
 "1" to obj->dead will be globally visible to other cpus before the
 atomic counter decrement.
 
-Without the explicitl smp_mb__before_atomic_dec() call, the
+Without the explicit smp_mb__before_atomic_dec() call, the
 implementation could legally allow the atomic counter update visible
 to other cpus before the "obj->dead = 1;" assignment.
 
@@ -173,11 +173,11 @@ ordering with respect to memory operations after an atomic_dec() call
 (smp_mb__{before,after}_atomic_inc()).
 
 A missing memory barrier in the cases where they are required by the
-atomic_t implementation above can have disasterous results.  Here is
-an example, which follows a pattern occuring frequently in the Linux
+atomic_t implementation above can have disastrous results.  Here is
+an example, which follows a pattern occurring frequently in the Linux
 kernel.  It is the use of atomic counters to implement reference
 counting, and it works such that once the counter falls to zero it can
-be guarenteed that no other entity can be accessing the object:
+be guaranteed that no other entity can be accessing the object:
 
 static void obj_list_add(struct obj *obj)
 {
@@ -291,9 +291,9 @@ to the size of an "unsigned long" C data type, and are least of that
 size.  The endianness of the bits within each "unsigned long" are the
 native endianness of the cpu.
 
-	void set_bit(unsigned long nr, volatils unsigned long *addr);
-	void clear_bit(unsigned long nr, volatils unsigned long *addr);
-	void change_bit(unsigned long nr, volatils unsigned long *addr);
+	void set_bit(unsigned long nr, volatile unsigned long *addr);
+	void clear_bit(unsigned long nr, volatile unsigned long *addr);
+	void change_bit(unsigned long nr, volatile unsigned long *addr);
 
 These routines set, clear, and change, respectively, the bit number
 indicated by "nr" on the bit mask pointed to by "ADDR".
@@ -301,9 +301,9 @@ indicated by "nr" on the bit mask pointed to by "ADDR".
 They must execute atomically, yet there are no implicit memory barrier
 semantics required of these interfaces.
 
-	int test_and_set_bit(unsigned long nr, volatils unsigned long *addr);
-	int test_and_clear_bit(unsigned long nr, volatils unsigned long *addr);
-	int test_and_change_bit(unsigned long nr, volatils unsigned long *addr);
+	int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
+	int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
+	int test_and_change_bit(unsigned long nr, volatile unsigned long *addr);
 
 Like the above, except that these routines return a boolean which
 indicates whether the changed bit was set _BEFORE_ the atomic bit
@@ -335,7 +335,7 @@ subsequent memory operation is made visible.  For example:
 		/* ... */;
 	obj->killed = 1;
 
-The implementation of test_and_set_bit() must guarentee that
+The implementation of test_and_set_bit() must guarantee that
 "obj->dead = 1;" is visible to cpus before the atomic memory
 operation done by test_and_set_bit() becomes visible.  Likewise, the
 atomic memory operation done by test_and_set_bit() must become visible before
@@ -474,7 +474,7 @@ Now, as far as memory barriers go, as long as spin_lock()
 strictly orders all subsequent memory operations (including
 the cas()) with respect to itself, things will be fine.
 
-Said another way, _atomic_dec_and_lock() must guarentee that
+Said another way, _atomic_dec_and_lock() must guarantee that
 a counter dropping to zero is never made visible before the
 spinlock being acquired.
 
diff --git a/Documentation/driver-model/overview.txt b/Documentation/driver-model/overview.txt
index ac4a7a737e43..2050c9ffc629 100644
--- a/Documentation/driver-model/overview.txt
+++ b/Documentation/driver-model/overview.txt
@@ -18,7 +18,7 @@ Traditional driver models implemented some sort of tree-like structure
 (sometimes just a list) for the devices they control. There wasn't any
 uniformity across the different bus types.
 
-The current driver model provides a comon, uniform data model for describing
+The current driver model provides a common, uniform data model for describing
 a bus and the devices that can appear under the bus. The unified bus
 model includes a set of common attributes which all busses carry, and a set
 of common callbacks, such as device discovery during bus probing, bus
diff --git a/Documentation/kdump/gdbmacros.txt b/Documentation/kdump/gdbmacros.txt
index dcf5580380ab..9b9b454b048a 100644
--- a/Documentation/kdump/gdbmacros.txt
+++ b/Documentation/kdump/gdbmacros.txt
@@ -175,7 +175,7 @@ end
 document trapinfo
 	Run info threads and lookup pid of thread #1
 	'trapinfo <pid>' will tell you by which trap & possibly
-	addresthe kernel paniced.
+	address the kernel panicked.
 end
diff --git a/Documentation/scsi/ppa.txt b/Documentation/scsi/ppa.txt
index 0dac88d86d87..5d9223bc1bd5 100644
--- a/Documentation/scsi/ppa.txt
+++ b/Documentation/scsi/ppa.txt
@@ -12,5 +12,3 @@ http://www.torque.net/parport/
 
 Email list for Linux Parport
 	linux-parport@torque.net
-Email for problems with ZIP or ZIP Plus drivers
-	campbell@torque.net
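The atomic_ops.txt hunks above all describe one idiom: a plain store must be made
visible to other cpus before a following atomic reference-count decrement, which is
what smp_mb__before_atomic_dec() is for.  A minimal sketch of that pattern in
kernel-style C is shown below; it is not part of this patch, and struct obj,
obj_put() and obj_destroy() are hypothetical names used only for illustration.

	#include <asm/atomic.h>	/* atomic_t, atomic_dec_and_test(), smp_mb__before_atomic_dec() */

	struct obj {
		atomic_t	ref_count;	/* reference count, as in the atomic_ops.txt example */
		int		dead;		/* must be visible before the final decrement */
	};

	/* Hypothetical destructor, named only for this sketch. */
	extern void obj_destroy(struct obj *obj);

	static void obj_put(struct obj *obj)
	{
		obj->dead = 1;

		/*
		 * Order the obj->dead store before the atomic decrement,
		 * as the patched text describes; without the barrier the
		 * decrement could become visible first.
		 */
		smp_mb__before_atomic_dec();

		if (atomic_dec_and_test(&obj->ref_count))
			obj_destroy(obj);
	}

The companion helpers mentioned in the same hunks, smp_mb__after_atomic_dec() and
smp_mb__{before,after}_atomic_inc(), provide the corresponding ordering on the
other side of a decrement or around an increment.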