path: root/net/sunrpc/cache.c
Commit log (each entry: commit message, author, date, files changed, lines -/+):
* svcrpc: ensure cache_check caller sees updated entry (J. Bruce Fields, 2011-01-04, 1 file, -1/+10)
  Suppose cache_check runs simultaneously with an update on a different CPU:

      cache_check                     task doing update
      ^^^^^^^^^^^                     ^^^^^^^^^^^^^^^^^
      1. test for CACHE_VALID         1'. set entry->data
         & !CACHE_NEGATIVE
      2. use entry->data              2'. set CACHE_VALID

  If the two memory writes performed in step 1' and 2' appear misordered with respect to the reads in step 1 and 2, then the caller could get stale data at step 2 even though it saw CACHE_VALID set on the cache entry. Add memory barriers to prevent this.

  Reviewed-by: NeilBrown <neilb@suse.de> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
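  A minimal sketch of the barrier pairing this entry describes; the helper names mirror cache.c, but the bodies here are simplified rather than the verbatim diff:

      /* Updater: publish the entry's contents before the flag readers test. */
      static void cache_fresh_locked(struct cache_head *head, time_t expiry)
      {
              head->expiry_time = expiry;
              head->last_refresh = seconds_since_boot();
              smp_wmb();      /* paired with smp_rmb() in cache_is_valid() */
              set_bit(CACHE_VALID, &head->flags);
      }

      /* Reader: only trust entry->data after the read barrier. */
      static int cache_is_valid(struct cache_detail *detail, struct cache_head *h)
      {
              if (!test_bit(CACHE_VALID, &h->flags))
                      return -EAGAIN;
              if (test_bit(CACHE_NEGATIVE, &h->flags))
                      return -ENOENT;
              smp_rmb();      /* see the data written before CACHE_VALID was set */
              return 0;
      }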
* svcrpc: take lock on turning entry NEGATIVE in cache_check (J. Bruce Fields, 2011-01-04, 1 file, -7/+18)
  We attempt to turn a cache entry negative in place. But that entry may already have been filled in by some other task since we last checked whether it was valid, so we could be modifying an already-valid entry. If nothing else there's a likely leak in such a case when the entry is eventually put() and contents are not freed because it has CACHE_NEGATIVE set.

  So, take the cache_lock just as sunrpc_cache_update() does.

  Reviewed-by: NeilBrown <neilb@suse.de> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
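  Roughly the shape of the fix, as a sketch of the re-check-under-lock pattern; the lock and helper names follow cache.c of that era and are approximate, not the exact hunk:

      /* in cache_check(), once the entry has been found not valid: */
      write_lock(&detail->hash_lock);
      if (cache_is_valid(detail, h) == -EAGAIN) {
              /* still not filled in by anyone else, so negating is safe */
              set_bit(CACHE_NEGATIVE, &h->flags);
              cache_fresh_locked(h, seconds_since_boot() + CACHE_NEW_EXPIRY);
              rv = -ENOENT;
      }
      write_unlock(&detail->hash_lock);
      cache_fresh_unlocked(h, detail);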
* svcrpc: avoid double reply caused by deferral race (J. Bruce Fields, 2011-01-04, 1 file, -7/+11)
  Commit d29068c431599fa "sunrpc: Simplify cache_defer_req and related functions." asserted that cache_check() could determine success or failure of cache_defer_req() by checking the CACHE_PENDING bit. This isn't quite right.

  We need to know whether cache_defer_req() created a deferred request, in which case sending an rpc reply has become the responsibility of the deferred request, and it is important that we not send our own reply, resulting in two different replies to the same request. And the CACHE_PENDING bit doesn't tell us that; we could have successfully created a deferred request at the same time as another thread cleared the CACHE_PENDING bit.

  So, partially revert that commit, to ensure that cache_check() returns -EAGAIN if and only if a deferred request has been created.

  Signed-off-by: J. Bruce Fields <bfields@redhat.com> Acked-by: NeilBrown <neilb@suse.de>
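  A hypothetical helper illustrating the restored invariant (not the verbatim diff; the real logic lives inline in cache_check() and cache_defer_req()):

      /* Return -EAGAIN exactly when a deferred request was created, because
       * that deferred request now owns sending the reply. */
      static int cache_check_not_valid(struct cache_req *rqstp, struct cache_head *h)
      {
              if (cache_defer_req(rqstp, h))  /* true only if actually queued */
                      return -EAGAIN;
              /* No deferral happened; testing CACHE_PENDING here would be racy,
               * since another thread may clear it even when we failed to defer. */
              return -ETIMEDOUT;
      }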
* Merge branch 'for-2.6.37' of git://linux-nfs.org/~bfields/linux (Linus Torvalds, 2010-10-26, 1 file, -87/+201)
  * 'for-2.6.37' of git://linux-nfs.org/~bfields/linux: (99 commits)
      svcrpc: svc_tcp_sendto XPT_DEAD check is redundant
      svcrpc: no need for XPT_DEAD check in svc_xprt_enqueue
      svcrpc: assume svc_delete_xprt() called only once
      svcrpc: never clear XPT_BUSY on dead xprt
      nfsd4: fix connection allocation in sequence()
      nfsd4: only require krb5 principal for NFSv4.0 callbacks
      nfsd4: move minorversion to client
      nfsd4: delay session removal till free_client
      nfsd4: separate callback change and callback probe
      nfsd4: callback program number is per-session
      nfsd4: track backchannel connections
      nfsd4: confirm only on succesful create_session
      nfsd4: make backchannel sequence number per-session
      nfsd4: use client pointer to backchannel session
      nfsd4: move callback setup into session init code
      nfsd4: don't cache seq_misordered replies
      SUNRPC: Properly initialize sock_xprt.srcaddr in all cases
      SUNRPC: Use conventional switch statement when reclassifying sockets
      sunrpc/xprtrdma: clean up workqueue usage
      sunrpc: Turn list_for_each-s into the ..._entry-s
      ...
  Fix up trivial conflicts (two different deprecation notices added in separate branches) in Documentation/feature-removal-schedule.txt
| * sunrpc/cache: centralise handling of size limit on deferred list. (NeilBrown, 2010-10-11, 1 file, -24/+43)
  We limit the number of 'defer' requests to DFR_MAX. The imposition of this limit is spread about a bit - sometimes we don't add new things to the list, sometimes we remove old things. Also it is currently applied to requests which we are 'waiting' for rather than 'deferring'. This doesn't seem ideal as 'waiting' requests are naturally limited by the number of threads.

  So gather the DFR_MAX handling code to one place and only apply it to requests that are actually being deferred. This means that not all 'cache_deferred_req' structures go on the 'cache_defer_list', so we need to be careful when adding and removing things.

  Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
| * sunrpc: Simplify cache_defer_req and related functions. (NeilBrown, 2010-10-11, 1 file, -36/+22)
  The return value from cache_defer_req is somewhat confusing. Various different error codes are returned, but the single caller is only interested in success or failure. In fact it can measure this success or failure itself by checking CACHE_PENDING, which makes the point of the code more explicit. So change cache_defer_req to return 'void' and test CACHE_PENDING after it completes, to see if the request was actually deferred or not.

  Similarly setup_deferral and cache_wait_req don't need a return value, so make them void and remove some code. The call to cache_revisit_request (to guard against a race) is only needed for the second call to setup_deferral, so move it out of setup_deferral to after that second call. With the first call the race is handled differently (by explicitly calling 'wait_for_completion').

  Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
| * sunrpc: fix race in new cache_wait code. (NeilBrown, 2010-10-01, 1 file, -3/+2)
  If we set up to wait for a cache item to be filled in, and then find that it is no longer pending, it could be that some other thread is in 'cache_revisit_request' and has moved our request to its 'pending' list. So when our setup_deferral calls cache_revisit_request it will find nothing to put on the pending list, and do nothing.

  We then return from cache_wait_req, thus leaving the 'sleeper' on-stack structure open to being corrupted by subsequent stack usage. However that 'sleeper' could still be on the 'pending' list that the other thread is looking at and so any corruption could cause it to behave badly.

  To avoid this race we simply take the same path as if the 'wait_for_completion_interruptible_timeout' was interrupted and if the sleeper is no longer on the list (which it won't be) we wait on the completion - which will ensure that any other cache_revisit_request will have let go of the sleeper.

  Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
| * sunrpc: Make the /proc/net/rpc appear in net namespaces (Pavel Emelyanov, 2010-09-27, 1 file, -3/+8)
  Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
| * sunrpc: Add routines that allow registering per-net caches (Pavel Emelyanov, 2010-09-27, 1 file, -8/+19)
  Existing calls do the same, but for the init_net.

  Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
| * sunrpc/cache: fix recent breakage of cache_clean_deferred (NeilBrown, 2010-09-22, 1 file, -1/+3)
  commit 6610f720e9e8103c22d1f1ccf8fbb695550a571f broke cache_clean_deferred as entries are no longer added to the pending list for subsequent revisiting. So put those requests back on the pending list.

  Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
| * sunrpc/cache: don't use custom hex_to_bin() converter (Andy Shevchenko, 2010-09-21, 1 file, -7/+13)
  Signed-off-by: Andy Shevchenko <andy.shevchenko@gmail.com> Cc: Trond Myklebust <Trond.Myklebust@netapp.com> Cc: linux-nfs@vger.kernel.org Signed-off-by: J. Bruce Fields <bfields@redhat.com>
| * sunrpc/cache: change deferred-request hash table to use hlist. (NeilBrown, 2010-09-21, 1 file, -18/+10)
  Being a hash table, hlist is the best option. There is currently some ugliness where we treat "->next == NULL" as a special case to avoid having to initialise the whole array. This change nicely gets rid of that case.

  Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
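  A sketch of the data-structure change (the helper names follow cache.c, but the bodies are condensed, not the exact hunks; DFR_HASHSIZE is defined elsewhere in that file):

      static struct hlist_head cache_defer_hash[DFR_HASHSIZE];

      static void __hash_deferred_req(struct cache_deferred_req *dreq, int hash)
      {
              hlist_add_head(&dreq->hash, &cache_defer_hash[hash]);
      }

      static void __unhash_deferred_req(struct cache_deferred_req *dreq)
      {
              /* A zero-filled hlist_head array is already a valid empty table,
               * and hlist_del_init() is a no-op on an unhashed node, so the
               * old "->next == NULL" special case disappears. */
              hlist_del_init(&dreq->hash);
      }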
| * nfsd4: fix hang on fast-booting nfs servers (J. Bruce Fields, 2010-09-19, 1 file, -4/+20)
  The last_close field of a cache_detail is initialized to zero, so the condition detail->last_close < seconds_since_boot() - 30 may be false even for a cache that was never opened. However, we want to immediately fail upcalls to caches that were never opened: in the case of the auth_unix_gid cache, especially, which may never be opened by mountd (if the --manage-gids option is not set), we want to fail the upcall immediately. Otherwise client requests will be dropped unnecessarily on reboot.

  Also document these conditions.

  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
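  The gist of the resulting check, condensed into one helper (a sketch; the field names are from cache.c, but the exact structure of the real function may differ):

      static int cache_listeners_exist(struct cache_detail *detail)
      {
              if (atomic_read(&detail->readers))
                      return 1;       /* someone has the channel open right now */
              if (detail->last_close == 0)
                      return 0;       /* never opened: fail the upcall immediately */
              if (detail->last_close < seconds_since_boot() - 30)
                      return 0;       /* closed a while ago: daemon is probably gone */
              return 1;
      }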
| * svcrpc: cache deferral cleanup (J. Bruce Fields, 2010-09-07, 1 file, -64/+79)
  Attempt to make obvious the first-try-sleeping-then-try-deferral logic by putting that logic into a top-level function that calls helpers.

  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
| * svcrpc: minor cache cleanup (J. Bruce Fields, 2010-09-07, 1 file, -20/+24)
  Pull out some code into helper functions, fix a typo.

  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
| * sunrpc/cache: allow threads to block while waiting for cache update. (NeilBrown, 2010-09-07, 1 file, -1/+58)
  The current practice of waiting for cache updates by queueing the whole request to be retried has (at least) two problems.

  1/ With NFSv4, requests can be quite complex and re-trying a whole request when a later part fails should only be a last-resort, not a normal practice.

  2/ Large requests, and in particular any 'write' request, will not be queued by the current code and doing so would be undesirable.

  In many cases only a very short wait is needed before the cache gets valid data. So, providing the underlying transport permits it by setting ->thread_wait, arrange to wait briefly for an upcall to be completed (as reflected in the clearing of CACHE_PENDING). If the short wait was not long enough and CACHE_PENDING is still set, fall back on the old approach.

  The 'thread_wait' value is set to 5 seconds when there are spare threads, and 1 second when there are no spare threads. These values are probably much higher than needed, but will ensure some forward progress.

  Note that as we only request an update for a non-valid item, and as non-valid items are updated in place it is extremely unlikely that cache_check will return -ETIMEDOUT. Normally cache_defer_req will sleep for a short while and then find that the item is_valid.

  Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
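  A condensed sketch of the short-wait mechanism (struct and function names follow cache.c, but this omits the setup and teardown around the deferral lists): an nfsd thread parks on an on-stack completion, and the revisit callback wakes it when the entry is filled in. The waiting side uses wait_for_completion_interruptible_timeout() with req->thread_wait as the timeout, and falls back to the old defer-the-whole-request path if that expires with CACHE_PENDING still set.

      struct thread_deferred_req {
              struct cache_deferred_req handle;
              struct completion completion;
      };

      static void cache_restart_thread(struct cache_deferred_req *dreq, int too_many)
      {
              struct thread_deferred_req *dr =
                      container_of(dreq, struct thread_deferred_req, handle);

              complete(&dr->completion);      /* wake the sleeping nfsd thread */
      }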
| * sunrpc: use seconds since boot in expiry cache (NeilBrown, 2010-09-07, 1 file, -17/+19)
  This protects us from confusion when the wallclock time changes. We convert to and from wallclock when setting or reading expiry times. Also use seconds since boot for last_close time.

  Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
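  The conversion helpers, roughly as they appeared in include/linux/sunrpc/cache.h at the time (a sketch; exact signatures may differ slightly):

      static inline time_t seconds_since_boot(void)
      {
              struct timespec boot;

              getboottime(&boot);
              return get_seconds() - boot.tv_sec;
      }

      static inline time_t convert_to_wallclock(time_t sinceboot)
      {
              struct timespec boot;

              getboottime(&boot);
              return boot.tv_sec + sinceboot;
      }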
* | Merge branch 'llseek' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/bkl (Linus Torvalds, 2010-10-22, 1 file, -0/+2)
  * 'llseek' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/bkl:
      vfs: make no_llseek the default
      vfs: don't use BKL in default_llseek
      llseek: automatically add .llseek fop
      libfs: use generic_file_llseek for simple_attr
      mac80211: disallow seeks in minstrel debug code
      lirc: make chardev nonseekable
      viotape: use noop_llseek
      raw: use explicit llseek file operations
      ibmasmfs: use generic_file_llseek
      spufs: use llseek in all file operations
      arm/omap: use generic_file_llseek in iommu_debug
      lkdtm: use generic_file_llseek in debugfs
      net/wireless: use generic_file_llseek in debugfs
      drm: use noop_llseek
| * | llseek: automatically add .llseek fop (Arnd Bergmann, 2010-10-15, 1 file, -0/+2)
  All file_operations should get a .llseek operation so we can make nonseekable_open the default for future file operations without a .llseek pointer.

  The three cases that we can automatically detect are no_llseek, seq_lseek and default_llseek. For cases where we can automatically prove that the file offset is always ignored, we use noop_llseek, which maintains the current behavior of not returning an error from a seek.

  New drivers should normally not use noop_llseek but instead use no_llseek and call nonseekable_open at open time. Existing drivers can be converted to do the same when the maintainer knows for certain that no user code relies on calling seek on the device file.

  The generated code is often incorrectly indented and right now contains comments that clarify for each added line why a specific variant was chosen. In the version that gets submitted upstream, the comments will be gone and I will manually fix the indentation, because there does not seem to be a way to do that using coccinelle. Some amount of new code is currently sitting in linux-next that should get the same modifications, which I will do at the end of the merge window. Many thanks to Julia Lawall for helping me learn to write a semantic patch that does all this.

  ===== begin semantic patch ===== // This adds an llseek= method to all file operations, // as a preparation for making no_llseek the default. // // The rules are // - use no_llseek explicitly if we do nonseekable_open // - use seq_lseek for sequential files // - use default_llseek if we know we access f_pos // - use noop_llseek if we know we don't access f_pos, // but we still want to allow users to call lseek // @ open1 exists @ identifier nested_open; @@ nested_open(...) { <+... nonseekable_open(...) ...+> } @ open exists@ identifier open_f; identifier i, f; identifier open1.nested_open; @@ int open_f(struct inode *i, struct file *f) { <+... ( nonseekable_open(...) | nested_open(...) ) ...+> } @ read disable optional_qualifier exists @ identifier read_f; identifier f, p, s, off; type ssize_t, size_t, loff_t; expression E; identifier func; @@ ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off) { <+... ( *off = E | *off += E | func(..., off, ...)
| E = *off ) ...+> } @ read_no_fpos disable optional_qualifier exists @ identifier read_f; identifier f, p, s, off; type ssize_t, size_t, loff_t; @@ ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off) { ... when != off } @ write @ identifier write_f; identifier f, p, s, off; type ssize_t, size_t, loff_t; expression E; identifier func; @@ ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off) { <+... ( *off = E | *off += E | func(..., off, ...) | E = *off ) ...+> } @ write_no_fpos @ identifier write_f; identifier f, p, s, off; type ssize_t, size_t, loff_t; @@ ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off) { ... when != off } @ fops0 @ identifier fops; @@ struct file_operations fops = { ... }; @ has_llseek depends on fops0 @ identifier fops0.fops; identifier llseek_f; @@ struct file_operations fops = { ... .llseek = llseek_f, ... }; @ has_read depends on fops0 @ identifier fops0.fops; identifier read_f; @@ struct file_operations fops = { ... .read = read_f, ... }; @ has_write depends on fops0 @ identifier fops0.fops; identifier write_f; @@ struct file_operations fops = { ... .write = write_f, ... }; @ has_open depends on fops0 @ identifier fops0.fops; identifier open_f; @@ struct file_operations fops = { ... .open = open_f, ... }; // use no_llseek if we call nonseekable_open //////////////////////////////////////////// @ nonseekable1 depends on !has_llseek && has_open @ identifier fops0.fops; identifier nso ~= "nonseekable_open"; @@ struct file_operations fops = { ... .open = nso, ... +.llseek = no_llseek, /* nonseekable */ }; @ nonseekable2 depends on !has_llseek @ identifier fops0.fops; identifier open.open_f; @@ struct file_operations fops = { ... .open = open_f, ... +.llseek = no_llseek, /* open uses nonseekable */ }; // use seq_lseek for sequential files ///////////////////////////////////// @ seq depends on !has_llseek @ identifier fops0.fops; identifier sr ~= "seq_read"; @@ struct file_operations fops = { ... .read = sr, ... +.llseek = seq_lseek, /* we have seq_read */ }; // use default_llseek if there is a readdir /////////////////////////////////////////// @ fops1 depends on !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; identifier readdir_e; @@ // any other fop is used that changes pos struct file_operations fops = { ... .readdir = readdir_e, ... +.llseek = default_llseek, /* readdir is present */ }; // use default_llseek if at least one of read/write touches f_pos ///////////////////////////////////////////////////////////////// @ fops2 depends on !fops1 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; identifier read.read_f; @@ // read fops use offset struct file_operations fops = { ... .read = read_f, ... +.llseek = default_llseek, /* read accesses f_pos */ }; @ fops3 depends on !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; identifier write.write_f; @@ // write fops use offset struct file_operations fops = { ... .write = write_f, ... + .llseek = default_llseek, /* write accesses f_pos */ }; // Use noop_llseek if neither read nor write accesses f_pos /////////////////////////////////////////////////////////// @ fops4 depends on !fops1 && !fops2 && !fops3 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; identifier read_no_fpos.read_f; identifier write_no_fpos.write_f; @@ // write fops use offset struct file_operations fops = { ... .write = write_f, .read = read_f, ... 
+.llseek = noop_llseek, /* read and write both use no f_pos */ }; @ depends on has_write && !has_read && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; identifier write_no_fpos.write_f; @@ struct file_operations fops = { ... .write = write_f, ... +.llseek = noop_llseek, /* write uses no f_pos */ }; @ depends on has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; identifier read_no_fpos.read_f; @@ struct file_operations fops = { ... .read = read_f, ... +.llseek = noop_llseek, /* read uses no f_pos */ }; @ depends on !has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @ identifier fops0.fops; @@ struct file_operations fops = { ... +.llseek = noop_llseek, /* no read or write fn */ }; ===== End semantic patch ===== Signed-off-by: Arnd Bergmann <arnd@arndb.de> Cc: Julia Lawall <julia@diku.dk> Cc: Christoph Hellwig <hch@infradead.org>
* | sunrpc: remove the big kernel lock (Arnd Bergmann, 2010-10-19, 1 file, -13/+2)
  The sunrpc cache_ioctl function does not need the big kernel lock because it uses its own queue_lock already. rpc_pipe_ioctl apparently should be using i_lock like the other operations on the pipe file descriptor do.

  Signed-off-by: Arnd Bergmann <arnd@arndb.de>
* net: sunrpc: removed duplicated #include (Andrea Gelmini, 2010-08-06, 1 file, -1/+0)
  Signed-off-by: Andrea Gelmini <andrea.gelmini@gelma.net> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* sunrpc: make the cache cleaner workqueue deferrable (Artem Bityutskiy, 2010-07-06, 1 file, -1/+6)
  This patch makes the cache_cleaner workqueue deferrable, to prevent unnecessary system wake-ups, which is very important for embedded battery-powered devices.

  do_cache_clean() is called every 30 seconds at the moment, and often makes the system wake up from its power-save sleep state. With this change, when the workqueue uses a deferrable timer, the do_cache_clean() invocation will be delayed and combined with the closest "real" wake-up. This improves the power consumption situation.

  Note, I tried to create a DECLARE_DELAYED_WORK_DEFERRABLE() helper macro, similar to DECLARE_DELAYED_WORK(), but failed because of the way the timer wheel core stores the deferrable flag (it is the LSBit in the timer->base pointer). My attempt to define a static variable with this bit set ended up with the "initializer element is not constant" error. Thus, I have to use run-time initialization, so I created a new cache_initialize() function which is called once when sunrpc is being initialized.

  Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com> Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
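  The run-time initialisation the commit describes amounts to this (a sketch; INIT_DELAYED_WORK_DEFERRABLE was the macro of that era, spelled INIT_DEFERRABLE_WORK in modern kernels):

      static struct delayed_work cache_cleaner;

      void __init cache_initialize(void)
      {
              /* Deferrable: an idle system is not woken just to run the periodic
               * cache flush; it rides along with the next real wakeup instead. */
              INIT_DELAYED_WORK_DEFERRABLE(&cache_cleaner, do_cache_clean);
      }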
* Merge branch 'bkl/ioctl' of git://git.kernel.org/pub/scm/linux/kernel/git/frederic/random-tracing (Linus Torvalds, 2010-05-24, 1 file, -3/+10)
  * 'bkl/ioctl' of git://git.kernel.org/pub/scm/linux/kernel/git/frederic/random-tracing:
      uml: Pushdown the bkl from harddog_kern ioctl
      sunrpc: Pushdown the bkl from sunrpc cache ioctl
      sunrpc: Pushdown the bkl from ioctl
      autofs4: Pushdown the bkl from ioctl
      uml: Convert to unlocked_ioctls to remove implicit BKL
      ncpfs: BKL ioctl pushdown
      coda: Clean-up whitespace problems in pioctl.c
      coda: BKL ioctl pushdown
      drivers: Push down BKL into various drivers
      isdn: Push down BKL into ioctl functions
      scsi: Push down BKL into ioctl functions
      dvb: Push down BKL into ioctl functions
      smbfs: Push down BKL into ioctl function
      coda/psdev: Remove BKL from ioctl function
      um/mmapper: Remove BKL usage
      sn_hwperf: Kill BKL usage
      hfsplus: Push down BKL into ioctl function
| * sunrpc: Pushdown the bkl from sunrpc cache ioctl (Frederic Weisbecker, 2010-05-22, 1 file, -3/+10)
  Pushdown the bkl to cache_ioctl_pipefs.

  Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: "J. Bruce Fields" <bfields@fieldses.org> Cc: Neil Brown <neilb@suse.de> Cc: Nfs <linux-nfs@vger.kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: John Kacur <jkacur@redhat.com> Cc: Arnd Bergmann <arnd@arndb.de>
* | Merge branch 'for-2.6.35' of git://linux-nfs.org/~bfields/linux (Linus Torvalds, 2010-05-19, 1 file, -16/+29)
  * 'for-2.6.35' of git://linux-nfs.org/~bfields/linux: (45 commits)
      Revert "nfsd4: distinguish expired from stale stateids"
      nfsd: safer initialization order in find_file()
      nfs4: minor callback code simplification, comment
      NFSD: don't report compiled-out versions as present
      nfsd4: implement reclaim_complete
      nfsd4: nfsd4_destroy_session must set callback client under the state lock
      nfsd4: keep a reference count on client while in use
      nfsd4: mark_client_expired
      nfsd4: introduce nfs4_client.cl_refcount
      nfsd4: refactor expire_client
      nfsd4: extend the client_lock to cover cl_lru
      nfsd4: use list_move in move_to_confirmed
      nfsd4: fold release_session into expire_client
      nfsd4: rename sessionid_lock to client_lock
      nfsd4: fix bare destroy_session null dereference
      nfsd4: use local variable in nfs4svc_encode_compoundres
      nfsd: further comment typos
      sunrpc: centralise most calls to svc_xprt_received
      nfsd4: fix unlikely race in session replay case
      nfsd4: fix filehandle comment
      ...
| * | sunrpc/cache: fix module refcnt leak in a failure path (Li Zefan, 2010-03-24, 1 file, -1/+3)
  Don't forget to release the module refcnt if seq_open() returns failure.

  Signed-off-by: Li Zefan <lizf@cn.fujitsu.com> Cc: J. Bruce Fields <bfields@fieldses.org> Cc: Neil Brown <neilb@suse.de> Cc: Trond Myklebust <Trond.Myklebust@netapp.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
| * | sunrpc: never return expired entries in sunrpc_cache_lookup (NeilBrown, 2010-03-14, 1 file, -3/+14)
  If sunrpc_cache_lookup finds an expired entry, remove it from the cache and return a freshly created non-VALID entry instead. This ensures that we only ever get a usable entry, or an entry that will become usable once an update arrives. i.e. we will never need to repeat the lookup.

  This allows us to remove the 'is_expired' test from cache_check (i.e. from cache_is_valid). cache_check should never get an expired entry as 'lookup' will never return one. If it does happen - due to inconvenient timing - then just accept it as still valid, it won't be very much past its use-by date.

  Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
| * | sunrpc/cache: factor out cache_is_expired (NeilBrown, 2010-03-14, 1 file, -5/+8)
  This removes a tiny bit of code duplication, but more importantly prepares for the following patch, which will perform the expiry check in cache_lookup and the rest of the validity check in cache_check.

  Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
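  The factored-out predicate looks roughly like this (a sketch; the exact comparison operators follow cache.c of that time):

      static inline int cache_is_expired(struct cache_detail *detail,
                                         struct cache_head *h)
      {
              /* Expired outright, or made stale by a cache-wide flush. */
              return (h->expiry_time < get_seconds()) ||
                     (detail->flush_time > h->last_refresh);
      }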
| * | sunrpc: don't keep expired entries in the auth caches. (NeilBrown, 2010-03-14, 1 file, -8/+5)
  Currently expired entries remain in the auth caches as long as there is a reference. This was needed long ago when the auth_domain cache used the same cache infrastructure. But since that (being a very different sort of cache) was separated, this test is no longer needed.

  So remove the test on refcnt and tidy up the surrounding code. This allows the cache_dequeue call (which needed to be there to drop a potentially awkward reference) to be moved outside of the spinlock, which is a better place for it.

  Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
* | sunrpc: Include missing smp_lock.h (Frederic Weisbecker, 2010-05-17, 1 file, -0/+1)
  Now that cache_ioctl_procfs() calls the bkl explicitly, we need to include the relevant header as well. This fixes the following build error:

      net/sunrpc/cache.c: In function 'cache_ioctl_procfs':
      net/sunrpc/cache.c:1355: error: implicit declaration of function 'lock_kernel'
      net/sunrpc/cache.c:1359: error: implicit declaration of function 'unlock_kernel'

  Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
* | procfs: Push down the bkl from ioctl (Frederic Weisbecker, 2010-05-17, 1 file, -4/+10)
  Push down the bkl from procfs's ioctl main handler to its users. Only three procfs users implement an ioctl (non unlocked) handler. Turn them into unlocked_ioctl and push down the Devil inside.

  v2: PDE(inode)->data doesn't need to be under bkl
  v3: And don't forget to git-add the result
  v4: Use wrappers to pushdown instead of an invasive and error prone handlers surgery.

  Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Ingo Molnar <mingo@elte.hu> Cc: John Kacur <jkacur@redhat.com> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Al Viro <viro@ZenIV.linux.org.uk> Cc: Alexey Dobriyan <adobriyan@gmail.com>
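  The wrapper pattern used by the pushdown, sketched for the sunrpc cache's procfs channel (as of that kernel; lock_kernel()/unlock_kernel() and PDE() have long since been removed, so this is historical illustration only):

      static long cache_ioctl_procfs(struct file *filp,
                                     unsigned int cmd, unsigned long arg)
      {
              struct inode *inode = filp->f_path.dentry->d_inode;
              struct cache_detail *cd = PDE(inode)->data;
              long ret;

              lock_kernel();          /* BKL now taken here, not in the procfs core */
              ret = cache_ioctl(inode, filp, cmd, arg, cd);
              unlock_kernel();

              return ret;
      }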
* net: Move && and || to end of previous line (Joe Perches, 2009-11-29, 1 file, -3/+2)
  Not including net/atm/.

  Compiled tested x86 allyesconfig only. Added a > 80 column line or two, which I ignored. Existing checkpatch plaints willfully, cheerfully ignored.

  Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
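  The style change in miniature (hypothetical condition, purely illustrative):

      /* before */
      if (test_bit(CACHE_VALID, &h->flags)
          && !test_bit(CACHE_PENDING, &h->flags))
              return 0;

      /* after: the operator ends the previous line */
      if (test_bit(CACHE_VALID, &h->flags) &&
          !test_bit(CACHE_PENDING, &h->flags))
              return 0;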
* sunrpc/cache: avoid variable over-loading in cache_defer_req (NeilBrown, 2009-09-18, 1 file, -9/+9)
  In cache_defer_req, 'dreq' is used for two significantly different values that happen to be of the same type. This is both confusing, and makes it hard to extend the range of one of the values as we will in the next patch. So introduce 'discard' to take one of the values.

  Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
* sunrpc/cache: use list_del_init for the list_head entries in cache_deferred_req (NeilBrown, 2009-09-18, 1 file, -4/+4)
  Using list_del_init is generally safer than list_del, and it will allow us, in a subsequent patch, to see if an entry has already been processed or not.

  Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
* sunrpc/cache: simplify cache_fresh_locked and cache_fresh_unlocked. (NeilBrown, 2009-09-11, 1 file, -13/+10)
  The extra call to cache_revisit_request in cache_fresh_unlocked is not needed, as should have been fairly clear at the time of commit 4013edea9a0b6cdcb1fdf5d4011e47e068fd6efb. If there are requests to be revisited, then we can be sure that CACHE_PENDING is set, so the second call is sufficient. So remove the first call.

  Then remove the 'new' parameter, then remove the return value for cache_fresh_locked which is only used to provide the value for 'new'.

  Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
* sunrpc/cache: change cache_defer_req to return -ve error, not boolean. (NeilBrown, 2009-09-11, 1 file, -5/+5)
  As "cache_defer_req" does not sound like a predicate, having it return a boolean value can be confusing. It is more consistent to return 0 for success and negative for error. Exactly what error code to return is not important as we don't differentiate between reasons why the request wasn't deferred, we only care about whether it was deferred or not.

  Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
* Merge branch 'nfs-for-2.6.32' of git://git.linux-nfs.org/projects/trondmy/nfs-2.6 into for-2.6.32-incoming (J. Bruce Fields, 2009-08-21, 1 file, -149/+473)
  Conflicts: net/sunrpc/cache.c
| * SUNRPC: cache must take a reference to the cache detail's module on open() (Trond Myklebust, 2009-08-19, 1 file, -4/+76)
  Otherwise we Oops if the module containing the cache detail is removed before all cache readers have closed the file.

  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
| * SUNRPC: Add an rpc_pipefs front end for the sunrpc cache code (Trond Myklebust, 2009-08-09, 1 file, -0/+126)
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
| * SUNRPC: Move procfs-specific stuff out of the generic sunrpc cache code (Trond Myklebust, 2009-08-09, 1 file, -129/+190)
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
| * SUNRPC: Allow the cache_detail to specify alternative upcall mechanisms (Trond Myklebust, 2009-08-09, 1 file, -8/+18)
  For events that are rare, such as referral DNS lookups, it makes limited sense to have a daemon constantly listening for upcalls on a channel. An alternative in those cases might simply be to run the app that fills the cache using call_usermodehelper_exec() and friends. The following patch allows the cache_detail to specify alternative upcall mechanisms for these particular cases.

  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
| * SUNRPC: Remove the global temporary write buffer in net/sunrpc/cache.c (Trond Myklebust, 2009-08-09, 1 file, -25/+70)
  While we do want to protect against multiple concurrent readers and writers on each upcall/downcall pipe, we don't want to limit concurrent reading and writing to separate caches. This patch therefore replaces the static buffer 'write_buf', which can only be used by one writer at a time, with use of the page cache as the temporary buffer for downcalls. We still fall back to using the old global buffer if the downcall is larger than PAGE_CACHE_SIZE, since this is apparently needed by the SPKM security context initialisation.

  It then replaces the use of the global 'queue_io_mutex' with the inode->i_mutex in cache_read() and cache_write().

  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
| * SUNRPC: Ensure we initialise the cache_detail before creating procfs files (Trond Myklebust, 2009-08-09, 1 file, -10/+20)
  Also ensure that we destroy those files before we destroy the cache_detail. Otherwise, user processes might attempt to write into uninitialised caches.

  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
| * NFSD: Clean up the idmapper warning... (Trond Myklebust, 2009-08-09, 1 file, -1/+1)
  What part of 'internal use' is so hard to understand?

  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* | sunrpc/cache: recheck cache validity after cache_defer_req (NeilBrown, 2009-08-04, 1 file, -20/+33)
  If cache_defer_req did not leave the request on a queue, then it could possibly have waited long enough that the cache became valid. So check the status after the call.

  Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
* | sunrpc/cache: make sure deferred requests eventually get revisited. (NeilBrown, 2009-08-04, 1 file, -1/+4)
  While deferred requests normally get revisited quite quickly, it is possible for a request to remain in the deferral queue when the cache item is discarded. We can easily make sure that doesn't happen by calling cache_revisit_request just before the final 'put'.

  Also there is a small chance that a race would cause one thread to defer a request against a cache item while another thread is failing to queue an upcall for that item. So when the upcall fails, make sure to revisit all deferred requests.

  Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
* | sunrpc/cache: rename queue_loose to cache_dequeue (NeilBrown, 2009-08-04, 1 file, -4/+4)
  'loose' was a mis-spelling of 'lose', and even that wasn't a good word choice. So give this function a more useful name.

  Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
* sunrpc: align cache_clean work's timer (Anton Blanchard, 2009-06-15, 1 file, -1/+1)
  Align cache_clean work.

  Signed-off-by: Anton Blanchard <anton@samba.org> Cc: Neil Brown <neilb@suse.de> Cc: Trond Myklebust <trond.myklebust@fys.uio.no> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
* proc 2/2: remove struct proc_dir_entry::owner (Alexey Dobriyan, 2009-03-31, 1 file, -4/+0)
  Setting ->owner as done currently (pde->owner = THIS_MODULE) is racy as correctly noted at bug #12454. Someone can look up an entry with NULL ->owner, thus not pinning anything, and release it later resulting in module refcount underflow.

  We can keep ->owner and supply it at registration time like ->proc_fops and ->data. But this leaves ->owner as an easily-manipulated field (just one C assignment) and somebody will forget to unpin the previous module or pin the current one when switching ->owner. ->proc_fops is declared as "const" which should give some thoughts.

  ->read_proc/->write_proc were just fixed to not require ->owner for protection. rmmod'ed directories will be empty and return "." and ".." -- no harm. And directories with tricky enough readdir and lookup shouldn't be modular. We definitely don't want such modular code. Removing ->owner will also make PDE smaller.

  So, let's nuke it. Kudos to Jeff Layton for reminding about this, let's say, oversight. http://bugzilla.kernel.org/show_bug.cgi?id=12454

  Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
* SUNRPC: The sunrpc server code should not be used by out-of-tree modules (Trond Myklebust, 2009-01-07, 1 file, -10/+10)
  Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>