Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Al Viro has cleaned up the way ops are processed and waited for;
orangefs.txt now has an overview of how it works. Several recent
related commits have added to the comments in the code as well.
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
orangefs contains a helper function to calculate the difference
between two timeval structures. We are trying to remove all
instances of timespec from the kernel, and this one is not
used at all, so let's remove it now.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
The new orangefs code uses a helper function to copy a time field from
struct iattr into its private structures. This will conflict with the
move to 64-bit timestamps in the kernel and is generally not necessary.
This replaces the conversion with a simple cast to time64_t that shows
what is going on. As the orangefs-internal representation already uses
64-bit timestamps, there should be no ambiguity about negative values,
and the cast ensures that we treat them as times before 1970 on both
32-bit and 64-bit architectures, rather than as times after 2038. This
patch keeps that behavior.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
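A minimal sketch of the kind of cast described above; the wire_attr
structure and helper name are illustrative, not the actual orangefs code:

#include <linux/fs.h>
#include <linux/time64.h>

/* Illustrative stand-in for the orangefs-internal attribute structure,
 * which already stores 64-bit timestamps. */
struct wire_attr {
        __s64 atime;
};

static void fill_atime(struct wire_attr *attr, const struct iattr *iattr)
{
        /*
         * ia_atime.tv_sec is a signed long on 32-bit kernels of this era;
         * the explicit cast sign-extends it, so a pre-1970 value stays
         * negative instead of turning into a time after 2038.
         */
        attr->atime = (time64_t)iattr->ia_atime.tv_sec;
}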
Size and type are read-only and not in the mask. The times were left
unset despite being in the mask.
We zero-fill the times since the server will fill them in and we will
get the correct time when we fill the inode with getattr.
Signed-off-by: Martin Brandenburg <martin@omnibond.com>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
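Roughly, the behavior described above looks like the sketch below; the
wire-attribute structure and its mask bits are hypothetical, and only
struct iattr and the ATTR_* flags are real kernel interfaces:

#include <linux/fs.h>

#define WIRE_ATTR_ATIME 0x1     /* hypothetical protocol mask bits */
#define WIRE_ATTR_MTIME 0x2

struct wire_attr {              /* illustrative stand-in */
        __u32 mask;
        __s64 atime;
        __s64 mtime;
};

static void convert_attrs_for_setattr(struct wire_attr *attr,
                                      const struct iattr *iattr)
{
        /* Size and type are read-only for us, so they never enter the mask. */
        if (iattr->ia_valid & ATTR_ATIME) {
                attr->mask |= WIRE_ATTR_ATIME;
                attr->atime = 0;        /* server fills in the real value */
        }
        if (iattr->ia_valid & ATTR_MTIME) {
                attr->mask |= WIRE_ATTR_MTIME;
                attr->mtime = 0;        /* refreshed by the next getattr */
        }
}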
Signed-off-by: Martin Brandenburg <martin@omnibond.com>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
I have verified that there is nothing in the userspace daemon version we
are implementing this protocol against that ever looks at this field.
Signed-off-by: Martin Brandenburg <martin@omnibond.com>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Signed-off-by: Martin Brandenburg <martin@omnibond.com>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Signed-off-by: Martin Brandenburg <martin@omnibond.com>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
We only need it while the service operation is actually in progress
since it is only used to co-ordinate the client-core's memory use. The
kernel allocates its own space.
Also clean up some comments which mislead the reader into thinking
the readdir buffers are shared memory.
Signed-off-by: Martin Brandenburg <martin@omnibond.com>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
not needed anymore
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
* turn all those list_del(&op->list) into list_del_init()
* don't pick ops that are already given up in control device
->read()/->write_iter().
* have orangefs_clean_interrupted_operation() notice if op is currently
being copied to/from daemon (by said ->read()/->write_iter()) and
wait for that to finish.
* when we are done copying to/from daemon and find that it had been
given up while we were doing that, wake the waiting ..._clean_interrupted_...
As a result, we are guaranteed that orangefs_clean_interrupted_operation(op)
doesn't return until nobody else can see op. Moreover, we don't need to play
with op refcounts anymore.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
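The first point is easy to see in isolation: list_del() leaves poisoned
pointers behind, while list_del_init() leaves the entry as a valid empty
list that later checks can still rely on. A sketch, with an illustrative
op structure:

#include <linux/list.h>
#include <linux/spinlock.h>

struct op {                     /* illustrative stand-in */
        struct list_head list;
        spinlock_t lock;
};

static void take_op_off_queue(struct op *op)
{
        spin_lock(&op->lock);
        list_del_init(&op->list);       /* entry becomes an empty list */
        spin_unlock(&op->lock);
}

static bool op_still_queued(struct op *op)
{
        /*
         * Only meaningful because of list_del_init() above; after a
         * plain list_del() the next/prev pointers are poisoned and
         * this test would be bogus.
         */
        return !list_empty(&op->list);
}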
... and clean the end of control device ->write_iter() while we are at it
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
shouldn't be needed now
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
new waiting-for-slot logic:
* make request for slot wait for bufmap to be set up if it
comes before it's installed *OR* while it's running down
* make closing control device wait for all slots to be freed
* waiting itself rewritten to (open-coded) analogues of wait_event_...
primitives - we would need wait_event_locked() and, pardon an obscenely
long name, wait_event_interruptible_exclusive_timeout_locked().
* we never wait for more than slot_timeout_secs in total and,
if during the wait the daemon goes away, we only allow
ORANGEFS_BUFMAP_WAIT_TIMEOUT_SECS for it to come back.
* (cosmetic) a bitmap is used instead of an array of zeroes and ones
* old (and only reached if we are about to corrupt memory) waiting
for daemon restart in service_operation() removed.
[Martin's fixes folded]
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
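The bitmap point is the simplest to illustrate: instead of scanning an
array of zeroes and ones, slot allocation can lean on the kernel bitmap
helpers. A sketch with illustrative names and sizes; the waiting logic
described above is omitted, so a caller that gets no slot here would
sleep and retry:

#include <linux/bitmap.h>
#include <linux/spinlock.h>

#define SLOT_COUNT 64                   /* illustrative */

static DECLARE_BITMAP(slot_map, SLOT_COUNT);
static DEFINE_SPINLOCK(slot_lock);

/* Returns a free slot index, or -1 if every slot is busy. */
static int grab_slot(void)
{
        int slot;

        spin_lock(&slot_lock);
        slot = find_first_zero_bit(slot_map, SLOT_COUNT);
        if (slot < SLOT_COUNT)
                __set_bit(slot, slot_map);
        else
                slot = -1;
        spin_unlock(&slot_lock);
        return slot;
}

static void put_slot(int slot)
{
        spin_lock(&slot_lock);
        __clear_bit(slot, slot_map);
        spin_unlock(&slot_lock);
}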
... just hold the spinlock while fetching the field in question.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
* checking that the daemon is running (to decide whether we want to limit
the timeout) should be done *after* the damn thing is included in
the list; doing that before means that if the daemon gets shut down
in between, we'll end up waiting indefinitely (== up to kill -9).
* cancels should go into the head of the queue - the sooner they
are picked, the less work the daemon has to do and the sooner we get to
free the slot held by the aborted operation.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
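The queueing detail is just the difference between list_add() and
list_add_tail(); a sketch with illustrative op/request names (locking and
the daemon-running check are omitted):

#include <linux/list.h>

struct op {                     /* illustrative stand-in */
        struct list_head list;
        bool is_cancel;
};

static LIST_HEAD(request_list);

static void queue_op(struct op *op)
{
        if (op->is_cancel)
                /* head of the queue: picked up sooner, so the slot held
                 * by the aborted operation is freed sooner */
                list_add(&op->list, &request_list);
        else
                list_add_tail(&op->list, &request_list);
}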
turn op->waitq into struct completion...
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
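In kernel terms that is the switch from a bare wait-queue to a
struct completion; a minimal sketch with an illustrative op structure:

#include <linux/completion.h>

struct op {                     /* illustrative stand-in */
        struct completion waitq;
};

static void op_init(struct op *op)
{
        init_completion(&op->waitq);
}

/* Submitter side: sleep until the daemon replies or the timeout expires. */
static long op_wait(struct op *op, unsigned long timeout)
{
        return wait_for_completion_interruptible_timeout(&op->waitq, timeout);
}

/* Daemon side: mark the operation as answered and wake the submitter. */
static void op_done(struct op *op)
{
        complete(&op->waitq);
}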
Suggestion from Dan Carpenter.
Signed-off-by: Martin Brandenburg <martin@omnibond.com>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Make cancels reuse the aborted read/write op, to make sure they do not
fail on lack of memory.
Don't issue a cancel unless the daemon has seen our read/write, has not
replied and isn't being shut down.
If cancel *is* issued, don't wait for it to complete; stash the slot
in there and just have it freed when cancel is finally replied to or
purged (and delay dropping the reference until then, obviously).
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
it's always equal to __orangefs_bufmap and the latter can't change
until we are done
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
the second caller never needs to cancel, actually
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Signed-off-by: Martin Brandenburg <martin@omnibond.com>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Thus d_revalidate is not obliged to check as much, which will
eventually let us hammer the filesystem servers much less.
Signed-off-by: Martin Brandenburg <martin@omnibond.com>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Signed-off-by: Martin Brandenburg <martin@omnibond.com>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
A couple of caches were no longer needed:
- iov_iter improvements to orangefs_devreq_write_iter eliminated
the need for the dev_req_cache.
- removal (months ago) of the old AIO code eliminated the need
for the kiocb_cache.
Also, the use of GFP_KERNEL when calling kmem_cache_(z)alloc for the
remaining caches has been deobfuscated.
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
There were two just alike, making it hard to tell which one you were
looking at in syslog... so I changed one a little by adding some extra
interesting tidbits to it.
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Previously, it would update a live inode. This was fixed, but it never
checked that the inode attributes in the dcache are correct. This
checks all inode attributes and rejects any that are not correct, which
causes a lookup and thus a new getattr.
Perhaps inode_operations->permission should replace or augment some of
this.
There is no actual caching, and this generates a rather excessive number
of network operations back to the filesystem server.
Signed-off-by: Martin Brandenburg <martin@omnibond.com>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
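A skeleton of the kind of check described above; only the d_revalidate
calling convention of that era, LOOKUP_RCU, and -ECHILD are real kernel
interfaces - the attribute-comparison helper is hypothetical:

#include <linux/dcache.h>
#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/namei.h>

/* Hypothetical: the real check would issue a getattr to the filesystem
 * server and compare every cached attribute (mode, uid, gid, size,
 * times, ...) against the fresh values. */
static bool attributes_still_match(struct inode *inode)
{
        return true;
}

static int example_d_revalidate(struct dentry *dentry, unsigned int flags)
{
        if (flags & LOOKUP_RCU)
                return -ECHILD;         /* cannot block in RCU-walk mode */

        if (!attributes_still_match(d_inode(dentry)))
                return 0;               /* invalid: forces a fresh lookup */

        return 1;                       /* dentry and its attributes are good */
}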
Signed-off-by: Martin Brandenburg <martin@omnibond.com>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
fold orangefs_op_initialize() in there, don't bother locking something
nobody else could've seen yet, use kmem_cache_zalloc() instead of
explicit memset()...
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
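The allocation part of that boils down to the sketch below (cache and
structure names are illustrative):

#include <linux/list.h>
#include <linux/slab.h>

struct op {                     /* illustrative stand-in */
        int state;
        struct list_head list;
};

static struct kmem_cache *op_cache;

static struct op *op_alloc(void)
{
        /* zalloc replaces alloc followed by an explicit memset(op, 0, ...) */
        struct op *op = kmem_cache_zalloc(op_cache, GFP_KERNEL);

        if (!op)
                return NULL;

        /*
         * Freshly allocated: nobody else can see it yet, so its fields
         * can be set up without taking any lock.
         */
        INIT_LIST_HEAD(&op->list);
        return op;
}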
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
... we are not going to get woken up anyway, so it's just going to time out
and whine.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
All timeouts are in _seconds_, so all calls are of the form
MSECS_TO_JIFFIES(n * 1000), which is a convoluted way to
spell n * HZ.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
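Since there are HZ jiffies per second, the two spellings are
interchangeable; assuming the orangefs-local macro has the usual shape
(an assumption here, not quoted from the patch), the substitution is just:

#include <linux/jiffies.h>

/*
 * Assumed shape of the orangefs-local macro being replaced:
 *   #define MSECS_TO_JIFFIES(ms) (((ms) * HZ) / 1000)
 * so MSECS_TO_JIFFIES(n * 1000) == ((n * 1000) * HZ) / 1000 == n * HZ.
 */
static inline unsigned long seconds_to_jiffies(unsigned long n)
{
        return n * HZ;
}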
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
reorder if branches...
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
* create with refcount 1
* make op_release() decrement and free if zero (i.e. old put_op()
has become that).
* mark when submitter has given up waiting; from that point nobody
else can move between the lists, change state, etc.
* have daemon read/write_iter grab a reference when picking op
and *always* give it up in the end
* don't put into hash until we know it's been successfully passed to
daemon
* move op->lock _lower_ than htab_in_progress_lock (and make sure
to take it in purge_inprogress_ops())
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
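A reduced sketch of the lifetime rules in that list, using a plain
atomic refcount; the op structure and helper bodies are illustrative,
not the actual orangefs code:

#include <linux/atomic.h>
#include <linux/slab.h>

struct op {                     /* illustrative stand-in */
        atomic_t ref_count;
};

static struct op *op_alloc(void)
{
        struct op *op = kzalloc(sizeof(*op), GFP_KERNEL);

        if (op)
                atomic_set(&op->ref_count, 1);  /* created with refcount 1 */
        return op;
}

/* Daemon read/write_iter: hold a reference while the op is in their hands. */
static void get_op(struct op *op)
{
        atomic_inc(&op->ref_count);
}

/* The old put_op() behavior: drop a reference, free on the last one. */
static void op_release(struct op *op)
{
        if (atomic_dec_and_test(&op->ref_count))
                kfree(op);
}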