path: root/fs/cifs/transport.c
Commit message | Author | Age | Files | Lines
* cifs: store the real expected sequence number in the midJeff Layton2013-05-041-1/+1
| | | | | | | | | | | | | | Currently, the signing routines take a pointer to a place to store the expected sequence number for the mid response. It then stores a value that's one below what that sequence number should be, and then adds one to it when verifying the signature on the response. Increment the sequence number before storing the value in the mid, and eliminate the "+1" when checking the signature. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com> Signed-off-by: Steve French <smfrench@gmail.com>
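A minimal before/after sketch of the idea, assuming the field and pointer names used in the commit text (server->sequence_number, mid->sequence_number, pexpected_response_sequence_number); this is an outline, not the literal patch:

    /* before: the stored number lagged by one, verify path compensated */
    *pexpected_response_sequence_number = server->sequence_number++;
    server->sequence_number++;              /* reserve the response's number too */
    /* ... later, when verifying the response ... */
    expected_seq = mid->sequence_number + 1;

    /* after: increment first, so the mid holds the real expected number */
    *pexpected_response_sequence_number = ++server->sequence_number;
    ++server->sequence_number;
    /* ... later, when verifying the response ... */
    expected_seq = mid->sequence_number;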
* cifs: on send failure, readjust server sequence number downwardJeff Layton2013-05-041-0/+13
| | | | | | | | | | If sending a call to the server fails for some reason (for instance, the sending thread caught a signal), then we must readjust the sequence number downward again or the next send will have it too high. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com> Signed-off-by: Steve French <smfrench@gmail.com>
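A sketch of the shape of the fix, assuming the usual CIFS signing convention that one signed exchange consumes two sequence numbers (request plus expected response); the exact rollback amount and the signing check are illustrative:

    rc = smb_send_rqst(server, &rqst);
    if (rc < 0 && (server->sec_mode &
                   (SECMODE_SIGN_REQUIRED | SECMODE_SIGN_ENABLED)))
            server->sequence_number -= 2;   /* undo request + reserved response */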
* cifs: remove ENOSPC handling in smb_sendvJeff Layton2013-05-041-7/+1
| | | | | | | | | To my knowledge, no one ever reported seeing this pop. Acked-by: Suresh Jayaraman <sjayaraman@novell.com> Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com> Signed-off-by: Steve French <smfrench@gmail.com>
* [CIFS] cifs: Rename cERROR and cFYI to cifs_dbgJoe Perches2013-05-041-30/+31
It's not obvious from reading the macro names that these macros are for debugging. Convert the names to a single, more typical kernel-style cifs_dbg macro:

    cERROR(1, ...)  -> cifs_dbg(VFS, ...)
    cFYI(1, ...)    -> cifs_dbg(FYI, ...)
    cFYI(DBG2, ...) -> cifs_dbg(NOISY, ...)

Move the terminating format newline from the macro to the call site. Add a CONFIG_CIFS_DEBUG function cifs_vfs_err to emit the "CIFS VFS: " prefix for VFS messages. Size is reduced ~1% when CONFIG_CIFS_DEBUG is set (default y):

    $ size fs/cifs/cifs.ko*
       text   data  bss     dec    hex  filename
     265245   2525  132  267902  4167e  fs/cifs/cifs.ko.new
     268359   2525  132  271016  422a8  fs/cifs/cifs.ko.old

Other miscellaneous changes around these conversions:
  o Miscellaneous typo fixes
  o Add terminating \n's to almost all formats and remove them from the macros to be more kernel-style; a few formats previously had defective \n's
  o Remove unnecessary OOM messages as kmalloc() calls dump_stack
  o Coalesce formats to make grep easier, adding missing spaces when coalescing formats
  o Use %s, __func__ instead of embedded function names
  o Remove unnecessary "cifs: " prefixes
  o Convert kzalloc with multiply to kcalloc
  o Remove unused cifswarn macro

Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <smfrench@gmail.com>
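In call-site terms the conversion looks roughly like this; the message text is a hypothetical example, and note that the newline now lives in the format string at the call site:

    /* old */
    cERROR(1, "Send error in SessSetup = %d", rc);
    cFYI(1, "entering %s", __func__);

    /* new */
    cifs_dbg(VFS, "Send error in SessSetup = %d\n", rc);
    cifs_dbg(FYI, "entering %s\n", __func__);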
* cifs: move check for NULL socket into smb_send_rqstJeff Layton2012-12-301-3/+3
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Cai reported this oops: [90701.616664] BUG: unable to handle kernel NULL pointer dereference at 0000000000000028 [90701.625438] IP: [<ffffffff814a343e>] kernel_setsockopt+0x2e/0x60 [90701.632167] PGD fea319067 PUD 103fda4067 PMD 0 [90701.637255] Oops: 0000 [#1] SMP [90701.640878] Modules linked in: des_generic md4 nls_utf8 cifs dns_resolver binfmt_misc tun sg igb iTCO_wdt iTCO_vendor_support lpc_ich pcspkr i2c_i801 i2c_core i7core_edac edac_core ioatdma dca mfd_core coretemp kvm_intel kvm crc32c_intel microcode sr_mod cdrom ata_generic sd_mod pata_acpi crc_t10dif ata_piix libata megaraid_sas dm_mirror dm_region_hash dm_log dm_mod [90701.677655] CPU 10 [90701.679808] Pid: 9627, comm: ls Tainted: G W 3.7.1+ #10 QCI QSSC-S4R/QSSC-S4R [90701.688950] RIP: 0010:[<ffffffff814a343e>] [<ffffffff814a343e>] kernel_setsockopt+0x2e/0x60 [90701.698383] RSP: 0018:ffff88177b431bb8 EFLAGS: 00010206 [90701.704309] RAX: ffff88177b431fd8 RBX: 00007ffffffff000 RCX: ffff88177b431bec [90701.712271] RDX: 0000000000000003 RSI: 0000000000000006 RDI: 0000000000000000 [90701.720223] RBP: ffff88177b431bc8 R08: 0000000000000004 R09: 0000000000000000 [90701.728185] R10: 0000000000000001 R11: 0000000000000000 R12: 0000000000000001 [90701.736147] R13: ffff88184ef92000 R14: 0000000000000023 R15: ffff88177b431c88 [90701.744109] FS: 00007fd56a1a47c0(0000) GS:ffff88105fc40000(0000) knlGS:0000000000000000 [90701.753137] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b [90701.759550] CR2: 0000000000000028 CR3: 000000104f15f000 CR4: 00000000000007e0 [90701.767512] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [90701.775465] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 [90701.783428] Process ls (pid: 9627, threadinfo ffff88177b430000, task ffff88185ca4cb60) [90701.792261] Stack: [90701.794505] 0000000000000023 ffff88177b431c50 ffff88177b431c38 ffffffffa014fcb1 [90701.802809] ffff88184ef921bc 0000000000000000 00000001ffffffff ffff88184ef921c0 [90701.811123] ffff88177b431c08 ffffffff815ca3d9 ffff88177b431c18 ffff880857758000 [90701.819433] Call Trace: [90701.822183] [<ffffffffa014fcb1>] smb_send_rqst+0x71/0x1f0 [cifs] [90701.828991] [<ffffffff815ca3d9>] ? schedule+0x29/0x70 [90701.834736] [<ffffffffa014fe6d>] smb_sendv+0x3d/0x40 [cifs] [90701.841062] [<ffffffffa014fe96>] smb_send+0x26/0x30 [cifs] [90701.847291] [<ffffffffa015801f>] send_nt_cancel+0x6f/0xd0 [cifs] [90701.854102] [<ffffffffa015075e>] SendReceive+0x18e/0x360 [cifs] [90701.860814] [<ffffffffa0134a78>] CIFSFindFirst+0x1a8/0x3f0 [cifs] [90701.867724] [<ffffffffa013f731>] ? build_path_from_dentry+0xf1/0x260 [cifs] [90701.875601] [<ffffffffa013f731>] ? build_path_from_dentry+0xf1/0x260 [cifs] [90701.883477] [<ffffffffa01578e6>] cifs_query_dir_first+0x26/0x30 [cifs] [90701.890869] [<ffffffffa015480d>] initiate_cifs_search+0xed/0x250 [cifs] [90701.898354] [<ffffffff81195970>] ? fillonedir+0x100/0x100 [90701.904486] [<ffffffffa01554cb>] cifs_readdir+0x45b/0x8f0 [cifs] [90701.911288] [<ffffffff81195970>] ? fillonedir+0x100/0x100 [90701.917410] [<ffffffff81195970>] ? fillonedir+0x100/0x100 [90701.923533] [<ffffffff81195970>] ? 
fillonedir+0x100/0x100 [90701.929657] [<ffffffff81195848>] vfs_readdir+0xb8/0xe0 [90701.935490] [<ffffffff81195b9f>] sys_getdents+0x8f/0x110 [90701.941521] [<ffffffff815d3b99>] system_call_fastpath+0x16/0x1b [90701.948222] Code: 66 90 55 65 48 8b 04 25 f0 c6 00 00 48 89 e5 53 48 83 ec 08 83 fe 01 48 8b 98 48 e0 ff ff 48 c7 80 48 e0 ff ff ff ff ff ff 74 22 <48> 8b 47 28 ff 50 68 65 48 8b 14 25 f0 c6 00 00 48 89 9a 48 e0 [90701.970313] RIP [<ffffffff814a343e>] kernel_setsockopt+0x2e/0x60 [90701.977125] RSP <ffff88177b431bb8> [90701.981018] CR2: 0000000000000028 [90701.984809] ---[ end trace 24bd602971110a43 ]--- This is likely due to a race vs. a reconnection event. The current code checks for a NULL socket in smb_send_kvec, but that's too late. By the time that check is done, the socket will already have been passed to kernel_setsockopt. Move the check into smb_send_rqst, so that it's checked earlier. In truth, this is a bit of a half-assed fix. The -ENOTSOCK error return here looks like it could bubble back up to userspace. The locking rules around the ssocket pointer are really unclear as well. There are cases where the ssocket pointer is changed without holding the srv_mutex, but I'm not clear whether there's a potential race here yet or not. This code seems like it could benefit from some fundamental re-think of how the socket handling should behave. Until then though, this patch should at least fix the above oops in most cases. Cc: <stable@vger.kernel.org> # 3.7+ Reported-and-Tested-by: CAI Qian <caiqian@redhat.com> Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <smfrench@gmail.com>
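A sketch of where the check now sits; the function and field names follow the commit text, the body is elided and illustrative rather than the literal patch:

    static int smb_send_rqst(struct TCP_Server_Info *server, struct smb_rqst *rqst)
    {
            struct socket *ssocket = server->ssocket;

            if (ssocket == NULL)
                    return -ENOTSOCK;       /* may race with a reconnect */

            /* ... cork the socket, send the kvec and page arrays, uncork ... */
            return 0;
    }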
* [CIFS] WARN_ON_ONCE if kernel_sendmsg() returns -ENOSPCSteve French2012-10-071-0/+6
kernel_sendmsg() is unlikely to return -ENOSPC, and it might be a bug in a lower layer if it did. However, in the past there might have been cases where -ENOSPC was returned by a low-level driver. Add a WARN_ON_ONCE() so we can confirm that it is safe to assume -ENOSPC is no longer returned. The -ENOSPC-specific handling will be removed once we are sure of that. Reviewed-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Suresh Jayaraman <sjayaraman@suse.com> Signed-off-by: Steve French <smfrench@gmail.com>
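Sketch of the idea; the local names (ssocket, smb_msg, iov, first_vec, n_vec, remaining) are assumptions about the surrounding send loop, not the exact patch:

    rc = kernel_sendmsg(ssocket, &smb_msg, &iov[first_vec],
                        n_vec - first_vec, remaining);
    if (rc == -ENOSPC) {
            /* believed unreachable; shout once if a lower layer still does this */
            WARN_ON_ONCE(1);
            /* ... existing -ENOSPC retry handling, kept until the warning proves it dead ... */
    }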
* cifs: change cifs_call_async to use smb_rqst structsJeff Layton2012-09-241-28/+28
| | | | | | | | | | | For now, none of the callers populate rq_pages. That will be done for writes in a later patch. While we're at it, change the prototype of setup_async_request not to need a return pointer argument. Just return the pointer to the mid_q_entry or an ERR_PTR. Reviewed-by: Pavel Shilovsky <pshilovsky@samba.org> Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <smfrench@gmail.com>
* cifs: teach signing routines how to deal with arrays of pages in a smb_rqstJeff Layton2012-09-241-1/+1
| | | | | | | | | Use the smb_send_rqst helper function to kmap each page in the array and update the hash for that chunk. Reviewed-by: Pavel Shilovsky <pshilovsky@samba.org> Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <smfrench@gmail.com>
* cifs: teach smb_send_rqst how to handle arrays of pagesJeff Layton2012-09-241-2/+54
| | | | | | | | | | | | Add code that allows smb_send_rqst to send an array of pages after the initial kvec array has been sent. For now, we simply kmap the page array and send it using the standard smb_send_kvec function. Eventually, we may want to convert this code to use kernel_sendpage under the hood and avoid the kmap altogether for the page data. Reviewed-by: Pavel Shilovsky <pshilovsky@samba.org> Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <smfrench@gmail.com>
* cifs: cork the socket before a send and uncork it afterwardJeff Layton2012-09-241-0/+12
We want to send SMBs as "atomically" as possible. Prior to sending any data on the socket, cork it to make sure that no non-full frames go out. Afterward, uncork it to make sure all of the data gets pushed out to the wire. Note that this more or less renders the sockopt=TCP_NODELAY mount option obsolete. When TCP_CORK and TCP_NODELAY are used on the same socket, TCP_NODELAY is essentially ignored. Acked-by: Pavel Shilovsky <pshilovsky@samba.org> Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <smfrench@gmail.com>
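A sketch of the cork/uncork bracket around a send, assuming the in-kernel kernel_setsockopt() helper that this era of the client used for socket options:

    int val = 1;

    /* cork: let the stack accumulate our vectors into full-sized frames */
    kernel_setsockopt(ssocket, SOL_TCP, TCP_CORK, (char *)&val, sizeof(val));

    /* ... send every kvec (and, in later patches, every page) of the SMB ... */

    /* uncork: push whatever is still queued out to the wire immediately */
    val = 0;
    kernel_setsockopt(ssocket, SOL_TCP, TCP_CORK, (char *)&val, sizeof(val));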
* cifs: convert send code to use smb_rqst structsJeff Layton2012-09-241-45/+90
| | | | | | | | | | | | | | Again, just a change in the arguments and some function renaming here. In later patches, we'll change this code to deal with page arrays. In this patch, we add a new smb_send_rqst wrapper and have smb_sendv call that. Then we move most of the existing smb_sendv code into a new function -- smb_send_kvec. This seems a little redundant, but later we'll flesh this out to deal with arrays of pages. Reviewed-by: Pavel Shilovsky <pshilovsky@samba.org> Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <smfrench@gmail.com>
* cifs: change signing routines to deal with smb_rqst structsJeff Layton2012-09-241-1/+3
| | | | | | | | | | | | | | | | We need a way to represent a call to be sent on the wire that does not require having all of the page data kmapped. Behold the smb_rqst struct. This new struct represents an array of kvecs immediately followed by an array of pages. Convert the signing routines to use these structs under the hood and turn the existing functions for this into wrappers around that. For now, we're just changing these functions to take different args. Later, we'll teach them how to deal with arrays of pages. Reviewed-by: Pavel Shilovsky <pshilovsky@samba.org> Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <smfrench@gmail.com>
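The rough shape of the new descriptor as the commit describes it: a kvec array immediately followed by an array of pages. Field names mirror the driver's rq_* convention but should be read as illustrative:

    struct smb_rqst {
            struct kvec     *rq_iov;        /* header/parameter vectors */
            unsigned int    rq_nvec;        /* number of vectors */
            struct page     **rq_pages;     /* payload pages, not yet kmapped */
            unsigned int    rq_npages;      /* number of pages in the array */
            unsigned int    rq_pagesz;      /* size of each full page */
            unsigned int    rq_tailsz;      /* bytes used in the last page */
    };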
* CIFS: Enable signing in SMB2Pavel Shilovsky2012-09-241-12/+12
Use hmac-sha256 rather than the hmac-md5 that is used for CIFS/SMB. The signature field in the SMB2 header is 16 bytes instead of 8 bytes. Automatically enable signing on the client when the server requests it and signing ability is available to the client. Signed-off-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com> Signed-off-by: Sachin Prabhu <sprabhu@redhat.com> Signed-off-by: Pavel Shilovsky <piastryyy@gmail.com> Signed-off-by: Steve French <smfrench@gmail.com>
* cifs: print error code if smb signature verification failsSteve French2012-08-191-3/+6
While trying to debug an SMB signature related issue with Windows servers, we figured out it might be easier to debug if we print the error code from cifs_verify_signature(). Also, fix indentation while at it. Signed-off-by: Suresh Jayaraman <sjayaraman@suse.com> Reviewed-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <smfrench@gmail.com>
* CIFS: Setup async request in ops structPavel Shilovsky2012-07-241-2/+2
| | | | | Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org> Signed-off-by: Steve French <smfrench@gmail.com>
* CIFS: Make transport routines work with SMB2Pavel Shilovsky2012-07-241-7/+6
| | | | | | Reviewed-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru> Signed-off-by: Steve French <smfrench@gmail.com>
* CIFS: Extend credit mechanism to process request typePavel Shilovsky2012-07-241-31/+39
Split all requests into echoes, oplock breaks and others; each group uses its own credit slot. This is indicated by the new flags CIFS_ECHO_OP and CIFS_OBREAK_OP, which are not used for CIFS for now. This change is required to support the SMB2 protocol because it processes these commands differently. Acked-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org> Signed-off-by: Steve French <smfrench@gmail.com>
* cifs: rename cifs_sign_smb2 to cifs_sign_smbvJeff Layton2012-07-231-2/+2
| | | | | | | | "smb2" makes me think of the SMB2.x protocol, which isn't at all what this function is for... Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <smfrench@gmail.com>
* cifs: remove bogus reset of smb_buf_length in smb_send routinesJeff Layton2012-07-231-4/+0
| | | | | | | | There's a comment here about how we don't want to modify this length, but nothing in this function actually does. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <smfrench@gmail.com>
* Initialise mid_q_entry before putting it on the pending queueSachin Prabhu2012-07-161-12/+14
| | | | | | | | | | | | | | A user reported a crash in cifs_demultiplex_thread() caused by an incorrectly set mid_q_entry->callback() function. It appears that the callback assignment made in cifs_call_async() was not flushed back to memory suggesting that a memory barrier was required here. Changing the code to make sure that the mid_q_entry structure was completely initialised before it was added to the pending queue fixes the problem. Signed-off-by: Sachin Prabhu <sprabhu@redhat.com> Reviewed-by: Jeff Layton <jlayton@redhat.com> Reviewed-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com> Signed-off-by: Steve French <smfrench@gmail.com>
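The ordering the fix enforces, sketched with names from the driver of that era (AllocMidQEntry, pending_mid_q, GlobalMid_Lock); treat this as an outline of the idea rather than the exact patch:

    mid = AllocMidQEntry(hdr, server);
    if (mid == NULL)
            return -ENOMEM;

    /* finish initialising the entry before anyone else can see it */
    mid->receive = receive;
    mid->callback = callback;
    mid->callback_data = cbdata;

    /* only now publish it, under the lock, so cifsd can never
     * observe a half-initialised mid with a stale callback pointer */
    spin_lock(&GlobalMid_Lock);
    list_add_tail(&mid->qhead, &server->pending_mid_q);
    spin_unlock(&GlobalMid_Lock);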
* CIFS: Move get_next_mid to ops structPavel Shilovsky2012-06-011-1/+1
| | | | | | Reviewed-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org> Signed-off-by: Steve French <sfrench@us.ibm.com>
* CIFS: Move add/set_credits and get_credits_field to ops structurePavel Shilovsky2012-05-231-11/+12
| | | | | | | Acked-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com> Reviewed-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru> Signed-off-by: Steve French <sfrench@us.ibm.com>
* CIFS: Move protocol specific part from SendReceive2 to ops structPavel Shilovsky2012-05-231-3/+4
| | | | | | | Acked-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com> Reviewed-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru> Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: convert send_nt_cancel into a version specific opJeff Layton2012-05-161-38/+8
| | | | | | | | For SMB2, this should be a no-op. Obviously if we wanted to do something for the SMB2 case, we could also define an operation here for it. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru>
* CIFS: Change mid_q_entry structure fieldsPavel Shilovsky2012-03-231-26/+26
| | | | | | | to be protocol-unspecific and big enough to keep both CIFS and SMB2 values. Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru>
* CIFS: Separate protocol-specific code from demultiplex codePavel Shilovsky2012-03-231-2/+2
| | | | Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru>
* CIFS: Separate protocol-specific code from transport routinesPavel Shilovsky2012-03-231-72/+99
so that these functions can be used for SMB2. Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru>
* CIFS: Prepare credits code for a slot reservationPavel Shilovsky2012-03-211-8/+14
| | | | | | | | that is essential for CIFS/SMB/SMB2 oplock breaks and SMB2 echos. Reviewed-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru> Signed-off-by: Steve French <sfrench@us.ibm.com>
* CIFS: Make wait_for_free_request killablePavel Shilovsky2012-03-211-1/+6
to let us kill the process if it hangs waiting for a credit when the session is down and echo is disabled. Reviewed-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru> Signed-off-by: Steve French <sfrench@us.ibm.com>
* CIFS: Introduce credit-based flow controlPavel Shilovsky2012-03-211-24/+20
and send no more requests at once than the credits value allows. For SMB/CIFS it's trivial: increment this value when any message is received and decrement it when one is sent. Reviewed-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru> Signed-off-by: Steve French <sfrench@us.ibm.com>
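A sketch of the CIFS-side accounting described above; the helper names and the exact wait loop are illustrative, not the patch itself:

    /* one credit back for every message received from the server */
    static void add_credits(struct TCP_Server_Info *server, unsigned int add)
    {
            spin_lock(&server->req_lock);
            server->credits += add;
            spin_unlock(&server->req_lock);
            wake_up(&server->request_q);
    }

    /* one credit consumed for every request sent; block while none are left */
    static int wait_for_free_credit(struct TCP_Server_Info *server)
    {
            spin_lock(&server->req_lock);
            while (server->credits == 0) {
                    spin_unlock(&server->req_lock);
                    if (wait_event_killable(server->request_q,
                                            server->credits > 0))
                            return -ERESTARTSYS;    /* fatal signal */
                    spin_lock(&server->req_lock);
            }
            server->credits--;
            server->in_flight++;
            spin_unlock(&server->req_lock);
            return 0;
    }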
* CIFS: Simplify inFlight logicPavel Shilovsky2012-03-211-22/+23
by making it an unsigned integer and surrounding accesses with the req_lock from the server structure. Reviewed-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru> Signed-off-by: Steve French <sfrench@us.ibm.com>
* CIFS: Respect negotiated MaxMpxCountPavel Shilovsky2012-03-201-2/+2
Some servers set this value lower than the hardcoded 50, and we lost the connection when we exceeded this limit. Fix this by respecting the negotiated value and not sending more requests than the server allows. Cc: stable@kernel.org Reviewed-by: Jeff Layton <jlayton@samba.org> Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru> Signed-off-by: Steve French <stevef@smf-gateway.(none)>
* cifs, freezer: add wait_event_freezekillable and have cifs use itJeff Layton2011-10-191-1/+2
| | | | | | | | | | | | | | | | CIFS currently uses wait_event_killable to put tasks to sleep while they await replies from the server. That function though does not allow the freezer to run. In many cases, the network interface may be going down anyway, in which case the reply will never come. The client then ends up blocking the computer from suspending. Fix this by adding a new wait_event_freezable variant -- wait_event_freezekillable. The idea is to combine the behavior of wait_event_killable and wait_event_freezable -- put the task to sleep and only allow it to be awoken by fatal signals, but also allow the freezer to do its job. Signed-off-by: Jeff Layton <jlayton@redhat.com>
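The combination, roughly as such a macro is built from the two existing primitives in freezer.h; treat the exact definition as a sketch:

    #define wait_event_freezekillable(wq, condition)                \
    ({                                                              \
            int __retval;                                           \
            freezer_do_not_count();                                 \
            __retval = wait_event_killable(wq, condition);          \
            freezer_count();                                        \
            __retval;                                               \
    })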
* cifs: add a callback function to receive the rest of the frameJeff Layton2011-10-191-2/+3
| | | | | | | | | | | | | | | In order to handle larger SMBs for readpages and other calls, we want to be able to read into a preallocated set of buffers. Rather than changing all of the existing code to preallocate buffers however, we instead add a receive callback function to the MID. cifsd will call this function once the mid_q_entry has been identified in order to receive the rest of the SMB. If the mid can't be identified or the receive pointer is unset, then the standard 3rd phase receive function will be called. Reviewed-and-Tested-by: Pavel Shilovsky <piastry@etersoft.ru> Signed-off-by: Jeff Layton <jlayton@redhat.com>
* cifs: consolidate signature generating codeJeff Layton2011-10-121-3/+8
| | | | | | | | | | We have two versions of signature generating code. A vectorized and non-vectorized version. Eliminate a large chunk of cut-and-paste code by turning the non-vectorized version into a wrapper around the vectorized one. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <smfrench@gmail.com>
* [CIFS] Cleanup use of CONFIG_CIFS_STATS2 ifdef to make transport routines more readableSteve French2011-08-111-34/+17
Christoph had requested that the stats related code (in CONFIG_CIFS_STATS2) be moved into helpers to make code flow more readable. This patch should help. For example the following section from transport.c:

    spin_unlock(&GlobalMid_Lock);
    atomic_inc(&ses->server->num_waiters);
    wait_event(ses->server->request_q,
               atomic_read(&ses->server->inFlight) < cifs_max_pending);
    atomic_dec(&ses->server->num_waiters);
    spin_lock(&GlobalMid_Lock);

becomes simpler (with the patch below):

    spin_unlock(&GlobalMid_Lock);
    cifs_num_waiters_inc(server);
    wait_event(server->request_q,
               atomic_read(&server->inFlight) < cifs_max_pending);
    cifs_num_waiters_dec(server);
    spin_lock(&GlobalMid_Lock);

Reviewed-by: Jeff Layton <jlayton@redhat.com> CC: Christoph Hellwig <hch@infradead.org> Signed-off-by: Steve French <sfrench@us.ibm.com> Reviewed-by: Pavel Shilovsky <piastry@etersoft.ru>
* CIFS: Fix missing a decrement of inFlight valuePavel Shilovsky2011-08-031-0/+2
when we fail to get a mid entry in cifs_call_async. Cc: stable@kernel.org Signed-off-by: Pavel Shilovsky <piastryyy@gmail.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
* [CIFS] Rename three structures to avoid camel caseSteve French2011-05-271-10/+10
| | | | | | | | | | secMode to sec_mode and cifsTconInfo to cifs_tcon and cifsSesInfo to cifs_ses Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: don't call mid_q_entry->callback under the Global_MidLock (try #5)Jeff Layton2011-05-241-15/+8
| | | | | | | | | | | | | | | | | | | | | Minor revision to the last version of this patch -- the only difference is the fix to the cFYI statement in cifs_reconnect. Holding the spinlock while we call this function means that it can't sleep, which really limits what it can do. Taking it out from under the spinlock also means less contention for this global lock. Change the semantics such that the Global_MidLock is not held when the callback is called. To do this requires that we take extra care not to have sync_mid_result remove the mid from the list when the mid is in a state where that has already happened. This prevents list corruption when the mid is sitting on a private list for reconnect or when cifsd is coming down. Reviewed-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com> Reviewed-by: Pavel Shilovsky <piastry@etersoft.ru> Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: add ignore_pend flag to cifs_call_asyncJeff Layton2011-05-231-2/+3
| | | | | | | | | | The current code always ignores the max_pending limit. Have it instead only optionally ignore the pending limit. For CIFSSMBEcho, we need to ignore it to make sure they always can go out. For async reads, writes and potentially other calls, we need to respect it. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: make cifs_send_async take a kvec arrayJeff Layton2011-05-231-6/+7
| | | | | | | | | | We'll need this for async writes, so convert the call to take a kvec array. CIFSSMBEcho is changed to put a kvec on the stack and pass in the SMB buffer using that. Reviewed-by: Pavel Shilovsky <piastry@etersoft.ru> Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: consolidate SendReceive response checksJeff Layton2011-05-231-116/+41
| | | | | | | | | | | | | | | | | | | Further consolidate the SendReceive code by moving the checks run over the packet into a separate function that all the SendReceive variants can call. We can also eliminate the check for a receive_len that's too big or too small. cifs_demultiplex_thread already checks that and disconnects the socket if that occurs, while setting the midStatus to MALFORMED. It'll never call this code if that's the case. Finally do a little cleanup. Use "goto out" on errors so that the flow of code in the normal case is more evident. Also switch the logErr variable in map_smb_to_linux_error to a bool. Reviewed-by: Pavel Shilovsky <piastry@etersoft.ru> Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: keep BCC in little-endian formatJeff Layton2011-05-191-18/+1
This is the same patch as originally posted, just with some merge conflicts fixed up... Currently, the ByteCount is usually converted to host-endian on receive. This is confusing however, as we need to keep two sets of routines for accessing it, and keep track of when to use each routine. Munging received packets like this also limits when the signature can be calculated. Simplify the code by keeping the received ByteCount in little-endian format. This allows us to eliminate a set of routines for accessing it and we can now drop the *_le suffixes from the accessor functions since that's now implied. While we're at it, switch all of the places that read the ByteCount directly to use the get_bcc inline which should also clean up some unaligned accesses. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
* consistently use smb_buf_length as be32 for cifs (try 3)Steve French2011-05-191-26/+21
| | | | | | | | | | | | | | | | | | | There is one big endian field in the cifs protocol, the RFC1001 length, which cifs code (unlike in the smb2 code) had been handling as u32 until the last possible moment, when it was converted to be32 (its native form) before sending on the wire. To remove the last sparse endian warning, and to make this consistent with the smb2 implementation (which always treats the fields in their native size and endianness), convert all uses of smb_buf_length to be32. This version incorporates Christoph's comment about using be32_add_cpu, and fixes a typo in the second version of the patch. Signed-off-by: Steve French <sfrench@us.ibm.com> Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru> Signed-off-by: Steve French <sfrench@us.ibm.com>
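What the convention looks like at a call site once the field is kept in wire order; the variables here are illustrative, while be32_add_cpu() and cpu_to_be32()/be32_to_cpu() are the standard kernel byte-order helpers the commit refers to:

    /* the RFC1001 length now lives in the header as __be32 the whole time */
    buf->smb_buf_length = cpu_to_be32(total_len);

    /* growing the frame adjusts it in place, no host/wire round trip */
    be32_add_cpu(&buf->smb_buf_length, extra_bytes);

    /* convert to host order only where arithmetic actually needs it */
    send_length = be32_to_cpu(buf->smb_buf_length) + 4;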
* cifs: don't always drop malformed replies on the floor (try #3)Jeff Layton2011-02-111-0/+3
| | | | | | | | | | | | | | | | | | | | | Slight revision to this patch...use min_t() instead of conditional assignment. Also, remove the FIXME comment and replace it with the explanation that Steve gave earlier. After receiving a packet, we currently check the header. If it's no good, then we toss it out and continue the loop, leaving the caller waiting on that response. In cases where the packet has length inconsistencies, but the MID is valid, this leads to unneeded delays. That's especially problematic now that the client waits indefinitely for responses. Instead, don't immediately discard the packet if checkSMB fails. Try to find a matching mid_q_entry, mark it as having a malformed response and issue the callback. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: enable signing flag in SMB header when server has it onJeff Layton2011-02-041-0/+4
| | | | | | | | | | | | | | cifs_sign_smb only generates a signature if the correct Flags2 bit is set. Make sure that it gets set correctly if we're sending an async call. This patch fixes: https://bugzilla.kernel.org/show_bug.cgi?id=28142 Reported-and-Tested-by: JG <jg@cms.ac> Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: don't pop a printk when sending on a socket is interruptedJeff Layton2011-01-311-2/+2
| | | | | | | | | If we kill the process while it's sending on a socket then the kernel_sendmsg will return -EINTR. This is normal. No need to spam the ring buffer with this info. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: send an NT_CANCEL request when a process is signalledJeff Layton2011-01-311-3/+12
| | | | | | | | | | | | Use the new send_nt_cancel function to send an NT_CANCEL when the process is delivered a fatal signal. This is a "best effort" enterprise however, so don't bother to check the return code. There's nothing we can reasonably do if it fails anyway. Reviewed-by: Pavel Shilovsky <piastryyy@gmail.com> Reviewed-by: Suresh Jayaraman <sjayaraman@suse.de> Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: handle cancelled requests betterJeff Layton2011-01-311-7/+36
| | | | | | | | | | | | | | | | | | | | Currently, when a request is cancelled via signal, we delete the mid immediately. If the request was already transmitted however, the client is still likely to receive a response. When it does, it won't recognize it however and will pop a printk. It's also a little dangerous to just delete the mid entry like this. We may end up reusing that mid. If we do then we could potentially get the response from the first request confused with the later one. Prevent the reuse of mids by marking them as cancelled and keeping them on the pending_mid_q list. If the reply comes in, we'll delete it from the list then. If it never comes, then we'll delete it at reconnect or when cifsd comes down. Reviewed-by: Pavel Shilovsky <piastryyy@gmail.com> Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
* cifs: use get/put_unaligned functions to access ByteCountJeff Layton2011-01-201-5/+4
| | | | | | | | | | | | | | | | It's possible that when we access the ByteCount that the alignment will be off. Most CPUs deal with that transparently, but there's usually some performance impact. Some CPUs raise an exception on unaligned accesses. Fix this by accessing the byte count using the get_unaligned and put_unaligned inlined functions. While we're at it, fix the types of some of the variables that end up getting returns from these functions. Acked-by: Pavel Shilovsky <piastryyy@gmail.com> Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Steve French <sfrench@us.ibm.com>
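A sketch of the accessor pattern; BCC() locating the byte count within the header follows existing CIFS convention, the function names are illustrative, and get/put_unaligned_le16() are the generic helpers from asm/unaligned.h:

    #include <asm/unaligned.h>

    static inline __u16 get_bcc(struct smb_hdr *hdr)
    {
            __le16 *bc_ptr = (__le16 *)BCC(hdr);    /* may be misaligned */

            return get_unaligned_le16(bc_ptr);
    }

    static inline void put_bcc(__u16 count, struct smb_hdr *hdr)
    {
            __le16 *bc_ptr = (__le16 *)BCC(hdr);

            put_unaligned_le16(count, bc_ptr);
    }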