Cluster Coherent NFS and Byte Range Locking

Background

Clustered filesystems with exports to NFS clients face several issues with providing byte-range locking over NFS.

First, Linux filesystems that need to implement their own locking method (as opposed to using the generic locking code in the VFS) must define a ->lock() method. However, this method is currently synchronous: it cannot return until the lock has actually succeeded or failed. Cluster filesystems may block for a long time when performing locks, since they may need to wait on communication with other nodes. This is undesirable because LOCKD or nfsd threads that call the ->lock() method may then be tied up for a long time. This is especially problematic for LOCKD, which is currently single-threaded. We need to solve this problem, probably by modifying the VFS lock interface so that it can return results asynchronously.
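To make the problem concrete, a cluster filesystem's ->lock() method looks roughly like the sketch below. The filesystem name and its cluster-communication helper are purely hypothetical; only the method signature matches the one in struct file_operations.

    /* Sketch only: examplefs and its cluster helper are hypothetical. */
    #include <linux/fs.h>

    /* hypothetical: asks the cluster lock manager and waits for its answer */
    extern int examplefs_cluster_lock(struct file *filp, int cmd,
                                      struct file_lock *fl);

    static int examplefs_lock(struct file *filp, int cmd, struct file_lock *fl)
    {
            /*
             * The synchronous interface forces the calling LOCKD or nfsd
             * thread to sit here for the whole cluster round trip before
             * any result can be returned.
             */
            return examplefs_cluster_lock(filp, cmd, fl);
    }

    static const struct file_operations examplefs_fops = {
            /* ... read/write methods omitted ... */
            .lock = examplefs_lock,
    };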

Second, NFSv4 differs from NLM in that NFSv4 clients waiting on contended locks must poll for the lock. This presents fairness problems. The fairness issue affects local filesystems as well as cluster filesystems, but the solution was expected to be closely tied to the solution to the synchronous ->lock() problem.

NFSv4 Blocking Locks

The NFSv4 server needs to implement blocking locks. Unlike NLM clients, NFSv4 clients do not receive a "grant" callback from the server when a lock becomes available. Instead, they poll the server to see if the blocked lock is available. This presents a fairness problem, because a local process waiting for the lock will almost always acquire the contended lock before an NFSv4 client. To solve this problem, the NFSv4 spec suggests that the server should maintain an ordered list of pending blocking locks. To really solve the fairness problem, everyone that may wait on a lock (local processes, LOCKD, and the NFSv4 server) should share such an ordered list.

Tasks

   * Implement a shared blocking lock fair queue
   * Implement the NFSv4 server fl_notify and use the fair queue

Progress

We have written patches that change the semantics of the existing file_lock->fl_block queue to integrate it with the NFSv4 server and to make it more 'fair'. This queue holds all blocking locks in requesting order; new blockers are added to the tail.
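The queue discipline itself is simple; ignoring locking details, a new blocked request is just appended to the blocker's fl_block list, roughly as in this simplified sketch (not the actual patch code):

    /* Queue a waiter in arrival order behind the lock holder. */
    static void queue_blocked_lock(struct file_lock *blocker,
                                   struct file_lock *waiter)
    {
            list_add_tail(&waiter->fl_block, &blocker->fl_block);
    }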

These patches have not been reviewed by the wider kernel community.

We have, however, identified a number of spec and implementation problems along the way, and had fixes incorporated into the Linux kernel and the new NFSv4.1 draft.

The existing fl_block semantics:

When the lock is released, we traverse the fl_block list and wake each blocker, resulting in a 'scrum' to get the lock. The winner then places all losers on its fl_block list. So, this queue is 'fair' in the sense that the blockers wake in order. It's not fair in the sense that LOCKD has bookkeeping tasks to perform before actually grabbing the lock, giving local lockers an advantage. It's even worse for NFSv4: since NFSv4 clients must poll for blocking locks, the NFSv4 server is forced to wait for the client in question to poll again before it can attempt to acquire the lock.
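In other words, the pre-patch behaviour amounts to something like the following (a simplified sketch, not the actual fs/locks.c code): every waiter is simply woken and left to retry the lock on its own.

    #include <linux/fs.h>

    static void wake_all_blockers(struct file_lock *blocker)
    {
            struct file_lock *waiter;

            while (!list_empty(&blocker->fl_block)) {
                    waiter = list_first_entry(&blocker->fl_block,
                                              struct file_lock, fl_block);
                    list_del_init(&waiter->fl_block);
                    /*
                     * Each woken task re-requests the lock itself; the first
                     * one to run wins, and the losers block again on the
                     * winner's fl_block list.
                     */
                    wake_up(&waiter->fl_wait);
            }
    }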

The new 'fair' fl_block semantics:

We tried modifying the VFS lock code so that it actually *applies* all locks it can on behalf of the waiters. We wake those waiters whose locks succeed, and return others to the fl_block list.

We also added a kernel lock to protect the fl_block list during this processing. We immediately ran into a few problems:

   * Claiming the lock means calling posix_lock_file, which calls kmalloc, which can sleep;
     that is not possible while holding a spinlock, so we'd have to use a semaphore or mutex; but
   * For the purposes of mandatory lock checking, this new lock must also be obtained in the
     read/write path to check for lock compliance, and adding a semaphore or mutex to
     the performance-critical read/write path is thought to be inefficient.

We investigated alternative locking schemes, but we soon identified a critical problem: an NFSv4 client that has been polling for a lock may stop polling at any time without notice (if, for example, someone signals the client process that is blocking on the lock). Therefore it is incorrect for the VFS code to acquire the lock on a waiter's behalf, when that waiter may end up not actually wanting the lock.

If the server grants the lock early, and the client chooses not to poll again, then there is no way for the server to cancel the lock that it has already granted. (If the lock has downgraded or coalesced existing locks, then it may not be possible to undo its effect with a simple unlock.)

Correct support for blocking NFSv4 locks therefore *requires* the ability to apply a new kind of byte-range lock to the backend filesystem: one that temporarily blocks other lock requests, but that does not downgrade or coalesce existing posix locks, so that we can later remove it safely if the client does not return.

Our patches add such a lock type to the VFS lock code. After these patches, the VFS lock code again walks through the fl_block list, applying those locks it can, and waking waiters for newly acquired locks. This time, however, we apply our new type of non-coalescing byte-range lock. We do not upgrade the lock to a real posix byte-range lock until the waiter wakes up and requests the lock, at which point we also give the waiter the option of cancelling the lock.

This has the added advantage of simplifying the kernel locking problems we saw with our previous scheme, since our new lock type is simple enough to be applied without requiring memory allocations.
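Putting the last few paragraphs together, the intended flow looks roughly like the sketch below. All helper names here are hypothetical illustrations rather than the identifiers used in the actual patches, and the real patches perform the upgrade under appropriate locking, which is glossed over here.

    #include <linux/fs.h>

    /* hypothetical helpers assumed for this sketch */
    extern int  apply_noncoalescing_lock(struct file *filp, struct file_lock *fl);
    extern void remove_noncoalescing_lock(struct file *filp, struct file_lock *fl);

    /* Called while walking fl_block after a conflicting lock goes away. */
    static void provisionally_grant(struct file *filp, struct file_lock *waiter)
    {
            /*
             * Block conflicting requests on the range without coalescing
             * with or downgrading the owner's existing posix locks, so the
             * grant can still be undone cleanly.
             */
            if (apply_noncoalescing_lock(filp, waiter) == 0)
                    wake_up(&waiter->fl_wait);  /* an NFSv4 client sees this on its next poll */
    }

    /* The waiter came back and still wants the lock: make it a real posix lock. */
    static int waiter_confirms(struct file *filp, struct file_lock *waiter)
    {
            remove_noncoalescing_lock(filp, waiter);
            return posix_lock_file(filp, waiter, NULL);  /* now coalesce normally */
    }

    /* The waiter cancelled (or the client never polled again): just undo the grant. */
    static void waiter_cancels(struct file *filp, struct file_lock *waiter)
    {
            remove_noncoalescing_lock(filp, waiter);
    }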

As a side benefit of this work we also identified some problems with the NFSv4 protocol that we were able to fix:

   * The NFSv4 protocol has no equivalent to the NLM "cancel" call.  This means
     that when a client process stops blocking on a lock, the server may wait up to a
     lease period (typically about a minute) before giving up and allowing another
     waiting client to take the lock.  We found a solution that is backwards
     compatible (and thus implementable by current NFSv4.0 clients and servers), and
     also added language describing this solution to the new NFSv4.1 draft.
   * The NFSv4 protocol has no equivalent to the "grant" call; clients must thus poll
     very frequently if they wish to acquire contended locks in a timely manner.
     The traditional NLM grant call, however, has a number of known races and is
     known to be problematic.  We have therefore proposed an alternative mechanism
     which allows a server to notify a client that a lock is available without
     committing the server to granting the lock to the client.  Specific language
     for NFSv4.1 has been proposed and met with interest, but is awaiting working
     group consensus.


Cluster Filesystem ->lock() Interface

There is currently a filesystem ->lock() method, but it is defined only by a few filesystems that are not exported via NFS. So none of the lock routines that are used by LOCKD or the NFSv4 server bother to call those methods. Cluster filesystems would like NFS to call their own lock methods, which keep a consistent view of a lock across cluster filesystem nodes. But the current ->lock() interface is not suitable for cluster filesystems in a couple of ways.

   * We'd rather not block the NFSv4 server or LOCKD threads for longer than necessary,
     so it'd be nice to have a way to make lock requests asynchronously. This is
     particularly helpful for non-blocking locks, which do not have the option of
     returning a temporary "blocked" response and then responding with a granted callback
     later.
   * Given that in the blocking case we want the filesystem to be able to return from ->lock() 
     without having necessarily acquired the lock, we need to be able to handle the case where 
     a process on the client is interrupted and the client cancels the lock.

Tasks

   * Design and implement an asynchronous ->lock() interface
   * Have LOCKD and the NFSv4 server test for and call the new ->lock()

Progress

Since acquiring a filesystem lock may require communication with remote hosts, and to avoid blocking lock manager threads during such communication, we allow the results to be returned asynchronously.

When a filesystem ->lock() call cannot return an immediate result, even for a non-blocking lock request, the filesystem will return -EINPROGRESS, and then later return the result with a callback registered via the lock_manager_operations struct.
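The filesystem side then looks roughly like the sketch below. The cluster-communication helper is hypothetical, and the exact completion callback used by the patches may differ from the fl_notify shown here; the point is simply that ->lock() returns -EINPROGRESS immediately and the real result arrives later through the caller's fl_lmops callbacks.

    #include <linux/fs.h>

    /* hypothetical: sends the request to the cluster lock manager without waiting */
    extern int examplefs_queue_cluster_lock(struct file *filp, int cmd,
                                            struct file_lock *fl);

    static int examplefs_lock(struct file *filp, int cmd, struct file_lock *fl)
    {
            int err = examplefs_queue_cluster_lock(filp, cmd, fl);

            if (err)
                    return err;
            /* The answer will arrive later in examplefs_lock_done() below. */
            return -EINPROGRESS;
    }

    /* Called by the filesystem when the cluster lock manager replies. */
    static void examplefs_lock_done(struct file_lock *fl, int result)
    {
            /*
             * Hand the result back to LOCKD or the NFSv4 server through the
             * lock_manager_operations callback it registered; exactly how
             * 'result' is conveyed is elided in this sketch.
             */
            if (fl->fl_lmops && fl->fl_lmops->fl_notify)
                    fl->fl_lmops->fl_notify(fl);
    }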

An FL_CANCEL flag is added to the struct file_lock to indicate to the file system that the caller wants to cancel the provided lock.

New routines vfs_lock_file, vfs_test_lock, and vfs_cancel_lock replace posix_lock_file, posix_test_lock, and posix_cancel_lock in LOCKD and the NFSv4 server. They call the new filesystem ->lock() method if it exists, and otherwise fall back to the posix counterparts.
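For example, vfs_lock_file amounts to little more than the following dispatch (a sketch modelled on the submitted patches):

    #include <linux/fs.h>

    int vfs_lock_file(struct file *filp, unsigned int cmd, struct file_lock *fl,
                      struct file_lock *conf)
    {
            /* Let the filesystem handle the lock itself if it defines ->lock() ... */
            if (filp->f_op && filp->f_op->lock)
                    return filp->f_op->lock(filp, cmd, fl);
            /* ... otherwise fall back to the generic posix locking code. */
            return posix_lock_file(filp, fl, conf);
    }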

Status

Our solution has been tested with the GPFS file system. The relevant patches have been submitted to the Linux community, and we are responding to comments.

A major issue for acceptance is the lack of a consumer in the Linux kernel - e.g. a cluster file system with byte-range locking.
