Server 4.0 and 4.1 issues
This is a list of what must be done before considering the #4.0 and #4.1 Linux server implementations to be minimally acceptable.
For the implementation to be minimally acceptable:
- someone upgrading from the previous version should experience no loss in functionality;
- server behavior should be close enough to the spec that clients will not be forced into undocumented workarounds.
This may not be as simple as looking through the RFCs for "MUST"s, since there are a few features that are mandated by the RFCs but are nevertheless rarely (or never) implemented, and whose absence is easily worked around on the client.
4.1
Currently bfields is actively working on #BIND_CONN_TO_SESSION and #Trunking, and he is reviewing patches from Neil Brown to address #deferral fixes. Other tasks are unclaimed.
Highest priority
These are items which must be completed before the 4.1 server can be considered minimally acceptable.
current stateid
See sections 8.2.3 and 16.2.3.1.2.
Also check that we're correctly handling the other new special stateid types introduced in 4.1.
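For illustration, the substitution might look roughly like the following. The special value (a seqid of 1 with an all-zero "other" field) is taken from section 8.2.3, but the struct and helper names here are made up for the sketch, not the actual nfsd code.

```c
/* Illustrative user-space model only -- not the kernel's nfsd code.
 * Struct and function names are hypothetical. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

struct stateid {
	uint32_t seqid;
	char     other[12];
};

struct compound_state {
	struct stateid current_stateid;   /* saved by the previous op */
	bool           current_valid;     /* has any op set it yet? */
};

/* RFC 5661 8.2.3: the "current stateid" special value has a seqid of 1
 * and an all-zero "other" field. */
static bool is_current_stateid(const struct stateid *sid)
{
	static const char zeros[12];

	return sid->seqid == 1 && memcmp(sid->other, zeros, 12) == 0;
}

/* Called at the start of any op that takes a stateid argument: replace
 * the special value with the compound's current stateid, or fail if no
 * earlier op has established one. */
static int resolve_stateid(struct compound_state *cs, struct stateid *arg)
{
	if (!is_current_stateid(arg))
		return 0;
	if (!cs->current_valid)
		return -1;	/* would map to NFS4ERR_BAD_STATEID */
	*arg = cs->current_stateid;
	return 0;
}
```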
BIND_CONN_TO_SESSION
This is mandatory to implement on the server.
This is not just for exotic multi-connection setups.
If a client opts for #SP4_MACH_CRED protection, and if its tcp connection is broken for some reason, then it may choose to give up all its state and start from scratch. That may be good enough for very minimal first 4.1 implementations. However, clients that wish to reconnect without giving up their session state (e.g., the reply cache) will need to use BIND_CONN_TO_SESSION to associate the new connection to the old session.
Even with SP4_NONE, clients will want to be able to reconnect without losing the backchannel. Again, that will require BIND_CONN_TO_SESSION.
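Roughly, the server-side decision looks something like this. The direction constants follow the spec's channel_dir_from_client4/channel_dir_from_server4 enums; the session/connection structures and lookup are invented for the sketch and do not match the real nfsd data structures.

```c
/* Illustrative user-space model of the BIND_CONN_TO_SESSION decision;
 * layouts and helpers here are invented. */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

enum { CDFC4_FORE = 0x1, CDFC4_BACK = 0x2,
       CDFC4_FORE_OR_BOTH = 0x3, CDFC4_BACK_OR_BOTH = 0x7 };
enum { CDFS4_FORE = 0x1, CDFS4_BACK = 0x2, CDFS4_BOTH = 0x3 };

struct connection;	/* the transport the request arrived on */

struct session {
	char               sessionid[16];
	struct connection *fore_conns[4];	/* simplified: fixed slots */
	struct connection *back_conns[4];
	int                nr_fore, nr_back;
};

#define MAX_SESSIONS 8
static struct session *sessions[MAX_SESSIONS];

static struct session *find_session(const char sessionid[16])
{
	int i;

	for (i = 0; i < MAX_SESSIONS; i++)
		if (sessions[i] &&
		    !memcmp(sessions[i]->sessionid, sessionid, 16))
			return sessions[i];
	return NULL;
}

/* Returns the CDFS4_* direction actually bound, or -1 where the real
 * server would return NFS4ERR_BADSESSION (or BADXDR for a bogus dir). */
int bind_conn_to_session(struct connection *conn,
			 const char bctsa_sessid[16], uint32_t bctsa_dir)
{
	struct session *ses = find_session(bctsa_sessid);
	int dir;

	if (!ses)
		return -1;

	/* A simple server can always grant the "or both" requests as BOTH. */
	switch (bctsa_dir) {
	case CDFC4_FORE:		dir = CDFS4_FORE; break;
	case CDFC4_BACK:		dir = CDFS4_BACK; break;
	case CDFC4_FORE_OR_BOTH:
	case CDFC4_BACK_OR_BOTH:	dir = CDFS4_BOTH; break;
	default:			return -1;
	}
	if ((dir & CDFS4_FORE) && ses->nr_fore < 4)
		ses->fore_conns[ses->nr_fore++] = conn;
	if ((dir & CDFS4_BACK) && ses->nr_back < 4)
		ses->back_conns[ses->nr_back++] = conn;
	return dir;
}
```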
Trunking
XXX: We need to break this down into a more careful list of the minimum we need to implement.
Both clientid (multiple sessions per client) and session (multiple connections per session) trunking are mandatory for a server to support. Therefore a client would be within its rights to simply refuse to interoperate with a server that didn't support either.
We could ask whether it is actually likely that a client will do that, and if there are instead obvious errors we could return that any client is likely to be able to handle gracefully.
On the first question: Supporting trunking really just means doing what the spec says when we receive multiple exchange_id's, create_sessions, or transport connections from the same client. These can arise in simple situations. For example, multi-homed servers need to know how to handle the former. Client recovery of various kinds (see BIND_CONN_TO_SESSION above) may also require that multiple connections be associated with a single session over time, even if only one is in use at a time.
On the second question: I don't see any reliable way to error out: neither BIND_CONN_TO_SESSION nor BACKCHANNEL_CTL have allowable errors that seem reasonable to me in this case. CREATE_SESSION does at least allow returning NOSPC in the case where we can't commit to the additional DRC memory, so maybe we could get away with using that in the case where we don't want to provide any more sessions.
So I think if we don't implement this soon we'll end up with idiosyncratic behavior that will be hard for clients to work around.
We'll also need some pynfs tests to make sure we're getting this right.
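As a rough illustration of the clientid-trunking half (multiple EXCHANGE_IDs from the same client owner), something like the following, with verifier, principal, and confirmation handling all omitted and every name invented:

```c
/* Illustrative user-space model, not nfsd code: match on co_ownerid and
 * hand back the existing clientid when the same client owner shows up
 * again, so it can create additional sessions under one clientid. */
#include <stdint.h>
#include <string.h>

#define MAX_CLIENTS 16

struct client {
	char     co_ownerid[64];
	size_t   co_ownerid_len;
	uint64_t clientid;
	int      in_use;
};

static struct client client_table[MAX_CLIENTS];
static uint64_t next_clientid = 1;

/* Returns the clientid to put in the EXCHANGE_ID reply: an existing one
 * if this co_ownerid is already known (trunking, or a recovering
 * client), otherwise a fresh one; 0 if we can't accommodate it. */
uint64_t handle_exchange_id(const char *ownerid, size_t len)
{
	struct client *free_slot = NULL;
	int i;

	for (i = 0; i < MAX_CLIENTS; i++) {
		struct client *c = &client_table[i];

		if (!c->in_use) {
			if (!free_slot)
				free_slot = c;
			continue;
		}
		if (c->co_ownerid_len == len &&
		    memcmp(c->co_ownerid, ownerid, len) == 0)
			return c->clientid;	/* same client owner: reuse */
	}
	if (!free_slot || len > sizeof(free_slot->co_ownerid))
		return 0;	/* a real server would return an NFS error */
	memcpy(free_slot->co_ownerid, ownerid, len);
	free_slot->co_ownerid_len = len;
	free_slot->clientid = next_clientid++;
	free_slot->in_use = 1;
	return free_slot->clientid;
}
```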
GSS
Even though kerberos is mandatory, the fact is that every implementation is capable of running without it. So we could dodge some of these requirements by temporarily turning off support for the combination of GSS and 4.1.
However, I'd rather avoid the confusion that would come with turning off a preexisting major feature in a new protocol version.
So, we need to look at requirements for correct GSS support on 4.1.
The CREATE_SESSION operation allows the client to request certain security on the backchannel (with the csa_sec_parms field), and doesn't give the server any way to negotiate this (other than failing the whole request). So, if we support GSS, we should support this.
The same argument applies to SSV: the client requests a certain kind of state protection, and we don't have any reasonable way to refuse it. Unfortunately, it's unclear whether others are going to implement SSV. So for now it may make sense to give up and return SERVERFAULT in this case, and leave future clients to deal with that behavior. To avoid returning the same SERVERFAULT error for a variety of features first negotiated at EXCHANGE_ID/CREATE_SESSION time (trunking, backchannel security), we need to be careful to support those other features.
Others do appear to be supporting SP4_MACH_CRED, unlike SSV, so that would be a useful minimum for us to support.
More details for gss backchannel support:
We must allow the client to pass the server gss contexts to use on the backchannel.
SEQ4_STATUS_CB_GSS_CONTEXTS_EXPIRING and SEQ4_STATUS_CB_GSS_CONTEXTS_EXPIRED should be set when required.
See the end of section 18.36.4 for more implementation details.
DRC limit checking
We check for replies that are too big only *after* performing the operation in question. Depending on the operation, that may be too late to return NFS4ERR_REP_TOO_BIG_TO_CACHE. (For example, an irreversible filesystem operation may already have been performed.) We need to figure out how to estimate the size of the response before performing an operation, at least for operations that actually change the filesystem.
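Something like the following shape, where each op supplies a worst-case reply-size estimate that is checked before the op runs; all the names here are invented, and the per-op estimates are of course the hard part:

```c
/* Illustrative model only: refuse to run an op whose reply might not
 * fit in the cached-reply limit, instead of finding out afterwards. */
#include <stdint.h>
#include <stdbool.h>

struct op {
	uint32_t opcode;
	uint32_t (*max_reply_size)(const struct op *op);  /* per-op estimate */
	bool     modifies_fs;				   /* irreversible? */
	int      (*execute)(struct op *op);
};

struct session_limits {
	uint32_t ca_maxresponsesize_cached;	/* from CREATE_SESSION */
};

int process_op(const struct session_limits *lim, struct op *op,
	       uint32_t bytes_already_encoded, bool cache_this_reply)
{
	if (cache_this_reply &&
	    bytes_already_encoded + op->max_reply_size(op) >
					lim->ca_maxresponsesize_cached) {
		/*
		 * Refuse up front: once op->execute() has, say, removed a
		 * file, it is too late to return this error honestly.
		 * Would map to NFS4ERR_REP_TOO_BIG_TO_CACHE.
		 */
		return -1;
	}
	return op->execute(op);
}
```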
Callback failure handling
The server is required to set SEQ4_STATUS_CB_PATH_DOWN as long as it lacks any usable backchannel for the client. (Also, CB_PATH_DOWN should be returned on DESTROY_SESSION when appropriate.)
SEQ4_STATUS_CB_PATH_DOWN_SESSION is required when unable to retry a callback due to lack of a callback for that particular session.
Set SEQ4_STATUS_BACKCHANNEL_FAULT on encountering "unrecoverable fault with the backchannel (e.g. it has lost track of the sequence ID for a slot in the backchannel)."
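Sketching how the sr_status_flags word for the SEQUENCE reply might be assembled; the flag bit positions below are the spec's as I recall them (double-check against the XDR), and the per-client state tracked here is invented and simplified:

```c
/* Illustrative model of assembling sr_status_flags; the tracked state
 * fields are invented, and locking is ignored. */
#include <stdint.h>
#include <stdbool.h>

/* Bit positions per RFC 5661; verify before relying on them. */
#define SEQ4_STATUS_CB_PATH_DOWN		(1u << 0)
#define SEQ4_STATUS_CB_PATH_DOWN_SESSION	(1u << 9)
#define SEQ4_STATUS_BACKCHANNEL_FAULT		(1u << 10)

struct client_cb_state {
	bool any_usable_backchannel;   /* across all the client's sessions */
	bool this_session_backchannel; /* usable backchannel on this session */
	bool backchannel_fault;	       /* e.g. lost track of a bc slot seqid */
};

uint32_t sequence_status_flags(const struct client_cb_state *st)
{
	uint32_t flags = 0;

	if (!st->any_usable_backchannel)
		flags |= SEQ4_STATUS_CB_PATH_DOWN;
	else if (!st->this_session_backchannel)
		flags |= SEQ4_STATUS_CB_PATH_DOWN_SESSION;
	if (st->backchannel_fault)
		flags |= SEQ4_STATUS_BACKCHANNEL_FAULT;
	return flags;
}
```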
Miscellaneous Mandatory Operations
DESTROY_CLIENTID, FREE_STATEID, SECINFO_NO_NAME, and TEST_STATEID are not currently used by clients, but will be (and the spec recommends their use in common cases), and clients should not be expected to know how to recover from the case where they are not supported. They should also be fairly easy to implement.
SEQ4_STATUS_RECALLABLE_STATE_REVOKED
Set SEQ4_STATUS_RECALLABLE_STATE_REVOKED when a client's failure to return a recallable object causes us to revoke the object, and be prepared to handle a FREE_STATEID from the client as acknowledgement.
(None of the STATE_REVOKED bits should be required as long as we don't partially revoke state (which we don't, under 4.0 or 4.1).)
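Very roughly, the bookkeeping could look like this (all names invented): remember each revoked recallable object, report the flag while any remain, and treat FREE_STATEID as the acknowledgement that clears them.

```c
/* Illustrative user-space model only; no locking, fixed-size table. */
#include <stdbool.h>
#include <string.h>

struct stateid { unsigned int seqid; char other[12]; };

struct revoked_entry {
	struct stateid sid;
	bool           in_use;
};

#define MAX_REVOKED 32

struct client {
	struct revoked_entry revoked[MAX_REVOKED];
};

/* Called when the recall of a delegation (or layout) times out. */
void revoke_recallable(struct client *clp, const struct stateid *sid)
{
	int i;

	for (i = 0; i < MAX_REVOKED; i++) {
		if (!clp->revoked[i].in_use) {
			clp->revoked[i].sid = *sid;
			clp->revoked[i].in_use = true;
			return;
		}
	}
	/* Table full: a real server would handle this more gracefully. */
}

/* SEQUENCE keeps setting the flag while anything is unacknowledged. */
bool recallable_state_revoked(const struct client *clp)
{
	int i;

	for (i = 0; i < MAX_REVOKED; i++)
		if (clp->revoked[i].in_use)
			return true;
	return false;
}

/* FREE_STATEID is the client's acknowledgement; returns 0 on success,
 * -1 where the real server would return NFS4ERR_BAD_STATEID. */
int free_stateid(struct client *clp, const struct stateid *sid)
{
	int i;

	for (i = 0; i < MAX_REVOKED; i++) {
		struct revoked_entry *e = &clp->revoked[i];

		if (e->in_use && !memcmp(&e->sid, sid, sizeof(*sid))) {
			e->in_use = false;
			return 0;
		}
	}
	return -1;
}
```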
ACL retention bits
We have no plans to really implement these, but it appears that we at least need to accept them from clients. So for now we'll ignore them when they're set on an ACL, and always return them as zero on any read of an ACL.
Check 4.0/4.1 interactions
Catch clients that attempt to send a mixture of 4.0 and 4.1 compounds from the same clientid.
We don't have any particular responsibilities to such a client, as it would be operating out of spec. Mainly I want to make sure the code won't implicitly assume clients aren't doing that, allowing possible corruption of data structures.
While we're here: see also section 2.4.1 of rfc 5661.
deferral fixes
The current code returns ERR_DELAY whenever an upcall is required instead of using the server's deferral mechanism, since that mechanism replays a request internally, causing SEQUENCE to fail on the second time through.
These DELAYS are hard on clients, and will cause unacceptable delays in some cases. Fix the deferral code to sleep at least a little before giving up and returning ERR_DELAY.
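Something like this, as a user-space sketch of the idea; the real fix would sleep on a waitqueue rather than poll, and the names here are invented:

```c
/* Illustrative model: rather than returning NFS4ERR_DELAY the instant
 * an upcall is needed, wait briefly for it to finish, and only then
 * give up and make the client retry. */
#include <stdbool.h>
#include <time.h>

struct upcall {
	volatile bool done;	/* set by whatever completes the upcall */
};

/* Returns 0 if the upcall finished in time, -1 where the real code
 * would return NFS4ERR_DELAY. */
int wait_briefly_for_upcall(struct upcall *uc, int max_ms)
{
	struct timespec tick = { 0, 10 * 1000 * 1000 };	/* 10 ms */
	int waited = 0;

	while (!uc->done) {
		if (waited >= max_ms)
			return -1;	/* NFS4ERR_DELAY */
		nanosleep(&tick, NULL);
		waited += 10;
	}
	return 0;
}
```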
backchannel attribute negotiation
See comment in alloc_init_session(). I think all we need to do is sanity-check the values and fail the CREATE_SESSION if they don't meet our minimal requirements.
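Roughly like this; the field names are the spec's channel_attrs4 ones, but the minimums are placeholders picked purely for illustration:

```c
/* Illustrative sketch: sanity-check the back-channel attributes a
 * client offers in CREATE_SESSION and fail the call if the channel is
 * too small for the callbacks we will need to send. */
#include <stdint.h>

struct channel_attrs {
	uint32_t ca_maxrequestsize;
	uint32_t ca_maxresponsesize;
	uint32_t ca_maxoperations;
	uint32_t ca_maxrequests;
};

/* Rough lower bounds, chosen for illustration only: enough room for a
 * CB_SEQUENCE + CB_RECALL compound, at least one slot, two ops. */
#define MIN_CB_REQUEST_SIZE	512
#define MIN_CB_RESPONSE_SIZE	512
#define MIN_CB_OPERATIONS	2
#define MIN_CB_SLOTS		1

/* Returns 0 if acceptable, -1 where the server would fail the
 * CREATE_SESSION (e.g. with NFS4ERR_TOOSMALL). */
int check_backchannel_attrs(const struct channel_attrs *ca)
{
	if (ca->ca_maxrequestsize < MIN_CB_REQUEST_SIZE ||
	    ca->ca_maxresponsesize < MIN_CB_RESPONSE_SIZE ||
	    ca->ca_maxoperations < MIN_CB_OPERATIONS ||
	    ca->ca_maxrequests < MIN_CB_SLOTS)
		return -1;
	return 0;
}
```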
Done, needs testing
compound op ordering enforcement
DESTROY_SESSION must be the final operation in a compound request, and NFS4ERR_NOT_ONLY_OP should be returned when appropriate. Make sure a session is defined whenever the code expects it.
The risk here is that there may be nasty DOS's (or worse) against a server that doesn't check this kind of thing carefully.
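This is the shape of the check; the exact rule set needs to come from the spec rather than from this sketch, and the op constants and names here are invented:

```c
/* Illustrative ordering checks on a decoded compound. */
#include <stdbool.h>

enum op { OP_SEQUENCE, OP_EXCHANGE_ID, OP_CREATE_SESSION,
	  OP_BIND_CONN_TO_SESSION, OP_DESTROY_SESSION,
	  OP_DESTROY_CLIENTID, OP_OTHER };

static bool allowed_without_sequence(enum op op)
{
	switch (op) {
	case OP_EXCHANGE_ID:
	case OP_CREATE_SESSION:
	case OP_BIND_CONN_TO_SESSION:
	case OP_DESTROY_SESSION:
	case OP_DESTROY_CLIENTID:
		return true;
	default:
		return false;
	}
}

/* Returns 0 if the op list is acceptable, -1 where the server would
 * return NFS4ERR_NOT_ONLY_OP (or a similar error). */
int check_compound(const enum op *ops, int numops)
{
	int i;

	if (numops == 0)
		return 0;

	if (ops[0] != OP_SEQUENCE) {
		/* Without SEQUENCE, only a single "standalone" op is legal. */
		if (numops != 1 || !allowed_without_sequence(ops[0]))
			return -1;
		return 0;
	}
	for (i = 1; i < numops; i++) {
		if (ops[i] == OP_BIND_CONN_TO_SESSION)
			return -1;	/* must be the only op in its compound */
		if (ops[i] == OP_DESTROY_SESSION && i != numops - 1)
			return -1;	/* only allowed as the final op */
	}
	return 0;
}
```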
Keep client from expiring while in use by session
The session associated with a compound may be implicitly referred to by individual operations. For example, RECLAIM_COMPLETE implicitly applies to the client associated with the current session. However, we don't currently do anything to prevent the client from being freed partway through processing a compound.
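The obvious fix is to pin the client for the duration of the compound; very roughly (names invented, atomicity and locking omitted):

```c
/* Illustrative user-space model: lease expiry only frees the client
 * once no compound still holds a reference to it. */
#include <stdbool.h>
#include <stdlib.h>

struct client {
	int  refcount;		/* would be an atomic/kref in the kernel */
	bool expired;		/* lease ran out while references were held */
};

static void client_get(struct client *clp)
{
	clp->refcount++;
}

static void client_free(struct client *clp)
{
	free(clp);
}

static void client_put(struct client *clp)
{
	if (--clp->refcount == 0 && clp->expired)
		client_free(clp);	/* deferred from lease expiry */
}

/* Lease expiry no longer frees the client outright... */
void expire_client(struct client *clp)
{
	clp->expired = true;
	if (clp->refcount == 0)
		client_free(clp);
}

/* ...because compound processing holds a reference across the ops. */
void process_compound(struct client *clp)
{
	client_get(clp);
	/* ... SEQUENCE, RECLAIM_COMPLETE, etc. run here ... */
	client_put(clp);
}
```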
Server Reboot Recovery
We need at least basic RECLAIM_COMPLETE support.
Question: do we need to set SEQ4_STATUS_RESTART_RECLAIM_NEEDED on any new session created by a preexisting client during the grace period? Seems like that should be necessary only if we implement persistent sessions, but I suppose it can't harm to set it otherwise.
The reboot recovery system common to 4.0 and 4.1 needs some work, but that's a preexisting 4.0 problem.
Not needed immediately
This is stuff that is still a high priority, but that we can temporarily get away without doing, on the grounds that it isn't absolutely required for minimal interoperability, and/or doesn't introduce any new problems that don't already exist in the 4.0 implementation.
Referring triples
So, the particular requirement, from 2.10.6.3, is below (and in the below you can take the "client operation" to be an open, and the "associated object" to be a delegation created by that open):
"For each client operation which might result in some sort of server callback, the server SHOULD "remember" the { session ID, slot ID, sequence ID } triple of the client request until the slot ID retirement rules allow the server to determine that the client has, in fact, seen the server's reply. Until the time the { session ID, slot ID, sequence ID } request triple can be retired, any recalls of the associated object MUST carry an array of these referring identifiers (in the CB_SEQUENCE operation's arguments), for the benefit of the client."
If we ignore that "MUST", the result will be for the client to return a BADHANDLE or BADSTATEID error, as in v4.0. We have code to handle that case (by retrying) on the server. So if we ignore this requirement, the resulting behavior will be no worse than in 4.0. So I think we can get away with keeping this a *slightly* lower priority than the other stuff.
(I'd still like to see this done--if possible, at about the time it's done on the client. But it's a higher priority task on the client because there it really is mandatory: a server that lists the referring triples correctly does have a right not to have to handle those temporary BADHANDLE/BADSTATEID errors.)
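For reference, the bookkeeping the quoted text asks for is fairly small; something like this, with all names invented:

```c
/* Illustrative sketch: remember which { session, slot, seqid } created
 * each delegation, and hand that triple to CB_SEQUENCE when recalling. */
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

struct referring_triple {
	char     sessionid[16];
	uint32_t slotid;
	uint32_t sequenceid;
};

struct delegation {
	struct referring_triple ref;
	bool                    ref_valid;	/* cleared once retired */
	/* ... file, stateid, etc. ... */
};

/* Called when an OPEN hands out a delegation. */
void record_referring_triple(struct delegation *dp,
			     const char sessionid[16],
			     uint32_t slotid, uint32_t sequenceid)
{
	memcpy(dp->ref.sessionid, sessionid, 16);
	dp->ref.slotid = slotid;
	dp->ref.sequenceid = sequenceid;
	dp->ref_valid = true;
}

/* Called when building CB_SEQUENCE for a CB_RECALL of the delegation;
 * returns how many triples were written into the args (0 or 1 here). */
int fill_cb_sequence_referring_list(const struct delegation *dp,
				    struct referring_triple *out, int max)
{
	if (!dp->ref_valid || max < 1)
		return 0;
	out[0] = dp->ref;
	return 1;
}

/* Called when the slot-retirement rules show the client has seen the
 * OPEN reply; after this, recalls no longer need to list the triple. */
void retire_referring_triple(struct delegation *dp)
{
	dp->ref_valid = false;
}
```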
Fix ERROR_RESOURCE and BADXDR returns
We shouldn't be returning RESOURCE to 4.1 clients at all, and most of our BADXDR returns are probably also incorrect--instead we should be returning NFS4ERR_REP_TOO_BIG, NFS4ERR_REQ_TOO_BIG, NFS4ERR_TOO_MANY_OPS, etc.
SSV
This is still listed as mandatory in the spec, and while clients and other servers don't seem to be working on implementing this, it's not yet clear to me that there's a consensus to drop it.
4.0
Highest priority
Required for the 4.0 server to be minimally acceptable.
We may accept new features into 4.1 without requiring these be fixed, but it will be a huge problem if they aren't somehow fixed soon.
Breaking delegations when required
Our delegation implementation does not currently revoke delegations on rename or unlink of a delegated file, leading to stale client caches in some cases.
We have CITI patches to address this problem in the VFS. They still have some bugs, and the design needs to be revisited.
See [[1]] for discussion.
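The shape of what's needed, very much simplified (all names here are invented; the point of doing it in the VFS is that local renames and unlinks get the same treatment as NFS-initiated ones):

```c
/* Illustrative model only: recall any delegation on the victim before
 * letting the directory operation proceed. */
#include <stdbool.h>

struct inode {
	bool delegated;		/* an NFS client holds a delegation */
};

/* Hypothetical hooks, stubbed out here: send CB_RECALL, then wait
 * (bounded) for the delegation to come back, revoking it on timeout. */
static void recall_delegation(struct inode *inode)
{
	inode->delegated = false;	/* stub */
}

static int wait_for_delegation_return(struct inode *inode)
{
	(void)inode;
	return 0;			/* stub: 0 = returned or revoked */
}

int unlink_with_deleg_break(struct inode *dir, struct inode *victim)
{
	(void)dir;
	if (victim->delegated) {
		recall_delegation(victim);
		if (wait_for_delegation_return(victim) != 0)
			return -1;	/* give up and return an error */
	}
	/* ... proceed with the actual unlink ... */
	return 0;
}
```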
Fix reboot recovery
The existing reboot recovery mechanism for NFSv4.0 has some architectural problems, and the core kernel developers have asked us to replace it. The transition between the new and old system will be awkward, and the earlier it's done the better.
We have a basic design for nfsd4 server recovery.
Fix changeid
We're still relying on ctime for this, which is inadequate, especially on ext3 (with its 1-second resolution). Newer filesystems are fixing this, but some more work is needed to take advantage of the improvements (for example, to improve ext4's native changeid feature).
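In other words, something like the following; the field and flag names are invented, and the real interface is whatever per-inode change counter the filesystem maintains:

```c
/* Illustrative model: derive the NFSv4 change attribute from a real
 * change counter when the filesystem maintains one, falling back to
 * ctime (here with only one-second resolution, as on ext3) otherwise. */
#include <stdint.h>
#include <stdbool.h>
#include <time.h>

struct inode_model {
	bool     has_change_counter;	/* filesystem bumps it on every change */
	uint64_t change_counter;
	time_t   ctime_sec;		/* 1-second granularity */
};

uint64_t nfs4_change_attribute(const struct inode_model *inode)
{
	if (inode->has_change_counter)
		return inode->change_counter;
	/*
	 * Fallback: two changes within the same second are
	 * indistinguishable, which is exactly the problem described above.
	 */
	return (uint64_t)inode->ctime_sec;
}
```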
Lower priority
Still important, but not required for the server to be minimally acceptable.
Accepting more compounds
Out-of-spec compound restrictions: we don't, for example, currently allow the client to send more than one IO (read, write, readdir) operation in a single compound. Some day adventurous clients may run across these cases.
Stateowner DOS protection
We don't remove lockowners until close, release_lockowner, or client expiration, making it possible to DOS the server by opening a file and repeatedly locking it with a different lockowner each time, without closing the file.
(Also check treatment of open owners.)