NewNfsManPage

== NAME ==
nfs - fstab format and options for the '''nfs''' and '''nfs4''' file systems
== SYNOPSIS ==

''/etc/fstab''

== DESCRIPTION ==

NFS is an Internet Standard protocol invented by Sun Microsystems in the 1980s to share files between systems residing on a local area network.  The Linux NFS client supports three versions of the NFS protocol: NFS version 2 [RFC1094], NFS version 3 [RFC1813], and NFS version 4 [RFC3530].
The ''/etc/fstab'' file describes how a system's file name hierarchy is assembled from various independent file systems, including remote NFS shares. The '''mount'''(8) command attaches a file system to the system's name space hierarchy at a given ''mount point''. Each line in the ''/etc/fstab'' file describes a single file system, its mount point, and a set of default mount options for that mount point.
For NFS file system mounts, a line in the ''/etc/fstab'' file specifies the server name, the path name of the exported server directory to mount, the local directory that is the mount point, the type of file system that is being mounted, and a list of mount options that control the way the filesystem is mounted and how the NFS client behaves when accessing files on this mount point:

 server:path     /mountpoint     fstype     option,option,...

The server's hostname and the export pathname are separated by a colon, the mount options are separated by commas, and the remaining fields are separated by blanks or tabs. The server's hostname can be an unqualified hostname, a fully qualified domain name, or a dotted quad IPv4 address. The ''fstype'' field contains either "nfs" for version 2 or version 3 NFS mounts, or "nfs4" for NFS version 4 mounts. The '''nfs''' and '''nfs4''' file system types share similar mount options, which are described below.
== MOUNT OPTIONS ==
See '''mount'''(8) for a description of generic mount options available for all file systems. Use the '''defaults''' generic option if you do not need to specify any mount options.
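
For example, a line like the following (the server and export names here are purely illustrative) mounts a share with all default mount options:

 server:/export/home    /home    nfs    defaults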

=== Valid options for either the nfs or nfs4 file system type ===

These options are valid to use when mounting either '''nfs''' or '''nfs4''' file systems; they imply the same behavior and have the same defaults on both file system types.

{|
|-
|valign="top"| '''soft''' / '''hard'''
|Determines the recovery behavior of the RPC client after an RPC request times out.  If neither option is specified, or if the '''hard''' option is specified, the RPC is retried indefinitely.  If the '''soft''' option is specified, then the RPC client fails the RPC request after a major timeout occurs, and causes the NFS client to return an error to the calling application.
''NB:'' A so-called "soft" timeout can cause silent data corruption in certain cases, so you should use the '''soft''' option only in cases where client responsiveness is more important than data integrity.  Using NFS over TCP or lengthening your retransmit timeout via the '''timeo=''' option may mitigate some of the risk of using the '''soft''' mount option.
|-
|valign="top"|'''timeo'''=''n''
|The value, in tenths of a second, before timing out an RPC request. The default value is 600 (60 seconds) for NFS over TCP. On a UDP transport, the Linux RPC client uses an adaptive algorithm to estimate the time out value for frequently used request types such as READ and WRITE, and uses the '''timeo=''' setting for infrequently used requests such as FSINFO. The '''timeo=''' value defaults to 7 tenths of a second for NFS over UDP.  After each timeout, the RPC client may retransmit the timed out request, or it may take some other action depending on the settings of the '''hard''' or '''retrans=''' options.
|-
|valign="top"|'''retrans'''=''n''
|The number of RPC timeouts that must occur before a major timeout occurs.  The default is 3 timeouts.  If the file system is mounted with the '''hard''' option, the RPC client will generate a "server not responding" message after a major timeout, then continue to retransmit the request.  If the file system is mounted with the '''soft''' option, the RPC client will abandon the request after a major timeout, and cause NFS to return an error to the application.
|-
|valign="top"|'''rsize'''=''n''
|The maximum number of bytes in each network READ request that the NFS client can use when reading data from a file on an NFS server; the actual data payload size of each NFS READ request is equal to or smaller than the '''rsize''' value. The '''rsize''' value is a positive integral multiple of 1024, and the largest value supported by the Linux NFS client is 1,048,576 bytes. Specified values outside of this range are rounded down to the closest multiple of 1024, and specified values smaller than 1024 are replaced with a default of 4096. If an '''rsize''' value is not specified, or if a value is specified but is larger than the maximums either the client or server support, the client and server negotiate the largest '''rsize''' value that both will support. The '''rsize''' option as specified on the '''mount'''(8) command line appears in the ''/etc/mtab'' file, but the effective '''rsize''' value negotiated by the client and server is reported in the ''/proc/mounts'' file.
|-
|valign="top"|'''wsize'''=''n''
|The maximum number of bytes per network WRITE request that the NFS client can use when writing data to a file on an NFS server. See the description of the '''rsize''' option for more details.
|-
|valign="top"|'''acregmin'''=''n''
|The minimum time in seconds that the NFS client caches attributes of a regular file before it requests fresh attribute information from a server.  The default is 3 seconds.
|-
|valign="top"|'''acregmax'''=''n''
|The maximum time in seconds that the NFS client caches attributes of a regular file before it requests fresh attribute information from a server.  The default is 60 seconds.
|-
|valign="top"|'''acdirmin'''=''n''
|The minimum time in seconds that the NFS client caches attributes of a directory before it requests fresh attribute information from a server.  The default is 30 seconds.
|-
|valign="top"|'''acdirmax'''=''n''
|The maximum time in seconds that the NFS client caches attributes of a directory before it requests fresh attribute information from a server.  The default is 60 seconds.
|-
|valign="top"|'''actimeo'''=''n''
|Using '''actimeo''' sets all of '''acregmin''', '''acregmax''', '''acdirmin''', and '''acdirmax''' to the same value. There is no default value.
|-
|valign="top"|'''ac''' / '''noac'''
|Disable attribute caching, and force synchronous writes.  The '''noac''' option is synonymous with using '''actimeo=0,sync'''.  If this option is not specified, the default behavior is '''ac'''.  Using the '''noac''' option provides much greater cache coherency among NFS clients accessing the same files, but it extracts a significant performance penalty.  Judicious use of file locking is encouraged instead. The DATA AND METADATA COHERENCY section contains a detailed discussion of these trade-offs.
|-
|valign="top"| '''bg''' / '''fg'''
|This mount option determines how the '''mount'''(8) command behaves if an attempt to mount a remote share fails. The '''fg''' option causes '''mount'''(8) to exit with an error status if any part of the mount request times out or fails outright. This is called a "foreground" mount, and is the default behavior if neither '''fg''' nor '''bg''' is specified. If the '''bg''' option is specified, a timeout or failure causes the '''mount'''(8) command to fork a child which continues to attempt to mount the remote share.  The parent immediately returns with a zero exit code. This is known as a "background" mount. If the local mount point directory is missing, the '''mount'''(8) command treats that as if the mount request timed out. This permits nested NFS mounts specified in ''/etc/fstab'' to proceed in any order during system initialization, even if the NFS server is not yet available. Alternatively these issues can be addressed using an automounter (see '''automount'''(8) for details).
|-
|valign="top"|'''retry'''=''n''
|valign="top"|'''retry'''=''n''
-
|The number of minutes to retry an NFS mount operation in the foreground or background before giving up. The default value for foreground mounts is 2 minutes. The default value for background mounts is 10000 minutes, which is roughly one week.
+
|The number of minutes to retry an NFS mount operation in the foreground or background before giving up. If this option is not specified, the default value for foreground mounts is 2 minutes, and the default value for background mounts is 10000 minutes, which is roughly one week.
|-
|valign="top"|'''sec'''=''mode''
|The RPCGSS security flavor to use for accessing files on this mount point. If the '''sec=''' option is not specified, or if '''sec=sys''' is specified, the RPC client uses the AUTH_SYS security flavor for all RPC operations on this mount point. Valid security flavors are '''none''', '''sys''', '''krb5''', '''krb5i''', '''krb5p''', '''lkey''', '''lkeyi''', '''lkeyp''', '''spkm''', '''spkmi''', and '''spkmp'''. See the SECURITY CONSIDERATIONS section for details.
|-
|valign="top"|'''sharecache''' / '''nosharecache'''
|Determines how the client's data cache is shared between mount points that mount the same remote share. If the option is not specified, or the '''sharecache''' option is specified, then all mounts of the same remote share on a client use the same data cache. If the '''nosharecache''' option is specified, then files under that mount point are cached separately from files under other mount points that may be accessing the same remote share. As of kernel 2.6.18, this is legacy caching behavior, and is considered a data risk since multiple cached copies of the same file on the same client can become out of sync following an update of one of the copies.
|}
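
As an illustration of how the options above combine (the server and export names are hypothetical), an ''/etc/fstab'' entry for a mount that retries in the background and uses large transfer sizes might read:

 server:/export/data    /data    nfs    bg,hard,rsize=32768,wsize=32768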

=== Valid options for the nfs file system type ===

{|
|-
|valign="top"|'''proto'''=''netid''
|The transport protocol used by the RPC client to transmit requests to the NFS server for this mount point. The value of ''netid'' can be either '''udp''' or '''tcp'''. Each transport protocol uses different default '''retrans''' and '''timeo''' settings; see the description of these two mount options for details.
''NB:'' This mount option controls both how the '''mount'''(8) command communicates with the server's portmapper and its MNT and NFS services, and what transport protocol the NFS client uses to transmit requests to the NFS server. Specifying '''proto=tcp''' forces all traffic from the mount command and the NFS client to use TCP. Specifying '''proto=udp''' forces all traffic types to use UDP. If the '''proto=''' mount option is not specified, the '''mount'''(8) command chooses the best transport for each type of request (GETPORT, MNT, and NFS), and by default the NFS client uses the TCP protocol. If the server doesn't support one or the other protocol, the '''mount'''(8) command attempts to discover which protocol is supported and use that one.  See TRANSPORT METHODS for more details.
|-
|valign="top"|'''udp'''
|The '''udp''' option is an alternative to '''proto=udp''' and is included for compatibility with other operating systems.
|-
|valign="top"|'''tcp'''
|The '''tcp''' option is an alternative to '''proto=tcp''' and is included for compatibility with other operating systems.
|-
|valign="top"|'''port'''=''n''
|The numeric value of the port used by the remote NFS service. If this option is not specified, or if the specified port value is 0, then the NFS client uses the NFS service port provided by the remote portmapper service. If any other value is specified, then the NFS client uses that value as the destination port when connecting to the remote NFS service. If the remote host's NFS service is not registered with its portmapper, or if the NFS service is not available on the specified port, the mount fails.
|-
|valign="top"|'''namlen'''=''n''
|The maximum filename length on this mount.  If this option is not specified, the maximum length is negotiated with the server and is usually 255 characters.  Some early versions of NFS did not support this negotiation.  This option can be used to ensure that '''pathconf'''(3) reports the proper maximum to applications in this circumstance.
|-
|valign="top"|'''mountport'''=''n''
|The numeric value of the server's '''mountd''' port.  If this option is not specified, the client discovers the port by contacting the server's portmapper.  This option can be used when mounting an NFS server through a firewall that denies access to RPC portmapper requests.
|-
|valign="top"|'''mounthost'''=''name''
|The name of the host running '''mountd'''.
|-
|valign="top"|'''mountvers'''=''n''
|The RPC version number used to contact the server's '''mountd'''.  If this option is not specified, the client uses a version number appropriate to the requested NFS version.  This option is useful when multiple NFS services are running on the same remote server host.
|-
|valign="top"|'''nfsvers'''=''n''
|The NFS protocol version number to contact the NFS daemon on the remote host.  The Linux client supports version 2 and version 3 of the NFS protocol when using the '''nfs''' file system type.  If this option is not specified, the client attempts to use version 3, but will negotiate with the server if version 3 is not supported.
|-
|valign="top"|'''vers'''=''n''
|This option is an alternative to the '''nfsvers''' option.  It is included for compatibility with other operating systems.
|-
|valign="top"|'''lock''' / '''nolock'''
|Selects whether to use the NLM protocol to lock files on the server.  If this option is not specified, the default is to use NLM locking for this mount point.  NLM locking must be disabled with the '''nolock''' option when using NFS to mount ''/var'' because ''/var'' contains files used by the NLM implementation on Linux.  Using the '''nolock''' option is also required when mounting shares on NFS servers that do not support the NLM protocol.  When using the '''nolock''' option, applications can lock files, but such locks provide exclusion only against other applications running on the same client.  Remote applications will not be affected by these locks.
|-
|valign="top"|'''intr''' / '''nointr'''
|Selects whether or not to allow signals to interrupt file operations on this mount point.  When a system call is interrupted while an NFS operation is outstanding, the system call returns EINTR.  If the '''intr''' option is not specified, signals do not interrupt NFS file operations ('''nointr''' is the default).  Using the '''intr''' option is preferred to using the '''soft''' option because it is significantly less likely to result in data corruption.
|-
|valign="top"|'''cto''' / '''nocto'''
|Selects whether or not to use close-to-open cache coherency semantics.  If this option is not specified, the default is to use close-to-open cache coherency semanticsUsing the '''nocto''' option may improve performance for read-only mounts if the data on the server changes only occasionally. The DATA AND METADATA COHERENCY section discusses the behavior of this option in more detail.
|-
|valign="top"|'''acl''' / '''noacl'''
|Selects whether or not to use the NFSACL sideband protocol on this mount point.  The NFSACL sideband protocol is a proprietary protocol implemented in Solaris that manages Access Control Lists.  It was never part of the standardized NFS version 3 protocol.  If this option is not specified, the NFS client negotiates with the server to see if the NFSACL protocol is supported, and uses it if the server supports itDisabling the NFSACL sideband protocol may be necessary if the negotiation causes problems on the client or server.  See the SECURITY CONSIDERATIONS section for more details.
|-
|valign="top"|'''rdirplus''' / '''nordirplus'''
|Selects whether or not to use NFSv3 READDIRPLUS RPCs.  If this option is not specified, the NFS client uses READDIRPLUS requests on NFS version 3 mounts to read small directories.  Some applications perform better if the client uses only READDIR requests for all directories.
|}
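
For instance, to mount a share over TCP using NFS version 3 from a server that does not run an NLM daemon (the names here are hypothetical), one might use:

 mount -o nfsvers=3,proto=tcp,nolock server:/export /mnt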

=== Valid options for the nfs4 file system type ===

{|
|-
|valign="top"|'''proto'''=''netid''
|The transport protocol used by the RPC client to transmit requests to the NFS server. The value of ''netid'' can be either '''udp''' or '''tcp'''. All NFS version 4 servers are required to support TCP, so if this mount option is not specified, the default transport protocol for NFS version 4 is TCP.  See the TRANSPORT METHODS section for more details.
|-
|valign="top"|'''port'''=''n''
|The numeric value of the port used by the remote NFS service. If this mount option is not specified, the NFS client uses the standard NFS port number of 2049 without checking the remote portmapper service. If the specified port value is 0, then the NFS client uses the NFS service port provided by the remote portmapper service. If any other value is specified, then the NFS client uses that value as the destination port when connecting to the remote NFS service. If the remote host's NFS service is not registered with its portmapper, or if the NFS service is not available on the specified port, the mount fails.
|-
|valign="top"|'''intr''' / '''nointr'''
|Selects whether or not to allow signals to interrupt file operations on this mount point.  When a system call is interrupted during an outstanding NFS operation, the system call returns EINTR. If the '''intr''' option is not specified, signals can interrupt NFS file operations ('''intr''' is default). Using the '''intr''' option is preferred to using the '''soft''' option because it is significantly less likely to result in data corruption.
|-
|valign="top"|'''cto''' / '''nocto'''
|Selects whether or not to use close-to-open cache coherency semantics for NFS directories on this mount point. If this option is not specified, the default is to use close-to-open cache coherency semantics for directories. Using the '''nocto''' option may improve performance for read-only mounts if the data on the server changes only occasionally. The DATA AND METADATA COHERENCY section discusses the behavior of this option in more detail.
|-
|valign="top"|'''clientaddr'''=''n.n.n.n''
|Specifies a single IPv4 address in dotted-quad form that the NFS client advertises to allow servers to perform NFSv4 callback requests against files on this mount point. If the server is not able to establish callback connections to clients, performance may degrade, or accesses to files may temporarily hang.
If this option is not specified, the '''mount'''(8) command attempts to discover an appropriate callback address automatically.  The automatic discovery process is not perfect, however.  In the presence of multiple client network interfaces, special routing policies, or atypical network topologies, the exact address to use for callbacks may be nontrivial to determine.
|}
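
A sketch of an ''/etc/fstab'' entry for an NFS version 4 mount that pins the callback address (the server name is hypothetical and the address is a documentation placeholder) might be:

 server:/export    /mnt    nfs4    proto=tcp,clientaddr=192.0.2.10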
== TRANSPORT METHODS ==
The '''mount'''(8) command, the NFS client, and the NFS server can usually automatically negotiate proper transport and data transfer size settings for a mount point.  There are some cases, however, where it pays to specify these settings explicitly using mount options.  This section provides some advice on how to specify appropriate mount options to control transport and data transfer size settings.
NFS clients send requests to NFS servers via Remote Procedure Calls, or RPCs. RPCs handle per-request authentication, adjust request parameters for different byte endianness on client and server, and retransmit requests that may have been lost by the network or server.  RPC requests and replies flow over a network transport.
TCP is the default transport for all modern NFS implementations, and should be your starting place.  It performs well in almost every conceivable network environment and provides excellent guarantees against data corruption due to network unreliability.  TCP is often a requirement for mounting a server through a network firewall.
The UDP transport has many limitations that prevent smooth operation and good performance in some common deployment environments.  However, UDP can be quite effective in specialized settings where the network's MTU is large relative to NFS's data transfer size.  This includes the use of jumbo Ethernet frames or high bandwidth local area networks.  Thus, with UDP, trimming the '''rsize''' and '''wsize''' settings so that each NFS read or write request fits in just a few network frames, or even in a single frame, is advised.  This reduces the probability that the loss of a single MTU-sized network frame results in the loss of an entire NFS request.  TCP itself manages the reliability of network transmissions, thus '''rsize''' and '''wsize''' can safely be allowed to default to the largest settings supported by both client and server.
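
A minimal sketch of such a UDP mount (the server name is hypothetical; 8192-byte requests fit within a single 9000-byte jumbo Ethernet frame):

 mount -o proto=udp,rsize=8192,wsize=8192 server:/export /mnt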
Because TCP manages network-related packet loss, the NFS client does not need to retransmit NFS requests frequently over TCP.  Reasonable timeout and retransmit settings for NFS over TCP are in the one to several minute range.  Shorter timeouts for NFS over TCP are usually asking for trouble, since a retransmit can result in the replay of NFS requests out of order.  For UDP, packet loss can result in the loss of a whole NFS request, thus retransmit timeouts are usually in the subsecond range.  The Linux RPC client employs an RTT estimator that dynamically manages the timeout settings for requests sent via UDP.
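
For example (names hypothetical), a TCP mount with a one-minute timeout and two retransmissions before a major timeout could be requested with:

 mount -o proto=tcp,timeo=600,retrans=2 server:/export /mnt
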
== DATA AND METADATA COHERENCY ==
Some modern cluster file systems provide perfect cache coherence among their clients.  Perfect cache coherency among disparate NFS clients is expensive to achieve, especially on wide area networks. Thus NFS settles for weaker cache coherency that satisfies the requirements of most everyday types of file sharing. Everyday file sharing is commonly completely sequential: first client A opens a file, writes something to it, then closes it; then client B opens the same file, and reads the changes.
=== Close-to-open cache consistency ===
When an application opens a file stored on an NFS server, the NFS client checks that it still exists on the server and is permitted to the opener by sending a GETATTR or ACCESS request. When the application closes the file, the NFS client writes back any pending changes to the file so that the next opener can view the changes. This also gives the NFS client an opportunity to report any server write errors to the application via the return code from '''close'''(2). The behavior of checking at open time and flushing at close time is referred to as ''close-to-open cache consistency''.

Linux implements close-to-open cache consistency by comparing the results of a GETATTR operation done just after the file is closed to the results of a GETATTR operation done when the file is next opened. If the results are the same, the client assumes its data cache is still valid; otherwise, the cache is purged.

=== Weak cache consistency ===
There are still opportunities for a client's data cache to contain stale data. The NFS version 3 protocol introduced "weak cache consistency" (also known as WCC) which provides a way of checking a file's attributes before and after a single request to allow a client to help identify changes that could have been made by other clients. Unfortunately when a client is using many concurrent operations that update the same file at the same time, it is impossible to tell whether it was that client's updates or some other client's updates that changed the file.
=== Attribute caching ===
Use the "noac" mount option to achieve attribute cache coherency among multiple clients. Almost every client request checks file attribute information. Usually the client keeps this information cached for a period of time to reduce network and server load. When "noac" is in effect, a client's file attribute cache is disabled, so each operation that needs to check a file's attributes is forced to go back to the server. This permits a client to see changes to a file very quickly, at the cost of many extra network operations.
Be careful not to confuse '''noac''' with "no data caching." The '''noac''' mount option keeps file attributes up-to-date with the server, but there are still races that may result in data incoherency between client and server.
The NFS protocol is not designed to support true cluster file system cache coherency without some type of application serialization.  If absolute cache coherency among clients is required, applications should use file locking, where a client purges file data when a file is locked, and flushes changes back to the server before unlocking a file; or applications can open their files with the O_DIRECT flag to disable data caching entirely.
=== The sync mount option ===
The NFS client treats the '''sync''' mount option differently than some other file systems (see '''mount'''(8) for a description of the generic '''sync''' and '''async''' mount options).  If neither option is specified, or the '''async''' option is specified, the NFS client delays writes to the server until system memory pressure forces reclamation of memory resources, or an application invokes '''close'''(2) or flushes the file data explicitly, or the file is locked or unlocked via '''fcntl'''(2).  In other words, under normal circumstances, data written by an application may not immediately appear on the server that hosts the file.
If the '''sync''' option is specified on a mount point, any system call that writes data to files on that mount point causes that data to be flushed to the server before the system call returns control to the application.  This provides greater data coherency among clients, but at a significant performance cost.
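
For example, a mail spool, where other clients must see newly delivered messages immediately, might be mounted with synchronous writes (all names illustrative):

 server:/var/mail    /var/mail    nfs    sync,hard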
=== Using file locks with NFS ===
A separate side-band protocol, known as the Network Lock Manager protocol, is used to manage file locks in NFS version 2 and version 3.  To support lock recovery after a client or server reboot, a second side-band protocol, known as the Network Status Manager protocol, is also required.  In NFS version 4, file locking is supported directly in the main NFS protocol, and the NLM and NSM side-band protocols are not used.
The NLM and NSM services are usually started automatically, and no extra configuration is required.  Configure all NFS clients with fully-qualified domain names to ensure that NFS servers can find clients to notify them of server reboots.
NLM supports advisory file locks only.  To lock NFS files, use '''fcntl'''(2) with the F_GETLK and F_SETLK commands.  The NFS client converts file locks obtained via '''flock'''(2) to advisory locks.
When mounting servers that do not support the NLM protocol, or when mounting an NFS server through a firewall that blocks the NLM service port, specify the '''nolock''' mount option.  Specifying the '''nolock''' option may also be advised to improve the performance of a proprietary application which runs on a single client and uses file locks extensively.
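
For example (names hypothetical), a scratch area used by a lock-heavy application running on a single client might be mounted with:

 mount -o nolock server:/export/scratch /mnt/scratch
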
== SECURITY CONSIDERATIONS ==
The NFS server controls access to data in files, but depends on its RPC implementation to provide authentication of NFS requests. Traditional NFS access control mimics the standard mode bit access control provided in local file systems. Traditional RPC authentication uses a number to represent each user (usually the user's own uid), a number to represent the user's group (the user's gid), and a set of up to 16 auxiliary group numbers to represent other groups of which the user may be a member. Typically, file data and user ID values appear in the clear on the network.  Moreover, NFS versions 2 and 3 use separate sideband protocols for mounting, for locking and unlocking files, and for reporting system status of clients and servers. These auxiliary protocols use no authentication. Linux also implements the proprietary NFSACL sideband protocol on NFS version 3 mounts (see below).
In addition to combining these sideband protocols into a single protocol, NFS version 4 introduces more advanced forms of access control, authentication, and in-transit data protection.  The NFS version 4 specification mandates NFSv4 ACLs, RPCGSS authentication, and RPCGSS security flavors that provide per-RPC integrity checking and encryption. The new security features therefore apply to all NFS version 4 operations including mounting, file locking, and so on.
The '''sec=''' mount option selects the RPCGSS security mode that is in effect on a given NFS mount point. Using the '''sec=krb5''' mount option provides a cryptographic proof of a user's identity in each RPC request that passes between client and server. This makes a very strong guarantee about who is accessing what data on the server.  Note that additional configuration, besides adding this mount option, is required in order to enable Kerberos security.  See '''rpc.gssd'''(8) for details.
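
A sketch of an ''/etc/fstab'' entry that uses Kerberos authentication (the names are illustrative, and a working Kerberos configuration is assumed):

 server:/export/secure    /secure    nfs    sec=krb5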
Two other flavors of Kerberos security are supported as well.  The '''krb5i''' security flavor provides a cryptographically strong guarantee that the data in each RPC request has not been tampered with. The '''krb5p''' security flavor encrypts every RPC request so data is not exposed during transit on networks between NFS client and server. There can be some performance impact when using integrity checking or encryption, however.  Support for other forms of cryptographic security, including '''lipkey''' and '''SPKM3''', is also available.
The NFS version 4 specification allows clients and servers to negotiate among multiple security flavors during mount processing.  However, Linux does not yet implement security mode negotiation between NFS version 4 clients and servers.  The Linux client specifies a single security flavor at mount time which remains in effect for the lifetime of the mount.  If the server does not support this flavor, the initial mount request is rejected by the server.
=== Mounting through a firewall ===
Sometimes, a firewall may reside between an NFS client and server, or the client or server may block its own ports.  It is still possible to mount an NFS server through a firewall, though some of the '''mount'''(8) command's automatic service endpoint discovery mechanisms may not work, requiring a system administrator to provide specific details via NFS mount options.
Usually a server administrator fixes the port number of NFS-related services so that the firewall can block all ports but the specific NFS service ports.  In this case, the port number for the mountd service may need to be specified via NFS mount options. It may also be necessary to enforce the use of TCP or UDP if the firewall blocks one of those transports.
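
A sketch of such a mount, where the administrator has fixed '''mountd''' to a known port (the port number here is purely hypothetical):

 mount -o proto=tcp,mountport=4002 server:/export /mnt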
=== NFS Access Control Lists ===
Solaris provides NFS version 3 clients access to POSIX Access Control Lists stored in its local file systems via a proprietary sideband protocol known as NFSACLThis protocol provides finer grained access control than mode bits.  Linux NFS clients implement this protocol for compatibility with Solaris NFS version 3 servers. The NFSACL protocol never became part of the NFS version 3 specification, however, thus it may not appear in NFS servers implemented by other vendors.
The NFS version 4 specification mandates a new version of Access Control Lists that are semantically richer than POSIX ACLs.  Support of NFS version 4 ACLs is required in all NFS version 4 implementations.  NFS version 4 ACLs are not fully compatible with POSIX ACLs, thus some translation between the two is required in an environment that mixes POSIX ACLs and NFS version 4.
== EXAMPLES ==
This example can be used to mount ''/usr'' over NFS.
   server:/export/share    /usr            nfs            ro,nocto,nolock,actimeo=3600
== FILES ==
:''/etc/fstab'' file system table
== BUGS ==
The generic '''remount''' option is not fully supported.  Generic options, such as '''rw''' and '''ro''' can be modified using the '''remount''' option, but NFS-specific options are not all supported.  The underlying transport or NFS version cannot be changed by a remount, for example.  Performing a remount on an NFS file system mounted with '''noac''' may have unintended consequences.  The '''noac''' option is a mixture of a generic option ('''sync''') and an NFS-specific option ('''actimeo=0''').
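
For example, remounting read-only, which changes only a generic option, behaves as expected:

 mount -o remount,ro /mnt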
Before 2.4.7, the Linux NFS client did not support NFS over TCP.
Before 2.4.20, the Linux NFS client used a heuristic to determine whether cached file data was still valid rather than using the standard close-to-open cache coherency method described above.
Starting with 2.4.22, the Linux NFS client uses a Van Jacobson-based RTT estimator to set RPC timeouts when using NFS over UDP.  The '''timeo''' option controls only the timeout of infrequently used NFS requests, such as FSINFO, on UDP transports.
Before 2.6.0, the Linux NFS client did not support NFS version 4.
Before 2.6.8, the Linux NFS client used only synchronous reads and writes when the '''rsize''' and '''wsize''' settings were smaller than the system's page size.
== SEE ALSO ==

Latest revision as of 17:57, 26 October 2007

Discussion page is active. See Talk:NewNfsManPage to discuss possible changes to this page.

Contents

NAME

nfs - fstab format and options for the nfs and nfs4 file systems

SYNOPSIS

/etc/fstab

DESCRIPTION

NFS is an Internet Standard protocol invented by Sun Microsystems in the 1980s to share files between systems residing on a local area network. The Linux NFS client supports three versions of the NFS protocol: NFS version 2 [RFC1094], NFS version 3 [RFC1813], and NFS version 4 [RFC3530].

The /etc/fstab file describes how a system's file name hierarchy is assembled from various independent file systems, including remote NFS shares. The mount(8) command attaches a file system to the system's name space hierarchy at a given mount point. Each line in the /etc/fstab file describes a single file system, its mount point, and a set of default mount options for that mount point.

For NFS file system mounts, a line in the /etc/fstab file specifies the server name, the path name of the exported server directory to mount, the local directory that is the mount point, the type of file system that is being mounted, and a list of mount options that control the way the filesystem is mounted and how the NFS client behaves when accessing files on this mount point:

 server:path     /mountpoint     fstype     option,option,...

The server's hostname and the export pathname are separated by a colon, the mount options are separated by commas, and the remaining fields are separated by blanks or tabs. The server's hostname can be an unqualified hostname, a fully qualified domain name, or a dotted quad IPv4 address. The fstype field contains either "nfs" for version 2 or version 3 NFS mounts, or "nfs4" for NFS version 4 mounts. The nfs and nfs4 file system types share similar mount options, which are described below.

MOUNT OPTIONS

See mount(8) for a description of generic mount options available for all file systems. Use the defaults generic option if you do not need to specify any mount options.

Valid options for either the nfs or nfs4 file system type

These options are valid to use when mounting either nfs or nfs4 file systems, imply the same behavior and have the same default on both file systems.

soft / hard Determines the recovery behavior of the RPC client after an RPC request times out. If neither option is specified, or if the hard option is specified, the RPC is retried indefinitely. If the soft option is specified, then the RPC client fails the RPC request after a major timeout occurs, and causes the NFS client to return an error to the calling application.

NB: A so-called "soft" timeout can cause silent data corruption in certain cases, so you should use the soft option only in cases where client responsiveness is more important than data integrity. Using NFS over TCP or lengthening your retransmit timeout via the timeo= option may mitigate some of the risk of using the soft mount option.

timeo=n The value, in tenths of a second, before timing out an RPC request. The default value is 600 (60 seconds) for NFS over TCP. On a UDP transport, the Linux RPC client uses an adaptive algorithm to estimate the time out value for frequently used request types such as READ and WRITE, and uses the timeo= setting for infrequently used requests such as FSINFO. The timeo= value defaults to 7 tenths of a second for NFS over UDP. After each timeout, the RPC client may retransmit the timed out request, or it may take some other action depending on the settings of the hard or retrans= options.
retrans=n The number of RPC timeouts that must occur before a major timeout occurs. The default is 3 timeouts. If the file system is mounted with the hard option, the RPC client will generate a "server not responding" message after a major timeout, then continue to retransmit the request. If the file system is mounted with the soft option, the RPC client will abandon the request after a major timeout, and cause NFS to return an error to the application.
rsize=n The maximum number of bytes in each network READ request that the NFS client can use when reading data from a file on an NFS server; the actual data payload size of each NFS READ request is equal to or smaller than the rsize value. The rsize value is a positive integral multiple of 1024, and the largest value supported by the Linux NFS client is 1,048,576 bytes. Specified values outside of this range are rounded down to the closest multiple of 1024, and specified values smaller than 1024 are replaced with a default of 4096. If an rsize value is not specified, or if the specified value is larger than the maximum that either the client or the server supports, the client and server negotiate the largest rsize value that both will support. The rsize option as specified on the mount(8) command line appears in the /etc/mtab file, but the effective rsize value negotiated by the client and server is reported in the /proc/mounts file.
wsize=n The maximum number of bytes per network WRITE request that the NFS client can use when writing data to a file on an NFS server. See the description of the rsize option for more details.
acregmin=n The minimum time in seconds that the NFS client caches attributes of a regular file before it requests fresh attribute information from a server. The default is 3 seconds.
acregmax=n The maximum time in seconds that the NFS client caches attributes of a regular file before it requests fresh attribute information from a server. The default is 60 seconds.
acdirmin=n The minimum time in seconds that the NFS client caches attributes of a directory before it requests fresh attribute information from a server. The default is 30 seconds.
acdirmax=n The maximum time in seconds that the NFS client caches attributes of a directory before it requests fresh attribute information from a server. The default is 60 seconds.
actimeo=n Using actimeo sets all of acregmin, acregmax, acdirmin, and acdirmax to the same value. There is no default value.
ac / noac Selects whether the NFS client caches file attributes. If neither option is specified, the default behavior is ac, and the client caches attribute information. The noac option disables attribute caching and forces synchronous writes; it is synonymous with using actimeo=0,sync. Using the noac option provides much greater cache coherency among NFS clients accessing the same files, but it extracts a significant performance penalty. Judicious use of file locking is encouraged instead. The DATA AND METADATA COHERENCY section contains a detailed discussion of these trade-offs.
bg / fg This mount option determines how the mount(8) command behaves if an attempt to mount a remote share fails. The fg option causes mount(8) to exit with an error status if any part of the mount request times out or fails outright. This is called a "foreground" mount, and is the default behavior if neither fg nor bg is specified. If the bg option is specified, a timeout or failure causes the mount(8) command to fork a child which continues to attempt to mount the remote share. The parent immediately returns with a zero exit code. This is known as a "background" mount. If the local mount point directory is missing, the mount(8) command treats that as if the mount request timed out. This permits nested NFS mounts specified in /etc/fstab to proceed in any order during system initialization, even if the NFS server is not yet available. Alternatively these issues can be addressed using an automounter (see automount(8) for details).
retry=n The number of minutes to retry an NFS mount operation in the foreground or background before giving up. If this option is not specified, the default value for foreground mounts is 2 minutes, and the default value for background mounts is 10000 minutes, which is roughly one week.
sec=mode The RPCGSS security flavor to use for accessing files on this mount point. If the sec= option is not specified, or if sec=sys is specified, the RPC client uses the AUTH_SYS security flavor for all RPC operations on this mount point. Valid security flavors are none, sys, krb5, krb5i, krb5p, lkey, lkeyi, lkeyp, spkm, spkmi, and spkmp. See the SECURITY CONSIDERATIONS section for details.
sharecache / nosharecache Determines how the client's data cache is shared between mount points that mount the same remote share. If the option is not specified, or the sharecache option is specified, then all mounts of the same remote share on a client use the same data cache. If the nosharecache option is specified, then files under that mount point are cached separately from files under other mount points that may be accessing the same remote share. As of kernel 2.6.18, this is legacy caching behavior, and is considered a data risk since multiple cached copies of the same file on the same client can become out of sync following an update of one of the copies.
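
As an illustration, the following fstab entry (server name and export path are placeholders) combines several of the options above: a background mount with enlarged transfer sizes and a longer minimum attribute cache lifetime:

 server:/export/share    /mnt            nfs             bg,rsize=32768,wsize=32768,acregmin=10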

Valid options for the nfs file system type

These options, along with the options in the above subsection, are valid to use when mounting an nfs file system.

proto=netid The transport protocol used by the RPC client to transmit requests to the NFS server for this mount point. The value of netid can be either udp or tcp. Each transport protocol uses different default retrans and timeo settings; see the description of these two mount options for details.

NB: This mount option controls both how the mount(8) command communicates with the server's portmapper and its MNT and NFS services, and what transport protocol the NFS client uses to transmit requests to the NFS server. Specifying proto=tcp forces all traffic from the mount command and the NFS client to use TCP. Specifying proto=udp forces all traffic types to use UDP. If the proto= mount option is not specified, the mount(8) command chooses the best transport for each type of request (GETPORT, MNT, and NFS), and by default the NFS client uses the TCP protocol. If the server does not support one of these protocols, the mount(8) command attempts to discover which protocol is supported and use that one. See TRANSPORT METHODS for more details.

udp The udp option is an alternative to proto=udp and is included for compatibility with other operating systems.
tcp The tcp option is an alternative to proto=tcp and is included for compatibility with other operating systems.
port=n The numeric value of the port used by the remote NFS service. If this option is not specified, or if the specified port value is 0, then the NFS client uses the NFS service port provided by the remote portmapper service. If any other value is specified, then the NFS client uses that value as the destination port when connecting to the remote NFS service. If the remote host's NFS service is not registered with its portmapper, or if the NFS service is not available on the specified port, the mount fails.
namlen=n The maximum filename length on this mount. If this option is not specified, the maximum length is negotiated with the server and is usually 255 characters. Some early versions of NFS did not support this negotiation. This option can be used to ensure that pathconf(3) reports the proper maximum to applications in this circumstance.
mountport=n The numeric value of the server's mountd port. If this option is not specified, the client discovers the port by contacting the server's portmapper. This option can be used when mounting an NFS server through a firewall that denies access to RPC portmapper requests.
mounthost=name The name of the host running mountd.
mountvers=n The RPC version number used to contact the server's mountd. If this option is not specified, the client uses a version number appropriate to the requested NFS version. This option is useful when multiple NFS services are running on the same remote server host.
nfsvers=n The NFS protocol version number to contact the NFS daemon on the remote host. The Linux client supports version 2 and version 3 of the NFS protocol when using the nfs file system type. If this option is not specified, the client attempts to use version 3, but will negotiate with the server if version 3 is not supported.
vers=n This option is an alternative to the nfsvers option. It is included for compatibility with other operating systems.
lock / nolock Selects whether to use the NLM protocol to lock files on the server. If this option is not specified, the default is to use NLM locking for this mount point. NLM locking must be disabled with the nolock option when using NFS to mount /var because /var contains files used by the NLM implementation on Linux. Using the nolock option is also required when mounting shares on NFS servers that do not support the NLM protocol. When using the nolock option, applications can lock files, but such locks provide exclusion only against other applications running on the same client. Remote applications will not be affected by these locks.
intr / nointr Selects whether or not to allow signals to interrupt file operations on this mount point. When a system call is interrupted while an NFS operation is outstanding, the system call returns EINTR. If the intr option is not specified, signals do not interrupt NFS file operations (nointr is the default). Using the intr option is preferred to using the soft option because it is significantly less likely to result in data corruption.
cto / nocto Selects whether or not to use close-to-open cache coherency semantics. If this option is not specified, the default is to use close-to-open cache coherency semantics. Using the nocto option may improve performance for read-only mounts if the data on the server changes only occasionally. The DATA AND METADATA COHERENCY section discusses the behavior of this option in more detail.
acl / noacl Selects whether or not to use the NFSACL sideband protocol on this mount point. The NFSACL sideband protocol is a proprietary protocol implemented in Solaris that manages Access Control Lists. It was never part of the standardized NFS version 3 protocol. If this option is not specified, the NFS client negotiates with the server to see if the NFSACL protocol is supported, and uses it if the server supports it. Disabling the NFSACL sideband protocol may be necessary if the negotiation causes problems on the client or server. See the SECURITY CONSIDERATIONS section for more details.
rdirplus / nordirplus Selects whether or not to use NFSv3 READDIRPLUS RPCs. If this option is not specified, the NFS client uses READDIRPLUS requests on NFS version 3 mounts to read small directories. Some applications perform better if the client uses only READDIR requests for all directories.
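
For instance, here is a sketch of a read-only NFS version 3 mount that disables close-to-open checks and READDIRPLUS requests (all names are placeholders; whether these options help depends on the workload):

 server:/export/share    /mnt            nfs             ro,nfsvers=3,nocto,nordirplus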

Valid options for the nfs4 file system type

These options, along with the options in the first subsection above, are valid to use when mounting an nfs4 file system.

proto=netid The transport protocol used by the RPC client to transmit requests to the NFS server. The value of netid can be either udp or tcp. All NFS version 4 servers are required to support TCP, so if this mount option is not specified, the default transport protocol for NFS version 4 is TCP. See the TRANSPORT METHODS section for more details.
port=n The numeric value of the port used by the remote NFS service. If this mount option is not specified, the NFS client uses the standard NFS port number of 2049 without checking the remote portmapper service. If the specified port value is 0, then the NFS client uses the NFS service port provided by the remote portmapper service. If any other value is specified, then the NFS client uses that value as the destination port when connecting to the remote NFS service. If the remote host's NFS service is not registered with its portmapper, or if the NFS service is not available on the specified port, the mount fails.
intr / nointr Selects whether or not to allow signals to interrupt file operations on this mount point. When a system call is interrupted during an outstanding NFS operation, the system call returns EINTR. If the intr option is not specified, signals can interrupt NFS file operations (intr is default). Using the intr option is preferred to using the soft option because it is significantly less likely to result in data corruption.
cto / nocto Selects whether or not to use close-to-open cache coherency semantics for NFS directories on this mount point. If this option is not specified, the default is to use close-to-open cache coherency semantics for directories. Using the nocto option may improve performance for read-only mounts if the data on the server changes only occasionally. The DATA AND METADATA COHERENCY section discusses the behavior of this option in more detail.
clientaddr=n.n.n.n Specifies a single IPv4 address in dotted-quad form that the NFS client advertises to allow servers to perform NFSv4 callback requests against files on this mount point. If the server is not able to establish callback connections to clients, performance may degrade, or accesses to files may temporarily hang.

If this option is not specified, the mount(8) command attempts to discover an appropriate callback address automatically. The automatic discovery process is not perfect, however. In the presence of multiple client network interfaces, special routing policies, or atypical network topologies, the exact address to use for callbacks may be nontrivial to determine.
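
For example, on a client with several network interfaces, the callback address can be specified explicitly (the address 192.0.2.10 is a placeholder from the documentation address range):

 server:/export/share    /mnt            nfs4            proto=tcp,clientaddr=192.0.2.10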

TRANSPORT METHODS

The mount(8) command, the NFS client, and the NFS server can usually automatically negotiate proper transport and data transfer size settings for a mount point. There are some cases, however, where it pays to specify these settings explicitly using mount options. This section provides some advice on how to specify appropriate mount options to control transport and data transfer size settings.

NFS clients send requests to NFS servers via Remote Procedure Calls, or RPCs. RPCs handle per-request authentication, adjust request parameters for different byte endianness on client and server, and retransmit requests that may have been lost by the network or server. RPC requests and replies flow over a network transport.

TCP is the default transport for all modern NFS implementations, and should be your starting place. It performs well in almost every conceivable network environment and provides excellent guarantees against data corruption due to network unreliability. TCP is often a requirement for mounting a server through a network firewall.

The UDP transport has many limitations that prevent smooth operation and good performance in some common deployment environments. However, UDP can be quite effective in specialized settings where the network's MTU is large relative to NFS's data transfer size, such as networks that use jumbo Ethernet frames or high-bandwidth local area networks. When using UDP, it is advisable to trim the rsize and wsize settings so that each NFS read or write request fits in just a few network frames, or even in a single frame. This reduces the probability that the loss of a single MTU-sized network frame results in the loss of an entire NFS request. TCP itself manages the reliability of network transmissions, so with TCP the rsize and wsize settings can safely be allowed to default to the largest values supported by both client and server.
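
A sketch of a UDP mount trimmed so that each request fits in a few frames (the transfer sizes are illustrative, not a recommendation for any particular network):

 server:/export/share    /mnt            nfs             proto=udp,rsize=8192,wsize=8192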

Because TCP manages network-related packet loss, the NFS client does not need to retransmit NFS requests frequently over TCP. Reasonable timeout and retransmit settings for NFS over TCP are in the one-to-several-minute range. Shorter timeouts for NFS over TCP usually invite trouble, since a retransmit can result in the replay of NFS requests out of order. For UDP, packet loss can result in the loss of a whole NFS request, so retransmit timeouts are usually in the subsecond range. The Linux RPC client employs an RTT estimator that dynamically manages the timeout settings for requests sent via UDP.
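
For example, a TCP mount with a one-minute timeout (timeo= is expressed in tenths of a second, so 600 means 60 seconds):

 server:/export/share    /mnt            nfs             proto=tcp,timeo=600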

DATA AND METADATA COHERENCY

Some modern cluster file systems provide perfect cache coherence among their clients. Perfect cache coherency among disparate NFS clients is expensive to achieve, especially on wide area networks, so NFS settles for weaker cache coherency that satisfies the requirements of most everyday types of file sharing. Everyday file sharing is typically sequential: first client A opens a file, writes something to it, and closes it; then client B opens the same file and reads the changes.

Close-to-open cache consistency

When an application opens a file stored on an NFS server, the NFS client checks that the file still exists on the server, and that the opener is permitted to access it, by sending a GETATTR or ACCESS request. When the application closes the file, the NFS client writes back any pending changes to the file so that the next opener can view the changes. This also gives the NFS client an opportunity to report any server write errors to the application via the return code from close(2). This behavior of checking at open time and flushing at close time is referred to as close-to-open cache consistency.

Weak cache consistency

There are still opportunities for a client's data cache to contain stale data. The NFS version 3 protocol introduced "weak cache consistency" (also known as WCC), which provides a way of checking a file's attributes before and after a single request, allowing a client to help identify changes that could have been made by other clients. Unfortunately, when a client issues many concurrent operations that update the same file at the same time, it is impossible to tell whether the file was changed by that client's own updates or by another client's updates.

Attribute caching

Use the "noac" mount option to achieve attribute cache coherency among multiple clients. Almost every client request checks file attribute information. Usually the client keeps this information cached for a period of time to reduce network and server load. When "noac" is in effect, a client's file attribute cache is disabled, so each operation that needs to check a file's attributes is forced to go back to the server. This permits a client to see changes to a file very quickly, at the cost of many extra network operations.

Be careful not to confuse "noac" with "no data caching." The "noac" mount option keeps file attributes up-to-date with the server, but there are still races that may result in data incoherency between client and server.

The NFS protocol is not designed to support true cluster file system cache coherency without some type of application serialization. If absolute cache coherency among clients is required, applications should use file locking, where a client purges file data when a file is locked, and flushes changes back to the server before unlocking a file; or applications can open their files with the O_DIRECT flag to disable data caching entirely.

The sync mount option

The NFS client treats the sync mount option differently than some other file systems (see mount(8) for a description of the generic sync and async mount options). If neither option is specified, or the async option is specified, the NFS client delays writes to the server until system memory pressure forces reclamation of memory resources, or an application invokes close(2) or flushes the file data explicitly, or the file is locked or unlocked via fcntl(2). In other words, under normal circumstances, data written by an application may not immediately appear on the server that hosts the file.

If the sync option is specified on a mount point, any system call that writes data to files on that mount point causes that data to be flushed to the server before the system call returns control to the application. This provides greater data coherency among clients, but at a significant performance cost.
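
For example, a mount that flushes every write to the server before the writing system call returns:

 server:/export/share    /mnt            nfs             sync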

Using file locks with NFS

A separate side-band protocol, known as the Network Lock Manager protocol, is used to manage file locks in NFS version 2 and version 3. To support lock recovery after a client or server reboot, a second side-band protocol, known as the Network Status Manager protocol, is also required. In NFS version 4, file locking is supported directly in the main NFS protocol, and the NLM and NSM side-band protocols are not used.

The NLM and NSM services are usually started automatically, and no extra configuration is required. Configure all NFS clients with fully-qualified domain names to ensure that NFS servers can find clients to notify them of server reboots.

NLM supports advisory file locks only. To lock NFS files, use fcntl(2) with the F_GETLK and F_SETLK commands. The NFS client converts file locks obtained via flock(2) to advisory locks.

When mounting servers that do not support the NLM protocol, or when mounting an NFS server through a firewall that blocks the NLM service port, specify the nolock mount option. Specifying the nolock option may also be advised to improve the performance of a proprietary application which runs on a single client and uses file locks extensively.
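
For instance, to mount a server whose NLM service is unavailable or blocked by a firewall (server name and export path are placeholders):

 server:/export/share    /mnt            nfs             nolock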

SECURITY CONSIDERATIONS

The NFS server controls access to data in files, but depends on its RPC implementation to provide authentication of NFS requests. Traditional NFS access control mimics the standard mode bit access control provided in local file systems. Traditional RPC authentication uses a number to represent each user (usually the user's own uid), a number to represent the user's group (the user's gid), and a set of up to 16 auxiliary group numbers to represent other groups of which the user may be a member. Typically, file data and user ID values appear in the clear on the network. Moreover, NFS versions 2 and 3 use separate sideband protocols for mounting, for locking and unlocking files, and for reporting system status of clients and servers. These auxiliary protocols use no authentication. Linux also implements the proprietary NFSACL sideband protocol on NFS version 3 mounts (see below).

In addition to combining these sideband protocols into a single protocol, NFS version 4 introduces more advanced forms of access control, authentication, and in-transit data protection. The NFS version 4 specification mandates NFSv4 ACLs, RPCGSS authentication, and RPCGSS security flavors that provide per-RPC integrity checking and encryption. The new security features therefore apply to all NFS version 4 operations including mounting, file locking, and so on.

The sec mount option selects the RPCGSS security mode that is in effect on a given NFS mount point. Using the sec=krb5 mount option provides a cryptographic proof of a user's identity in each RPC request that passes between client and server. This makes a very strong guarantee about who is accessing what data on the server. Note that additional configuration, besides adding a new mount option, is required in order to enable Kerberos security. See rpc.gssd(8) for details.

Two other flavors of Kerberos security are supported as well. The krb5i security flavor provides a cryptographically strong guarantee that the data in each RPC request has not been tampered with. The krb5p security flavor encrypts every RPC request so data is not exposed during transit on networks between NFS client and server. There can be some performance impact when using integrity checking or encryption, however. Similar support for other forms of cryptographic security is also available, including lipkey and SPKM3.

The NFS version 4 specification allows clients and servers to negotiate among multiple security flavors during mount processing. However, Linux does not yet implement security mode negotiation between NFS version 4 clients and servers. The Linux client specifies a single security flavor at mount time which remains in effect for the lifetime of the mount. If the server does not support this flavor, the initial mount request is rejected by the server.

Mounting through a firewall

Sometimes, a firewall may reside between an NFS client and server, or the client or server may block its own ports. It is still possible to mount an NFS server through a firewall, though some of the mount(8) command's automatic service endpoint discovery mechanisms may not work, requiring a system administrator to provide specific details via NFS mount options.

Usually a server administrator fixes the port number of NFS-related services so that the firewall can block all ports but the specific NFS service ports. In this case, the port number for the mountd service may need to be specified via NFS mount options. It may also be necessary to enforce the use of TCP or UDP if the firewall blocks one of those transports.
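
For instance, if the server administrator has fixed the mountd service to port 4002 (a hypothetical port number chosen for this sketch) and the firewall passes only TCP, the client might use:

 server:/export/share    /mnt            nfs             proto=tcp,mountport=4002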

NFS Access Control Lists

Solaris provides NFS version 3 clients access to POSIX Access Control Lists stored in its local file systems via a proprietary sideband protocol known as NFSACL. This protocol provides finer grained access control than mode bits. Linux NFS clients implement this protocol for compatibility with Solaris NFS version 3 servers. The NFSACL protocol never became part of the NFS version 3 specification, however, thus it may not appear in NFS servers implemented by other vendors.

The NFS version 4 specification mandates a new version of Access Control Lists that are semantically richer than POSIX ACLs. Support of NFS version 4 ACLs is required in all NFS version 4 implementations. NFS version 4 ACLs are not fully compatible with POSIX ACLs, thus some translation between the two is required in an environment that mixes POSIX ACLs and NFS version 4.

EXAMPLES

To mount a remote share using NFS version 2, use the nfs file system type and specify the nfsvers=2 mount option. To mount using NFS version 3, use the nfs file system type and specify the nfsvers=3 mount option. To mount using NFS version 4, use the nfs4 file system type (the nfsvers mount option is not supported for the nfs4 file system type).

The following example from an /etc/fstab file causes the mount command to negotiate reasonable defaults for NFS behavior.

 server:/export/share    /mnt            nfs             defaults

Here is an example from an /etc/fstab file for an NFS version 2 mount over UDP.

 server:/export/share    /mnt            nfs             nfsvers=2,proto=udp

Try this example to mount using NFS version 4 over TCP with Kerberos 5 mutual authentication.

 server:/export/share    /mnt            nfs4            sec=krb5

This example can be used to mount /usr over NFS.

 server:/export/share    /usr            nfs             ro,nocto,nolock,actimeo=3600

FILES

/etc/fstab file system table

BUGS

The generic remount option is not fully supported. Generic options, such as rw and ro, can be modified using the remount option, but NFS-specific options are not all supported. The underlying transport or NFS version cannot be changed by a remount, for example. Performing a remount on an NFS file system mounted with noac may have unintended consequences, because the noac option is a mixture of a generic option (sync) and an NFS-specific option (actimeo=0).
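
For example, changing a mounted NFS file system from read-write to read-only with the generic remount option generally works:

 mount -o remount,ro /mnt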

Before 2.4.7, the Linux NFS client did not support NFS over TCP.

Before 2.4.20, the Linux NFS client used a heuristic to determine whether cached file data was still valid rather than using the standard close-to-open cache coherency method described above.

Starting with 2.4.22, the Linux NFS client uses a Van Jacobson-based RTT estimator to set RPC timeouts when using NFS over UDP. The timeo option controls only the timeout of infrequently used NFS requests, such as FSINFO, on UDP transports.

Before 2.6.0, the Linux NFS client did not support NFS version 4.

Before 2.6.8, the Linux NFS client used only synchronous reads and writes when the rsize and wsize settings were smaller than the system's page size.

SEE ALSO

fstab(5), mount(8), umount(8), mount.nfs(5), umount.nfs(5), exports(5), nfsd(8), rpc.idmapd(8), rpc.gssd(8), rpc.svcgssd(8), kerberos(1)

  • RFC 768 for the UDP specification.
  • RFC 793 for the TCP specification.
  • RFC 1094 for the NFS version 2 specification.
  • RFC 1813 for the NFS version 3 specification.
  • RFC 1832 for the XDR specification.
  • RFC 1833 for the RPC bind specification.
  • RFC 2203 for the RPCSEC GSS API protocol specification.
  • RFC 3530 for the NFS version 4 specification.