NFS Howto Client

==== Mounting Remote Directories ====
Before beginning, you should double-check to make sure your mount program is new enough (version 2.10m if you want to use Version 3 NFS), and that the client machine supports NFS mounting, though most standard distributions do. If you are using a 2.2 or later kernel with the ''/proc'' filesystem you can check the latter by reading the file ''/proc/filesystems'' and making sure there is a line containing nfs. If not, typing '''insmod nfs''' may make it magically appear if NFS has been compiled as a module; otherwise, you will need to build (or download) a kernel that has NFS support built in. In general, kernels that do not have NFS compiled in will give a very specific error when the '''mount''' command below is run.
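For example, on a machine with the ''/proc'' filesystem, the check (and the module load, if needed) might look like this. This is a minimal sketch; it assumes the module is simply named ''nfs'' and that your '''insmod''' can locate modules by name:
<pre>
  # grep nfs /proc/filesystems     # look for a line containing "nfs"
  # insmod nfs                     # only if the line is missing and NFS was built as a module
</pre>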

To begin using a machine as an NFS client, you will need the portmapper running on that machine, and to use NFS file locking, you will also need '''rpc.statd''' and '''rpc.lockd''' running on both the client and the server. Most recent distributions start those services by default at boot time; if yours doesn't, see Section 3.2 for information on how to start them up.

With '''portmap''', '''lockd''', and '''statd''' running, you should now be able to mount the remote directory from your server just the way you mount a local hard drive, with the '''mount''' command. Continuing our example from the previous section, suppose our server is called ''master.foo.com'', and we want to mount its ''/home'' directory on ''slave1.foo.com''. Then, all we have to do, from the root prompt on ''slave1.foo.com'', is type:
<pre>
  # mount master.foo.com:/home /mnt/home
</pre>
and the directory ''/home'' on master will appear as the directory ''/mnt/home'' on ''slave1''. (Note that this assumes we have created the directory ''/mnt/home'' as an empty mount point beforehand.)
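If the mount point does not exist yet, the full sequence on ''slave1.foo.com'' would be something like:
<pre>
  # mkdir -p /mnt/home
  # mount master.foo.com:/home /mnt/home
</pre>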

If this does not work, see the Troubleshooting section (Section 7).

You can get rid of the file system by typing
<pre>
 # umount /mnt/home
</pre>
just like you would for a local file system.

==== Getting NFS File Systems to Be Mounted at Boot Time ====
NFS file systems can be added to your ''/etc/fstab'' file the same way local file systems can, so that they mount when your system starts up. The only difference is that the file system type will be set to ''nfs'' and the dump and fsck order (the last two fields) will have to be set to zero. So for our example above, the entry in ''/etc/fstab'' would look like:
<pre>
 # device               mountpoint  fs-type  options       dump  fsckorder
   ...
   master.foo.com:/home  /mnt        nfs      rw            0     0
   ...
</pre>
See the man pages for ''fstab'' if you are unfamiliar with the syntax of this file. If you are using an automounter such as ''amd'' or ''autofs'', the options in the corresponding fields of your mount listings should look very similar if not identical.
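For instance, a rough ''autofs'' equivalent of the mount above might look like the following. This is only a sketch: the map file name ''/etc/auto.nfs'' is an arbitrary choice, and the exact configuration syntax of your automounter may differ, so check its documentation:
<pre>
  # /etc/auto.master
  /mnt    /etc/auto.nfs    --timeout=60

  # /etc/auto.nfs
  home    -rw,hard,intr    master.foo.com:/home
</pre>
With this in place, ''master.foo.com:/home'' would be mounted on demand at ''/mnt/home'' and unmounted again after the timeout.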

At this point you should have NFS working, though a few tweaks may still be necessary to get it to work well. You should also read Section 6 to be sure your setup is reasonably secure.

==== Soft vs. Hard Mounting ====
There are some options you should consider adding at once. They govern the way the NFS client handles a server crash or network outage. One of the cool things about NFS is that it can handle this gracefully, if you set up the clients right. There are two distinct failure modes:

'''soft''': If a file request fails, the NFS client will report an error to the process on the client machine requesting the file access. Some programs can handle this with composure, but most won't. We do not recommend using this setting; it is a recipe for corrupted files and lost data. You should especially not use this for mail disks --- if you value your mail, that is.<br/>
'''hard''': A program accessing a file on an NFS-mounted file system will hang when the server crashes. The process cannot be interrupted or killed (except by a "sure kill") unless you also specify '''intr'''. When the NFS server is back online the program will continue undisturbed from where it was. We recommend using '''hard,intr''' on all NFS-mounted file systems.
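For example, applying '''hard,intr''' to the ''/etc/fstab'' entry from the previous section (the same options can also be given on the '''mount''' command line with '''-o'''):
<pre>
  master.foo.com:/home  /mnt        nfs      rw,hard,intr  0     0
</pre>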

===== /etc/hosts.allow and /etc/hosts.deny =====
These two files specify which computers on the network can use services on your machine. Each line of the file contains a single entry listing a service and a set of machines. When the server gets a request from a machine, it does the following:
*It first checks ''hosts.allow'' to see if the machine matches a description listed in there. If it does, then the machine is allowed access.
*If the machine does not match an entry in ''hosts.allow'', the server then checks ''hosts.deny'' to see if the client matches a listing in there. If it does, then the machine is denied access.
*If the client matches no listings in either file, then it is allowed access.

In addition to controlling access to services handled by ''inetd'' (such as telnet and FTP), these files can also control access to NFS by restricting connections to the daemons that provide NFS services. Restrictions are done on a per-service basis.

The first daemon to restrict access to is the portmapper. This daemon essentially just tells requesting clients how to find all the NFS services on the system. Restricting access to the portmapper is the best defense against someone breaking into your system through NFS, because completely unauthorized clients won't know where to find the NFS daemons. However, there are two things to watch out for. First, restricting the portmapper isn't enough if the intruder already knows for some reason how to find those daemons. And second, if you are running NIS, restricting the portmapper will also restrict requests to NIS. That should usually be harmless, since you usually want to restrict NFS and NIS in a similar way, but just be cautioned. (Running NIS is generally a good idea if you are running NFS, because the client machines need a way of knowing who owns what files on the exported volumes. Of course there are other ways of doing this, such as syncing password files. See the [http://www.linuxdoc.org/HOWTO/NIS-HOWTO.html NIS HOWTO] for information on setting up NIS.)

In general it is a good idea with NFS (as with most Internet services) to explicitly deny access to IP addresses that you don't need to allow access to.

The first step in doing this is to add the following entry to ''/etc/hosts.deny'':
<pre>
  portmap:ALL
</pre>
Starting with nfs-utils 0.2.0, you can be a bit more careful by controlling access to individual daemons. This is a good precaution, since an intruder will often be able to weasel around the portmapper. If you have a newer version of nfs-utils, add entries for each of the NFS daemons (see the next section to find out what these daemons are; for now just put entries for them in ''hosts.deny''):
<pre>
  lockd:ALL
  mountd:ALL
  rquotad:ALL
  statd:ALL
</pre>
Even if you have an older version of nfs-utils, adding these entries is at worst harmless (since they will just be ignored) and at best will save you some trouble when you upgrade. Some sysadmins choose to put the entry '''ALL:ALL''' in the file ''/etc/hosts.deny'', which causes any service that looks at these files to deny access to all hosts unless it is explicitly allowed. While this is more secure behavior, it may also get you in trouble later, when you are installing new services, forget you put it there, and can't figure out for the life of you why they won't work.

Next, we need to add an entry to ''hosts.allow'' to grant access to the hosts that should have it. (If we just leave the above lines in ''hosts.deny'', then nobody will have access to NFS.) Entries in ''hosts.allow'' follow the format:
<pre>
  service: host [or network/netmask] , host [or network/netmask]
</pre>
Here, host is the IP address of a potential client; it may be possible in some versions to use the DNS name of the host, but that is strongly discouraged.

Suppose we have the setup above and we just want to allow access to ''slave1.foo.com'' and ''slave2.foo.com'', and suppose that the IP addresses of these machines are ''192.168.0.1'' and ''192.168.0.2'', respectively. We could add the following entry to ''/etc/hosts.allow'':
<pre>
  portmap: 192.168.0.1 , 192.168.0.2
</pre>
For recent nfs-utils versions, we would also add the following (again, these entries are harmless even if they are not supported):
<pre>
  lockd: 192.168.0.1 , 192.168.0.2
  rquotad: 192.168.0.1 , 192.168.0.2
  mountd: 192.168.0.1 , 192.168.0.2
  statd: 192.168.0.1 , 192.168.0.2
</pre>
If you intend to run NFS on a large number of machines in a local network, ''/etc/hosts.allow'' also allows for network/netmask style entries in the same manner as ''/etc/exports'' above.
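For example, to let every host on the ''192.168.0.0'' network (netmask ''255.255.255.0'') reach the portmapper, the entry might read as follows; the same network/netmask form works for the other daemons listed above:
<pre>
  portmap: 192.168.0.0/255.255.255.0
</pre>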

==== Getting the services started ====
===== Prerequisites =====
The NFS server should now be configured and we can start it running. First, you will need to have the appropriate packages installed. This consists mainly of a new enough kernel and a new enough version of the nfs-utils package. See Section 2.4 if you are in doubt.

Next, before you can start NFS, you will need to have TCP/IP networking functioning correctly on your machine. If you can use telnet, FTP, and so on, then chances are your TCP networking is fine.

That said, with most recent Linux distributions you may be able to get NFS up and running simply by rebooting your machine; the startup scripts should detect that you have set up your ''/etc/exports'' file and will start up NFS correctly. If you try this, see Section 3.4, Verifying that NFS is running. If this does not work, or if you are not in a position to reboot your machine, then the following section will tell you which daemons need to be started in order to run NFS services. If for some reason '''nfsd''' was already running when you edited your configuration files above, you will have to flush your configuration; see Section 3.5 for details.
===== Starting the Portmapper =====
NFS depends on the portmapper daemon, either called '''portmap''' or '''rpc.portmap'''. It will need to be started first. It should be located in ''/sbin'' but is sometimes in ''/usr/sbin''. Most recent Linux distributions start this daemon in the boot scripts, but it is worth making sure that it is running before you begin working with NFS (just type '''ps aux | grep portmap''').
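If it turns out not to be running, you can usually start it through your distribution's init script or by hand; for example (a sketch only, since the path varies by distribution):
<pre>
  # /sbin/portmap     # or /usr/sbin/portmap on some systems
</pre>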
===== The Daemons =====
NFS serving is taken care of by five daemons: '''rpc.nfsd''', which does most of the work; '''rpc.lockd''' and '''rpc.statd''', which handle file locking; '''rpc.mountd''', which handles the initial mount requests; and '''rpc.rquotad''', which handles user file quotas on exported volumes. Starting with kernel 2.2.18, lockd is called by '''nfsd''' upon demand, so you do not need to worry about starting it yourself. '''statd''' will need to be started separately. Most recent Linux distributions will have startup scripts for these daemons.

The daemons are all part of the nfs-utils package, and may be either in the ''/sbin'' directory or the ''/usr/sbin'' directory.

If your distribution does not include them in the startup scripts, then you should add them, configured to start in the following order (a minimal hand-start sequence is sketched after the list):

#rpc.portmap
#rpc.mountd, rpc.nfsd
#rpc.statd, rpc.lockd (if necessary), and rpc.rquotad
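A minimal hand-start sequence following that order might look like this. This is only a sketch: the daemons may live in ''/sbin'' rather than ''/usr/sbin'' on your system, and on 2.2.18 and later kernels '''rpc.lockd''' is started by '''nfsd''' on demand:
<pre>
  # /sbin/portmap
  # /usr/sbin/rpc.mountd
  # /usr/sbin/rpc.nfsd
  # /usr/sbin/rpc.statd
  # /usr/sbin/rpc.rquotad
</pre>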

The nfs-utils package has sample startup scripts for RedHat and Debian. If you are using a different distribution, in general you can just copy the RedHat script, but you will probably have to take out the line that says:
<pre>
 . ../init.d/functions
</pre>
===== Verifying that NFS is Running =====
To verify that NFS is running, query the portmapper with the command '''rpcinfo -p''' to find out what services it is providing. You should get something like this:
<pre>
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100011    1   udp    749  rquotad
    100011    2   udp    749  rquotad
    100005    1   udp    759  mountd
    100005    1   tcp    761  mountd
    100005    2   udp    764  mountd
    100005    2   tcp    766  mountd
    100005    3   udp    769  mountd
    100005    3   tcp    771  mountd
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    300019    1   tcp    830  amd
    300019    1   udp    831  amd
    100024    1   udp    944  status
    100024    1   tcp    946  status
    100021    1   udp   1042  nlockmgr
    100021    3   udp   1042  nlockmgr
    100021    4   udp   1042  nlockmgr
    100021    1   tcp   1629  nlockmgr
    100021    3   tcp   1629  nlockmgr
    100021    4   tcp   1629  nlockmgr
</pre>
This says that we have NFS versions 2 and 3, ''rpc.statd'' version 1, and the network lock manager (the service name for ''rpc.lockd'') versions 1, 3, and 4. There are also different service listings depending on whether NFS is travelling over TCP or UDP. Linux systems use UDP by default unless TCP is explicitly requested; however, other OSes such as Solaris default to TCP.
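If you want a Linux client to use TCP, you can usually request it explicitly with a mount option, provided both your kernel and your server support NFS over TCP; a sketch:
<pre>
  # mount -o tcp master.foo.com:/home /mnt/home
</pre>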

If you do not at least see a line that says portmapper, a line that says ''nfs'', and a line that says ''mountd'', then you will need to backtrack and try again to start up the daemons (see Section 7, Troubleshooting, if this still doesn't work).
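A quick way to check for just those three entries is to filter the '''rpcinfo''' output, for example:
<pre>
  # rpcinfo -p | egrep 'portmapper|nfs|mountd'
</pre>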

If you do see these services listed, then you should be ready to set up NFS clients to access files from your server.
===== Making changes to /etc/exports later on =====
If you come back and change your ''/etc/exports'' file, the changes you make may not take effect immediately. You should run the command '''exportfs -ra''' to force '''nfsd''' to re-read the ''/etc/exports'' file. If you can't find the '''exportfs''' command, then you can kill '''nfsd''' with the '''-HUP''' flag (see the man pages for ''kill'' for details).
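For example (the second form is only needed if '''exportfs''' is missing; it assumes '''pidof''' is available and that the daemon's process is named ''rpc.nfsd'', which may differ on your system):
<pre>
  # exportfs -ra
  # kill -HUP `pidof rpc.nfsd`     # fallback if exportfs is not installed
</pre>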

If that still doesn't work, check ''hosts.allow'' to make sure you haven't forgotten to list any new client machines there. Also check the host listings on any firewalls you may have set up (see Section 7 and Section 6 for more details on firewalls and NFS).
