Linux NFS wiki: user contributions by Steved (Atom feed retrieved 2024-03-29)

= Main Page (revision of 2022-05-20) =

{| cellpadding="5" cellspacing="3" class="mainpagetable" width="100%"
|-
|valign="top" style="padding: .5em 1em 1em; width: 50%"|
'''Development'''
* Mailing lists:
** linux-nfs@vger.kernel.org ([http://marc.info/?l=linux-nfs archive])
** linux-fsdevel@vger.kernel.org ([http://marc.info/?l=linux-fsdevel archive])
** linux-kernel@vger.kernel.org ([http://marc.info/?l=linux-kernel archive])
** defunct pnfs list ([http://linux-nfs.org/pipermail/pnfs/ archive])
** defunct nfsv4 list ([http://linux-nfs.org/pipermail/nfsv4/ archive])
* IRC: #linux-nfs at oftc.net (mainly for developer chat; questions are better sent to the mailing list)
* Code repositories:
** [http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=summary upstream kernel]
** [http://git.linux-nfs.org/?p=steved/nfs-utils.git;a=summary nfs-utils]
** [http://git.linux-nfs.org/?p=steved/rpcbind.git;a=summary rpcbind]
** [http://git.linux-nfs.org/?p=steved/libtirpc.git;a=summary libtirpc]
** [http://git.linux-nfs.org/?p=steved/nfs4-acl-tools.git;a=summary nfs4-acl-tools]: client tools for manipulating NFSv4 ACLs directly
*** [http://linux-nfs.org/~steved/nfs4-acl-tools/ nfs4-acl-tools tarballs]
* Bugzilla:
** [http://bugzilla.kernel.org bugzilla.kernel.org] for upstream bugs
** [http://bugzilla.linux-nfs.org bugzilla.linux-nfs.org] for out-of-tree projects and other miscellaneous NFS uses
* [https://datatracker.ietf.org/wg/nfsv4 IETF NFSv4 working group]: protocol specifications
* [http://nfsv4bat.org/Documents nfsv4bat.org/Documents]: Bakeathon, Connectathon, and other presentations
* [[Introduction to Linux NFS hacking]]
* [[To do]] (projects looking for volunteers)
* [[Dual-protocol support]]
* [[pNFS Development]]
* [[Cluster Coherent NFS design|Cluster Coherent NFS]]
* [[Nfsd4 server recovery]]
* [http://linux-nfs.org/files/ NFS-related files for download]
* [[Peer-to-peer NFS]]
* [[high availability SCSI layout]]
* [[Alternate Data Streams]]
* [[NFS re-export]]
|valign="top"|
'''Documentation'''
* [http://nfs.sourceforge.net/nfs-howto/ NFS Howto]
* [http://nfs.sourceforge.net/ NFS FAQ]
* [[NFSv41_Introduction|NFSv4.1 end-user documentation]]
* [[General troubleshooting recommendations]]
* [[Feature Design Documents]]
* [http://nfsworld.blogspot.com/2005/06/using-active-directory-as-your-kdc-for.html Linux, AD, and NetApp filers]
* [https://fedorahosted.org/gss-proxy/wiki/NFS GSS-Proxy]
* [[Reporting bugs]]
* [[Readdir performance results]]
* [[Jenkins CI]]
* [[NFS and FreeIPA]]
* [[pNFS block server setup]]
* [[NFS over SoftRoCE setup]]
'''Testing'''
* [[Connectathon test suite]]
* [[pynfs]]
* [[NFSometer]]: NFS performance measurement tool
* [[NFStest]]: NFS test suite
* [[xfstests]]: xfstests setup & expected output
|}

[[Old stuff]] (design documents for stalled or completed projects, etc.)
= PNFS prototype design (revision of 2010-05-06) =
'''pNFS''' is part of the first NFSv4 minor version. This space is used to track and share Linux pNFS implementation ideas and issues.

== General Information ==

* [http://www.citi.umich.edu/projects/asci/pnfs/linux/ Linux pNFS Implementation Homepage]
* [[pNFS Setup Instructions]] - Basic pNFS setup instructions.
* [[Configuring pNFS/spnfsd]] - How to build and set up an spnfs pNFS server.
* [[GFS2 Setup Notes - cluster3, 2.6.27 kernel]]
* [[Older GFS2 Setup Notes - first pass, in VMWare, and upgrading from cluster2 to cluster3]]
* [[pNFS Block Server Setup Instructions]] - Basic pNFS block server setup instructions.
* [[Fedora pNFS Client Setup]] - How to set up a Fedora pNFS client.

==== Filing Bugs ====
* [http://bugzilla.linux-nfs.org linux-nfs.org Bugzilla] - Read/write access for "NFSv4.1 related bugs" group members.
** Use the keywords "NFSv4.1" and "pNFS".
** The "NFSv4.1 related bugs" group is used to track our bugs. You'll need a user account on [http://bugzilla.linux-nfs.org bugzilla]; after that, send an email to Trond to be added to the group.

== Development Resources ==

* [[pNFS Development Git tree]]
* [[pNFS Git tree recipies|pNFS Git tree recipes]]
* [[Wireshark Patches]]

== Current Issues ==

* [[Client_sessions_Implementation_Issues|Client Sessions Implementation Issues]]
* [[Client pNFS Requirements]]
** [[pNFS Client Review for Kernel Submission]] - Review and redesign of the pNFS client for submission to the kernel.
* [[pNFS Todo List]] (last updated July 2009)
* [[pNFS Implementation Issues]] (last updated April 2008)
* [[Bakeathon 2007 Issues List]]
* [[pNFS Development Road Map]]
* [[pNFS File-based Stateid Distribution]]

== Old Issues ==

* [[Cthon06 Meeting Notes|Connectathon 2006 Linux pNFS Implementation Meeting Notes]]
* [[linux pnfs client rewrite may 2006|Linux pNFS Client Internal Reorg patches May 2006 - For Display Purposes Only - Do Not Use]]
* [[pNFS todo List 2007|pNFS todo List July 2007]]

= Fedora pNFS Client Setup (revision of 2010-04-30) =
<h2>Select Hardware</h2>
<p>
Select hardware capable of running a 64-bit OS, with a minimum of two GigE copper NIC ports. Connect all necessary network ports to the VLAN. For iSCSI target access, it is recommended that you use separate subnets for the backbone and for iSCSI.
</p>
<h2>Installing Fedora</h2>
<p>The Install Guide at http://docs.fedoraproject.org/ describes
numerous ways to install Fedora. Choose the method that works best for you.</p>
<ul>
<li>Select the local disk and take the default disk partitions for /boot, swap, LVM, etc.
<li>If your block device is accessed through iSCSI, click on the "Advanced disk configuration" tab and select iSCSI. If you are using FC, it is recommended to unplug the fiber cable before installation and reconnect it before reboot. Do not initialize block devices if you are using EMC unified storage devices (Celerra NAS).
<li>In order to run the Connectathon test suite you will need the "Software Development" package group.
<ul>
<li>Click on the "Software Development" button in the package install screen,
similar to [http://i1-news.softpedia-static.com/images/extra/LINUX/large/fedora10installguide-large_014.jpg this].
<li>It is also advisable to add the 'Fedora' repository on the same page.
<li>Finally, it is a good idea to run 'yum -y update' first thing after the
install completes. This ensures you have the most up-to-date bits available.
</ul>
</ul>
<h2>Installing a pNFS-Enabled Fedora Kernel</h2>
There are two ways to install the pNFS-enabled kernels: via a yum repository
or by direct download.

<b>Yum repository</b>
<p>For Fedora 12 (kernel-2.6.32) and Fedora 13 (kernel-2.6.33) kernels,
use the http://steved.fedorapeople.org/pnfs.repo repository.</p>
<p>For Fedora development kernels, use the
http://steved.fedorapeople.org/pnfs-rawhide.repo repository.</p>

Download the appropriate repository file into the /etc/yum.repos.d directory.
Then use one of the following commands to install the kernel
of choice (including the dependencies):

To install non-debug kernels:

 yum --disablerepo='*' --enablerepo=pnfs install kernel\*

To install debug-enabled kernels:

 yum --disablerepo='*' --enablerepo=pnfs-debug install kernel\*

To install debuginfo rpms, which aid with debugging:

 yum --disablerepo='*' --enablerepo=pnfs-debug install kernel-debuginfo\*

Note: For development kernels use '--enablerepo=pnfs-rawhide-XXX'.

<b>Direct download</b>

You can directly download the pNFS kernel rpms from http://steved.fedorapeople.org/repos/pnfs.
With direct downloads you will need to periodically check for updates, as well as
figure out the dependencies yourself.

<h2>Load pNFS Modules</h2>
Load the needed modules with the following commands:
 modprobe nfslayoutdriver
 modprobe blocklayoutdriver

To verify the pNFS modules are loaded correctly, run the following:
 lsmod | grep nfslayout
 nfslayoutdriver        18423  0
 nfs                   353047  3 blocklayoutdriver,nfslayoutdriver

<h2>Mount Filesystem</h2>
Use the '-o minorversion=1' mount option when mounting the server,
similar to:
 mount -t nfs4 -o minorversion=1 <server>:/export /mnt

To verify pNFS is up and working, grep for the word 'LAYOUT' in /proc/self/mountstats:
 grep LAYOUT /proc/self/mountstats

<h2>Generate Traffic</h2>
Generate some I/O using "dd" or run the Connectathon test suite, which you may download from http://www.connectathon.org. All tests are expected to pass without errors.
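The module check in "Load pNFS Modules" above can be scripted as a small shell helper. This is a sketch: it reads /proc/modules (the table that lsmod formats) directly, and the two driver names are the ones given in the text.

```shell
# Report whether a kernel module is currently loaded, by checking
# /proc/modules; field 1 of each line is the module name.
check_mod() {
    if grep -q "^$1 " /proc/modules 2>/dev/null; then
        echo "$1: loaded"
    else
        echo "$1: not loaded"
    fi
}

# The two layout drivers named in the setup instructions above.
check_mod nfslayoutdriver
check_mod blocklayoutdriver
```

On a kernel without pNFS support the helper simply reports "not loaded", which makes it usable as a pre-flight check before mounting.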
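The "is pNFS active?" check from the "Mount Filesystem" section can likewise be scripted. A sketch, assuming a Linux client: /proc/self/mountstats lists each mount with a "fstype nfs…" tag, and LAYOUT operation counters appear there once a pNFS mount is in use.

```shell
# Count the NFS mounts visible to this process, then look for pNFS
# LAYOUT operation counters; both come from /proc/self/mountstats.
nfs_mounts=$(grep -c 'fstype nfs' /proc/self/mountstats 2>/dev/null || true)
echo "NFS mounts visible: ${nfs_mounts:-0}"
if grep -q 'LAYOUT' /proc/self/mountstats 2>/dev/null; then
    echo "pNFS LAYOUT counters present"
else
    echo "no LAYOUT counters (pNFS not in use)"
fi
```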
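The "Generate Traffic" step with dd can be sketched as below. The target path here is a local temp file purely for illustration; on a real client you would point of= at the pNFS mount (for example /mnt/testfile).

```shell
# Write 16 MiB of zeroes; conv=fsync flushes the data at the end so it
# actually traverses the filesystem, then report the resulting size.
target="${TMPDIR:-/tmp}/pnfs-io-test.$$"
dd if=/dev/zero of="$target" bs=1M count=16 conv=fsync 2>/dev/null
size=$(stat -c %s "$target")
echo "wrote $size bytes to $target"
rm -f "$target"
```

Watching `grep LAYOUT /proc/self/mountstats` before and after such a run is a quick way to confirm the I/O went through the pNFS layout path rather than falling back to regular NFS reads and writes.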
<hr />
<div><h2>Select Hardware</h2><br />
<p><br />
Select hardware that is capable of running 64-bit os and at a minimum of two GigE copper NIC ports. Connect all necessary network ports to VLAN. For iSCSI target access, it is recommended that you use separate subnets for the backbone and iSCSI.<br />
</p><br />
<h2>Installing Fedora</h2><br />
<p>The Install Guide at http://docs.fedoraproject.org/ describes<br />
a numerous ways in which to install Fedora. Choose the best method for you.</p><br />
<ul><br />
<li>Select local disk and take default disk partitions for /boot, /swap LVM etc<br />
<li>If your block device is accessed through iSCSI then click on “Advanced disk configuration” tab and select iSCSI. If you are using FC you are recommended to unplug Fiber cable before installation and connect back before reboot. Do not initialize Block devices if you are using EMC Unified storage devices (Celerra NAS)<br />
<li>In order to run the Connectathon test suite you will need the “software development” package.<br />
<ul><br />
<li>Click on the "Software Development" button in the package install screen<br />
similar to [http://i1-news.softpedia-static.com/images/extra/LINUX/large/fedora10installguide-large_014.jpg this]<br />
<li>It is also advisable to add the 'Fedora' repository on the same page.<br />
<li>Finally, it is a good idea to run 'yum -y update' as soon as the <br />
install completes. This ensures you have the most up-to-date bits available.<br />
</ul><br />
</ul><br />
<h2>Installing pNFS Enabled Fedora kernel</h2><br />
There are two ways to install the pNFS-enabled kernels: either using<br />
a yum repository or downloading the rpms directly.<br />
<br />
<b>Yum repository</b><br />
<br />
For Fedora 12 (kernel-2.6.32) and Fedora 13 (kernel-2.6.33) kernels,<br />
use the http://steved.fedorapeople.org/pnfs.repo repository.<br />
<br><br />
For Fedora development kernels (kernel-2.6.32), use the repository<br />
http://steved.fedorapeople.org/pnfs-rawhide.repo<br />
<br />
Download the appropriate repository file into the /etc/yum.repos.d directory.<br />
Then use one of the following commands to install the kernel <br />
of your choice (including its dependencies):<br />
<br />
To install non-debug kernels:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs install kernel*<br />
<br />
To install debug enabled kernels:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs-debug install kernel*<br />
<br />
To install the debuginfo rpms, which aid with debugging:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs-debug install kernel*<br />
<br />
Note: For development kernels use '--enablerepo=pnfs-rawhide-XXX'<br />
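The yum-repository steps above can be sketched as a short script. This is a hedged sketch, not part of the original instructions: the repo URL and repo ids come from this page, the wget/yum invocations are assumptions, and commands are only echoed unless PNFS_APPLY=1 is set (run as root to actually apply).<br />

```shell
#!/bin/sh
# Sketch of the yum-repository install path described above.
# Repo URL and repo ids come from this page; everything else is assumed.
# Commands are echoed first, and only executed when PNFS_APPLY=1 is set.

run() {
    echo "+ $*"
    if [ "$PNFS_APPLY" = 1 ]; then "$@"; fi
}

# Fetch the repo file into yum's configuration directory.
run wget -O /etc/yum.repos.d/pnfs.repo http://steved.fedorapeople.org/pnfs.repo

# Install the non-debug pNFS kernel; use --enablerepo=pnfs-debug for
# the debug variants, as described above.
run yum --disablerepo='*' --enablerepo=pnfs install 'kernel*'
```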
<br />
<b>Direct download</b><br />
<br />
You can directly download the pNFS kernel rpms from http://steved.fedorapeople.org/repos/pnfs.<br />
With the direct downloads you will need to periodically check for updates as well as <br />
work out the dependencies yourself. <br />
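For the direct-download path, a hedged sketch follows. The rpm name pattern, the rpm_url helper, and the version string are all illustrative assumptions; check the repository listing for the real file names. rpm's --test option does a dry-run install that reports unmet dependencies, which addresses the manual dependency tracking mentioned above.<br />

```shell
#!/bin/sh
# Sketch of the direct-download path. The rpm name pattern and version
# string are assumptions -- check the listing at
# http://steved.fedorapeople.org/repos/pnfs for the real file names.

BASE_URL=http://steved.fedorapeople.org/repos/pnfs

# Build the download URL for a given kernel version string.
rpm_url() {
    echo "$BASE_URL/kernel-$1.pnfs.fc12.x86_64.rpm"
}

# Usage (hypothetical version string; run as root):
#   wget "$(rpm_url 2.6.33.2-57)"
#   rpm -ivh --test kernel-*.rpm   # dry run: reports unmet dependencies
#   rpm -ivh kernel-*.rpm
```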
<br />
<h2>Load pNFS modules</h2><br />
Load the needed modules with the following commands:<br />
modprobe nfslayoutdriver<br />
modprobe blocklayoutdriver<br />
<br />
To verify the pNFS modules are loaded correctly, run:<br />
lsmod | grep nfslayout<br />
nfslayoutdriver 18423 0<br />
nfs 353047 3 blocklayoutdriver,nfslayoutdriver<br />
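The load-and-verify steps above can be wrapped in a small helper. This is a sketch: missing_modules is a hypothetical name, it only parses lsmod-style text and loads nothing itself; the module names are the ones used on this page.<br />

```shell
#!/bin/sh
# Sketch: decide which layout modules still need loading by parsing
# `lsmod` output. missing_modules is a hypothetical helper name.

# Print the wanted modules that do not appear in the first column
# of the supplied lsmod output.
missing_modules() {
    lsmod_out=$1; shift
    for mod in "$@"; do
        echo "$lsmod_out" | awk '{print $1}' | grep -qx "$mod" || echo "$mod"
    done
}

# Usage (as root):
#   for m in $(missing_modules "$(lsmod)" nfslayoutdriver blocklayoutdriver); do
#       modprobe "$m"
#   done
```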
<br />
<h2>Mount Filesystem</h2><br />
Use the '-o minorversion=1' mount option when mounting the server,<br />
similar to:<br />
mount -t nfs4 -o minorversion=1 <server>:/export /mnt<br />
<br />
To verify pNFS is up and working, grep for the word 'LAYOUT' in /proc/self/mountstats:<br />
grep LAYOUT /proc/self/mountstats<br />
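The mount-and-verify step can be wrapped in a small check. check_pnfs is a hypothetical helper: it only inspects mountstats-style text, and the mount command itself is the one given above.<br />

```shell
#!/bin/sh
# Sketch: report whether pNFS layout operations show up in
# /proc/self/mountstats output. check_pnfs is a hypothetical helper.

check_pnfs() {
    # Any LAYOUT* operation counter means a layout driver is active.
    if echo "$1" | grep -q LAYOUT; then
        echo "pNFS layouts in use"
    else
        echo "no LAYOUT operations seen"
    fi
}

# Usage (as root):
#   mount -t nfs4 -o minorversion=1 <server>:/export /mnt
#   check_pnfs "$(cat /proc/self/mountstats)"
```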
<br />
<h2>Generate Traffic</h2><br />
Generate some I/O using “dd” or run “Connectathon”. You may download Connectathon from http://www.connectathon.org. All tests are expected to pass without problems.</div>Stevedhttp://wiki.linux-nfs.org/wiki/index.php/Fedora_pNFS_Client_SetupFedora pNFS Client Setup2010-04-30T17:27:25Z<p>Steved: </p>
<hr />
<div><h2>Select Hardware</h2><br />
<p><br />
Select hardware that is capable of running 64-bit os and at a minimum of two GigE copper NIC ports. Connect all necessary network ports to VLAN. For iSCSI target access, it is recommended that you use separate subnets for the backbone and iSCSI.<br />
</p><br />
<h2>Installing Fedora</h2><br />
<p>The Install Guide at http://docs.fedoraproject.org/ describes<br />
a numerous ways in which to install Fedora. Choose the best method for you.</p><br />
<ul><br />
<li>Select local disk and take default disk partitions for /boot, /swap LVM etc<br />
<li>If your block device is accessed through iSCSI then click on “Advanced disk configuration” tab and select iSCSI. If you are using FC you are recommended to unplug Fiber cable before installation and connect back before reboot. Do not initialize Block devices if you are using EMC Unified storage devices (Celerra NAS)<br />
<li>In order to run the Connectathon test suite you will need the “software development” package.<br />
<ul><br />
<li>Click on the "Software Development" button in the package install screen<br />
similar to [http://i1-news.softpedia-static.com/images/extra/LINUX/large/fedora10installguide-large_014.jpg this]<br />
<li>Its also advisable to added in the 'Fedora' repository on the same page.<br />
<li>Finally it's also a good idea to do a 'yum -y update' first thing after the <br />
install completes. This will ensure you have the most up to date bits available.<br />
</ul><br />
</ul><br />
<h2>Installing pNFS Enabled Fedora kernel</h2><br />
There are two way to install the pNFS enabled kernels. Either using<br />
a yum repository or directly downloading.<br />
<br />
<b>Yum repository</b><br />
<br />
For Fedora 12 (kernel-2.6.32) and Fedora 13 (kernel-2.6.33) kernels<br />
use the used the http://steved.fedorapeople.org/pnfs.repo<br />
<br><br />
For Fedora development kernels (kernel-2.6.32)repository<br />
http://steved.fedorapeople.org/pnfs-rawhide.repo<br />
<br />
Download the [http://steved.fedorapeople.org/pnfs.repo pnfs.repo]<br />
into /etc/yum.repo.d. Use one of the following commands to installed the kernel <br />
of choice (including the dependencies):<br />
<br />
To install non-debug kernels:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs install kernel*<br />
<br />
To install debug enabled kernels:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs-debug install kernel*<br />
<br />
To install debuginfo rpms , which aid with debugging:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs-debug install kernel*<br />
<br />
<b>Direct download</b><br />
<br />
You can directly download the pNFS kernel rpms from http://steved.fedorapeople.org/repos/pnfs.<br />
With the direct downloads you will need periodicity check for updates as well as <br />
figure the dependencies. <br />
<br />
<h2>Load pNFS modules</h2><br />
Load the needed modules with the following commands:<br />
modprobe nfslayoutdriver<br />
modprobe blocklayoutdrive<br />
<br />
To verify pNFS modules are loaded correctly do the following:<br />
lsmod | grep nfslayout<br />
nfslayoutdriver 18423 0<br />
nfs 353047 3 blocklayoutdriver,nfslayoutdriver<br />
<br />
<h2> Mount Filesystem</h2><br />
Use the '-o minorversion=1' mount option when mounting the server,<br />
similar to:<br />
mount -t nfs4 -o minorversion=1 <server>:/export /mnt<br />
<br />
To Verify pNFS is up and working, grep for the word 'LAYOUT' in /proc/self/mountstat<br />
grep LAYOUT /proc/self/mountstat<br />
<br />
<h2>Generate Traffic</h2><br />
Generate some I/O using “dd” or run “Connectathon ”. You may download Connectathon from http://www.connectathon.org. All tests expected to pass without problem.</div>Stevedhttp://wiki.linux-nfs.org/wiki/index.php/Fedora_pNFS_Client_SetupFedora pNFS Client Setup2010-04-30T17:23:52Z<p>Steved: </p>
<hr />
<div><h2>Select Hardware</h2><br />
<p><br />
Select hardware that is capable of running 64-bit os and at a minimum of two GigE copper NIC ports. Connect all necessary network ports to VLAN. For iSCSI target access, it is recommended that you use separate subnets for the backbone and iSCSI.<br />
</p><br />
<h2>Installing Fedora</h2><br />
<p>The Install Guide at http://docs.fedoraproject.org/ describes<br />
a numerous ways in which to install Fedora. Choose the best method for you.</p><br />
<ul><br />
<li>Select local disk and take default disk partitions for /boot, /swap LVM etc<br />
<li>If your block device is accessed through iSCSI then click on “Advanced disk configuration” tab and select iSCSI. If you are using FC you are recommended to unplug Fiber cable before installation and connect back before reboot. Do not initialize Block devices if you are using EMC Unified storage devices (Celerra NAS)<br />
<li>In order to run the Connectathon test suite you will need the “software development” package.<br />
<ul><br />
<li>Click on the "Software Development" button in the package install screen<br />
similar to [http://i1-news.softpedia-static.com/images/extra/LINUX/large/fedora10installguide-large_014.jpg this]<br />
<li>Its also advisable to added in the 'Fedora' repository on the same page.<br />
<li>Finally it's also a good idea to do a 'yum -y update' first thing after the <br />
install completes. This will ensure you have the most up to date bits available.<br />
</ul><br />
</ul><br />
<h2>Installing pNFS Enabled Fedora kernel</h2><br />
There are two way to install the pNFS enabled kernels. Either using<br />
a yum repository or directly downloading.<br />
<br />
<b>Yum repository</b><br />
<br />
For Fedora 12 and Fedora 13 used the http://steved.fedorapeople.org/pnfs.repo<br />
repository into /etc/yum.repo.d. <br />
<br />
Download the [http://steved.fedorapeople.org/pnfs.repo pnfs.repo]<br />
into /etc/yum.repo.d. Use one of the following commands to installed the kernel <br />
of choice (including the dependencies):<br />
<br />
To install non-debug kernels:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs install kernel*<br />
<br />
To install debug enabled kernels:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs-debug install kernel*<br />
<br />
To install debuginfo rpms , which aid with debugging:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs-debug install kernel*<br />
<br />
<b>Direct download</b><br />
<br />
You can directly download the pNFS kernel rpms from http://steved.fedorapeople.org/repos/pnfs.<br />
With the direct downloads you will need periodicity check for updates as well as <br />
figure the dependencies. <br />
<br />
<h2>Load pNFS modules</h2><br />
Load the needed modules with the following commands:<br />
modprobe nfslayoutdriver<br />
modprobe blocklayoutdrive<br />
<br />
To verify pNFS modules are loaded correctly do the following:<br />
lsmod | grep nfslayout<br />
nfslayoutdriver 18423 0<br />
nfs 353047 3 blocklayoutdriver,nfslayoutdriver<br />
<br />
<h2> Mount Filesystem</h2><br />
Use the '-o minorversion=1' mount option when mounting the server,<br />
similar to:<br />
mount -t nfs4 -o minorversion=1 <server>:/export /mnt<br />
<br />
To Verify pNFS is up and working, grep for the word 'LAYOUT' in /proc/self/mountstat<br />
grep LAYOUT /proc/self/mountstat<br />
<br />
<h2>Generate Traffic</h2><br />
Generate some I/O using “dd” or run “Connectathon ”. You may download Connectathon from http://www.connectathon.org. All tests expected to pass without problem.</div>Stevedhttp://wiki.linux-nfs.org/wiki/index.php/Fedora_pNFS_Client_SetupFedora pNFS Client Setup2010-04-30T17:08:44Z<p>Steved: </p>
<hr />
<div><h2>Select Hardware</h2><br />
<p><br />
Select hardware that is capable of running 64-bit os and at a minimum of two GigE copper NIC ports. Connect all necessary network ports to VLAN. For iSCSI target access, it is recommended that you use separate subnets for the backbone and iSCSI.<br />
</p><br />
<h2>Installing Fedora</h2><br />
<p>The Install Guide at http://docs.fedoraproject.org/ describes<br />
a numerous ways in which to install Fedora. Choose the best method for you.</p><br />
<ul><br />
<li>Select local disk and take default disk partitions for /boot, /swap LVM etc<br />
<li>If your block device is accessed through iSCSI then click on “Advanced disk configuration” tab and select iSCSI. If you are using FC you are recommended to unplug Fiber cable before installation and connect back before reboot. Do not initialize Block devices if you are using EMC Unified storage devices (Celerra NAS)<br />
<li>In order to run the Connectathon test suite you will need the “software development” package.<br />
<ul><br />
<li>Click on the "Software Development" button in the package install screen<br />
similar to [http://i1-news.softpedia-static.com/images/extra/LINUX/large/fedora10installguide-large_014.jpg this]<br />
<li>Its also advisable to added in the 'Fedora' repository on the same page.<br />
<li>Finally it's also a good idea to do a 'yum -y update' first thing after the <br />
install completes. This will ensure you have the most up to date bits available.<br />
</ul><br />
</ul><br />
<h2>Installing pNFS Enabled Fedora kernel</h2><br />
There are two way to install the pNFS enabled kernels. Either using<br />
the yum repository or directly downloading<br />
<br />
<b>Yum repository</b><br />
<br />
Download the [http://steved.fedorapeople.org/pnfs.repo pnfs.repo]<br />
into /etc/yum.repo.d. Use one of the following commands to installed the kernel <br />
of choice (including the dependencies):<br />
<br />
To install non-debug kernels:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs install kernel*<br />
<br />
To install debug enabled kernels:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs-debug install kernel*<br />
<br />
To install debuginfo rpms , which aid with debugging:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs-debug install kernel*<br />
<br />
<b>Direct download</b><br />
<br />
You can directly download the pNFS kernel rpms from http://steved.fedorapeople.org/repos/pnfs.<br />
With the direct downloads you will need periodicity check for updates as well as <br />
figure the dependencies. <br />
<br />
<h2>Load pNFS modules</h2><br />
Load the needed modules with the following commands:<br />
modprobe nfslayoutdriver<br />
modprobe blocklayoutdrive<br />
<br />
To verify pNFS modules are loaded correctly do the following:<br />
lsmod | grep nfslayout<br />
nfslayoutdriver 18423 0<br />
nfs 353047 3 blocklayoutdriver,nfslayoutdriver<br />
<br />
<h2> Mount Filesystem</h2><br />
Use the '-o minorversion=1' mount option when mounting the server,<br />
similar to:<br />
mount -t nfs4 -o minorversion=1 <server>:/export /mnt<br />
<br />
To Verify pNFS is up and working, grep for the word 'LAYOUT' in /proc/self/mountstat<br />
grep LAYOUT /proc/self/mountstat<br />
<br />
<h2>Generate Traffic</h2><br />
Generate some I/O using “dd” or run “Connectathon ”. You may download Connectathon from http://www.connectathon.org. All tests expected to pass without problem.</div>Stevedhttp://wiki.linux-nfs.org/wiki/index.php/Fedora_pNFS_Client_SetupFedora pNFS Client Setup2010-04-30T17:06:44Z<p>Steved: </p>
<hr />
<div><h2>Select Hardware</h2><br />
<p><br />
Select hardware that is capable of running 64-bit os and at a minimum of two GigE copper NIC ports. Connect all necessary network ports to VLAN. For iSCSI target access, it is recommended that you use separate subnets for the backbone and iSCSI.<br />
</p><br />
<h2>Installing Fedora</h2><br />
<p>The Install Guide at http://docs.fedoraproject.org/ describes<br />
a numerous ways in which to install Fedora. Choose the best method for you.</p><br />
<ul><br />
<li>Select local disk and take default disk partitions for /boot, /swap LVM etc<br />
<li>If your block device is accessed through iSCSI then click on “Advanced disk configuration” tab and select iSCSI. If you are using FC you are recommended to unplug Fiber cable before installation and connect back before reboot. Do not initialize Block devices if you are using EMC Unified storage devices (Celerra NAS)<br />
<li>In order to run the Connectathon test suite you will need the “software development” package.<br />
<ul><br />
<li>Click on the "Software Development" button in the package install screen<br />
similar to [http://i1-news.softpedia-static.com/images/extra/LINUX/large/fedora10installguide-large_014.jpg this]<br />
<li>Its also advisable to added in the 'Fedora' repository on the same page.<br />
<li>Finally it's also a good idea to do a 'yum -y update' first thing after the <br />
install completes. This will ensure you have the most up to date bits available.<br />
</ul><br />
</ul><br />
<h2>Installing pNFS Enabled Fedora kernel</h2><br />
There are two way to install the pNFS enabled kernels. Either using<br />
the yum repository or directly downloading<br />
<br />
<b>Yum repository</b><br />
<br />
Download the [http://steved.fedorapeople.org/pnfs.repo pnfs.repo]<br />
into /etc/yum.repo.d. Use one of the following commands to installed the kernel <br />
of choice (including the dependencies):<br />
<br />
To install non-debug kernels:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs install kernel*<br />
<br />
To install debug enabled kernels:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs-debug install kernel*<br />
<br />
To install debuginfo rpms , which aid with debugging:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs-debug install kernel*<br />
<br />
<b>Direct download</b><br />
<br />
You can directly download the pNFS kernel rpms from http://steved.fedorapeople.org/repos/pnfs.<br />
<br />
kernel-[current_version].pnfs.fc12.x86_64.rpm<br />
kernel-firmware-[current_version].pnfs.fc12.noarch.rpm<br />
<br />
With the direct downloads you will need periodicity check for updates (latest –build April 22).<br />
Using the yum repository doing a 'yum update' will check of updates.<br />
<br />
<h2>Load pNFS modules</h2><br />
Load the needed modules with the following commands:<br />
modprobe nfslayoutdriver<br />
modprobe blocklayoutdrive<br />
<br />
To verify pNFS modules are loaded correctly do the following:<br />
lsmod | grep nfslayout<br />
nfslayoutdriver 18423 0<br />
nfs 353047 3 blocklayoutdriver,nfslayoutdriver<br />
<h2> Mount Filesystem</h2><br />
Use the '-o minorversion=1' mount option when mounting the server,<br />
similar to:<br />
mount -t nfs4 -o minorversion=1 <server>:/export /mnt<br />
<br />
To Verify pNFS is up and working, grep for the word 'LAYOUT' in /proc/self/mountstat<br />
grep LAYOUT /proc/self/mountstat<br />
<h2>Generate Traffic</h2><br />
Generate some I/O using “dd” or run “Connectathon ”. You may download Connectathon from http://www.connectathon.org. All tests expected to pass without problem.</div>Stevedhttp://wiki.linux-nfs.org/wiki/index.php/Fedora_pNFS_Client_SetupFedora pNFS Client Setup2010-04-30T16:46:44Z<p>Steved: </p>
<hr />
<div><h2>Select Hardware</h2><br />
<p><br />
Select hardware that is capable of running 64-bit os and at a minimum of two GigE copper NIC ports. Connect all necessary network ports to VLAN. For iSCSI target access, it is recommended that you use separate subnets for the backbone and iSCSI.<br />
</p><br />
<h2>Installing Fedora</h2><br />
<p>The Install Guide at http://docs.fedoraproject.org/ describes<br />
a numerous ways in which to install Fedora. Choose the best method for you.</p><br />
<ul><br />
<li>Select local disk and take default disk partitions for /boot, /swap LVM etc<br />
<li>If your block device is accessed through iSCSI then click on “Advanced disk configuration” tab and select iSCSI. If you are using FC you are recommended to unplug Fiber cable before installation and connect back before reboot. Do not initialize Block devices if you are using EMC Unified storage devices (Celerra NAS)<br />
<li>In order to run the Connectathon test suite you will need the “software development” package.<br />
<ul><br />
<li>Click on the "Software Development" button in the package install screen<br />
similar to [http://i1-news.softpedia-static.com/images/extra/LINUX/large/fedora10installguide-large_014.jpg this]<br />
<li>Its also advisable to added in the 'Fedora' repository on the same page.<br />
<li>Finally it's also a good idea to do a 'yum -y update' first thing after the <br />
install completes. This will ensure you have the most up to date bits available.<br />
</ul><br />
</ul><br />
<h2>Installing pNFS Enabled Fedora kernel</h2><br />
There are two way to install the pNFS enabled kernels. Either using<br />
the yum repository or directly downloading<br />
<br />
<b>Yum repository</b><br />
<br />
Download the [http://steved.fedorapeople.org/pnfs.repo pnfs.repo]<br />
into /etc/yum.repo.d. Use one of the following commands to installed the kernel <br />
of choice (including the dependencies):<br />
<br />
To install non-debug kernels:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs install kernel*<br />
<br />
To install debug enabled kernels:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs-debug install kernel*<br />
<br />
To install debuginfo rpms , which aid with debugging:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs-debug install kernel*<br />
<br />
Direct download - Download pNFS kernel rpms from <br />
http://steved.fedorapeople.org/repos/pnfs.<br />
You will need following two rpms for non-development use:<br />
<br />
kernel-[current_version].pnfs.fc12.x86_64.rpm<br />
kernel-firmware-[current_version].pnfs.fc12.noarch.rpm<br />
<br />
<p>With the direct downloads you will need periodicity check for updates (latest –build April 22).<br />
Using the yum repository doing a 'yum update' will check of updates.<br />
</p><br />
<h2>Load pNFS modules</h2><br />
Load the needed modules with the following commands:<br />
modprobe nfslayoutdriver<br />
modprobe blocklayoutdrive<br />
<br />
To verify pNFS modules are loaded correctly do the following:<br />
lsmod | grep nfslayout<br />
nfslayoutdriver 18423 0<br />
nfs 353047 3 blocklayoutdriver,nfslayoutdriver<br />
<h2> Mount Filesystem</h2><br />
Use the '-o minorversion=1' mount option when mounting the server,<br />
similar to:<br />
mount -t nfs4 -o minorversion=1 <server>:/export /mnt<br />
<br />
To Verify pNFS is up and working, grep for the word 'LAYOUT' in /proc/self/mountstat<br />
grep LAYOUT /proc/self/mountstat<br />
<h2>Generate Traffic</h2><br />
Generate some I/O using “dd” or run “Connectathon ”. You may download Connectathon from http://www.connectathon.org. All tests expected to pass without problem.</div>Stevedhttp://wiki.linux-nfs.org/wiki/index.php/Fedora_pNFS_Client_SetupFedora pNFS Client Setup2010-04-30T16:44:58Z<p>Steved: </p>
<hr />
<div><h2>Select Hardware</h2><br />
<p><br />
Select hardware that is capable of running 64-bit os and at a minimum of two GigE copper NIC ports. Connect all necessary network ports to VLAN. For iSCSI target access, it is recommended that you use separate subnets for the backbone and iSCSI.<br />
</p><br />
<h2>Installing Fedora</h2><br />
<p>The Install Guide at http://docs.fedoraproject.org/ describes<br />
a numerous ways in which to install Fedora. Choose the best method for you.</p><br />
<ul><br />
<li>Select local disk and take default disk partitions for /boot, /swap LVM etc<br />
<li>If your block device is accessed through iSCSI then click on “Advanced disk configuration” tab and select iSCSI. If you are using FC you are recommended to unplug Fiber cable before installation and connect back before reboot. Do not initialize Block devices if you are using EMC Unified storage devices (Celerra NAS)<br />
<li>In order to run the Connectathon test suite you will need the “software development” package.<br />
<ul><br />
<li>Click on the "Software Development" button in the package install screen<br />
similar to [http://i1-news.softpedia-static.com/images/extra/LINUX/large/fedora10installguide-large_014.jpg this]<br />
<li>Its also advisable to added in the 'Fedora' repository on the same page.<br />
<li>Finally it's also a good idea to do a 'yum -y update' first thing after the <br />
install completes. This will ensure you have the most up to date bits available.<br />
</ul><br />
</ul><br />
<h2>Installing pNFS Enabled Fedora kernel</h2><br />
There are two way to install the pNFS enabled kernels. Either using<br />
the yum repository or directly downloading<br />
<h3>Yum repository</h3><br />
Download the [http://steved.fedorapeople.org/pnfs.repo pnfs.repo]<br />
into /etc/yum.repo.d. Use one of the following commands to installed the kernel <br />
of choice (including the dependencies):<br />
<br />
To install non-debug kernels:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs install kernel*<br />
<br />
To install debug enabled kernels:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs-debug install kernel*<br />
<br />
To install debuginfo rpms , which aid with debugging:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs-debug install kernel*<br />
<br />
Direct download - Download pNFS kernel rpms from <br />
http://steved.fedorapeople.org/repos/pnfs.<br />
You will need following two rpms for non-development use:<br />
<br />
kernel-[current_version].pnfs.fc12.x86_64.rpm<br />
kernel-firmware-[current_version].pnfs.fc12.noarch.rpm<br />
<br />
<p>With the direct downloads you will need periodicity check for updates (latest –build April 22).<br />
Using the yum repository doing a 'yum update' will check of updates.<br />
</p><br />
<h2>Load pNFS modules</h2><br />
Load the needed modules with the following commands:<br />
modprobe nfslayoutdriver<br />
modprobe blocklayoutdrive<br />
<br />
To verify pNFS modules are loaded correctly do the following:<br />
lsmod | grep nfslayout<br />
nfslayoutdriver 18423 0<br />
nfs 353047 3 blocklayoutdriver,nfslayoutdriver<br />
<h2> Mount Filesystem</h2><br />
Use the '-o minorversion=1' mount option when mounting the server,<br />
similar to:<br />
mount -t nfs4 -o minorversion=1 <server>:/export /mnt<br />
<br />
To Verify pNFS is up and working, grep for the word 'LAYOUT' in /proc/self/mountstat<br />
grep LAYOUT /proc/self/mountstat<br />
<h2>Generate Traffic</h2><br />
Generate some I/O using “dd” or run “Connectathon ”. You may download Connectathon from http://www.connectathon.org. All tests expected to pass without problem.</div>Stevedhttp://wiki.linux-nfs.org/wiki/index.php/Fedora_pNFS_Client_SetupFedora pNFS Client Setup2010-04-30T16:42:15Z<p>Steved: </p>
<hr />
<div><h2>Select Hardware</h2><br />
<p><br />
Select hardware that is capable of running 64-bit os and at a minimum of two GigE copper NIC ports. Connect all necessary network ports to VLAN. For iSCSI target access, it is recommended that you use separate subnets for the backbone and iSCSI.<br />
</p><br />
<h2>Installing Fedora</h2><br />
<p>The Install Guide at http://docs.fedoraproject.org/ describes<br />
a numerous ways in which to install Fedora. Choose the best method for you.</p><br />
<ul><br />
<li>Select local disk and take default disk partitions for /boot, /swap LVM etc<br />
<li>If your block device is accessed through iSCSI then click on “Advanced disk configuration” tab and select iSCSI. If you are using FC you are recommended to unplug Fiber cable before installation and connect back before reboot. Do not initialize Block devices if you are using EMC Unified storage devices (Celerra NAS)<br />
<li>In order to run the Connectathon test suite you will need the “software development” package.<br />
<ul><br />
<li>Click on the "Software Development" button in the package install screen<br />
similar to [http://i1-news.softpedia-static.com/images/extra/LINUX/large/fedora10installguide-large_014.jpg this]<br />
<li>Its also advisable to added in the 'Fedora' repository on the same page.<br />
<li>Finally it's also a good idea to do a 'yum -y update' first thing after the <br />
install completes. This will ensure you have the most up to date bits available.<br />
</ul><br />
</ul><br />
<h2>Installing pNFS Enabled Fedora kernel</h2><br />
There are two way to install the pNFS enabled kernels. Either using<br />
the yum repository or directly downloading<br />
<br />
Yum repository - Download the [http://steved.fedorapeople.org/pnfs.repo pnfs.repo]<br />
into /etc/yum.repo.d (Note the pnfs repository is enabled by default). To install the <br />
the kernel (and its dependencies) do one of the following:<br />
To install non-debug kenrels:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs install kernel*<br />
<br />
To install debug enabled kernels:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs-debug install kernel*<br />
<br />
To install debuginfo rpms , which aid with debugging:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs-debug install kernel*<br />
<br />
Direct download - Download pNFS kernel rpms from <br />
http://steved.fedorapeople.org/repos/pnfs.<br />
You will need following two rpms for non-development use:<br />
<br />
kernel-[current_version].pnfs.fc12.x86_64.rpm<br />
kernel-firmware-[current_version].pnfs.fc12.noarch.rpm<br />
<br />
<p>With the direct downloads you will need periodicity check for updates (latest –build April 22).<br />
Using the yum repository doing a 'yum update' will check of updates.<br />
</p><br />
<h2>Load pNFS modules</h2><br />
Load the needed modules with the following commands:<br />
modprobe nfslayoutdriver<br />
modprobe blocklayoutdrive<br />
<br />
To verify pNFS modules are loaded correctly do the following:<br />
lsmod | grep nfslayout<br />
nfslayoutdriver 18423 0<br />
nfs 353047 3 blocklayoutdriver,nfslayoutdriver<br />
<h2> Mount Filesystem</h2><br />
Use the '-o minorversion=1' mount option when mounting the server,<br />
similar to:<br />
mount -t nfs4 -o minorversion=1 <server>:/export /mnt<br />
<br />
To Verify pNFS is up and working, grep for the word 'LAYOUT' in /proc/self/mountstat<br />
grep LAYOUT /proc/self/mountstat<br />
<h2>Generate Traffic</h2><br />
Generate some I/O using “dd” or run “Connectathon ”. You may download Connectathon from http://www.connectathon.org. All tests expected to pass without problem.</div>Stevedhttp://wiki.linux-nfs.org/wiki/index.php/Fedora_pNFS_Client_SetupFedora pNFS Client Setup2010-04-30T16:40:34Z<p>Steved: </p>
<hr />
<div><h2>Select Hardware</h2><br />
<p><br />
Select hardware that is capable of running 64-bit os and at a minimum of two GigE copper NIC ports. Connect all necessary network ports to VLAN. For iSCSI target access, it is recommended that you use separate subnets for the backbone and iSCSI.<br />
</p><br />
<h2>Installing Fedora</h2><br />
<p>The Install Guide at http://docs.fedoraproject.org/ describes<br />
a numerous ways in which to install Fedora. Choose the best method for you.</p><br />
<ul><br />
<li>Select local disk and take default disk partitions for /boot, /swap LVM etc<br />
<li>If your block device is accessed through iSCSI then click on “Advanced disk configuration” tab and select iSCSI. If you are using FC you are recommended to unplug Fiber cable before installation and connect back before reboot. Do not initialize Block devices if you are using EMC Unified storage devices (Celerra NAS)<br />
<li>In order to run the Connectathon test suite you will need the “software development” package.<br />
<ul><br />
<li>Click on the "Software Development" button in the package install screen<br />
similar to [http://i1-news.softpedia-static.com/images/extra/LINUX/large/fedora10installguide-large_014.jpg this]<br />
<li>Its also advisable to added in the 'Fedora' repository on the same page.<br />
<li>Finally it's also a good idea to do a 'yum -y update' first thing after the <br />
install completes. This will ensure you have the most up to date bits available.<br />
</ul><br />
</ul><br />
<h2>Installing pNFS Enabled Fedora kernel</h2><br />
There are two way to install the pNFS enabled kernels. Either using<br />
the yum repository or directly downloading<br />
<br />
Yum repository - Download the [http://steved.fedorapeople.org/pnfs.repo pnfs.repo]<br />
into /etc/yum.repo.d (Note the pnfs repository is enabled by default). To install the <br />
the kernel (and its dependencies) do one of the following:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs install kernel*<br />
<br />
or to install debug enabled kernels:<br />
<br />
yum --disablerepo="*" --enablerepo=pnfs-debug install kernel*<br />
<br />
Direct download - Download pNFS kernel rpms from <br />
http://steved.fedorapeople.org/repos/pnfs.<br />
You will need following two rpms for non-development use:<br />
<br />
kernel-[current_version].pnfs.fc12.x86_64.rpm<br />
kernel-firmware-[current_version].pnfs.fc12.noarch.rpm<br />
<br />
<p>With the direct downloads you will need periodicity check for updates (latest –build April 22).<br />
Using the yum repository doing a 'yum update' will check of updates.<br />
</p><br />
<h2>Load pNFS modules</h2><br />
Load the needed modules with the following commands:<br />
modprobe nfslayoutdriver<br />
modprobe blocklayoutdrive<br />
<br />
To verify pNFS modules are loaded correctly do the following:<br />
lsmod | grep nfslayout<br />
nfslayoutdriver 18423 0<br />
nfs 353047 3 blocklayoutdriver,nfslayoutdriver<br />
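The verification above can be scripted. Here is a small sketch that checks for a module name in lsmod-style output (the listing is a sample modeled on the output above; module sizes are illustrative, and in practice you would pass the real `lsmod` output):<br />

```shell
# Succeed when the named module appears in the first column of an
# lsmod-style listing.
check_loaded() {
    printf '%s\n' "$2" | awk -v m="$1" '$1 == m { found = 1 } END { exit !found }'
}

# Sample listing modeled on the lsmod output above (sizes illustrative).
sample='nfslayoutdriver 18423 0
blocklayoutdriver 17025 0
nfs 353047 3 blocklayoutdriver,nfslayoutdriver'

check_loaded nfslayoutdriver "$sample" && echo "nfslayoutdriver: loaded"
check_loaded blocklayoutdriver "$sample" && echo "blocklayoutdriver: loaded"
check_loaded nosuchmod "$sample" || echo "nosuchmod: not loaded"
```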
<h2> Mount Filesystem</h2><br />
Use the '-o minorversion=1' mount option when mounting the server,<br />
similar to:<br />
mount -t nfs4 -o minorversion=1 <server>:/export /mnt<br />
<br />
To verify pNFS is up and working, grep for the word 'LAYOUT' in /proc/self/mountstats<br />
grep LAYOUT /proc/self/mountstats<br />
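To get a feel for what the grep matches, here is a sketch that counts LAYOUT operations in mountstats-style text (the excerpt and its counters are illustrative, not real output from /proc/self/mountstats):<br />

```shell
# Count the lines naming a LAYOUT operation in mountstats-style text.
count_layout_ops() {
    printf '%s\n' "$1" | grep -c 'LAYOUT'
}

# Illustrative excerpt; the real per-mount counters appear in
# /proc/self/mountstats once pNFS traffic has been generated.
excerpt='LAYOUTGET: 12 12 0 1488 1056 0 5 5
LAYOUTCOMMIT: 3 3 0 624 312 0 2 2
LAYOUTRETURN: 1 1 0 164 92 0 0 0'

count_layout_ops "$excerpt"
```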
<h2>Generate Traffic</h2><br />
Generate some I/O using “dd” or run “Connectathon”. You may download Connectathon from http://www.connectathon.org. All tests are expected to pass without problems.</div>Stevedhttp://wiki.linux-nfs.org/wiki/index.php/Fedora_pNFS_Client_SetupFedora pNFS Client Setup2010-04-30T16:32:23Z<p>Steved: </p>
http://wiki.linux-nfs.org/wiki/index.php/NewMountDesignSpecNewMountDesignSpec2007-08-29T14:35:30Z<p>Steved: /* Mount system call testing */</p>
<hr />
<div>== Introduction ==<br />
<br />
This wiki page is a working design specification for the new text-based NFS mount API. Here we discuss use cases, requirement statements, error reporting, and design specifications, in addition to minute behavioral details of mounting NFS shares. The purpose of this discussion is to understand how to implement the new interface, and to construct a unit test plan for both the legacy user-space mount command and the new in-kernel mount client.<br />
<br />
== Requirements ==<br />
<br />
There are several broad requirements for the new text-based NFS mount API.<br />
<br />
# Scalability - Allow for thousands of NFS mount points, and a large number of simultaneous mount operations<br />
# No user-space dependency on a versioned binary blob for passing NFS mount options to the kernel<br />
# Support version fallback - If NFS version 4 is not supported, fall back to version 3; if version 3 is not supported, fall back to version 2<br />
## NFSv4 mounts will ignore legacy options in order to make fallback work<br />
# Support transport protocol fallback - If TCP is not supported, fall back to UDP<br />
# Provide reasonable default behavior in the presence of network firewalls and misconfigured servers<br />
# Facilitate new features - IPv6, RDMA, FS cache should be easy to introduce<br />
# Better error reporting - Report and log useful, relevant, clear error messages when a failure has occurred; prepare for i18n<br />
# Update and clarify NFS mount documentation<br />
<br />
== Use Cases ==<br />
<br />
To mount a remote share using NFS version 2, use the '''nfs''' file system type and specify the ''nfsvers=2'' mount option. To mount using NFS version 3, use the '''nfs''' file system type and specify the ''nfsvers=3'' mount option. To mount using NFS version 4, use the '''nfs4''' file system type (the ''nfsvers'' mount option is not supported for the '''nfs4''' file system type).<br />
<br />
Here is an example from an ''/etc/fstab'' file for an NFS version 3 mount over TCP.<br />
<br />
server:/export/share /mnt nfs nfsvers=3,proto=tcp<br />
<br />
Here is an example for an NFS version 4 mount over TCP using Kerberos 5 mutual authentication.<br />
<br />
server:/export/share /mnt nfs4 sec=krb5<br />
<br />
== Design Specifications ==<br />
<br />
Obviously the discussion of NFSv2/v3 mounting will be significantly more complicated than NFSv4 mounting.<br />
<br />
=== Mounting NFS version 2 and version 3 shares ===<br />
<br />
=== Mounting NFS version 4 shares ===<br />
<br />
== Return Codes and Error Reporting ==<br />
<br />
Currently mount's error messages are very problematic.<br />
<br />
# Some error messages are incorrect.<br />
# Some error messages are repeated.<br />
# Some errors are never reported.<br />
# Some error messages are too specific to be useful to an average administrator. For example, reporting an "RPC program/version mismatch occurred" is not helpful if the real problem is that "proto=udp" is not supported.<br />
# Some error messages are too general to be useful. For example, reporting "mount.nfs: not a directory" is obviously an errno string, but more specific information would provide a course of corrective action.<br />
<br />
Perhaps a clear error message can be reported to the command line, and a lot of detail should be reported in the system log? That's easy enough with in-kernel mount option parsing!<br />
<br />
=== mount(2) API return codes ===<br />
<br />
The mount.nfs program needs to distinguish between temporary problems and permanent errors in order to determine whether it's worth retrying a mount request in the background.<br />
<br />
For text-based NFS mounts, the version/protocol fallback mechanism should occur in user space -- certainly fallback policy is easier to set and implement in user space, but the kernel must provide specific information about how a mount request failed so that user space can make an appropriate choice about the next step to try.<br />
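As an illustration of that user-space policy, the fallback loop might be shaped like this (try_mount is a placeholder for the real mount(2) call, and the exact nesting of version and transport fallback here is an assumption):<br />

```shell
# Placeholder for the kernel mount attempt; a real implementation would
# issue mount(2) and inspect the specific error returned.
try_mount() {
    # Stub: pretend this server only supports NFSv3 over UDP.
    [ "$1" = 3 ] && [ "$2" = udp ]
}

# Try versions newest-first, and TCP before UDP within each version.
negotiate() {
    for vers in 4 3 2; do
        for proto in tcp udp; do
            if try_mount "$vers" "$proto"; then
                echo "mounted with nfsvers=$vers,proto=$proto"
                return 0
            fi
        done
    done
    echo "mount failed: no supported version/transport" >&2
    return 1
}

negotiate
```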
<br />
The current '''mount'''(2) API is described in a man page. The man page describes a set of generic error return codes, which we excerpt here. It also suggests that we can add specific error codes for NFS mounts.<br />
<br />
<pre><br />
RETURN VALUE<br />
On success, zero is returned. On error, -1 is returned, and errno is<br />
set appropriately.<br />
<br />
ERRORS<br />
The error values given below result from filesystem type independent<br />
errors. Each filesystem type may have its own special errors and its<br />
own special behavior. See the kernel source code for details.<br />
<br />
EACCES A component of a path was not searchable. (See also path_resolu-<br />
tion(2).) Or, mounting a read-only filesystem was attempted<br />
without giving the MS_RDONLY flag. Or, the block device source<br />
is located on a filesystem mounted with the MS_NODEV option.<br />
<br />
EAGAIN A call to umount2() specifying MNT_EXPIRE successfully marked an<br />
unbusy file system as expired.<br />
<br />
EBUSY source is already mounted. Or, it cannot be remounted read-only,<br />
because it still holds files open for writing. Or, it cannot be<br />
mounted on target because target is still busy (it is the work-<br />
ing directory of some task, the mount point of another device,<br />
has open files, etc.). Or, it could not be unmounted because it<br />
is busy.<br />
<br />
EFAULT One of the pointer arguments points outside the user address<br />
space.<br />
<br />
EINVAL source had an invalid superblock. Or, a remount (MS_REMOUNT)<br />
was attempted, but source was not already mounted on target.<br />
Or, a move (MS_MOVE) was attempted, but source was not a mount<br />
point, or was ’/’. Or, an unmount was attempted, but target was<br />
not a mount point. Or, umount2() was called with MNT_EXPIRE and<br />
either MNT_DETACH or MNT_FORCE.<br />
<br />
ELOOP Too many links encountered during pathname resolution. Or, a<br />
move was attempted, while target is a descendant of source.<br />
<br />
EMFILE (In case no block device is required:) Table of dummy devices is<br />
full.<br />
<br />
ENAMETOOLONG<br />
A pathname was longer than MAXPATHLEN.<br />
<br />
ENODEV filesystemtype not configured in the kernel.<br />
<br />
ENOENT A pathname was empty or had a nonexistent component.<br />
<br />
ENOMEM The kernel could not allocate a free page to copy filenames or<br />
data into.<br />
<br />
ENOTBLK<br />
source is not a block device (and a device was required).<br />
<br />
ENOTDIR<br />
The second argument, or a prefix of the first argument, is not a<br />
directory.<br />
<br />
ENXIO The major number of the block device source is out of range.<br />
<br />
EPERM The caller does not have the required privileges.<br />
</pre><br />
<br />
In the following table, we discuss how each of these error values is used.<br />
<br />
{|<br />
|-<br />
|valign="top"|'''EACCES'''<br />
|A component of a path was not searchable. (See also '''path_resolution'''(2).) Or, mounting a read-only filesystem was attempted without giving the MS_RDONLY flag. Or, the block device source is located on a filesystem mounted with the MS_NODEV option.<br />
|-<br />
|valign="top"|'''EAGAIN'''<br />
|A call to umount2() specifying MNT_EXPIRE successfully marked an unbusy file system as expired.<br />
|-<br />
|valign="top"|'''EBUSY'''<br />
|source is already mounted. Or, it cannot be remounted read-only, because it still holds files open for writing. Or, it cannot be mounted on target because target is still busy (it is the working directory of some task, the mount point of another device, has open files, etc.). Or, it could not be unmounted because it is busy.<br />
|-<br />
|valign="top"|'''EFAULT'''<br />
| One of the pointer arguments points outside the user address space.<br />
|-<br />
|valign="top"|'''EINVAL'''<br />
|source had an invalid superblock. Or, a remount (MS_REMOUNT) was attempted, but source was not already mounted on target. Or, a move (MS_MOVE) was attempted, but source was not a mount point, or was ’/’. Or, an unmount was attempted, but target was not a mount point. Or, umount2() was called with MNT_EXPIRE and either MNT_DETACH or MNT_FORCE.<br />
Note that NFS uses this error return code to signal bad mount options: The mount option string was not able to be parsed, or an unrecognized option was specified, or a keyword option was specified with a value that is out of range. This appears to be a precedent set by OCFS2 and CIFS.<br />
|-<br />
|valign="top"|'''ELOOP'''<br />
|Too many links encountered during pathname resolution. Or, a move was attempted, while target is a descendant of source.<br />
|-<br />
|valign="top"|'''EMFILE'''<br />
|(In case no block device is required:) Table of dummy devices is full.<br />
|-<br />
|valign="top"|'''ENAMETOOLONG'''<br />
|A pathname was longer than MAXPATHLEN.<br />
|-<br />
|valign="top"|'''ENODEV'''<br />
|filesystemtype not configured in the kernel.<br />
|-<br />
|valign="top"|'''ENOENT'''<br />
|A pathname was empty or had a nonexistent component.<br />
|-<br />
|valign="top"|'''ENOMEM'''<br />
|The kernel could not allocate a free page to copy filenames or data into.<br />
|-<br />
|valign="top"|'''ENOTBLK'''<br />
|source is not a block device (and a device was required).<br />
|-<br />
|valign="top"|'''ENOTDIR'''<br />
|The second argument, or a prefix of the first argument, is not a directory.<br />
|-<br />
|valign="top"|'''ENXIO'''<br />
|The major number of the block device source is out of range.<br />
|-<br />
|valign="top"|'''EPERM'''<br />
|The caller does not have the required privileges.<br />
|}<br />
<br />
Here are some additional return codes I recommend for NFS mounts, just as a start. These should allow a calling program to report a reasonably specific error message, and decide whether and how to retry the request.<br />
<br />
<pre><br />
EBADF The mount option string was not able to be parsed, or an unre-<br />
cognized option was specified, or a keyword option was specified<br />
with a value that is out of range.<br />
</pre><br />
<br />
This is a permanent mount error. The calling program should not retry this request with the same options.<br />
<br />
<pre><br />
ESTALE The server denied access to the requested share.<br />
<br />
ETIMEDOUT<br />
The kernel's mount attempt timed out after n seconds (I think n<br />
is 15).<br />
</pre><br />
<br />
These are temporary errors. The calling program may choose to retry this request using the same options, or fail immediately.<br />
<br />
<pre><br />
EIO An unknown error occurred while attempting the mount request.<br />
<br />
EPROTONOSUPPORT<br />
The server reports that the program, version, or transport pro-<br />
tocol is not currently available.<br />
<br />
ECONNREFUSED<br />
The kernel's mount connection attempt was refused by the server<br />
at the network transport layer.<br />
</pre><br />
<br />
These are temporary errors. The calling program can attempt to recover by adjusting the options and retrying the request.<br />
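Taken together, the proposed codes suggest a simple classification step in the calling program. Here is a sketch in shell (a real mount.nfs would test errno in C after mount(2) returns; the policy strings are just labels):<br />

```shell
# Map a proposed mount(2) errno name to the retry policy described above.
classify_mount_error() {
    case "$1" in
        EBADF)
            echo "permanent: do not retry with the same options" ;;
        ESTALE|ETIMEDOUT)
            echo "temporary: may retry with the same options, or fail" ;;
        EIO|EPROTONOSUPPORT|ECONNREFUSED)
            echo "temporary: adjust the options and retry" ;;
        *)
            echo "unclassified: report the error and fail" ;;
    esac
}

classify_mount_error ECONNREFUSED
classify_mount_error EBADF
```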
<br />
== Test Planning ==<br />
<br />
Each section below will provide an abbreviated description of a unit test plan for that mount option. Our goal is to construct an automated test harness that can run all of these unit tests at once, acting either as a check-in test or as a final release test. We'd like something similar to the t/ directory in the git-core distribution.<br />
<br />
=== Mount system call testing ===<br />
<br />
We can begin with some simple tests to make sure the mount system call API, as implemented by the NFS client, is working. The obvious stuff:<br />
<br />
# Testing first parameter sanity checking:<br />
## Called with first parameter set to NULL<br />
## Called with no ":" in the first parameter string<br />
## Called with first parameter set to a very long string<br />
## Called with first parameter pointing to unallocated storage<br />
# Testing second parameter sanity checking:<br />
## Called with second parameter set to NULL<br />
## Called with second parameter set to a very long string<br />
## Called with second parameter pointing to a path with too many symlinks<br />
## Called with second parameter pointing to unallocated storage<br />
# Testing option string sanity checking:<br />
## Called with option string set to NULL<br />
## Called with option string set to a very long string<br />
## Called with option string pointing to unallocated storage<br />
# Testing security checking<br />
## Called by root<br />
## Called by a normal user<br />
# Test protocol fallback<br />
## Using iptables on a Linux server, block all TCP traffic to see if the mount falls back to UDP.<br />
<br />
== Discussion of Individual NFS Mount Options ==<br />
<br />
There are four classes of mount options for '''nfs''' and '''nfs4''' file systems. '''Fix this:''' All four classes of options are specified as normal NFS mount options because there is only one way to specify mount options in the ''/etc/fstab'' file.<br />
<br />
# There are generic mount options available to all Linux file systems, such as "ro" or "sync". See '''mount'''(8) for a description of generic mount options available for all file systems.<br />
# Some mount options can determine how the mount command behaves, such as "mountport" or "retry". These options have no effect after the mount operation has completed, but might be used to mount an NFS share through a network firewall.<br />
# Some mount options determine how the NFS client behaves during normal operation, such as "rsize" and "wsize". These may be used to tune performance, or change the client's caching or file locking behavior.<br />
# Mount options such as ''timeo='' or ''retrans='' can control aspects of Remote Procedure Call behavior. NFS clients send requests to NFS servers via Remote Procedure Calls, or RPCs. RPCs handle per-request authentication, adjust request parameters for different byte endianness on client and server, and retransmit requests that may have been lost by the network or server.<br />
<br />
Note that some options take the form of ''keyword=value'' while some options are boolean, taking either the form of ''keyword'' or ''nokeyword''. All options which do not use the ''keyword=value'' form use the boolean form, except for '''hard | soft''', '''udp | tcp''', and '''fg | bg'''.<br />
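That keyword=value / keyword / nokeyword split can be sketched as a tiny parser (illustrative only: it does not model the '''hard | soft''', '''udp | tcp''', and '''fg | bg''' exceptions noted above, and the real parsing happens in mount.nfs or the kernel):<br />

```shell
# Classify each comma-separated mount option as key=value or boolean.
parse_mount_opts() {
    echo "$1" | tr ',' '\n' | while read -r opt; do
        case "$opt" in
            *=*)  echo "value option: ${opt%%=*} = ${opt#*=}" ;;
            no?*) echo "boolean option: ${opt#no} (negated)" ;;
            ?*)   echo "boolean option: $opt (set)" ;;
        esac
    done
}

parse_mount_opts 'nfsvers=3,proto=tcp,nosharecache,hard'
```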
<br />
'''To Do'''<br />
<br />
* Format this section<br />
* Add status information about each option<br />
** Tested (legacy / text-based)<br />
** Works, does not work as documented (legacy / text-based)<br />
** Implementation/fix priority<br />
** Details about how it works and/or how it should work<br />
<br />
=== Valid options for either the nfs or nfs4 file system type ===<br />
<br />
==== soft | hard ====<br />
<br />
;Description<br />
:Determines the recovery behavior of the RPC client after an RPC request times out. If neither option is specified, or if the ''hard'' option is specified, the RPC is retried indefinitely. If the ''soft'' option is specified, then the RPC client fails the RPC request after a major timeout occurs, and causes the NFS client to return an error to the calling application.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== timeo=''n'' ====<br />
<br />
;Description<br />
:The value, in tenths of a second, before timing out an RPC request. The default value is 600 (60 seconds) for NFS over TCP. On a UDP transport, the Linux RPC client uses an adaptive algorithm to estimate the time out value for frequently used request types such as READ and WRITE, and uses the ''timeo='' setting for infrequently used requests such as FSINFO. The ''timeo='' value defaults to 7 tenths of a second for NFS over UDP. After each timeout, the RPC client may retransmit the timed out request, or it may take some other action depending on the settings of the ''hard'' or ''retrans='' options.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== retrans=''n'' ====<br />
<br />
;Description<br />
:The number of RPC timeouts that must occur before a major timeout occurs. The default is 3 timeouts. If the file system is mounted with the ''hard'' option, the RPC client will generate a "server not responding" message after a major timeout, then continue to retransmit the<br />
request. If the file system is mounted with the ''soft'' option, the RPC client will abandon the request after a major timeout, and cause NFS to return an error to the application.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== rsize=''n'' ====<br />
<br />
;Description<br />
:The maximum number of bytes in each network READ request that the NFS client can use when reading data from a file on an NFS server; the actual data payload size of each NFS READ request is equal to or smaller than the ''rsize'' value. The ''rsize'' value is a positive integral multiple of 1024, and the largest value supported by the Linux NFS client is 1,048,576 bytes. Specified values outside of this range are rounded down to the closest multiple of 1024, and specified values smaller than 1024 are replaced with a default of 4096. If an ''rsize'' value is not specified, or if a value is specified but is larger than the maximums either the client or server support, the client and server negotiate the largest ''rsize'' value that both will support. The ''rsize'' option as specified on the '''mount'''(8) command line appears in the ''/etc/mtab'' file, but the effective ''rsize'' value negotiated by the client and server is reported in the ''/proc/mounts'' file.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
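The rounding rule in the ''rsize'' description can be sketched as follows (assuming "rounded down to the closest multiple of 1024" and the 4096 substitution behave exactly as stated; server negotiation, which further caps the value, is not modeled):<br />

```shell
# Normalize an rsize value per the description above: values under 1024
# become the 4096 default; everything else is rounded down to a
# multiple of 1024.
normalize_rsize() {
    if [ "$1" -lt 1024 ]; then
        echo 4096
    else
        echo $(( $1 / 1024 * 1024 ))
    fi
}

normalize_rsize 500      # under 1024, replaced with the default
normalize_rsize 9000     # rounded down to 8192
normalize_rsize 32768    # already a multiple of 1024, unchanged
```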
<br />
==== wsize=''n'' ====<br />
<br />
;Description<br />
:The maximum number of bytes per network WRITE request that the NFS client can use when writing data to a file on an NFS server. See the description of the ''rsize'' option for more details.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== acregmin=''n'' ====<br />
<br />
;Description<br />
:The minimum time in seconds that the NFS client caches attributes of a regular file before it requests fresh attribute information from a server. The default is 3 seconds.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== acregmax=''n'' ====<br />
<br />
;Description<br />
:The maximum time in seconds that the NFS client caches attributes of a regular file before it requests fresh attribute information from a server. The default is 60 seconds.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== acdirmin=''n'' ====<br />
<br />
;Description<br />
:The minimum time in seconds that the NFS client caches attributes of a directory before it requests fresh attribute information from a server. The default is 30 seconds.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== acdirmax=''n'' ====<br />
<br />
;Description<br />
:The maximum time in seconds that the NFS client caches attributes of a directory before it requests fresh attribute information from a server. The default is 60 seconds.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== actimeo=''n'' ====<br />
<br />
;Description<br />
:Using actimeo sets all of ''acregmin'', ''acregmax'', ''acdirmin'', and ''acdirmax'' to the same value. There is no default value.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== bg | fg ====<br />
<br />
;Description<br />
:This mount option determines how the '''mount'''(8) command behaves if an attempt to mount a remote share fails. The ''fg'' option causes '''mount'''(8) to exit with an error status if any part of the mount request times out or fails outright. This is called a "foreground" mount, and is the default behavior if neither ''fg'' nor ''bg'' is specified. If the ''bg'' option is specified, a timeout or failure causes the '''mount'''(8) command to fork a child which continues to attempt to mount the remote share. The parent immediately returns with a zero exit code. This is known as a "background" mount. If the local mount point directory is missing, the '''mount'''(8) command treats that as if the mount request timed out. This permits nested NFS mounts.<br />
<br />
;Implementation priority<br />
:Questionable. There is some debate about whether users are still using this option, or are using autofs instead.<br />
<br />
;Implementation<br />
:The mount.nfs command must distinguish between permanent mount errors (such as a bad mount option) which prevent the mount request as specified from ever being valid, and temporary errors (such as an unreachable server) which might allow the mount request as specified to complete at some future point. See the discussion of mount(2) return codes for more detail.<br />
<br />
;Test plan (fg - v2/v3)<br />
# Remove the local mount point, then attempt an NFS mount with the "fg" option set. The mount should fail with (what error code and what error message?).<br />
# Shut down the NFS server (service nfs stop), then attempt an NFS mount with the "fg" option set. The mount should fail with (what error code and what error message?).<br />
# Block the NFS server ports on the server with iptables, then attempt an NFS mount with the "fg" option set. The mount should fail with (what error code and what error message?).<br />
# Block the mountd server ports on the server with iptables, then attempt an NFS mount with the "fg" option set. The mount should fail with (what error code and what error message?).<br />
# Block the rpcbind server ports on the server with iptables, then attempt an NFS mount with the "fg" option set. The mount should fail with (what error code and what error message?).<br />
<br />
;Test plan (bg - v2/v3)<br />
# Remove the local mount point, then attempt an NFS mount with the "bg" option set. The mount should succeed once the mount point has been recreated.<br />
# Shut down the NFS server (service nfs stop), then attempt an NFS mount with the "bg" option set. The mount should succeed once the NFS server has been restarted.<br />
# Block the NFS server ports on the server with iptables, then attempt an NFS mount with the "bg" option set. The mount should succeed once the ports are unblocked.<br />
# Block the mountd server ports on the server with iptables, then attempt an NFS mount with the "bg" option set. The mount should succeed once the ports are unblocked.<br />
# Block the rpcbind server ports on the server with iptables, then attempt an NFS mount with the "bg" option set. The mount should succeed once the ports are unblocked.<br />
<br />
;Testing status<br />
* Tested with legacy mount.nfs; works for v2/v3, not for v4<br />
* Tested with text-based mount.nfs; does not work for any version<br />
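The fork-and-return behavior described for ''bg'' can be sketched generically (the retry count and interval here are illustrative placeholders, not the ''retry='' defaults, and the command argument stands in for the actual mount attempt):<br />

```shell
# bg-style semantics: the parent returns 0 immediately while a
# backgrounded child keeps retrying the given command.
bg_retry() {
    (
        attempts=0
        until "$@"; do
            attempts=$((attempts + 1))
            [ "$attempts" -ge 3 ] && exit 1
            sleep 1
        done
    ) &
    return 0    # parent reports success at once, as the description requires
}

bg_retry true && echo "parent returned immediately"
wait    # collect the background child before exiting
```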
<br />
==== retry=''n'' ====<br />
<br />
;Description<br />
:The number of minutes to retry an NFS mount operation in the foreground or background before giving up. The default value for foreground mounts is 2 minutes. The default value for background mounts is 10000 minutes, which is roughly one week.<br />
<br />
;Implementation<br />
:The ten thousand minute default might be too long. Perhaps foreground mounts should also use a much shorter default.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
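The retry deadline arithmetic described above can be sketched as follows (the helper name and structure are illustrative only, not the actual mount.nfs code):<br />

```python
# Illustrative sketch of how a mount client might compute the retry
# deadline from the "retry=" option; not the actual mount.nfs source.

def retry_deadline(start_seconds, background, retry_minutes=None):
    """Return the time (in seconds) after which mount.nfs gives up.

    Defaults follow the documented behavior: 2 minutes for foreground
    mounts, 10000 minutes (roughly one week) for background mounts.
    """
    if retry_minutes is None:
        retry_minutes = 10000 if background else 2
    return start_seconds + retry_minutes * 60
```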
<br />
==== sec=''mode'' ====<br />
<br />
;Description<br />
:The RPCGSS security flavor to use for accessing files on this mount point. If the ''sec='' option is not specified, or if ''sec=sys'' is specified, the RPC client uses the AUTH_SYS security flavor for all RPC operations on this mount point. Valid security flavors are '''none''', '''sys''', '''krb5''', '''krb5i''', '''krb5p''', '''lkey''', '''lkeyi''', '''lkeyp''', '''spkm''', '''spkmi''', and '''spkmp'''. See the SECURITY CONSIDERATIONS section for details.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
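A minimal validator for the ''sec='' value might look like the sketch below (a hypothetical helper; the real parsing lives in mount.nfs and the kernel):<br />

```python
# Hypothetical validation of the sec= mount option against the
# security flavors listed above; not the actual mount.nfs parser.

VALID_FLAVORS = {
    "none", "sys", "krb5", "krb5i", "krb5p",
    "lkey", "lkeyi", "lkeyp", "spkm", "spkmi", "spkmp",
}

def parse_sec(value=None):
    """Return the effective flavor: AUTH_SYS ("sys") when unspecified."""
    if value is None:
        return "sys"
    if value not in VALID_FLAVORS:
        raise ValueError("unsupported security flavor: %s" % value)
    return value
```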
<br />
==== sharecache ====<br />
<br />
;Description<br />
:Determines how the client's data cache is shared between mount points that mount the same remote share. If the option is not specified, or the ''sharecache'' option is specified, then all mounts of the same remote share on a client use the same data cache. If the ''nosharecache'' option is specified, then files under that mount point are cached separately from files under other mount points that may be accessing the same remote share. As of kernel 2.6.18, the behavior specified by ''nosharecache'' is legacy caching behavior, and is considered a data risk since two cached copies of the same file on the same client can become out of sync following an update of one of the copies.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
=== Valid options for the nfs file system type ===<br />
<br />
==== proto=''netid'' ====<br />
<br />
;Description<br />
:The transport protocol used by the RPC client to transmit requests to the NFS server for this mount point. The value of ''netid'' can be either '''udp''' or '''tcp'''. Each transport protocol uses different default ''retrans'' and ''timeo'' settings; see the description of these two mount options for details.<br />
:'''NB:''' This mount option controls both how the '''mount'''(8) command communicates with the portmapper and the MNT and NFS server, and what transport protocol the in-kernel NFS client uses to transmit requests to the NFS server. Specifying ''proto=tcp'' forces all traffic from the mount command and the NFS client to use TCP. Specifying ''proto=udp'' forces all traffic types to use UDP. If the ''proto='' mount option is not specified, the '''mount'''(8) command chooses the best transport for each type of request (GETPORT, MNT, and NFS), and by default the in-kernel NFS client uses the TCP protocol. If the server doesn't support one or the other protocol, the '''mount'''(8) command attempts to discover which protocol is supported and use that.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== port=''n'' ====<br />
<br />
;Description<br />
:The numeric value of the port used by the remote NFS service. If the ''port='' option is not specified, or if the specified port value is 0, then the NFS client uses the NFS service port provided by the remote portmapper service. If any other value is specified, then the NFS client uses that value as the destination port when connecting to the remote NFS service. If the remote host's NFS service is not registered with its portmapper, or if the NFS service is not available on the specified port, the mount fails.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== namlen=''n'' ====<br />
<br />
;Description<br />
:When an NFS server does not support version two of the RPC mount protocol, this option can be used to specify the maximum length of a filename that is supported on the remote filesystem. This is used to support the POSIX pathconf functions. The default is 255 characters.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
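The POSIX pathconf interface that ''namlen'' feeds can be exercised directly from user space; on most local Linux filesystems the query below reports 255:<br />

```python
import os

# Query the maximum filename length for a directory -- the same limit
# that the namlen= mount option supplies for an NFS mount.  POSIX
# guarantees at least 14; most local Linux filesystems report 255.
name_max = os.pathconf(".", "PC_NAME_MAX")
print(name_max)
```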
<br />
==== mountport=''n'' ====<br />
<br />
;Description<br />
:The numeric value of the '''mountd''' port.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== mounthost=''name'' ====<br />
<br />
;Description<br />
:The name of the host running '''mountd'''.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== mountprog=''n'' ====<br />
<br />
;Description<br />
:Use an alternate RPC program number to contact the mount daemon on the remote host. This option is useful for hosts that can run multiple NFS servers. The default value is 100005 which is the standard RPC mount daemon program number.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== mountvers=''n'' ====<br />
<br />
;Description<br />
:Use an alternate RPC version number to contact the mount daemon on the remote host. This option is useful for hosts that can run multiple NFS servers. The default value depends on which kernel you are using.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== nfsprog=''n'' ====<br />
<br />
;Description<br />
:Use an alternate RPC program number to contact the NFS daemon on the remote host. This option is useful for hosts that can run multiple NFS servers. The default value is 100003 which is the standard RPC NFS daemon program number.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== nfsvers=''n'' ====<br />
<br />
;Description<br />
:Use an alternate RPC version number to contact the NFS daemon on the remote host. This option is useful for hosts that can run multiple NFS servers. The default value depends on which kernel you are using.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== vers=''n'' ====<br />
<br />
;Description<br />
:''vers'' is an alternative to nfsvers and is compatible with many other operating systems.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== nolock ====<br />
<br />
;Description<br />
:Disable NFS locking. Do not start lockd. This is appropriate for mounting the root filesystem or '''/usr''' or '''/var'''. These filesystems are typically either read-only or not shared, and in those cases, remote locking is not needed. This also needs to be used with some old NFS servers that don't support locking.<br />
<br />
:Note that applications can still get locks on files, but the locks only provide exclusion locally. Other clients mounting the same filesystem will not be able to detect the locks.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== intr ====<br />
<br />
;Description<br />
:If an NFS file operation has a major timeout and it is hard mounted, then allow signals to interrupt the file operation and cause it to return EINTR to the calling program. The default is to not allow file operations to be interrupted.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== posix ====<br />
<br />
;Description<br />
:Mount the NFS filesystem using POSIX semantics. This allows an NFS filesystem to properly support the POSIX pathconf command by querying the mount server for the maximum length of a filename. To do this, the remote host must support version two of the RPC mount protocol. Many NFS servers support only version one.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== nocto ====<br />
<br />
;Description<br />
:Suppress the retrieval of new attributes when creating a file.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== noac ====<br />
<br />
;Description<br />
:Disable all forms of attribute caching entirely. This exacts a significant performance penalty but it allows two different NFS clients to get reasonable results when both clients are actively writing to a common export on the server.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== noacl ====<br />
<br />
;Description<br />
:Disables Access Control List (ACL) processing.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== nordirplus ====<br />
<br />
;Description<br />
:Disables NFSv3 READDIRPLUS RPCs. Use this option when mounting servers that don't support or have broken READDIRPLUS implementations.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
=== Valid options for the nfs4 file system type ===<br />
<br />
==== proto=''netid'' ====<br />
<br />
;Description<br />
:The transport protocol used by the RPC client to transmit requests to the NFS server. The value of ''netid'' can be either '''udp''' or '''tcp'''. All NFS version 4 servers are required to support TCP, so the default transport protocol for NFS version 4 is TCP.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== port=''n'' ====<br />
<br />
;Description<br />
:The numeric value of the port used by the remote NFS service. If the ''port='' option is not specified, the NFS client uses the standard NFS port number of 2049 without checking the remote portmapper service. If the specified port value is 0, then the NFS client uses the NFS service port provided by the remote portmapper service. If any other value is specified, then the NFS client uses that value as the destination port when connecting to the remote NFS service. If the remote host's NFS service is not registered with its portmapper, or if the NFS service is not available on the specified port, the mount fails.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== clientaddr=''n'' ====<br />
<br />
;Description<br />
:Causes the client to advertise a specific callback address when communicating with an NFS version 4 server. This mount option can be used to configure an NFSv4 server to call back a client through a NAT router. If no ''clientaddr='' option is specified, the '''mount.nfs''' command chooses an appropriate default based on the network route between client and server.<br />
<br />
;Implementation priority<br />
:High<br />
<br />
;Implementation<br />
:The client address option must discover the local address the server will use to contact the client. On multi-homed hosts, the client's local address depends on which NIC is used to route requests to the server. The address is set automatically by the user-space mount command if the admin doesn't provide one.<br />
<br />
;Test plan<br />
# Specify no mount options, and check that the kernel is getting a valid clientaddr= option from the mount.nfs command (using rpcdebug).<br />
# Specify clientaddr=garbage, and check that the client's kernel and user-space mount.nfs command properly reject it.<br />
# Specify ''clientaddr='' with a valid address, and check that the client's kernel gets the same address.<br />
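The "appropriate default" can be discovered with an ordinary connected UDP socket, which asks the routing layer which local address would be used to reach the server (a sketch of the idea, not the actual mount.nfs code):<br />

```python
import socket

def default_callback_addr(server_ip, port=2049):
    """Ask the kernel routing table which local address would be used
    to reach server_ip; connecting a UDP socket sends no packets."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((server_ip, port))
        return s.getsockname()[0]
    finally:
        s.close()
```

On a multi-homed host this naturally returns a different local address depending on which NIC routes to the given server.<br />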
<br />
;Testing status<br />
* Not tested with the legacy mount.nfs command<br />
* Partially tested with the text-based mount.nfs command<br />
<br />
==== intr ====<br />
<br />
;Description<br />
:If an NFS file operation has a major timeout and it is hard mounted, then allow signals to interrupt the file operation and cause it to return EINTR to the calling program. The default is to not allow file operations to be interrupted.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== nocto ====<br />
<br />
;Description<br />
:Suppress the retrieval of new attributes when creating a file.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== noac ====<br />
<br />
;Description<br />
:Disable attribute caching, and force synchronous writes. This exacts a server performance penalty, but it allows two different NFS clients to get reasonably good results when both clients are actively writing to a common filesystem on the server.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
== Security Considerations ==<br />
<br />
NFS provides access control for data, but depends on its RPC implementation to provide authentication of NFS requests. Traditional NFS access control mimics the standard mode bit access control provided in local file systems. Traditional RPC authentication uses a number to represent each user (usually the user's own uid), a number to represent the user's group (the user's gid), and a set of up to 16 auxiliary group numbers to represent other groups of which the user may be a member. File data and user ID values appear in the clear on the network.<br />
<br />
Moreover, NFS versions 2 and 3 use separate protocols for mounting, for locking and unlocking files, and for reporting system status of clients and servers. These auxiliary protocols use no authentication.<br />
<br />
In addition to combining all the auxiliary protocols into a single protocol, NFS version 4 introduces more advanced forms of access control, authentication, and in-transit data protection. Linux also implements the NFSv3 access control list protocol that originated in Solaris but was never standardized, and allows the use of advanced authentication modes for NFS version 2 and version 3 mounts.<br />
<br />
The NFS version 4 specification mandates NFSv4 ACLs, RPCGSS authentication, and RPCGSS security flavors that provide per-RPC integrity checking and encryption, and it applies to all NFS version 4 operations including mounting, file locking, and so on. Note that Linux does not yet implement security mode negotiation between NFS version 4 clients and servers.<br />
<br />
The ''sec='' mount option selects the RPCGSS security mode that is in effect on a given NFS mount point. Using the ''sec=krb5'' mount option provides a cryptographic proof of a user's identity in each RPC request that passes between client and server. This makes a very strong guarantee about who is accessing what data on the server.<br />
<br />
Two other flavors of Kerberos security are supported as well. '''krb5i''' provides a cryptographically strong guarantee that the data in each RPC request has not been tampered with. And '''krb5p''' encrypts every RPC request so the data is not exposed at all during transit on networks between NFS client and server. There can be some performance impact when using integrity checking or encryption, however.<br />
<br />
Support for other forms of cryptographic security is also available, including lipkey and SPKM3.<br />
<br />
== Citations ==<br />
'''fstab'''(5), '''mount'''(8), '''umount'''(8), '''mount.nfs'''(5), '''umount.nfs'''(5), '''exports'''(5), '''nfsd'''(8), '''rpc.idmapd'''(8), '''rpc.gssd'''(8), '''rpc.svcgssd'''(8), '''kerberos'''(1)<br />
<br />
* RFC 768 for the UDP specification.<br />
* RFC 793 for the TCP specification.<br />
* RFC 1094 for the NFS version 2 specification.<br />
* RFC 1813 for the NFS version 3 specification.<br />
* RFC 1832 for the XDR specification.<br />
* RFC 1833 for the RPC bind specification.<br />
* RFC 2203 for the RPCSEC GSS API protocol specification.<br />
* RFC 3530 for the NFS version 4 specification.</div>Stevedhttp://wiki.linux-nfs.org/wiki/index.php/NewMountDesignSpecNewMountDesignSpec2007-08-29T14:35:00Z<p>Steved: /* Mount system call testing */</p>
<hr />
<div>== Introduction ==<br />
<br />
This wiki page is a working design specification for the new text-based NFS mount API. Here we discuss use cases, requirement statements, error reporting, and design specifications, in addition to minute behavioral details of mounting NFS shares. The purpose of this discussion is to understand how to implement the new interface, and to construct a unit test plan for both the legacy user-space mount command and the new in-kernel mount client.<br />
<br />
== Requirements ==<br />
<br />
There are several broad requirements for the new text-based NFS mount API.<br />
<br />
# Scalability - Allow for thousands of NFS mount points, and a large number of simultaneous mount operations<br />
# No user-space dependency on a versioned binary blob for passing NFS mount options to the kernel<br />
# Support version fallback - If NFS version 4 is not supported, fall back to version 3; if version 3 is not supported, fall back to version 2<br />
## NFSv4 mounts will ignore legacy options in order to make fallback work<br />
# Support transport protocol fallback - If TCP is not supported, fall back to UDP<br />
# Provide reasonable default behavior in the presence of network firewalls and misconfigured servers<br />
# Facilitate new features - IPv6, RDMA, FS cache should be easy to introduce<br />
# Better error reporting - Report and log useful, relevant, clear error messages when a failure has occurred; prepare for i18n<br />
# Update and clarify NFS mount documentation<br />
<br />
== Use Cases ==<br />
<br />
To mount a remote share using NFS version 2, use the '''nfs''' file system type and specify the ''nfsvers=2'' mount option. To mount using NFS version 3, use the '''nfs''' file system type and specify the ''nfsvers=3'' mount option. To mount using NFS version 4, use the '''nfs4''' file system type (the ''nfsvers'' mount option is not supported for the '''nfs4''' file system type).<br />
<br />
Here is an example from an ''/etc/fstab'' file for an NFS version 3 mount over TCP.<br />
<br />
server:/export/share /mnt nfs nfsvers=3,proto=tcp<br />
<br />
Here is an example for an NFS version 4 mount over TCP using Kerberos 5 mutual authentication.<br />
<br />
server:/export/share /mnt nfs4 sec=krb5<br />
<br />
== Design Specifications ==<br />
<br />
Obviously the discussion of NFSv2/v3 mounting will be significantly more complicated than NFSv4 mounting.<br />
<br />
=== Mounting NFS version 2 and version 3 shares ===<br />
<br />
=== Mounting NFS version 4 shares ===<br />
<br />
== Return Codes and Error Reporting ==<br />
<br />
Currently mount's error messages are very problematic.<br />
<br />
# Some error messages are incorrect.<br />
# Some error messages are repeated.<br />
# Some errors are never reported.<br />
# Some error messages are too specific to be useful to an average administrator. For example, reporting an "RPC program/version mismatch occurred" is not helpful if the real problem is that "proto=udp" is not supported.<br />
# Some error messages are too general to be useful. For example, reporting "mount.nfs: not a directory" is obviously an errno string, but more specific information would provide a course of corrective action.<br />
<br />
Perhaps a clear error message could be reported on the command line, while detailed diagnostics are reported in the system log? That's easy enough with in-kernel mount option parsing!<br />
<br />
=== mount(2) API return codes ===<br />
<br />
The mount.nfs program needs to distinguish between temporary problems and permanent errors in order to determine whether it's worth retrying a mount request in the background.<br />
<br />
For text-based NFS mounts, the version/protocol fallback mechanism should occur in user space -- certainly fallback policy is easier to set and implement in user space, but the kernel must provide specific information about how a mount request failed so that user space can make an appropriate choice about the next step to try.<br />
<br />
The current '''mount'''(2) API is described in a man page. The man page describes a set of generic error return codes, which we excerpt here. It also suggests that we can add specific error codes for NFS mounts.<br />
<br />
<pre><br />
RETURN VALUE<br />
On success, zero is returned. On error, -1 is returned, and errno is<br />
set appropriately.<br />
<br />
ERRORS<br />
The error values given below result from filesystem type independent<br />
errors. Each filesystem type may have its own special errors and its<br />
own special behavior. See the kernel source code for details.<br />
<br />
EACCES A component of a path was not searchable. (See also path_resolu-<br />
tion(2).) Or, mounting a read-only filesystem was attempted<br />
without giving the MS_RDONLY flag. Or, the block device source<br />
is located on a filesystem mounted with the MS_NODEV option.<br />
<br />
EAGAIN A call to umount2() specifying MNT_EXPIRE successfully marked an<br />
unbusy file system as expired.<br />
<br />
EBUSY source is already mounted. Or, it cannot be remounted read-only,<br />
because it still holds files open for writing. Or, it cannot be<br />
mounted on target because target is still busy (it is the work-<br />
ing directory of some task, the mount point of another device,<br />
has open files, etc.). Or, it could not be unmounted because it<br />
is busy.<br />
<br />
EFAULT One of the pointer arguments points outside the user address<br />
space.<br />
<br />
EINVAL source had an invalid superblock. Or, a remount (MS_REMOUNT)<br />
was attempted, but source was not already mounted on target.<br />
Or, a move (MS_MOVE) was attempted, but source was not a mount<br />
point, or was ’/’. Or, an unmount was attempted, but target was<br />
not a mount point. Or, umount2() was called with MNT_EXPIRE and<br />
either MNT_DETACH or MNT_FORCE.<br />
<br />
ELOOP Too many links encountered during pathname resolution. Or, a<br />
move was attempted, while target is a descendant of source.<br />
<br />
EMFILE (In case no block device is required:) Table of dummy devices is<br />
full.<br />
<br />
ENAMETOOLONG<br />
A pathname was longer than MAXPATHLEN.<br />
<br />
ENODEV filesystemtype not configured in the kernel.<br />
<br />
ENOENT A pathname was empty or had a nonexistent component.<br />
<br />
ENOMEM The kernel could not allocate a free page to copy filenames or<br />
data into.<br />
<br />
ENOTBLK<br />
source is not a block device (and a device was required).<br />
<br />
ENOTDIR<br />
The second argument, or a prefix of the first argument, is not a<br />
directory.<br />
<br />
ENXIO The major number of the block device source is out of range.<br />
<br />
EPERM The caller does not have the required privileges.<br />
</pre><br />
<br />
In the following table, we discuss how each of these error values is used.<br />
<br />
{|<br />
|-<br />
|valign="top"|'''EACCES'''<br />
|A component of a path was not searchable. (See also '''path_resolution'''(2).) Or, mounting a read-only filesystem was attempted without giving the MS_RDONLY flag. Or, the block device source is located on a filesystem mounted with the MS_NODEV option.<br />
|-<br />
|valign="top"|'''EAGAIN'''<br />
|A call to umount2() specifying MNT_EXPIRE successfully marked an unbusy file system as expired.<br />
|-<br />
|valign="top"|'''EBUSY'''<br />
|source is already mounted. Or, it cannot be remounted read-only, because it still holds files open for writing. Or, it cannot be mounted on target because target is still busy (it is the working directory of some task, the mount point of another device, has open files, etc.). Or, it could not be unmounted because it is busy.<br />
|-<br />
|valign="top"|'''EFAULT'''<br />
| One of the pointer arguments points outside the user address space.<br />
|-<br />
|valign="top"|'''EINVAL'''<br />
|source had an invalid superblock. Or, a remount (MS_REMOUNT) was attempted, but source was not already mounted on target. Or, a move (MS_MOVE) was attempted, but source was not a mount point, or was ’/’. Or, an unmount was attempted, but target was not a mount point. Or, umount2() was called with MNT_EXPIRE and either MNT_DETACH or MNT_FORCE.<br />
Note that NFS uses this error return code to signal bad mount options: the mount option string could not be parsed, an unrecognized option was specified, or a keyword option was specified with a value that is out of range. This appears to be a precedent set by OCFS2 and CIFS.<br />
|-<br />
|valign="top"|'''ELOOP'''<br />
|Too many links encountered during pathname resolution. Or, a move was attempted, while target is a descendant of source.<br />
|-<br />
|valign="top"|'''EMFILE'''<br />
|(In case no block device is required:) Table of dummy devices is full.<br />
|-<br />
|valign="top"|'''ENAMETOOLONG'''<br />
|A pathname was longer than MAXPATHLEN.<br />
|-<br />
|valign="top"|'''ENODEV'''<br />
|filesystemtype not configured in the kernel.<br />
|-<br />
|valign="top"|'''ENOENT'''<br />
|A pathname was empty or had a nonexistent component.<br />
|-<br />
|valign="top"|'''ENOMEM'''<br />
|The kernel could not allocate a free page to copy filenames or data into.<br />
|-<br />
|valign="top"|'''ENOTBLK'''<br />
|source is not a block device (and a device was required).<br />
|-<br />
|valign="top"|'''ENOTDIR'''<br />
|The second argument, or a prefix of the first argument, is not a directory.<br />
|-<br />
|valign="top"|'''ENXIO'''<br />
|The major number of the block device source is out of range.<br />
|-<br />
|valign="top"|'''EPERM'''<br />
|The caller does not have the required privileges.<br />
|}<br />
<br />
Here are some additional return codes I recommend for NFS mounts, just as a start. These should allow a calling program to report a reasonably specific error message, and decide whether and how to retry the request.<br />
<br />
<pre><br />
EBADF The mount option string was not able to be parsed, or an unre-<br />
cognized option was specified, or a keyword option was specified<br />
with a value that is out of range.<br />
</pre><br />
<br />
This is a permanent mount error. The calling program should not retry this request with the same options.<br />
<br />
<pre><br />
ESTALE The server denied access to the requested share.<br />
<br />
ETIMEDOUT<br />
The kernel's mount attempt timed out after n seconds (I think n<br />
is 15).<br />
</pre><br />
<br />
These are temporary errors. The calling program may choose to retry this request using the same options, or fail immediately.<br />
<br />
<pre><br />
EIO An unknown error occurred while attempting the mount request.<br />
<br />
EPROTONOSUPPORT<br />
The server reports that the program, version, or transport pro-<br />
tocol is not currently available.<br />
<br />
ECONNREFUSED<br />
The kernel's mount connection attempt was refused by the server<br />
at the network transport layer.<br />
</pre><br />
<br />
These are temporary errors. The calling program can attempt to recover by adjusting the options and retrying the request.<br />
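Taken together, a calling program could classify these return codes as follows (a sketch based on the proposed codes above, not existing mount.nfs behavior):<br />

```python
import errno

# Classification of mount(2) errors per the proposal above:
# permanent errors, temporary errors retriable with the same options,
# and temporary errors where the caller should adjust options first.
PERMANENT = {errno.EBADF}
RETRY_SAME_OPTIONS = {errno.ESTALE, errno.ETIMEDOUT}
RETRY_ADJUST_OPTIONS = {errno.EIO, errno.EPROTONOSUPPORT, errno.ECONNREFUSED}

def next_step(err):
    if err in PERMANENT:
        return "fail"
    if err in RETRY_SAME_OPTIONS:
        return "retry"
    if err in RETRY_ADJUST_OPTIONS:
        return "adjust-and-retry"
    return "unknown"
```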
<br />
== Test Planning ==<br />
<br />
Each section below will provide an abbreviated description of a unit test plan for that mount option. Our goal is to construct an automated test harness that can run all of these unit tests at once, acting either as a check-in test or as a final release test. We'd like something similar to the t/ directory in the git-core distribution.<br />
<br />
=== Mount system call testing ===<br />
<br />
We can begin with some simple tests to make sure the mount system call API, as implemented by the NFS client, is working. The obvious stuff:<br />
<br />
# Testing first parameter sanity checking:<br />
## Called with first parameter set to NULL<br />
## Called with no ":" in the first parameter string<br />
## Called with first parameter set to a very long string<br />
## Called with first parameter pointing to unallocated storage<br />
# Testing second parameter sanity checking:<br />
## Called with second parameter set to NULL<br />
## Called with second parameter set to a very long string<br />
## Called with second parameter pointing to a path with too many symlinks<br />
## Called with second parameter pointing to unallocated storage<br />
# Testing option string sanity checking:<br />
## Called with option string set to NULL<br />
## Called with option string set to a very long string<br />
## Called with option string pointing to unallocated storage<br />
# Testing security checking<br />
## Called by root<br />
## Called by a normal user<br />
# Test Protocol Roll backs<br />
## Using iptables on a Linux server, turn off all TCP traffic to see if mount rolls back to UDP.<br />
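The first of these sanity checks can be driven from user space with a direct libc call; on Linux, '''mount'''(2) with NULL arguments must fail (the exact errno varies with privilege and arguments -- EFAULT for a NULL pointer, EPERM for an unprivileged caller -- so only the failure itself is asserted here):<br />

```python
import ctypes

# Invoke mount(2) through libc with NULL parameters (Linux only).
# The call must fail; typical errnos are EFAULT for a NULL pointer
# argument or EPERM for an unprivileged caller.
libc = ctypes.CDLL(None, use_errno=True)
rc = libc.mount(None, None, None, 0, None)
err = ctypes.get_errno()
print(rc, err)
```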
<br />
== Discussion of Individual NFS Mount Options ==<br />
<br />
There are four classes of mount options for '''nfs''' and '''nfs4''' file systems. '''Fix this:''' All four classes of options are specified as normal NFS mount options because there is only one way to specify mount options in the ''/etc/fstab'' file.<br />
<br />
# There are generic mount options available to all Linux file systems, such as "ro" or "sync". See '''mount'''(8) for a description of generic mount options available for all file systems.<br />
# Some mount options determine how the mount command behaves, such as "mountport" or "retry". These options have no effect after the mount operation has completed, but might be used to mount an NFS share through a network firewall.<br />
# Some mount options determine how the NFS client behaves during normal operation, such as "rsize" and "wsize". These may be used to tune performance, or change the client's caching or file locking behavior.<br />
# Mount options such as ''timeo='' or ''retrans='' can control aspects of Remote Procedure Call behavior. NFS clients send requests to NFS servers via Remote Procedure Calls, or RPCs. RPCs handle per-request authentication, adjust request parameters for different byte endianness on client and server, and retransmit requests that may have been lost by the network or server.<br />
<br />
Note that some options take the form of ''keyword=value'' while some options are boolean, taking either the form of ''keyword'' or ''nokeyword''. All options which do not use the ''keyword=value'' form use the boolean form, except for '''hard | soft''', '''udp | tcp''', and '''fg | bg'''.<br />
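A simplified parser illustrating the two option forms (the ''no'' prefix handling here is a deliberately naive sketch that ignores the '''hard | soft''', '''udp | tcp''', and '''fg | bg''' exceptions noted above):<br />

```python
def parse_options(option_string):
    """Split a comma-separated mount option string into a dict.

    keyword=value options keep their string value; bare keywords
    become True; "no"-prefixed keywords become False.  This is a
    naive sketch, not the real mount.nfs or in-kernel parser.
    """
    parsed = {}
    for opt in option_string.split(","):
        if "=" in opt:
            key, value = opt.split("=", 1)
            parsed[key] = value
        elif opt.startswith("no"):
            parsed[opt[2:]] = False
        else:
            parsed[opt] = True
    return parsed
```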
<br />
'''To Do'''<br />
<br />
* Format this section<br />
* Add status information about each option<br />
** Tested (legacy / text-based)<br />
** Works, does not work as documented (legacy / text-based)<br />
** Implementation/fix priority<br />
** Details about how it works and/or how it should work<br />
<br />
=== Valid options for either the nfs or nfs4 file system type ===<br />
<br />
==== soft | hard ====<br />
<br />
;Description<br />
:Determines the recovery behavior of the RPC client after an RPC request times out. If neither option is specified, or if the ''hard'' option is specified, the RPC is retried indefinitely. If the ''soft'' option is specified, then the RPC client fails the RPC request after a major timeout occurs, and causes the NFS client to return an error to the calling application.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== timeo=''n'' ====<br />
<br />
;Description<br />
:The value, in tenths of a second, before timing out an RPC request. The default value is 600 (60 seconds) for NFS over TCP. On a UDP transport, the Linux RPC client uses an adaptive algorithm to estimate the timeout value for frequently used request types such as READ and WRITE, and uses the ''timeo='' setting for infrequently used requests such as FSINFO. The ''timeo='' value defaults to 7 tenths of a second for NFS over UDP. After each timeout, the RPC client may retransmit the timed-out request, or it may take some other action depending on the settings of the ''hard'' or ''retrans='' options.<br />
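As a quick arithmetic check (not an NFS command), the documented defaults translate from tenths of a second into seconds as follows:<br />
<br />
```shell
# timeo= values are tenths of a second; convert the documented defaults.
tcp_timeo=600   # default for NFS over TCP
udp_timeo=7     # default for NFS over UDP
awk -v t="$tcp_timeo" 'BEGIN { printf "TCP: %.1f seconds\n", t / 10 }'   # 60.0 seconds
awk -v t="$udp_timeo" 'BEGIN { printf "UDP: %.1f seconds\n", t / 10 }'   # 0.7 seconds
```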
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== retrans=''n'' ====<br />
<br />
;Description<br />
:The number of RPC timeouts that must occur before a major timeout occurs. The default is 3 timeouts. If the file system is mounted with the ''hard'' option, the RPC client will generate a "server not responding" message after a major timeout, then continue to retransmit the request. If the file system is mounted with the ''soft'' option, the RPC client will abandon the request after a major timeout, and cause NFS to return an error to the application.<br />
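To get a feel for how ''timeo='' and ''retrans='' interact, the sketch below sums the waiting time before a major timeout, assuming the classic RPC backoff in which the timeout doubles after each retransmission; the actual Linux estimator may behave differently:<br />
<br />
```shell
# Assumed model: the retrans minor timeouts wait timeo, 2*timeo, 4*timeo, ...
# tenths of a second before the major timeout is declared.
timeo=7       # tenths of a second (UDP default)
retrans=3     # minor timeouts before a major timeout (default)
total=0
t=$timeo
i=0
while [ "$i" -lt "$retrans" ]; do
    total=$(( total + t ))
    t=$(( t * 2 ))
    i=$(( i + 1 ))
done
echo "major timeout after about ${total} tenths of a second"   # 7+14+28 = 49
```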
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== rsize=''n'' ====<br />
<br />
;Description<br />
:The maximum number of bytes in each network READ request that the NFS client can use when reading data from a file on an NFS server; the actual data payload size of each NFS READ request is equal to or smaller than the ''rsize'' value. The ''rsize'' value is a positive integral multiple of 1024, and the largest value supported by the Linux NFS client is 1,048,576 bytes. Specified values outside of this range are rounded down to the closest multiple of 1024, and specified values smaller than 1024 are replaced with a default of 4096. If an ''rsize'' value is not specified, or if the specified value is larger than the maximum that either the client or the server supports, the client and server negotiate the largest ''rsize'' value that both will support. The ''rsize'' option as specified on the '''mount'''(8) command line appears in the ''/etc/mtab'' file, but the effective ''rsize'' value negotiated by the client and server is reported in the ''/proc/mounts'' file.<br />
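The rounding rules described above can be sketched as a small shell function; this illustrates the documented arithmetic, not the kernel's actual code:<br />
<br />
```shell
# Round a requested rsize down to a multiple of 1024, cap it at the
# documented client maximum, and replace too-small values with 4096.
sanitize_rsize() {
    n=$1
    if [ "$n" -lt 1024 ]; then
        echo 4096                      # too small: replaced with the default
    elif [ "$n" -gt 1048576 ]; then
        echo 1048576                   # capped at the client maximum
    else
        echo $(( (n / 1024) * 1024 ))  # rounded down to a 1024 multiple
    fi
}
sanitize_rsize 70000    # prints 69632
sanitize_rsize 512      # prints 4096
```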
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== wsize=''n'' ====<br />
<br />
;Description<br />
:The maximum number of bytes per network WRITE request that the NFS client can use when writing data to a file on an NFS server. See the description of the ''rsize'' option for more details.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== acregmin=''n'' ====<br />
<br />
;Description<br />
:The minimum time in seconds that the NFS client caches attributes of a regular file before it requests fresh attribute information from a server. The default is 3 seconds.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== acregmax=''n'' ====<br />
<br />
;Description<br />
:The maximum time in seconds that the NFS client caches attributes of a regular file before it requests fresh attribute information from a server. The default is 60 seconds.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== acdirmin=''n'' ====<br />
<br />
;Description<br />
:The minimum time in seconds that the NFS client caches attributes of a directory before it requests fresh attribute information from a server. The default is 30 seconds.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== acdirmax=''n'' ====<br />
<br />
;Description<br />
:The maximum time in seconds that the NFS client caches attributes of a directory before it requests fresh attribute information from a server. The default is 60 seconds.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== actimeo=''n'' ====<br />
<br />
;Description<br />
:Setting ''actimeo='' sets all of ''acregmin'', ''acregmax'', ''acdirmin'', and ''acdirmax'' to the same value. There is no default value.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== bg | fg ====<br />
<br />
;Description<br />
:This mount option determines how the '''mount'''(8) command behaves if an attempt to mount a remote share fails. The ''fg'' option causes '''mount'''(8) to exit with an error status if any part of the mount request times out or fails outright. This is called a "foreground" mount, and is the default behavior if neither ''fg'' nor ''bg'' is specified. If the ''bg'' option is specified, a timeout or failure causes the '''mount'''(8) command to fork a child which continues to attempt to mount the remote share. The parent immediately returns with a zero exit code. This is known as a "background" mount. If the local mount point directory is missing, the '''mount'''(8) command treats that as if the mount request timed out. This permits nested NFS mounts.<br />
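A hypothetical ''/etc/fstab'' entry for a background mount might look like this (server name and paths are placeholders); at boot time, a failure forks a retrying child instead of stalling the boot sequence:<br />
<br />
```
server.example.com:/export  /mnt/export  nfs  bg,retry=60  0 0
```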
<br />
;Implementation priority<br />
:Questionable. There is some debate about whether users are still using this option, or are using autofs instead.<br />
<br />
;Implementation<br />
:The mount.nfs command must distinguish between permanent mount errors (such as a bad mount option), which prevent the mount request as specified from ever being valid, and temporary errors (such as an unreachable server), which might allow the mount request as specified to complete at some future point. See the discussion of mount(2) return codes for more detail.<br />
<br />
;Test plan (fg - v2/v3)<br />
# Remove the local mount point, then attempt an NFS mount with the "fg" option set. The mount should fail with (what error code and what error message?).<br />
# Shut down the NFS server (service nfs stop), then attempt an NFS mount with the "fg" option set. The mount should fail with (what error code and what error message?).<br />
# Block the NFS server ports on the server with iptables, then attempt an NFS mount with the "fg" option set. The mount should fail with (what error code and what error message?).<br />
# Block the mountd server ports on the server with iptables, then attempt an NFS mount with the "fg" option set. The mount should fail with (what error code and what error message?).<br />
# Block the rpcbind server ports on the server with iptables, then attempt an NFS mount with the "fg" option set. The mount should fail with (what error code and what error message?).<br />
<br />
;Test plan (bg - v2/v3)<br />
# Remove the local mount point, then attempt an NFS mount with the "bg" option set. The mount should succeed once the mount point has been recreated.<br />
# Shut down the NFS server (service nfs stop), then attempt an NFS mount with the "bg" option set. The mount should succeed once the NFS server has been restarted.<br />
# Block the NFS server ports on the server with iptables, then attempt an NFS mount with the "bg" option set. The mount should succeed once the ports are unblocked.<br />
# Block the mountd server ports on the server with iptables, then attempt an NFS mount with the "bg" option set. The mount should succeed once the ports are unblocked.<br />
# Block the rpcbind server ports on the server with iptables, then attempt an NFS mount with the "bg" option set. The mount should succeed once the ports are unblocked.<br />
<br />
;Testing status<br />
* Tested with legacy mount.nfs; works for v2/v3, not for v4<br />
* Tested with text-based mount.nfs; does not work for any version<br />
<br />
==== retry=''n'' ====<br />
<br />
;Description<br />
:The number of minutes to retry an NFS mount operation in the foreground or background before giving up. The default value for foreground mounts is 2 minutes. The default value for background mounts is 10000 minutes, which is roughly one week.<br />
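A quick arithmetic check of the "roughly one week" claim for the background default:<br />
<br />
```shell
# Convert the 10000-minute background retry default into days.
minutes=10000
awk -v m="$minutes" 'BEGIN { printf "%.1f days\n", m / 60 / 24 }'   # 6.9 days
```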
<br />
;Implementation<br />
:The ten thousand minute default might be too long. Perhaps foreground mounts should also use a much shorter default.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== sec=''mode'' ====<br />
<br />
;Description<br />
:The RPCGSS security flavor to use for accessing files on this mount point. If the ''sec='' option is not specified, or if ''sec=sys'' is specified, the RPC client uses the AUTH_SYS security flavor for all RPC operations on this mount point. Valid security flavors are '''none''', '''sys''', '''krb5''', '''krb5i''', '''krb5p''', '''lkey''', '''lkeyi''', '''lkeyp''', '''spkm''', '''spkmi''', and '''spkmp'''. See the SECURITY CONSIDERATIONS section for details.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== sharecache ====<br />
<br />
;Description<br />
:Determines how the client's data cache is shared between mount points that mount the same remote share. If neither option is specified, or if the ''sharecache'' option is specified, then all mounts of the same remote share on a client use the same data cache. If the ''nosharecache'' option is specified, then files under that mount point are cached separately from files under other mount points that may be accessing the same remote share. As of kernel 2.6.18, ''nosharecache'' reproduces legacy caching behavior, and is considered a data risk since two cached copies of the same file on the same client can become out of sync following an update of one of the copies.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
=== Valid options for the nfs file system type ===<br />
<br />
==== proto=''netid'' ====<br />
<br />
;Description<br />
:The transport protocol used by the RPC client to transmit requests to the NFS server for this mount point. The value of ''netid'' can be either '''udp''' or '''tcp'''. Each transport protocol uses different default ''retrans'' and ''timeo'' settings; see the description of these two mount options for details.<br />
:'''NB:''' This mount option controls both how the '''mount'''(8) command communicates with the portmapper and the MNT and NFS server, and what transport protocol the in-kernel NFS client uses to transmit requests to the NFS server. Specifying ''proto=tcp'' forces all traffic from the mount command and the NFS client to use TCP. Specifying ''proto=udp'' forces all traffic types to use UDP. If the ''proto='' mount option is not specified, the '''mount'''(8) command chooses the best transport for each type of request (GETPORT, MNT, and NFS), and by default the in-kernel NFS client uses the TCP protocol. If the server doesn't support one or the other protocol, the '''mount'''(8) command attempts to discover which protocol is supported and use that.<br />
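For illustration, the following command lines (with a placeholder server name and export path) force one transport or leave the choice to negotiation:<br />
<br />
```
mount -t nfs -o proto=tcp server.example.com:/export /mnt/export   # all traffic over TCP
mount -t nfs -o proto=udp server.example.com:/export /mnt/export   # all traffic over UDP
mount -t nfs server.example.com:/export /mnt/export                # transport chosen per request type
```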
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== port=''n'' ====<br />
<br />
;Description<br />
:The numeric value of the port used by the remote NFS service. If the ''port='' option is not specified, or if the specified port value is 0, then the NFS client uses the NFS service port provided by the remote portmapper service. If any other value is specified, then the NFS client uses that value as the destination port when connecting to the remote NFS service. If the remote host's NFS service is not registered with its portmapper, or if the NFS service is not available on the specified port, the mount fails.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== namlen=''n'' ====<br />
<br />
;Description<br />
:When an NFS server does not support version two of the RPC mount protocol, this option can be used to specify the maximum length of a filename that is supported on the remote filesystem. This is used to support the POSIX pathconf functions. The default is 255 characters.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== mountport=''n'' ====<br />
<br />
;Description<br />
:The numeric value of the '''mountd''' port.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== mounthost=''name'' ====<br />
<br />
;Description<br />
:The name of the host running '''mountd'''.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== mountprog=''n'' ====<br />
<br />
;Description<br />
:Use an alternate RPC program number to contact the mount daemon on the remote host. This option is useful for hosts that can run multiple NFS servers. The default value is 100005 which is the standard RPC mount daemon program number.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== mountvers=''n'' ====<br />
<br />
;Description<br />
:Use an alternate RPC version number to contact the mount daemon on the remote host. This option is useful for hosts that can run multiple NFS servers. The default value depends on which kernel you are using.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== nfsprog=''n'' ====<br />
<br />
;Description<br />
:Use an alternate RPC program number to contact the NFS daemon on the remote host. This option is useful for hosts that can run multiple NFS servers. The default value is 100003 which is the standard RPC NFS daemon program number.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== nfsvers=''n'' ====<br />
<br />
;Description<br />
:Use an alternate RPC version number to contact the NFS daemon on the remote host. This option is useful for hosts that can run multiple NFS servers. The default value depends on which kernel you are using.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== vers=''n'' ====<br />
<br />
;Description<br />
:''vers='' is an alternative to ''nfsvers='' and is compatible with many other operating systems.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== nolock ====<br />
<br />
;Description<br />
:Disable NFS locking. Do not start lockd. This is appropriate for mounting the root filesystem or '''/usr''' or '''/var'''. These filesystems are typically either read-only or not shared, and in those cases, remote locking is not needed. This also needs to be used with some old NFS servers that don't support locking.<br />
<br />
:Note that applications can still get locks on files, but the locks only provide exclusion locally. Other clients mounting the same filesystem will not be able to detect the locks.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== intr ====<br />
<br />
;Description<br />
:If an NFS file operation reaches a major timeout on a hard-mounted file system, allow signals to interrupt the file operation and cause it to return EINTR to the calling program. The default is to not allow file operations to be interrupted.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== posix ====<br />
<br />
;Description<br />
:Mount the NFS filesystem using POSIX semantics. This allows an NFS filesystem to properly support the POSIX pathconf command by querying the mount server for the maximum length of a filename. To do this, the remote host must support version two of the RPC mount protocol. Many NFS servers support only version one.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== nocto ====<br />
<br />
;Description<br />
:Suppress the retrieval of new attributes when creating a file.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== noac ====<br />
<br />
;Description<br />
:Disable all forms of attribute caching entirely. This exacts a significant performance penalty, but it allows two different NFS clients to get reasonable results when both clients are actively writing to a common export on the server.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== noacl ====<br />
<br />
;Description<br />
:Disables Access Control List (ACL) processing.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== nordirplus ====<br />
<br />
;Description<br />
:Disables NFSv3 READDIRPLUS RPCs. Use this option when mounting servers that don't support or have broken READDIRPLUS implementations.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
=== Valid options for the nfs4 file system type ===<br />
<br />
==== proto=''netid'' ====<br />
<br />
;Description<br />
:The transport protocol used by the RPC client to transmit requests to the NFS server. The value of ''netid'' can be either '''udp''' or '''tcp'''. All NFS version 4 servers are required to support TCP, so the default transport protocol for NFS version 4 is TCP.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== port=''n'' ====<br />
<br />
;Description<br />
:The numeric value of the port used by the remote NFS service. If the ''port='' option is not specified, the NFS client uses the standard NFS port number of 2049 without checking the remote portmapper service. If the specified port value is 0, then the NFS client uses the NFS service port provided by the remote portmapper service. If any other value is specified, then the NFS client uses that value as the destination port when connecting to the remote NFS service. If the remote host's NFS service is not registered with its portmapper, or if the NFS service is not available on the specified port, the mount fails.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== clientaddr=''n'' ====<br />
<br />
;Description<br />
:Causes the client to advertise a specific callback address when communicating with an NFS version 4 server. This mount option can be used to configure an NFSv4 server to call back a client through a NAT router. If no ''clientaddr='' option is specified, the '''mount.nfs''' command chooses an appropriate default based on the network route between client and server.<br />
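For example, a client behind a NAT router might advertise the router's public address so that the server's NFSv4 callbacks can reach it (the address, server name, and paths below are placeholders):<br />
<br />
```
mount -t nfs4 -o clientaddr=203.0.113.7 server.example.com:/export /mnt/export
```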
<br />
;Implementation priority<br />
:High<br />
<br />
;Implementation<br />
:The client address option must discover the local address the server will use to contact the client. On multi-homed hosts, the client's local address depends on which NIC is used to route requests to the server. The address is set automatically by the user-space mount command if the admin doesn't provide one.<br />
<br />
;Test plan<br />
# Specify no mount options, and check that the kernel is getting a valid clientaddr= option from the mount.nfs command (using rpcdebug).<br />
# Specify clientaddr=garbage, and check that the client's kernel and user-space mount.nfs command properly reject it.<br />
# Specify ''clientaddr='' with a valid address, and check that the client's kernel gets the same address.<br />
<br />
;Testing status<br />
* Not tested with the legacy mount.nfs command<br />
* Partially tested with the text-based mount.nfs command<br />
<br />
==== intr ====<br />
<br />
;Description<br />
:If an NFS file operation reaches a major timeout on a hard-mounted file system, allow signals to interrupt the file operation and cause it to return EINTR to the calling program. The default is to not allow file operations to be interrupted.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== nocto ====<br />
<br />
;Description<br />
:Suppress the retrieval of new attributes when creating a file.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
==== noac ====<br />
<br />
;Description<br />
:Disable attribute caching, and force synchronous writes. This exacts a server performance penalty, but it allows two different NFS clients to get reasonably good results when both clients are actively writing to a common filesystem on the server.<br />
<br />
;Implementation<br />
:No notes.<br />
<br />
;Testing status<br />
* Not tested with legacy mount.nfs<br />
* Not tested with text-based mount.nfs<br />
<br />
== Security Considerations ==<br />
<br />
NFS provides access control for data, but depends on its RPC implementation to provide authentication of NFS requests. Traditional NFS access control mimics the standard mode bit access control provided in local file systems. Traditional RPC authentication uses a number to represent each user (usually the user's own uid), a number to represent the user's group (the user's gid), and a set of up to 16 auxiliary group numbers to represent other groups of which the user may be a member. File data and user ID values appear in the clear on the network.<br />
<br />
Moreover, NFS versions 2 and 3 use separate protocols for mounting, for locking and unlocking files, and for reporting system status of clients and servers. These auxiliary protocols use no authentication.<br />
<br />
In addition to combining all the auxiliary protocols into a single protocol, NFS version 4 introduces more advanced forms of access control, authentication, and in-transit data protection. Linux also implements the proprietary NFSv3 access control list protocol that is built into Solaris but was never standardized, and allows the use of advanced authentication modes for NFS version 2 and version 3 mounts.<br />
<br />
The NFS version 4 specification mandates NFSv4 ACLs, RPCGSS authentication, and RPCGSS security flavors that provide per-RPC integrity checking and encryption, and it applies to all NFS version 4 operations including mounting, file locking, and so on. Note that Linux does not yet implement security mode negotiation between NFS version 4 clients and servers.<br />
<br />
The ''sec='' mount option selects the RPCGSS security mode in effect on a given NFS mount point. Using the ''sec=krb5'' mount option provides a cryptographic proof of a user's identity in each RPC request that passes between client and server. This makes a very strong guarantee about who is accessing what data on the server.<br />
<br />
Two other flavors of Kerberos security are supported as well. '''krb5i''' provides a cryptographically strong guarantee that the data in each RPC request has not been tampered with. And '''krb5p''' encrypts every RPC request so the data is not exposed at all during transit on networks between NFS client and server. There can be some performance impact when using integrity checking or encryption, however.<br />
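Hypothetical ''/etc/fstab'' entries showing the three Kerberos flavors side by side (server name and mount points are placeholders):<br />
<br />
```
server.example.com:/export  /mnt/krb5   nfs  sec=krb5   0 0   # user authentication only
server.example.com:/export  /mnt/krb5i  nfs  sec=krb5i  0 0   # adds per-RPC integrity checking
server.example.com:/export  /mnt/krb5p  nfs  sec=krb5p  0 0   # adds per-RPC encryption
```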
<br />
Support for other forms of cryptographic security is also available, including lipkey and SPKM3.<br />
<br />
== Citations ==<br />
'''fstab'''(5), '''mount'''(8), '''umount'''(8), '''mount.nfs'''(5), '''umount.nfs'''(5), '''exports'''(5), '''nfsd'''(8), '''rpc.idmapd'''(8), '''rpc.gssd'''(8), '''rpc.svcgssd'''(8), '''kerberos'''(1)<br />
<br />
* RFC 768 for the UDP specification.<br />
* RFC 793 for the TCP specification.<br />
* RFC 1094 for the NFS version 2 specification.<br />
* RFC 1813 for the NFS version 3 specification.<br />
* RFC 1832 for the XDR specification.<br />
* RFC 1833 for the RPC bind specification.<br />
* RFC 2203 for the RPCSEC GSS API protocol specification.<br />
* RFC 3530 for the NFS version 4 specification.</div>Steved