PNFS Block Server Setup Instructions

From Linux NFS

Revision as of 14:25, 1 March 2011 by BennyHalevy (Talk | contribs)

A how-to guide for setting up the pNFS Block Layout server based on spNFS

This page describes the setup of the pNFS Block Layout Server. It is based on Rick McNeal's how-to guide. Please note that Fedora 11 was used to set up the server, so some of the content may be Fedora-specific (e.g. yum).

Note that this is an early development prototype that has not been actively maintained recently; it is therefore recommended for developers only.


Building the code



1) Building the kernel source

Obtain the code from the Linux pNFS git tree. The pNFS Block Layout server is currently part of the pNFS git repository.

    git clone git://linux-nfs.org/~bhalevy/linux-pnfs.git

Use the pnfs-all-latest branch. CONFIG_SPNFS_BLOCK must be enabled before compiling the kernel.

This page does not cover kernel compilation itself.
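Before building, it is worth confirming that the block-layout option is actually enabled in your kernel config. A minimal sketch (the menuconfig step and config location follow the standard kernel workflow; the exact menu placement of the option is an assumption):

```shell
# Sketch: verify CONFIG_SPNFS_BLOCK is enabled before building.
# Run from the top of the linux-pnfs tree.
cd linux-pnfs
git checkout pnfs-all-latest
make menuconfig                    # enable CONFIG_SPNFS_BLOCK
grep CONFIG_SPNFS_BLOCK .config    # expect: CONFIG_SPNFS_BLOCK=y
```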

2) Building the nfsutils and utils/blkmapd

Obtain the "nfs-utils" source code.

    git clone git://linux-nfs.org/~bhalevy/pnfs-nfs-utils.git

Run autogen.sh to generate the configure script. If you are building the code for the first time, several additional packages are required. I installed or updated the following packages:

yum install libtirpc-devel
yum install tcp_wrappers-devel
yum install libevent-devel
yum install libnfsidmap-devel
yum install nfs-utils-lib-devel
yum install openldap-devel
yum install libgssglue-devel
# Fedora 12 requires also:
yum install libblkid-devel
yum install device-mapper-devel
# Fedora 13 requires also:
yum install krb5-devel
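With the packages in place, the build follows the usual autotools flow (a sketch; configure flags beyond the defaults are left out, and `make install` assumes root):

```shell
# Sketch: build nfs-utils, including utils/blkmapd, from the cloned tree.
cd pnfs-nfs-utils
sh autogen.sh      # generates the configure script
./configure
make
make install       # run as root
```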

blkmapd

The blkmapd daemon should run on the pNFS client to map block devices according to the pNFS device information. See utils/blkmapd/etc/blkmapd.conf and utils/blkmapd/etc/initd/initd.redhat for more information about its setup.
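A minimal sketch of installing those sample files on the client, assuming the Red Hat style init layout that the shipped initd.redhat script targets (destination paths are assumptions):

```shell
# Sketch: install the sample config and init script on the pNFS client.
cd pnfs-nfs-utils
cp utils/blkmapd/etc/blkmapd.conf /etc/
cp utils/blkmapd/etc/initd/initd.redhat /etc/init.d/blkmapd
chmod +x /etc/init.d/blkmapd
service blkmapd start
```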

Exporting the filesystem

For block access to work properly, the disks must have a signature. Partition the disks using "parted"; disks partitioned with "fdisk" do not have the signatures.

I followed the steps below.

 # parted /dev/sdb
(parted) mklabel gpt
(parted) mkpart 1 <Provide start and end of the partitions>
(parted) print
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size    File system  Name  Flags
1      17.4kB  53.7GB  53.7GB  ext3         1     msftres

I tested with the ext4 filesystem; create it with a 4K block size:


 # mkfs.ext4 -b 4096 /dev/sdb1 
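A quick sketch for double-checking that the disk carries a GPT signature and that the filesystem really got a 4K block size (commands are standard parted/util-linux/e2fsprogs tools; device names match the example above):

```shell
# Sketch: verify the partition signature and filesystem block size.
parted /dev/sdb print                     # should show "Partition Table: gpt"
blkid /dev/sdb1                           # prints the filesystem type and UUID
tune2fs -l /dev/sdb1 | grep 'Block size'  # should report 4096
```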


Setting up the BLOCK storage / SAN

I have not set up the block storage and the metadata server on the same machine. You may set them up on two different machines, but the client and the metadata server must see the same disks.


I used iSCSI to set up the block storage; the "scsi-target-utils" package is required to set up the iSCSI target. One key point: when adding a LUN to the target, do not add the disk partition (/dev/sdb1); add the entire disk (/dev/sdb) instead.

The disk signatures are not visible if you add the disk partition to the target.

Export Options

/mnt *(rw,sync,fsid=0,insecure,no_subtree_check,no_root_squash,pnfs)

How to Start the server

I used the following script to start the server:

#!/bin/bash
# unmount /mnt
umount /mnt
# start the iSCSI target service
service tgtd restart
sleep 8
# Create the iSCSI target
tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.1992-05.com.emc:openblock
# Expose the LUN through the iSCSI target
tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 --backing-store /dev/sdb
# Allow access from all initiators
tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL
# show all the details
tgtadm --lld iscsi --op show --mode target
# mount the partition
mount /dev/sdb1 /mnt
sleep 3
# start the nfs server
service nfs restart
sleep 3
# start the spnfs control daemon
cd <CTL_SRC>/ctl/
./ctl -u &
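A matching teardown script is not part of the original how-to, but the start script above suggests one by reversing its steps (a sketch; the pkill pattern for the ctl daemon is an assumption):

```shell
#!/bin/bash
# Sketch: stop the server, reversing the start script above.
pkill -f './ctl'                                        # stop the spnfs control daemon
service nfs stop
umount /mnt
tgtadm --lld iscsi --op delete --mode target --tid 1    # remove the iSCSI target
service tgtd stop
```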

Mount from the client

mount -t nfs4 -o minorversion=1 SN:/ /mnt/ob

where SN is the metadata server's hostname or IP address.

How to verify

 - tcpdump/wireshark is the best way to see what is happening.
 - Alternatively, after mounting the export, check /proc/self/mountstats on the client.
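The mountstats check can be scripted. The snippet below greps for the NFS version string to confirm an NFSv4.1 (pNFS-capable) mount; the here-doc is a hypothetical excerpt for illustration, and on a real client you would read /proc/self/mountstats directly (e.g. grep -A2 '/mnt/ob' /proc/self/mountstats):

```shell
# Confirm the mount negotiated NFSv4.1 by looking for "vers=4.1" in the
# mount options recorded in mountstats. The here-doc stands in for the
# real /proc/self/mountstats contents.
sample=$(cat <<'EOF'
device SN:/ mounted on /mnt/ob with fstype nfs4 statvers=1.1
        opts:   rw,vers=4.1,rsize=131072,wsize=131072,minorversion=1
EOF
)
version=$(printf '%s\n' "$sample" | grep -o 'vers=4\.1' | head -n1)
echo "$version"    # prints: vers=4.1
```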