GFS2 Setup Notes - cluster3, 2.6.27 kernel

With the release of Fedora 10 in early October 2008 (Update: since delayed until December), Red Hat's newest version of its cluster suite ("cluster3") will go prime-time. In the last couple of months, one of cluster3's main dependencies split into two parts, corosync and openAIS, and building has been problematic at times. These are my notes from my latest GFS2 setup, and the first time I'm moving everything to 2.6.27.

The parts:

As of this moment, cluster3's dependencies aren't yet packaged as RPMs, so building from source is a must. The particular revisions are changing almost every day, so you'll have to consult the cluster project wiki for current versions.

Even overnight, this just changed for me. Yikes! I'm now basing off of cluster3 cluster-2.99.10, corosync svn r1667, and openAIS svn r1651.

  • I used yum to get: libvolume_id-devel, libxml2, libxml2-devel, openldap, openldap-devel, readline (likely installed already), and readline-devel.
    • (Note: I only needed the readline stuff when I went back to LVM2.2.02.39, as ...40 was broken.)
  • get the latest device-mapper (example fetch commands for all of these pieces are sketched after this list)
  • use svn to clone the corosync repository
  • use svn to clone the openAIS repository
  • use git to clone my cluster3 repository
    • $ git clone git://git.linux-nfs.org/projects/richterd/cluster.git
    • my "pnfs-gfs2-dev" branch is where current development goes.
  • get the latest LVM2
    • note: LVM2.2.02.40 is broken (?!?) -- a minor build issue, but I can't believe they released something that has an undefined symbol.
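
Roughly, fetching everything looks like the sketch below. The svn and tarball URLs are placeholders (these notes deliberately defer to the cluster project wiki for current locations); only the git URL and branch name come from this page.

  $ sudo yum install libvolume_id-devel libxml2 libxml2-devel openldap openldap-devel readline readline-devel
  $ svn co <corosync-svn-url> corosync            # placeholder URL; see the cluster project wiki
  $ svn co <openais-svn-url> openais              # placeholder URL; see the cluster project wiki
  $ git clone git://git.linux-nfs.org/projects/richterd/cluster.git
  $ (cd cluster && git checkout -b pnfs-gfs2-dev origin/pnfs-gfs2-dev)   # current development branch
  $ wget <latest-device-mapper-tarball-url> <latest-LVM2-tarball-url>    # placeholders; skip LVM2.2.02.40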

The build:

  • build/install the device-mapper (a combined build sketch follows this list)
  • build/install corosync (shouldn't even need to configure it; this is nice and easy now!)
  • build/install openAIS (shouldn't even need to configure it)
  • before doing the cluster3 stuff, you'll need to be running a 2.6.27-based kernel and have its sources available
  • build/install the cluster3 stuff
    • I had to point it at openAIS, and I generally disable the rgmanager stuff. I'm also now disabling the perl/python bindings, and I build the kernel module separately anyway.
    • $ ./configure --openaislibdir=/usr/lib/openais --openaisincdir=/usr/include --without_rgmanager --without_bindings --without_kernel_modules
  • build/install LVM2, making sure to specify the clvmd type
    • $ ./configure --with-lvm1=none --with-clvmd=cman --prefix=/usr
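
Put together, the whole build runs roughly like this, one source tree at a time and in this order. The directory names and the use of sudo are my assumptions; the configure flags are the ones given above.

  $ (cd device-mapper.*  && ./configure && make && sudo make install)
  $ (cd corosync         && make && sudo make install)      # no ./configure needed
  $ (cd openais          && make && sudo make install)      # no ./configure needed
  $ (cd cluster          && ./configure --openaislibdir=/usr/lib/openais --openaisincdir=/usr/include \
        --without_rgmanager --without_bindings --without_kernel_modules && make && sudo make install)
  $ (cd LVM2.2.02.39     && ./configure --with-lvm1=none --with-clvmd=cman --prefix=/usr && make && sudo make install)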

The rest of the cluster:

So far, I'd built all of this on a single cluster node. I set that node up as an NFS server and exported my top-level build directory. Then, on each cluster node, I mounted the export and just did the make install steps.
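
A sketch of that arrangement, with a placeholder export path, mount point, and build-host name (none of these are from the notes above):

  # on the build node: export the build tree (entry in /etc/exports)
  #   /home/build/cluster3   guest*(rw,no_root_squash)
  $ sudo exportfs -ra
  # on each of the other nodes: mount the export and install every component
  $ sudo mount buildnode:/home/build/cluster3 /mnt/build
  $ (cd /mnt/build/corosync && sudo make install)    # repeat for openais, cluster, LVM2, ...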

Shared storage:

For testing, I use ATA over Ethernet (AoE) and have had fairly good results with it.

  • yum-installed AoE initiator (client) aoetools-23-1 across the cluster
  • downloaded AoE target (server) vblade-15.tgz and installed it on a separate host (rhclhead).
    • with dd, I created a 1GB empty file, AOE_SHARED_STORAGE
    • ... and exported it: $ sudo vbladed 2 3 eth0 AOE_SHARED_STORAGE (AoE shelf 2, slot 3, which the initiators see as /dev/etherd/e2.3; see the sketch after this list)
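
For reference, the two sides look roughly like this. The dd command is just one way to make the 1GB backing file, and the modprobe/aoe-discover step on the initiators is my assumption about a typical aoetools setup.

  # on the target host (rhclhead): create the backing file and export it
  $ dd if=/dev/zero of=AOE_SHARED_STORAGE bs=1M count=1024
  $ sudo vbladed 2 3 eth0 AOE_SHARED_STORAGE
  # on each initiator node: load the aoe driver and rescan
  $ sudo modprobe aoe
  $ sudo aoe-discover
  $ ls /dev/etherd/                  # the export should show up as e2.3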

Creating the filesystem:

  • prep the volume with LVM2 metadata: $ sudo pvcreate -M 2 /dev/etherd/e2.3
  • create the volume group DMRVolGroup: $ sudo vgcreate -M 2 -s 1m -c y DMRVolGroup /dev/etherd/e2.3
  • edit /etc/lvm/lvm.conf across the cluster and set the locking type to the clustered (DLM-based) locking that clvmd uses.
  • make sure a properly configured /etc/cluster/cluster.conf is also in place across the cluster (DMRCluster, in my case); a minimal sketch appears after this list.
  • now, bring up the cluster: $ pdsh -w guest[1-3] sudo service cman start && pdsh -w guest[1-3] sudo service clvmd start
  • create the logical volume DMRVolume: $ sudo lvcreate -n DMRVolume -l 100%VG DMRVolGroup
  • create the GFS2 filesystem DMRFS: $ sudo gfs2_mkfs -j 4 -p lock_dlm -t DMRCluster:DMRFS /dev/DMRVolGroup/DMRVolume
    • note: the -j ("number of journals") argument must be at least the number of nodes that will mount the filesystem simultaneously.
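
For completeness, a minimal sketch of the two cluster-wide config files mentioned above. The locking_type value and the cluster.conf layout (guest1-3, no real fencing) are illustrative assumptions, not copies of my actual files; a real deployment needs proper fencing.

  /etc/lvm/lvm.conf (global section):

    locking_type = 3          # 3 = built-in clustered locking, i.e. what clvmd uses

  /etc/cluster/cluster.conf (minimal three-node example):

    <?xml version="1.0"?>
    <cluster name="DMRCluster" config_version="1">
      <clusternodes>
        <clusternode name="guest1" nodeid="1"/>
        <clusternode name="guest2" nodeid="2"/>
        <clusternode name="guest3" nodeid="3"/>
      </clusternodes>
      <fencedevices/>
    </cluster>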