GFS2 Cluster in VMware


VMware

  • bought a copy of VMware Workstation 6, installed it on my T-43 Thinkpad "atro" (running openSuSE 10.2, 2GB of RAM).
  • made a new virtual machine: OS: Linux, Version: "Other Linux 2.6.x kernel", Networking: Bridged, Disk: 4GB, split into 2GB files, RAM: 256MB
  • installed Fedora 8 in it -- even X worked well with only 256MB of RAM(!) -- the guest is named "guest1"
  • yum-installed gfs2-utils and libvolume_id-devel (I also tried cman, cman-devel, openais, openais-devel, and lvm2-cluster, but even those packages were out of date relative to the stock Fedora kernel, and so are also too old for the pNFS kernels)
  • downloaded and installed device-mapper-1.02.22, openais-0.80.3, cluster-2.01.00, and lvm2-2.02.28 (a rough install sketch follows this list)
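
A rough sketch of the installs described above. The yum line uses the package names from the list; the source-build commands are my assumption (a generic configure/make flow), not the exact steps used here, and the four tarballs do not all share the same build system, so check each package's INSTALL file:

    [guest1] $ sudo yum install gfs2-utils libvolume_id-devel
    # each downloaded tarball is then built from source -- assumed generic flow,
    # shown here for LVM2; adjust per package:
    [guest1] $ tar xzf LVM2.2.02.28.tgz && cd LVM2.2.02.28
    [guest1] $ ./configure && make && sudo make install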

ATA over Ethernet (for guest cluster shared storage)

  • yum-installed AoE initiator (client) aoetools-18-1 on guest1
  • downloaded the AoE target (server) vblade-15.tgz (http://internap.dl.sourceforge.net/sourceforge/aoetools/vblade-15.tgz) and installed it on atro
  • I set aside a spare partition on atro to export as a block device over AoE:
    • [atro] $ sudo ln -s /dev/sda6 /dev/AoE
    • [atro] $ sudo vbladed 0 1 eth0 /dev/AoE (shelf 0, slot 1 -- which is why the device shows up as e0.1 below)
    • [guest1] $ sudo modprobe aoe
      • .. AoE discovers all exported devices on the LAN; mine was the only one, and it immediately appeared as /dev/etherd/e0.1. Mounting it "just worked"; props to AoE! (a quick verification sketch follows this list)
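
If the exported device doesn't appear right away, aoetools ships helpers to re-run and inspect discovery. A quick check, assuming the shelf/slot used above:

    [guest1] $ sudo aoe-discover     # re-broadcast AoE discovery on the guest's interfaces
    [guest1] $ aoe-stat              # list the AoE targets the initiator currently sees
    [guest1] $ ls /dev/etherd/       # the exported partition should appear as e0.1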

LVM and GFS2 setup

  • prep physical volume for LVM:
    • [guest1] $ sudo pvcreate -M 2 /dev/etherd/e0.1
  • create the volume group GuestVolGroup and add the whole AoE "device" to it:
    • [guest1] $ sudo vgcreate -M 2 -s 1m -c y GuestVolGroup /dev/etherd/e0.1
  • edit /etc/lvm/lvm.conf and make sure locking_type is set to clustered (DLM/clvmd) locking (see the lvm.conf sketch after this list)
  • before anything further can proceed, the cluster needs to be up and clvmd needs to be running on every node. So, in VMware I cloned guest1 twice: as guest2 and guest3.
  • edit /etc/cluster/cluster.conf, name the cluster GuestCluster, and set up the three nodes with manual (read: ignored) fencing (see the example cluster.conf after this list).
  • bring up the cluster:
    • $ pdsh -w guest[1-3] sudo service cman start && pdsh -w guest[1-3] sudo service clvmd start
  • create the logical volume GuestVolume and assign the full volume group to it:
    • [guest1] $ sudo lvcreate -n GuestVolume -l 100%VG GuestVolGroup
  • .. and make a GFS2 fs therein:
    • [guest1] $ sudo gfs2_mkfs -j 3 -p lock_dlm -t GuestCluster:GuestFS /dev/GuestVolGroup/GuestVolume
  • restart the daemons, then mount the filesystem on each node (sketched below), and your VMware GFS2 cluster should be good to go! :)
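
The lvm.conf change referenced above, sketched. The numeric value is my assumption: on the LVM2 releases I'm familiar with, locking_type = 3 selects the built-in clustered locking that talks to clvmd/DLM; double-check the comments in your own lvm.conf.

    # /etc/lvm/lvm.conf (excerpt) -- assumed setting for clvmd/DLM locking
    global {
        locking_type = 3    # built-in clustered locking; requires clvmd to be running
    }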
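
A minimal /etc/cluster/cluster.conf along the lines described above. The cluster name, node names, and manual fencing come from the text; the exact XML is a sketch of the usual cman layout, not the author's actual file.

    <?xml version="1.0"?>
    <cluster name="GuestCluster" config_version="1">
      <clusternodes>
        <clusternode name="guest1" nodeid="1" votes="1">
          <fence><method name="single"><device name="manual" nodename="guest1"/></method></fence>
        </clusternode>
        <clusternode name="guest2" nodeid="2" votes="1">
          <fence><method name="single"><device name="manual" nodename="guest2"/></method></fence>
        </clusternode>
        <clusternode name="guest3" nodeid="3" votes="1">
          <fence><method name="single"><device name="manual" nodename="guest3"/></method></fence>
        </clusternode>
      </clusternodes>
      <fencedevices>
        <fencedevice name="manual" agent="fence_manual"/>
      </fencedevices>
    </cluster>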
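
And the final mount, sketched for one node (the mountpoint /mnt/gfs2 is just an example; repeat on guest2 and guest3, or add a matching /etc/fstab entry):

    [guest1] $ sudo mkdir -p /mnt/gfs2
    [guest1] $ sudo mount -t gfs2 /dev/GuestVolGroup/GuestVolume /mnt/gfs2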