GlusterFS: Configuration and Setup w/ NFS-Ganesha for an HA NFS Cluster (Quick Start Guide)

This is a much shorter version of our earlier troubleshooting article on NFS-Ganesha, meant as a quick-start guide for those who just want to get this server up and running quickly.  The point of High Availability is that the best-implemented HA solutions never let the client notice an outage.  It's not the client's job to put up with the fallout of a failure; it's the sysadmin's job to ensure they never have to.  In this configuration, however, we will use a three-node Gluster cluster.  In short, we'll be using the following technologies to set up an HA configuration:

  • GlusterFS
  • NFS Ganesha
  • CentOS 7 
  • HAPROXY
  • keepalived
  • firewalld
  • selinux

Here's a summary of the configuration for this whole setup.  Each block below lists the hosts it applies to, followed by the commands or settings and a short description.

nfs01 / nfs02 / nfs03

Create and reserve some IPs for your hosts.  We are using the FreeIPA project to provide DNS and Kerberos functionality here:

192.168.0.80 nfs-c01 (nfs01, nfs02, nfs03)  VIP DNS Entry

192.168.0.131 nfs01
192.168.0.119 nfs02
192.168.0.125 nfs03

Add the hosts to your DNS server for a clean setup.  Alternatively, add them to /etc/hosts (ugly).
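If you do go the /etc/hosts route, the entries would look roughly like this.  This is only a sketch built from the addresses above; the nix.mine.dom domain matches the hostnames used later in this guide, so adjust it to your environment:

192.168.0.80    nfs-c01.nix.mine.dom   nfs-c01
192.168.0.131   nfs01.nix.mine.dom     nfs01
192.168.0.119   nfs02.nix.mine.dom     nfs02
192.168.0.125   nfs03.nix.mine.dom     nfs03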
nfs01 / nfs02 / nfs03

wget https://github.com/nfs-ganesha/nfs-ganesha/archive/V2.6-.0.tar.gz

[root@nfs01 ~]# ganesha.nfsd -v
NFS-Ganesha Release = V2.6.0
nfs-ganesha compiled on Feb 20 2018 at 08:55:23
Release comment = GANESHA file server is 64 bits compliant and supports NFS v3,4.0,4.1 (pNFS) and 9P
Git HEAD = 97867975b2ee69d475876e222c439b1bc9764a78
Git Describe = V2.6-.0-0-g9786797
[root@nfs01 ~]#

DETAILED INSTRUCTIONS:

https://github.com/nfs-ganesha/nfs-ganesha/wiki/Compiling

https://github.com/nfs-ganesha/nfs-ganesha/wiki/GLUSTER
https://github.com/nfs-ganesha/nfs-ganesha/wiki/XFSLUSTRE

PACKAGES:

yum install glusterfs-api-devel.x86_64
yum install xfsprogs-devel.x86_64
yum install xfsprogs.x86_64
Additional packages and versions used at the time:

xfsdump-3.1.4-1.el7.x86_64
libguestfs-xfs-1.36.3-6.el7_4.3.x86_64
libntirpc-devel-1.5.4-1.el7.x86_64
libntirpc-1.5.4-1.el7.x86_64
libnfsidmap-devel-0.25-17.el7.x86_64
jemalloc-devel-3.6.0-1.el7.x86_64

COMMANDS

git clone https://github.com/nfs-ganesha/nfs-ganesha.git
cd nfs-ganesha;
git checkout V2.6-stable

git submodule update --init --recursive
yum install gcc-c++
yum install cmake

ccmake /root/ganesha/nfs-ganesha/src/
# Press the c, e, c, g keys to create and generate the config and make files.
make
make install
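As an alternative to the interactive ccmake session, a plain cmake invocation can generate the same build files non-interactively.  This is a sketch assuming the source path used above; the USE_FSAL_GLUSTER switch is described on the GLUSTER wiki page linked earlier:

# non-interactive out-of-tree build (sketch)
mkdir -p /root/build && cd /root/build
cmake -DUSE_FSAL_GLUSTER=ON /root/ganesha/nfs-ganesha/src/
make
make install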

Compile and build NFS-Ganesha 2.6.0+ from source.  (At the time of writing the RPM packages did not work.)  Install the listed packages before compiling as well.

nfs01 / nfs02 / nfs03

Add a secondary disk, such as /dev/sdb, to each VM for the shared GlusterFS storage.

nfs01 / nfs02 / nfs03

Create the FS on the new disk and mount it:

mkfs.xfs /dev/sdb
mkdir -p /bricks/0
mount /dev/sdb /bricks/0

yum install centos-release-gluster
systemctl enable glusterd.service
yum -y install glusterfs glusterfs-fuse glusterfs-server glusterfs-api glusterfs-cli
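Before creating the volume from nfs01, glusterd must be running and the peers must be in the trusted storage pool.  A minimal sketch, run from nfs01 (nfs03 is probed later, when its brick is added):

systemctl start glusterd.service
gluster peer probe nfs02        # build the trusted pool from nfs01
gluster peer status             # both peers should show State: Peer in Cluster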


On nfs01 ONLY:

gluster volume create gv01 replica 2 nfs01:/bricks/0/gv01 nfs02:/bricks/0/gv01

gluster volume info
gluster volume status

Add subsequent bricks:

(from an existing cluster member) gluster peer probe nfs03
gluster volume add-brick gv01 replica 3 nfs03:/bricks/0/gv01

Mount the storage locally:

systemctl disable autofs
mkdir /n

Example:

[root@nfs01 ~]# mount -t glusterfs nfs01:/gv01 /n
[root@nfs02 ~]# mount -t glusterfs nfs02:/gv01 /n
[root@nfs03 ~]# mount -t glusterfs nfs03:/gv01 /n
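To make the brick and volume mounts persist across reboots, add matching /etc/fstab entries on each node.  Shown here for nfs01, as verified later in this guide; use the local hostname on nfs02 and nfs03.  (Adding _netdev to the glusterfs line is a common precaution so the mount waits for networking, though the setup below uses plain defaults.)

/dev/sdb     /bricks/0   xfs        defaults   0 0
nfs01:/gv01  /n          glusterfs  defaults   0 0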

Ensure the following options are set on the gluster volume:

[root@nfs01 glusterfs]# gluster volume set gv01 cluster.quorum-type auto
volume set: success
[root@nfs01 glusterfs]# gluster volume set gv01 cluster.server-quorum-type server
volume set: success

Here is an example Gluster volume configuration we used:

cluster.server-quorum-type: server
cluster.quorum-type: auto
server.event-threads: 8
client.event-threads: 8
performance.readdir-ahead: on
performance.write-behind-window-size: 8MB
performance.io-thread-count: 16
performance.cache-size: 1GB
nfs.trusted-sync: on
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet

 

Configure the GlusterFS filesystem using the above.

nfs01 / nfs02 / nfs03

PACKAGES:
yum install haproxy     # ( 1.5.18-6.el7.x86_64 used in this case )

/etc/haproxy/haproxy.cfg

global
    log         127.0.0.1 local2
    stats       socket /var/run/haproxy.sock mode 0600 level admin
    # stats     socket /var/lib/haproxy/stats
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    debug

defaults
    mode                    tcp
    log                     global
    option                  dontlognull
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend nfs-in
    bind nfs-c01:2049
    mode tcp
    option tcplog
    default_backend             nfs-back


backend nfs-back
    balance     roundrobin
    server      nfs01.nix.mine.dom    nfs01.nix.mine.dom:2049 check
    server      nfs02.nix.mine.dom    nfs02.nix.mine.dom:2049 check

    server      nfs03.nix.mine.dom    nfs03.nix.mine.dom:2049 check
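Once the configuration is in place on each node, it may help to validate it and enable the service.  This is a quick sketch; note that nodes not currently holding the VIP can only bind nfs-c01:2049 once the net.ipv4.ip_nonlocal_bind setting from the next step is applied:

haproxy -c -f /etc/haproxy/haproxy.cfg    # syntax check; should report the configuration file is valid
systemctl enable haproxy
systemctl restart haproxy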

Install and configure HAProxy.  (A great source helped with this part.)

nfs01 / nfs02 / nfs03

# echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf
# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
# sysctl -p
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
Turn on kernel parameters.  These allow HAProxy and keepalived below to work with the floating VIP (binding to an address the node does not yet hold).
nfs01 / nfs02 / nfs03 

PACKAGES:

yum install keepalived    # ( Used 1.3.5-1.el7.x86_64 in this case )

NFS01:

vrrp_script chk_haproxy {
  script "killall -0 haproxy"           # check the haproxy process
  interval 2                            # every 2 seconds
  weight 2                              # add 2 points if OK
}

vrrp_instance VI_1 {
  interface eth0                        # interface to monitor
  state MASTER                          # MASTER on haproxy1, BACKUP on haproxy2
  virtual_router_id 51
  priority 101                          # 101 on haproxy1, 100 on haproxy2
  virtual_ipaddress {
       192.168.0.80                        # virtual ip address
  }
  track_script {
       chk_haproxy
  }
}

NFS02:

vrrp_script chk_haproxy {
  script "killall -0 haproxy"           # check the haproxy process
  interval 2                            # every 2 seconds
  weight 2                              # add 2 points if OK
}

vrrp_instance VI_1 {
  interface eth0                        # interface to monitor
  state BACKUP                          # MASTER on haproxy1, BACKUP on haproxy2
  virtual_router_id 51
  priority 100                          # 101 on haproxy1, 100 on haproxy2
  virtual_ipaddress {
    192.168.0.80                        # virtual ip address
  }
  track_script {
    chk_haproxy
  }
}
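After writing the configuration on each node, enable keepalived and confirm which node currently owns the VIP.  A sketch using the interface and address from the configs above:

systemctl enable keepalived
systemctl restart keepalived
ip addr show eth0 | grep 192.168.0.80    # the VIP should appear on the current MASTER only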

Configure keepalived.  (A great source helped with this part as well.)

nfs01 / nfs02 / nfs03

This step can be made quicker by copying the XML definitions from one host to the others if you already have one defined:

/etc/firewalld/zones/dmz.xml
/etc/firewalld/zones/public.xml

Individual setup:

# cat public.bash

firewall-cmd --zone=public --permanent --add-port=2049/tcp
firewall-cmd --zone=public --permanent --add-port=111/tcp
firewall-cmd --zone=public --permanent --add-port=111/udp
firewall-cmd --zone=public --permanent --add-port=24007-24008/tcp
firewall-cmd --zone=public --permanent --add-port=49152/tcp
firewall-cmd --zone=public --permanent --add-port=38465-38469/tcp
firewall-cmd --zone=public --permanent --add-port=4501/tcp
firewall-cmd --zone=public --permanent --add-port=4501/udp
firewall-cmd --zone=public --permanent --add-port=20048/udp
firewall-cmd --zone=public --permanent --add-port=20048/tcp
firewall-cmd --reload

# cat dmz.bash

firewall-cmd --zone=dmz --permanent --add-port=2049/tcp
firewall-cmd --zone=dmz --permanent --add-port=111/tcp
firewall-cmd --zone=dmz --permanent --add-port=111/udp
firewall-cmd --zone=dmz --permanent --add-port=24007-24008/tcp
firewall-cmd --zone=dmz --permanent --add-port=49152/tcp
firewall-cmd --zone=dmz --permanent --add-port=38465-38469/tcp
firewall-cmd --zone=dmz --permanent --add-port=4501/tcp
firewall-cmd --zone=dmz --permanent --add-port=4501/udp
firewall-cmd --zone=dmz --permanent --add-port=20048/tcp
firewall-cmd --zone=dmz --permanent --add-port=20048/udp
firewall-cmd --reload

#

# On Both

firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -m pkttype --pkt-type multicast -j ACCEPT
firewall-cmd --reload

 

HANDY STUFF:

firewall-cmd --zone=dmz --list-all
firewall-cmd --zone=public --list-all
firewall-cmd --set-log-denied=all
firewall-cmd --permanent --add-service=haproxy
firewall-cmd --list-all
firewall-cmd --runtime-to-permanent

Configure firewalld.  DO NOT disable firewalld.
nfs01 / nfs02 / nfs03

Run any of the following commands, or a combination of them, against the deny entries in /var/log/audit/audit.log that may appear as you stop, start or install the above services:

METHOD 1:
grep AVC /var/log/audit/audit.log | audit2allow -M systemd-allow
semodule -i systemd-allow.pp

METHOD 2:
audit2allow -a
audit2allow -a -M ganesha_<NUM>_port
semodule -i ganesha_<NUM>_port.pp

USEFUL THINGS:

ausearch --interpret
aureport
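For example, to review recent AVC denials and build a policy module from them (a sketch; ganesha_local is just an arbitrary module name):

ausearch -m avc -ts recent
ausearch -m avc -ts recent | audit2allow -M ganesha_local   # module name is arbitrary
semodule -i ganesha_local.pp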

Configure SELinux.  Don't disable it.  It actually makes your host safer and is easy to work with using just these commands.
nfs01 / nfs02 / nfs03

NODE 1:

[root@nfs01 ~]# cat /etc/ganesha/ganesha.conf
###################################################
#
# EXPORT
#
# To function, all that is required is an EXPORT
#
# Define the absolute minimal export
#
###################################################


NFS_Core_Param {
        Bind_addr = 192.168.0.131;
        NFS_Port = 2049;
        MNT_Port = 20048;
        NLM_Port = 38468;
        Rquota_Port = 4501;
}

%include "/etc/ganesha/export.conf"
[root@nfs01 ~]# cat /etc/ganesha/export.conf
EXPORT{
    Export_Id = 1 ;                             # Export ID unique to each export
    Path = "/n";                                # Path of the volume to be exported. Eg: "/test_volume"

    FSAL {
        name = GLUSTER;
        hostname = "nfs01.nix.mine.dom";         # IP of one of the nodes in the trusted pool
        volume = "gv01";                        # Volume name. Eg: "test_volume"
    }

    Access_type = RW;                           # Access permissions
    Squash = No_root_squash;                    # To enable/disable root squashing
    Disable_ACL = FALSE;                        # To enable/disable ACL
    Pseudo = "/n";                              # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
    Protocols = "3","4";                        # NFS protocols supported
    Transports = "UDP","TCP" ;                  # Transport protocols supported
    SecType = "sys";                            # Security flavors supported
}
[root@nfs01 ~]#

NODE 2:

[root@nfs02 ~]# cd /etc/ganesha/
[root@nfs02 ganesha]# cat ganesha.conf
###################################################
#
# EXPORT
#
# To function, all that is required is an EXPORT
#
# Define the absolute minimal export
#
###################################################


NFS_Core_Param {
        Bind_addr=192.168.0.119;
        NFS_Port=2049;
        MNT_Port=20048;
        NLM_Port=38468;
        Rquota_Port=4501;
}

%include "/etc/ganesha/export.conf"
[root@nfs02 ganesha]# cat export.conf
EXPORT{
    Export_Id = 1 ;                             # Export ID unique to each export
    Path = "/n";                                # Path of the volume to be exported. Eg: "/test_volume"

    FSAL {
        name = GLUSTER;
        hostname = "nfs02.nix.mine.dom";         # IP of one of the nodes in the trusted pool
        volume = "gv01";                        # Volume name. Eg: "test_volume"
    }

    Access_type = RW;                           # Access permissions
    Squash = No_root_squash;                    # To enable/disable root squashing
    Disable_ACL = FALSE;                        # To enable/disable ACL
    Pseudo = "/n";                              # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
    Protocols = "3","4";                        # NFS protocols supported
    Transports = "UDP","TCP" ;                  # Transport protocols supported
    SecType = "sys";                            # Security flavors supported
}
[root@nfs02 ganesha]#

 

NODE 3:

[root@nfs03 ~]# cd /etc/ganesha/
[root@nfs03 ganesha]# cat ganesha.conf
###################################################
#
# EXPORT
#
# To function, all that is required is an EXPORT
#
# Define the absolute minimal export
#
###################################################


NFS_Core_Param {
        Bind_addr=192.168.0.125;
        NFS_Port=2049;
        MNT_Port=20048;
        NLM_Port=38468;
        Rquota_Port=4501;
}

%include "/etc/ganesha/export.conf"
[root@nfs03 ganesha]# cat export.conf
EXPORT{
    Export_Id = 1 ;                             # Export ID unique to each export
    Path = "/n";                                # Path of the volume to be exported. Eg: "/test_volume"

    FSAL {
        name = GLUSTER;
        hostname = "nfs03.nix.mine.dom";         # IP of one of the nodes in the trusted pool
        volume = "gv01";                        # Volume name. Eg: "test_volume"
    }

    Access_type = RW;                           # Access permissions
    Squash = No_root_squash;                    # To enable/disable root squashing
    Disable_ACL = FALSE;                        # To enable/disable ACL
    Pseudo = "/n";                              # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
    Protocols = "3","4";                        # NFS protocols supported
    Transports = "UDP","TCP" ;                  # Transport protocols supported
    SecType = "sys";                            # Security flavors supported
}
[root@nfs03 ganesha]#

STARTUP:

systemctl start nfs-ganesha
Or, if you did not extract the startup scripts (covered in the next step), start it manually: /usr/bin/ganesha.nfsd -L /var/log/ganesha/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT
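Either way, confirm Ganesha is up and listening on the ports defined in ganesha.conf.  A quick sketch; the fuller verification appears in the next step:

systemctl status nfs-ganesha        # only if the systemd unit from the next step is installed
netstat -pnlt | grep ganesha.nfsd   # expect 2049, 20048, 38468 and 4501 on this node's bind address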

 

Configure NFS Ganesha
nfs01 / nfs02 / nfs03

 

[root@nfs01 ~]# cat /etc/fstab|grep -Ei "brick|gv01"
/dev/sdb /bricks/0                              xfs     defaults        0 0
nfs01:/gv01 /n                                  glusterfs defaults      0 0
[root@nfs01 ~]#

[root@nfs01 ~]# mount|grep -Ei "brick|gv01"
/dev/sdb on /bricks/0 type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
nfs01:/gv01 on /n type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[root@nfs01 ~]#

 

[root@nfs01 ~]# ps -ef|grep -Ei "haproxy|keepalived|ganesha"; netstat -pnlt|grep -Ei "haproxy|ganesha|keepalived"
root      1402     1  0 00:59 ?        00:00:00 /usr/sbin/keepalived -D
root      1403  1402  0 00:59 ?        00:00:00 /usr/sbin/keepalived -D
root      1404  1402  0 00:59 ?        00:00:02 /usr/sbin/keepalived -D
root     13087     1  0 01:02 ?        00:00:00 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
haproxy  13088 13087  0 01:02 ?        00:00:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
haproxy  13089 13088  0 01:02 ?        00:00:01 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
root     13129     1 15 01:02 ?        00:13:11 /usr/bin/ganesha.nfsd -L /var/log/ganesha/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT
root     19742 15633  0 02:30 pts/2    00:00:00 grep --color=auto -Ei haproxy|keepalived|ganesha
tcp        0      0 192.168.0.80:2049       0.0.0.0:*               LISTEN      13089/haproxy
tcp6       0      0 192.168.0.131:20048     :::*                    LISTEN      13129/ganesha.nfsd
tcp6       0      0 :::564                  :::*                    LISTEN      13129/ganesha.nfsd
tcp6       0      0 192.168.0.131:4501      :::*                    LISTEN      13129/ganesha.nfsd
tcp6       0      0 192.168.0.131:2049      :::*                    LISTEN      13129/ganesha.nfsd
tcp6       0      0 192.168.0.131:38468     :::*                    LISTEN      13129/ganesha.nfsd
[root@nfs01 ~]#

 

Ensure mounts are done and everything is started up.
nfs01 / nfs02 / nfs03

yumdownloader nfs-ganesha.x86_64
rpm2cpio nfs-ganesha-2.5.5-1.el7.x86_64.rpm | cpio -idmv ./usr/lib/systemd/system/nfs-ganesha-lock.service
rpm2cpio nfs-ganesha-2.5.5-1.el7.x86_64.rpm | cpio -idmv ./usr/lib/systemd/system/nfs-ganesha.service
rpm2cpio nfs-ganesha-2.5.5-1.el7.x86_64.rpm | cpio -idmv ./usr/lib/systemd/system/nfs-ganesha-config.service
rpm2cpio nfs-ganesha-2.5.5-1.el7.x86_64.rpm | cpio -idmv ./usr/libexec/ganesha/nfs-ganesha-config.sh

Copy the extracted files to the same paths under / (instead of ./):
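In other words, roughly (a sketch assuming you ran rpm2cpio in the directory the files were extracted into):

cp ./usr/lib/systemd/system/nfs-ganesha.service /usr/lib/systemd/system/
cp ./usr/lib/systemd/system/nfs-ganesha-lock.service /usr/lib/systemd/system/
cp ./usr/lib/systemd/system/nfs-ganesha-config.service /usr/lib/systemd/system/
mkdir -p /usr/libexec/ganesha
cp ./usr/libexec/ganesha/nfs-ganesha-config.sh /usr/libexec/ganesha/
systemctl daemon-reload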

systemctl enable nfs-ganesha.service
systemctl status nfs-ganesha.service

Since you compiled from source, you don't have the nice startup scripts.  To get them from an existing Ganesha RPM, do the above, then use systemctl to stop and start nfs-ganesha as you would any other service.
 
ANY

Enable dumps:

gluster volume set gv01 server.statedump-path /var/log/glusterfs/
gluster volume statedump gv01
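The resulting dump files land in the statedump path set above; to locate them:

ls -l /var/log/glusterfs/    # statedump files are written here, typically named *.dump.<timestamp>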

 

Enable state dumps for issue isolation.

TESTING

Now let's do some checks on our NFS HA.  Mount the share using the VIP from a client then create a test file:

[root@ipaclient01 /]# mount -t nfs4 nfs-c01:/n /n
[root@ipaclient01 n]# echo -ne "Hacked It.  Gluster, NFS Ganesha, HAPROXY, keepalived scalable NFS server." > some-people-find-this-awesome.txt

[root@ipaclient01 n]# mount|grep nfs4
nfs-c01:/n on /n type nfs4 (rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.236,local_lock=none,addr=192.168.0.80)
[root@ipaclient01 n]#
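To make the failover tests below more convincing, you can leave a trivial writer running on the client while nodes are rebooted; any stall or error shows up immediately.  This is a hypothetical helper, not part of the original test:

# run on the client against the NFS mount; Ctrl-C to stop
while true; do
    date >> /n/ha-test.log || echo "WRITE FAILED at $(date)"
    sleep 1
done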

 

Then check each brick to see if the file was replicated:

[root@nfs01 n]# cat /bricks/0/gv01/some-people-find-this-awesome.txt
Hacked It.  Gluster, NFS Ganesha, HAPROXY, keepalived scalable NFS server.
[root@nfs01 n]# mount|grep -Ei gv01
nfs01:/gv01 on /n type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[root@nfs01 n]#

[root@nfs02 n]# cat /bricks/0/gv01/some-people-find-this-awesome.txt
Hacked It.  Gluster, NFS Ganesha, HAPROXY, keepalived scalable NFS server.
[root@nfs02 n]# mount|grep -Ei gv01
nfs02:/gv01 on /n type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[root@nfs02 n]#

Good!  Now let's hard-shutdown one node, nfs01, the primary node.  The expected behaviour is failover to nfs02, and once we bring nfs01 back, the file should be replicated to it.  While we do this, the client ipaclient01 should not lose its connection to the NFS mount via the VIP.  Here are the results:

[root@nfs02 n]# ps -ef|grep -Ei "haproxy|ganesha|keepalived"
root     12245     1  0 Feb19 ?        00:00:03 /usr/sbin/keepalived -D
root     12246 12245  0 Feb19 ?        00:00:03 /usr/sbin/keepalived -D
root     12247 12245  0 Feb19 ?        00:00:41 /usr/sbin/keepalived -D
root     12409     1 16 Feb20 ?        00:13:05 /usr/bin/ganesha.nfsd -L /var/log/ganesha/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT
root     17892     1  0 00:37 ?        00:00:00 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
haproxy  17893 17892  0 00:37 ?        00:00:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
haproxy  17894 17893  0 00:37 ?        00:00:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
root     17918 21084  0 00:38 pts/0    00:00:00 grep --color=auto -Ei haproxy|ganesha|keepalived
[root@nfs02 n]# ps -ef|grep -Ei "haproxy|ganesha|keepalived"; netstat -pnlt|grep -Ei ganesha; netstat -pnlt|grep -Ei haproxy; netstat -pnlt|grep -Ei keepalived
root     12245     1  0 Feb19 ?        00:00:03 /usr/sbin/keepalived -D
root     12246 12245  0 Feb19 ?        00:00:03 /usr/sbin/keepalived -D
root     12247 12245  0 Feb19 ?        00:00:41 /usr/sbin/keepalived -D
root     12409     1 16 Feb20 ?        00:13:09 /usr/bin/ganesha.nfsd -L /var/log/ganesha/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT
root     17892     1  0 00:37 ?        00:00:00 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
haproxy  17893 17892  0 00:37 ?        00:00:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
haproxy  17894 17893  0 00:37 ?        00:00:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
root     17947 21084  0 00:38 pts/0    00:00:00 grep --color=auto -Ei haproxy|ganesha|keepalived
tcp6       0      0 192.168.0.119:20048     :::*                    LISTEN      12409/ganesha.nfsd
tcp6       0      0 :::564                  :::*                    LISTEN      12409/ganesha.nfsd
tcp6       0      0 192.168.0.119:4501      :::*                    LISTEN      12409/ganesha.nfsd
tcp6       0      0 192.168.0.119:2049      :::*                    LISTEN      12409/ganesha.nfsd
tcp6       0      0 192.168.0.119:38468     :::*                    LISTEN      12409/ganesha.nfsd
tcp        0      0 192.168.0.80:2049       0.0.0.0:*               LISTEN      17894/haproxy
[root@nfs02 n]#
[root@nfs02 n]#
[root@nfs02 n]#
[root@nfs02 n]# ssh nfs-c01
Password:
Last login: Wed Feb 21 00:37:28 2018 from nfs-c01.nix.mine.dom
[root@nfs02 ~]# logout
Connection to nfs-c01 closed.
[root@nfs02 n]#

From the client we can still see all the files (seamless, with no interruption to the NFS service).  As a bonus, while starting this first test we noticed that HAProxy was offline on nfs02.  Listing the files from the client appeared to hang, but it still responded and listed the files right after we started HAProxy on nfs02:

[root@ipaclient01 n]# ls -altri some-people-find-this-awesome.txt
11782527620043058273 -rw-r--r--. 1 nobody nobody 74 Feb 21 00:26 some-people-find-this-awesome.txt
[root@ipaclient01 n]# df -h .
Filesystem      Size  Used Avail Use% Mounted on
nfs-c01:/n      128G   43M  128G   1% /n
[root@ipaclient01 n]# ssh nfs-c01
Password:
Last login: Wed Feb 21 00:41:06 2018 from nfs-c01.nix.mine.dom
[root@nfs02 ~]#

Checking the gluster volume on nfs02:

[root@nfs02 n]# gluster volume status
Status of volume: gv01
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick nfs02:/bricks/0/gv01                  49152     0          Y       16103
Self-heal Daemon on localhost               N/A       N/A        Y       16094

Task Status of Volume gv01
------------------------------------------------------------------------------
There are no active volume tasks

[root@nfs02 n]#

Now let's bring back the first node and fail the second once nfs01 is up again.  As soon as we bring nfs01 back up, the VIP fails over to nfs01 without any hiccup or manual intervention on the client end:

[root@ipaclient01 n]# ls -altri
total 11
                 128 dr-xr-xr-x. 21 root   root   4096 Feb 18 22:24 ..
11782527620043058273 -rw-r--r--.  1 nobody nobody   74 Feb 21 00:26 some-people-find-this-awesome.txt
                   1 drwxr-xr-x.  3 nobody nobody 4096 Feb 21 00:26 .
[root@ipaclient01 n]#
[root@ipaclient01 n]#
[root@ipaclient01 n]#
[root@ipaclient01 n]# ssh nfs-c01
Password:
Last login: Wed Feb 21 00:59:56 2018
[root@nfs01 ~]#

So now let's fail the second node.  NFS still works:

[root@ipaclient01 ~]# ssh nfs-c01
Password:
Last login: Wed Feb 21 01:31:50 2018
[root@nfs01 ~]# logout
Connection to nfs-c01 closed.
[root@ipaclient01 ~]# cd /n
[root@ipaclient01 n]# ls -altri some-people-find-this-awesome.txt
11782527620043058273 -rw-r--r--. 1 nobody nobody 74 Feb 21 00:26 some-people-find-this-awesome.txt
[root@ipaclient01 n]# df -h .
Filesystem      Size  Used Avail Use% Mounted on
nfs-c01:/n      128G   43M  128G   1% /n
[root@ipaclient01 n]#

So we bring the second node back up.  And that concludes the configuration!  All works like a charm!

You can also check out our guest post for the same on loadbalancer.org!

Good Luck!

Cheers,
Tom K.
