XCP-ng: Adding Plugins: GlusterFS
To create a plugin entry for XCP-ng 8.0.1, follow this procedure (we will use the GlusterFS plugin as the example):
Create the GlusterFS repo:
[19:40 xcpng02 sm]# cat /etc/yum.repos.d/gluster63.repo
[gluster63]
name=Gluster 6.3
baseurl=http://mirror.centos.org/centos/7/storage/x86_64/gluster-6/
gpgcheck=0
enabled=1
[19:41 xcpng02 sm]#
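As a quick sanity check, confirm that yum can see the new repo (the repo id matches the file above; output will vary):
yum repolist enabled | grep -i gluster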
Download the GlusterFS python driver:
https://github.com/vatesfr/glusterfs-driver
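One way to pull the driver down (assuming git is available in dom0; downloading the GitHub archive by hand works just as well):
git clone https://github.com/vatesfr/glusterfs-driver.git
cd glusterfs-driver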
Copy and compile the driver into its modules (not 100% sure if this step is still required; it appears the /etc/xapi.d/plugins/ directory is the one used right now):
cp GlusterFS.py /opt/xensource/sm/GlusterFSSR.py
chmod 755 /opt/xensource/sm/GlusterFSSR.py
python -O -m compileall /opt/xensource/sm/GlusterFSSR.py
python -m compileall /opt/xensource/sm/GlusterFSSR.py
ln -s /opt/xensource/sm/GlusterFSSR.py /opt/xensource/sm/GlusterFSSR
Copy the above modules over to /etc/xapi.d/plugins/.
cp /opt/xensource/sm/GlusterFSSR.py /opt/xensource/sm/GlusterFSSR.pyo /opt/xensource/sm/GlusterFSSR.pyc /etc/xapi.d/plugins/
ln -s /etc/xapi.d/plugins/GlusterFSSR.py /etc/xapi.d/plugins/GlusterFSSR
The end result should look like this in both folders:
[19:44 xcpng02 sm]# ls -altri GlusterFS*
1033328 -rwxr-xr-x 1 root root 10990 Jan 8 2018 GlusterFSSR.py
1033334 -rw-r--r-- 1 root root 11638 Oct 13 15:23 GlusterFSSR.pyo
1033336 -rw-r--r-- 1 root root 11698 Oct 13 18:33 GlusterFSSR.pyc
1033330 lrwxrwxrwx 1 root root 32 Oct 13 18:34 GlusterFSSR -> /opt/xensource/sm/GlusterFSSR.py
[19:44 xcpng02 sm]# ls -altri /etc/xapi.d/plugins/GlusterFS*
361692 -rwxr-xr-x 1 root root 10990 Oct 13 15:19 /etc/xapi.d/plugins/GlusterFS.py
361577 -rw-r--r-- 1 root root 11098 Oct 13 15:24 /etc/xapi.d/plugins/GlusterFS.pyc
361689 -rw-r--r-- 1 root root 11098 Oct 13 15:24 /etc/xapi.d/plugins/GlusterFS.pyo
[19:44 xcpng02 sm]#
Edit /opt/xensource/sm/cleanup.py and add the glusterfs tag to the file-type list in the normalizeType() function:
def normalizeType(type):
    if type in LVHDSR.SUBTYPES:
        type = SR.TYPE_LVHD
    if type in ["lvm", "lvmoiscsi", "lvmohba", "lvmofcoe"]:
        # temporary while LVHD is symlinked as LVM
        type = SR.TYPE_LVHD
    if type in ["ext", "ext4", "nfs", "ocfsoiscsi", "ocfsohba", "smb", "xfs", "glusterfs"]:
        type = SR.TYPE_FILE
    if not type in SR.TYPES:
        raise util.SMException("Unsupported SR type: %s" % type)
    return type
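If you prefer to script the change rather than edit the file by hand, a sed substitution along these lines should work (a sketch only; confirm the exact list text in your copy of cleanup.py first and keep the backup):
cp /opt/xensource/sm/cleanup.py /opt/xensource/sm/cleanup.py.bak
sed -i 's/"xfs"\]/"xfs", "glusterfs"]/' /opt/xensource/sm/cleanup.py
grep -n '"glusterfs"' /opt/xensource/sm/cleanup.py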
Recompile the cleanup.py code:
python -O -m compileall /opt/xensource/sm/cleanup.py
python -m compileall /opt/xensource/sm/cleanup.py
Update the following config file to add glusterfs to the list of plugins:
[19:59 xcpng02 sm]# grep -Ein gluster /etc/xapi.conf
171:sm-plugins=ext nfs iscsi lvmoiscsi dummy file hba rawhba udev iso lvm lvmohba lvmofcoe shm smb glusterfs
[19:59 xcpng02 sm]#
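This change can also be scripted by appending the plugin name to the sm-plugins line (a sketch; back up the file and re-check it with the grep above afterwards):
cp /etc/xapi.conf /etc/xapi.conf.bak
sed -i '/^sm-plugins=/ s/$/ glusterfs/' /etc/xapi.conf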
Do a very basic check; running the driver with no XML-RPC input should return a "Failed to parse the request" fault, which confirms it executes:
[09:29 xcpng02 plugins]# cd /opt/xensource/sm/
[19:59 xcpng02 sm]# ./GlusterFSSR
<?xml version='1.0'?>
<methodResponse>
<fault>
<value><struct>
<member>
<name>faultCode</name>
<value><int>143</int></value>
</member>
<member>
<name>faultString</name>
<value><string>Failed to parse the request</string></value>
</member>
</struct></value>
</fault>
</methodResponse>
[20:00 xcpng02 sm]#
Restart the tool stack on each host to ensure the new plugins are picked up:
# xe-toolstack-restart
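Once the toolstack is back up, you can ask xapi whether it now knows about the new SR type (a quick check; the exact output format differs between releases):
xe sm-list | grep -i gluster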
Install the GlusterFS packages:
yum install glusterfs.x86_64 glusterfs-server.x86_64 -y
Configure iptables to allow GlusterFS communication between hosts by ensuring lines like the following exist:
[10:15 xcpng01 ~]# cat /etc/sysconfig/iptables
# glusterfs
-A RH-Firewall-1-INPUT -d localhost -p tcp --dport 111 -j ACCEPT
-A RH-Firewall-1-INPUT -d localhost -p udp --dport 111 -j ACCEPT
-A RH-Firewall-1-INPUT -d localhost -p tcp --dport 24007:24020 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp --dport 24007:24008 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp --dport 49152:49170 -j ACCEPT
[10:15 xcpng01 ~]#
Restart iptables:
systemctl restart iptables
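To confirm the rules are active after the restart (chain name and ports as per the file above):
iptables -L RH-Firewall-1-INPUT -n | grep -E '111|24007|49152'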
If you don't have the above, you will likely get this message when attempting to peer probe:
# gluster peer probe xcpng04.nix.mds.xyz
peer probe: failed: Probe returned with Transport endpoint is not connected
Create the Gluster storage. On both nodes, create the brick folder and the corresponding LVs:
mkdir -p /bricks/0
Configure LVM. We will use /dev/sdb. On the first host, issue:
[10:37 xcpng03 sm]# fdisk -l|grep Disk
Disk /dev/sda: 146.8 GB, 146778685440 bytes, 286677120 sectors
Disk /dev/sdb: 4398.0 GB, 4398046511104 bytes, 8589934592 sectors
Disk /dev/sdc: 4398.0 GB, 4398046511104 bytes, 8589934592 sectors
#
[10:37 xcpng03 sm]# pvcreate /dev/sdb --config global{metadata_read_only=0}
[10:44 xcpng03 sm]# vgcreate xcpng03-gvg /dev/sdb --config global{metadata_read_only=0}
[10:44 xcpng03 sm]# lvcreate -L 3000G -n xcpng03-glv xcpng03-gvg --config global{metadata_read_only=0}
[10:45 xcpng03 sm]# yum install xfsprogs -y
[10:45 xcpng03 sm]# mkfs.xfs /dev/xcpng03-gvg/xcpng03-glv
And on the secondary node:
[10:54 xcpng04 sm]# fdisk -l|grep Disk
Disk /dev/sda: 146.8 GB, 146778685440 bytes, 286677120 sectors
Disk /dev/sdb: 4398.0 GB, 4398046511104 bytes, 8589934592 sectors
Disk /dev/sdc: 4398.0 GB, 4398046511104 bytes, 8589934592 sectors
[10:54 xcpng04 sm]#
[10:54 xcpng04 sm]# pvcreate /dev/sdb --config global{metadata_read_only=0}
[10:54 xcpng04 sm]# vgcreate xcpng04-gvg /dev/sdb --config global{metadata_read_only=0}
[10:54 xcpng04 sm]# lvcreate -L 3000G -n xcpng04-glv xcpng04-gvg --config global{metadata_read_only=0}
[10:54 xcpng04 sm]# yum install xfsprogs -y
[10:54 xcpng04 sm]# mkfs.xfs /dev/xcpng04-gvg/xcpng04-glv
Ensure /etc/fstab is configured to mount the volumes on both nodes:
[10:58 xcpng03 sm]# cat /etc/fstab |grep xcpng03
/dev/xcpng03-gvg/xcpng03-glv /bricks/0 xfs defaults 0 0
[10:58 xcpng04 sm]# cat /etc/fstab |grep -Ei xcpng04
/dev/xcpng04-gvg/xcpng04-glv /bricks/0 xfs defaults 0 0
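With the fstab entries in place, mount the brick filesystem on each node and make sure it shows up (paths and devices as created above):
mount /bricks/0
df -h /bricks/0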
On one node, issue the following (xcpng03 in our case):
systemctl enable glusterd
systemctl start glusterd
gluster peer probe xcpng04.nix.mds.xyz
gluster volume create xcpnggv-c01 replica 2 xcpng03.nix.mds.xyz:/bricks/0/xcpnggv-c01 xcpng04.nix.mds.xyz:/bricks/0/xcpnggv-c01
gluster volume start xcpnggv-c01
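Before creating the SR, it is worth confirming the peer relationship and the volume details from either node:
gluster peer status
gluster volume info xcpnggv-c01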
Execute the command to create the GlusterFS SR / storage. In this example the mount failed:
[19:27 xcpng01 ~]# xe sr-create content-type=user type=glusterfs name-label="xcpnggv" shared=true device-config:server=192.168.0.91:/xcpnggv device-config:backupservers=192.168.0.90 device-config:fetchattempts=2
Error code: SR_BACKEND_FAILURE_111
Error parameters: , GlusterFS mount error [opterr=mount failed with return code 1],
[20:01 xcpng01 ~]#
If you get the above error, check the GlusterFS logs:
xcpnggv.log:[2019-10-14 03:03:45.055412] E [fuse-bridge.c:5211:fuse_first_lookup] 0-fuse: first lookup on root failed (Transport endpoint is not connected)
xcpnggv.log:[2019-10-14 03:04:53.449937] E [MSGID: 114058] [client-handshake.c:1449:client_query_portmap_cbk] 0-xcpnggv-client-0: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
And ensure your GlusterFS volume is started:
[23:05 xcpng01 glusterfs]# gluster volume status
Volume xcpnggv is not started
[23:06 xcpng01 glusterfs]# gluster volume start xcpnggv
volume start: xcpnggv: success
[23:06 xcpng01 glusterfs]#
[23:06 xcpng01 glusterfs]#
[23:06 xcpng01 glusterfs]# gluster volume status
Status of volume: xcpnggv
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 192.168.0.90:/bricks/0/xcpnggv01 49152 0 Y 19738
Brick 192.168.0.91:/bricks/0/xcpnggv01 49152 0 Y 21765
Self-heal Daemon on localhost N/A N/A Y 21786
Self-heal Daemon on 192.168.0.90 N/A N/A Y 19766
Task Status of Volume xcpnggv
------------------------------------------------------------------------------
There are no active volume tasks
[23:06 xcpng01 glusterfs]#
And an example from another host set:
[11:29 xcpng03 sm]# gluster volume status
Status of volume: xcpnggv-c01
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick xcpng03.nix.mds.xyz:/bricks/0/xcpnggv
-c01 49152 0 Y 16988
Brick xcpng04.nix.mds.xyz:/bricks/0/xcpnggv
-c01 49152 0 Y 14147
Self-heal Daemon on localhost N/A N/A Y 17009
Self-heal Daemon on xcpng04.nix.mds.xyz N/A N/A Y 14168
Task Status of Volume xcpnggv-c01
------------------------------------------------------------------------------
There are no active volume tasks
[11:29 xcpng03 sm]# gluster volume info
Volume Name: xcpnggv-c01
Type: Replicate
Volume ID: b5e36aeb-b789-4fa7-8e1a-f75f49779e5c
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: xcpng03.nix.mds.xyz:/bricks/0/xcpnggv-c01
Brick2: xcpng04.nix.mds.xyz:/bricks/0/xcpnggv-c01
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
[11:29 xcpng03 sm]#
Another reason for the above failure is that you specified a nonexistent Gluster volume:
[11:19 xcpng03 sm]# xe sr-create content-type=user type=glusterfs name-label="xcpnggv-c01" shared=true device-config:server=xcpng03.nix.mds.xyz:/xcpnggv device-config:backupservers=xcpng04.nix.mds.xyz device-config:fetchattempts=2
Error code: SR_BACKEND_FAILURE_111
Error parameters: , GlusterFS mount error [opterr=mount failed with return code 1],
Specify the correct one:
[11:26 xcpng03 sm]# xe sr-create content-type=user type=glusterfs name-label="xcpnggv-c01" shared=true device-config:server=xcpng03.nix.mds.xyz:/xcpnggv-c01 device-config:backupservers=xcpng04.nix.mds.xyz device-config:fetchattempts=2
293c695e-fbe3-faab-02c3-df909796c287
Now the command to add the SR should work:
[23:06 xcpng01 plugins]# xe sr-create content-type=user type=GlusterFS name-label="xcpnggv" shared=true device-config:server=192.168.0.91:/xcpnggv device-config:backupservers=192.168.0.90 device-config:fetchattempts=2
7c052073-d974-89d0-ef65-f8e898826a4c
[23:06 xcpng01 plugins]#
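To confirm the SR is visible and shared across the pool, something like the following can be used (name-label as per your own sr-create call):
xe sr-list name-label=xcpnggv params=uuid,name-label,type,shared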
To see which SR types you're allowed to use, let xe CLI auto-completion tell you:
This is related to how SMAPIv1 works. It was created circa 2008, and it's far from flexible. The only thing you can do to list the available drivers is to use xe CLI auto-completion during the sr-create command, on type=. This should list all the drivers.
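For example, pressing Tab twice after type= lists the SR types registered on that host (the name-label below is just a placeholder; the list will vary with the plugins installed):
# press Tab twice after type= to list the available SR types
xe sr-create name-label=test type=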
This will be far easier with SMAPIv3, which is designed to have "drivers" added dynamically.
REF: https://github.com/xcp-ng/xcp/issues/290#issuecomment-541514462