Archive for September, 2019
Getting this? [root@mdskvm-p01 ~]# gluster volume delete mdsgv01 Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y volume delete: mdsgv01: failed: Some of the peers are down [root@mdskvm-p01 ~]# [root@mdskvm-p01 ~]# gluster volume remove-brick mdsgv01 mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 force Remove-brick force will not migrate files from the removed bricks, so […]
September 28th, 2019 | Posted in NIX Posts | No Comments
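The usual way past the "Some of the peers are down" failure is to detach the unreachable peer before deleting the volume. A minimal sketch, assuming the volume name from the post and a hypothetical hostname for the dead peer:

```shell
# Identify which peer is down, then force-detach it so the delete
# can proceed (hostname below is a hypothetical placeholder).
gluster peer status
gluster peer detach dead-node.nix.mds.xyz force

# A volume must be stopped before it can be deleted.
gluster volume stop mdsgv01
gluster volume delete mdsgv01
```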
NOTE: Experimental steps. Use at your own discretion. Also note that ultimately, these steps didn't resolve the errors below, though they did successfully update the VDSM certs. I ended up blowing the cluster away (as I didn't have much on it anyway). Getting this with oVirt? VDSM mdskvm-p01.nix.mds.xyz command Get Host Capabilities failed: General […]
September 25th, 2019 | Posted in NIX Posts | No Comments
Getting this? /var/log/glusterfs/bricks/mnt-p01-d01-glusterv01.log [2019-09-25 10:53:37.847426] I [MSGID: 100030] [glusterfsd.c:2847:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 6.5 (args: /usr/sbin/glusterfsd -s mdskvm-p01.nix.mds.xyz --volfile-id mdsgv01.mdskvm-p01.nix.mds.xyz.mnt-p01-d01-glusterv01 -p /var/run/gluster/vols/mdsgv01/mdskvm-p01.nix.mds.xyz-mnt-p01-d01-glusterv01.pid -S /var/run/gluster/defbdb699838d53b.socket --brick-name /mnt/p01-d01/glusterv01 -l /var/log/glusterfs/bricks/mnt-p01-d01-glusterv01.log --xlator-option *-posix.glusterd-uuid=f7336db6-22b4-497d-8c2f-04c833a28546 --process-name brick --brick-port 49155 --xlator-option mdsgv01-server.listen-port=49155) [2019-09-25 10:53:37.848508] I [glusterfsd.c:2556:daemonize] 0-glusterfs: Pid of current running process is 23133 [2019-09-25 10:53:37.858381] I [socket.c:902:__socket_server_bind] 0-socket.glusterfsd: closing […]
September 25th, 2019 | Posted in NIX Posts | No Comments
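When a brick log ends abruptly like the one above, a quick first check is whether the brick process actually stayed up and which port it bound. A read-only sketch, using the volume name and log path from the post:

```shell
# Confirm brick online status and listen port for the volume.
gluster volume status mdsgv01

# Re-read the end of the brick log for whatever follows the startup banner.
tail -n 50 /var/log/glusterfs/bricks/mnt-p01-d01-glusterv01.log
```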
Getting this in a two-node cluster? volume set: failed: Quorum not met. Volume operation not allowed. If you can't afford a third node, you'll have to disable the quorum: [root@mdskvm-p01 glusterfs]# [root@mdskvm-p01 glusterfs]# gluster volume info Volume Name: mdsgv01 Type: Replicate Volume ID: f5b57076-dbd4-4d77-ae13-c1f3ee3adbe0 Status: Stopped Snapshot Count: 0 Number of Bricks: 1 x 2 […]
September 25th, 2019 | Posted in NIX Posts | No Comments
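For a two-node replica, relaxing both quorum settings is what lets volume operations proceed with one peer down. A sketch using the volume name above; note this trades away split-brain protection, so only do it where a third node really isn't an option:

```shell
# Disable server-side quorum (glusterd-level) and client-side quorum.
gluster volume set mdsgv01 cluster.server-quorum-type none
gluster volume set mdsgv01 cluster.quorum-type none
```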
Getting this? Mount failed. Please check the log file for more details. Do some checks: tail -f /var/log/messages /var/log/glusterfs/*.log to get this: Sep 23 21:37:21 mdskvm-p01 kernel: ovirtmgmt: received packet on bond0 with own address as source address (addr:78:e7:d1:8f:4d:26, vlan:0) ==> /var/log/glusterfs/g.log <== [2019-09-24 01:37:22.454768] I [MSGID: 100030] [glusterfsd.c:2511:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.12.15 […]
September 23rd, 2019 | Posted in NIX Posts | No Comments
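The checks above can run side by side while retrying the mount. A sketch using the server and volume names from the surrounding posts; the mount point is a hypothetical placeholder:

```shell
# In one terminal: watch the system log and all gluster logs.
tail -f /var/log/messages /var/log/glusterfs/*.log

# In another: retry the mount (mount point is hypothetical).
mount -t glusterfs mdskvm-p01.nix.mds.xyz:/mdsgv01 /mnt/g
```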
Looking around the oVirt configs to troubleshoot an issue earlier yielded no results. This is because oVirt manages the host networks through its UI instead: all automated and GUI-controlled. Some of the error messages we needed to troubleshoot: ovirtmgmt: received packet on bond0 with own address as source address (addr:78:e7:d1:8f:4d:26, vlan:0) bond0: option slaves: […]
September 22nd, 2019 | Posted in NIX Posts | No Comments
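Even though oVirt owns the network configuration through its UI, the kernel's view of the bond can still be inspected read-only from the shell, which helps narrow down messages like the ones above:

```shell
# Current bonding mode, slaves, and link state as the kernel sees them.
cat /proc/net/bonding/bond0

# Details of the bond and the ovirtmgmt bridge it feeds.
ip -d link show bond0
ip -d link show ovirtmgmt
```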
If you're not relying on Microsoft Edge as much as on Google Chrome, it can still be put to use for older web locations, especially the web interfaces of older hardware that are no longer compatible with Chrome. To change the user agent do the following: Start Microsoft Edge Press F12 to open […]
September 21st, 2019 | Posted in NIX Posts | No Comments
Getting the following errors from spark-shell or from listing out valid KMS keys? tom@mds.xyz@cm-r01en01:~] 🙂 $ hadoop key list 19/09/17 23:56:43 INFO util.KerberosName: No auth_to_local rules applied to tom@MDS.XYZ Cannot list keys for KeyProvider: org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider@e350b40 list [-provider <provider>] [-strict] [-metadata] [-help]: The list subcommand displays the keynames contained within a particular provider as configured in […]
September 18th, 2019 | Posted in NIX Posts | No Comments
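A quick way to separate Kerberos problems from KMS problems is to confirm the ticket first, then point `hadoop key list` at an explicit provider instead of the configured default. A sketch with a hypothetical KMS URL (the real one lives in your hadoop.security.key.provider.path setting):

```shell
# Confirm there is a valid Kerberos ticket for the calling user.
klist

# List keys against an explicit provider
# (hostname and port here are hypothetical placeholders).
hadoop key list -provider kms://https@kms-host.mds.xyz:16000/kms
```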
Getting this? 19/09/17 22:17:41 WARN lineage.LineageWriter: Lineage directory /var/log/spark/lineage doesn't exist or is not writable. Lineage for this application will be disabled. Resolve it by creating the folder: [root@cm-r01en01 spark]# ls -altri total 2244 335565630 drwxr-xr-x. 2 spark spark 6 Aug 17 21:25 stacks 67109037 drwxr-xr-x. 27 root root 4096 […]
September 18th, 2019 | Posted in NIX Posts | No Comments
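The fix the post names (creating the missing folder) can be sketched as follows, assuming the path from the warning and that the Spark services run as the spark user:

```shell
# Directory the Spark LineageWriter expects to write into.
LINEAGE_DIR=/var/log/spark/lineage

# Create it (idempotent) and hand it to the spark user.
mkdir -p "$LINEAGE_DIR"
chown spark:spark "$LINEAGE_DIR" || true   # no-op on hosts without a spark user
chmod 755 "$LINEAGE_DIR"
```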
Getting this? FATAL: remaining connection slots are reserved for non-replication superuser connections Fix that by updating the Patroni configuration like so: [root@psql01 log]# patronictl -c /etc/patroni.yml edit-config postgres --- +++ @@ -1,9 +1,10 @@ loop_wait: 10 maximum_lag_on_failover: 1048576 postgresql: + parameters: - max_connections: 256 + max_connections: 256 - max_replication_slots: 64 + max_replication_slots: […]
September 13th, 2019 | Posted in NIX Posts | No Comments
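The same change can also be applied without opening an editor; a sketch of the equivalent non-interactive patronictl invocation, assuming the cluster name and config path from the post:

```shell
# Set both parameters under postgresql.parameters in one shot.
patronictl -c /etc/patroni.yml edit-config postgres \
  -s postgresql.parameters.max_connections=256 \
  -s postgresql.parameters.max_replication_slots=64 \
  --force
```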