

Archive for the 'NIX Posts' Category

volume start: mdsgv01: failed: Commit failed on localhost. Please check log file for details.

Getting this? From /var/log/glusterfs/bricks/mnt-p01-d01-glusterv01.log:

[2019-09-25 10:53:37.847426] I [MSGID: 100030] [glusterfsd.c:2847:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 6.5 (args: /usr/sbin/glusterfsd -s mdskvm-p01.nix.mds.xyz --volfile-id mdsgv01.mdskvm-p01.nix.mds.xyz.mnt-p01-d01-glusterv01 -p /var/run/gluster/vols/mdsgv01/mdskvm-p01.nix.mds.xyz-mnt-p01-d01-glusterv01.pid -S /var/run/gluster/defbdb699838d53b.socket --brick-name /mnt/p01-d01/glusterv01 -l /var/log/glusterfs/bricks/mnt-p01-d01-glusterv01.log --xlator-option *-posix.glusterd-uuid=f7336db6-22b4-497d-8c2f-04c833a28546 --process-name brick --brick-port 49155 --xlator-option mdsgv01-server.listen-port=49155)
[2019-09-25 10:53:37.848508] I [glusterfsd.c:2556:daemonize] 0-glusterfs: Pid of current running process is 23133
[2019-09-25 10:53:37.858381] I [socket.c:902:__socket_server_bind] 0-socket.glusterfsd: closing […]
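
As a first-pass triage, a sketch assuming the volume and brick names above (adjust to your own):

# Did the brick process come up, and what does the brick log say?
gluster volume status mdsgv01
tail -n 50 /var/log/glusterfs/bricks/mnt-p01-d01-glusterv01.log
# Verify the brick path actually exists, then restart the management daemon and retry:
ls -ld /mnt/p01-d01/glusterv01
systemctl restart glusterd
gluster volume start mdsgv01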

volume set: failed: Quorum not met. Volume operation not allowed.

Getting this in a two node cluster? volume set: failed: Quorum not met. Volume operation not allowed. If you can't afford a third node, you'll have to disable the quorum:

[root@mdskvm-p01 glusterfs]# gluster volume info
Volume Name: mdsgv01
Type: Replicate
Volume ID: f5b57076-dbd4-4d77-ae13-c1f3ee3adbe0
Status: Stopped
Snapshot Count: 0
Number of Bricks: 1 x 2 […]
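
A minimal sketch of disabling quorum on a two-node replica, assuming the volume name mdsgv01 above; note this trades away split-brain protection:

# Turn off server-side and client-side quorum enforcement:
gluster volume set mdsgv01 cluster.server-quorum-type none
gluster volume set mdsgv01 cluster.quorum-type none
# Then retry the operation that was refused, e.g.:
gluster volume start mdsgv01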

Mount failed. Please check the log file for more details.

Getting this? Mount failed. Please check the log file for more details. Do some checks: tail -f /var/log/messages /var/log/glusterfs/*.log to get this:

Sep 23 21:37:21 mdskvm-p01 kernel: ovirtmgmt: received packet on bond0 with own address as source address (addr:78:e7:d1:8f:4d:26, vlan:0)

==> /var/log/glusterfs/g.log <==
[2019-09-24 01:37:22.454768] I [MSGID: 100030] [glusterfsd.c:2511:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.12.15 […]
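
To reproduce the failure while watching those logs, something along these lines (a sketch; /mnt/g is a hypothetical mount point):

# Confirm the volume is started and its bricks are online first:
gluster volume status mdsgv01
# Mount with a debug log level so the client log captures the failure:
mount -t glusterfs -o log-level=DEBUG mdskvm-p01.nix.mds.xyz:/mdsgv01 /mnt/g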

oVirt: bond0: option slaves: invalid value (-eth1)

Looking around the oVirt configs to troubleshoot an issue earlier yielded no results. This is because oVirt manages the host networks through its UI instead; everything is automated and GUI-controlled. Some of the error messages we needed to troubleshoot:

ovirtmgmt: received packet on bond0 with own address as source address (addr:78:e7:d1:8f:4d:26, vlan:0)
bond0: option slaves: […]
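
Since oVirt (via VDSM) owns the bond definition, it helps to inspect the kernel's view rather than hand-editing ifcfg files; two read-only checks:

# Bonding mode, slaves, and link state as the kernel sees them:
cat /proc/net/bonding/bond0
ip -d link show bond0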

Change User Agent under Microsoft Edge

Even if you rely on Microsoft Edge less than on Google Chrome, it can still be useful for older sites that cannot be updated. This is especially true when accessing older hardware web interfaces that are no longer compatible with Chrome. To change the user agent, do the following: Start Microsoft Edge. Press F12 to open […]
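
For the legacy (EdgeHTML) F12 tools, the remaining steps go roughly like this; menu names may differ between builds: press F12 to open the Developer Tools, switch to the Emulation tab, and under Mode pick (or paste) a User agent string. The override only lasts while the Developer Tools window stays open.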

Executing command failed with the following exception: AuthorizationException: User:tom@MDS.XYZ not allowed to do 'GET_KEYS'

Getting the following errors from spark-shell or from listing out valid KMS keys?

tom@mds.xyz@cm-r01en01:~] 🙂 $ hadoop key list
19/09/17 23:56:43 INFO util.KerberosName: No auth_to_local rules applied to tom@MDS.XYZ
Cannot list keys for KeyProvider: org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider@e350b40
list [-provider ] [-strict] [-metadata] [-help]:

The list subcommand displays the keynames contained within a particular provider as configured in […]
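
On a stock Hadoop KMS that authorization check is driven by kms-acls.xml (Cloudera Manager edits the same ACLs under the Key Management Server service); a sketch of the relevant property, assuming user tom should be allowed:

<!-- kms-acls.xml: who may call getKeys / getKeysMetadata -->
<property>
  <name>hadoop.kms.acl.GET_KEYS</name>
  <value>tom supergroup</value>
</property>

The KMS hot-reloads this file when it changes, so retry hadoop key list after saving.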

WARN lineage.LineageWriter: Lineage directory /var/log/spark/lineage doesn’t exist or is not writable. Lineage for this application will be disabled.

Getting this? 19/09/17 22:17:41 WARN lineage.LineageWriter: Lineage directory /var/log/spark/lineage doesn't exist or is not writable. Lineage for this application will be disabled. Resolve it by creating the folder:

[root@cm-r01en01 spark]# ls -altri
total 2244
335565630 drwxr-xr-x.  2 spark spark       6 Aug 17 21:25 stacks
 67109037 drwxr-xr-x. 27 root  root     4096 […]
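
A minimal fix along those lines, assuming the spark:spark ownership visible in the listing:

# Create the lineage directory and hand it to the spark service account:
mkdir -p /var/log/spark/lineage
chown spark:spark /var/log/spark/lineage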

FATAL: remaining connection slots are reserved for non-replication superuser connections

Getting this? FATAL: remaining connection slots are reserved for non-replication superuser connections. Fix that by updating the Patroni configuration like so:

[root@psql01 log]# patronictl -c /etc/patroni.yml edit-config postgres
---
+++
@@ -1,9 +1,10 @@
 loop_wait: 10
 maximum_lag_on_failover: 1048576
 postgresql:
+  parameters:
-  max_connections: 256
+    max_connections: 256
-  max_replication_slots: 64
+    max_replication_slots: […]
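
After saving the edit, the new value still has to reach PostgreSQL; max_connections is not reloadable, so the members need a restart, which patronictl can drive (a sketch, assuming the cluster is named postgres as above):

patronictl -c /etc/patroni.yml restart postgres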

touch: cannot touch /atlas/atlassian/confluence/logs/catalina.out: Permission denied

Getting this?

[confluence@atlas02 logs]$ logout
[root@atlas02 atlassian]# systemctl status confluence.service -l
● confluence.service - LSB: Atlassian Confluence
   Loaded: loaded (/etc/rc.d/init.d/confluence; bad; vendor preset: disabled)
   Active: failed (Result: exit-code) since Tue 2019-09-10 22:07:18 EDT; 2min 5s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 11361 ExecStop=/etc/rc.d/init.d/confluence stop (code=exited, status=0/SUCCESS)
  Process: 11925 ExecStart=/etc/rc.d/init.d/confluence start (code=exited, […]
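
The usual culprit is log-directory ownership drifting to root after an upgrade or a manual start as root; a hedged fix using the path from the error:

# Give the service account its log directory back, then retry:
chown -R confluence:confluence /atlas/atlassian/confluence/logs
systemctl start confluence.service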

Application application_1567571625367_0006 failed 2 times due to AM Container for appattempt_1567571625367_0006_000002 exited with  exitCode: -1000

Getting this?

19/09/07 23:41:56 ERROR repl.Main: Failed to initialize Spark session.
org.apache.spark.SparkException: Application application_1567571625367_0006 failed 2 times due to AM Container for appattempt_1567571625367_0006_000002 exited with  exitCode: -1000
Failing this attempt.Diagnostics: [2019-09-07 23:41:54.934]Application application_1567571625367_0006 initialization failed (exitCode=255) with output: main : command provided 0
main : run as user is tom
main : requested yarn user […]
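
exitCode -1000 means the container localizer failed before any application code ran; two checks that usually narrow it down (paths vary by distribution):

# Pull the full diagnostics for the failed attempt:
yarn logs -applicationId application_1567571625367_0006
# On a secure cluster, container-executor settings (min.user.id, banned.users)
# are a frequent cause; inspect the config on the NodeManager:
cat /etc/hadoop/conf/container-executor.cfg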


  Copyright © 2003 - 2013 Tom Kacperski (microdevsys.com). All rights reserved.

This work is licensed under a Creative Commons Attribution 3.0 Unported License