volume start: mdsgv01: failed: Commit failed on localhost. Please check log file for details.

Getting this?

/var/log/glusterfs/bricks/mnt-p01-d01-glusterv01.log
[2019-09-25 10:53:37.847426] I [MSGID: 100030] [glusterfsd.c:2847:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 6.5 (args: /usr/sbin/glusterfsd -s mdskvm-p01.nix.mds.xyz --volfile-id mdsgv01.mdskvm-p01.nix.mds.xyz.mnt-p01-d01-glusterv01 -p /var/run/gluster/vols/mdsgv01/mdskvm-p01.nix.mds.xyz-mnt-p01-d01-glusterv01.pid -S /var/run/gluster/defbdb699838d53b.socket --brick-name /mnt/p01-d01/glusterv01 -l /var/log/glusterfs/bricks/mnt-p01-d01-glusterv01.log --xlator-option *-posix.glusterd-uuid=f7336db6-22b4-497d-8c2f-04c833a28546 --process-name brick --brick-port 49155 --xlator-option mdsgv01-server.listen-port=49155)
[2019-09-25 10:53:37.848508] I [glusterfsd.c:2556:daemonize] 0-glusterfs: Pid of current running process is 23133
[2019-09-25 10:53:37.858381] I [socket.c:902:__socket_server_bind] 0-socket.glusterfsd: closing (AF_UNIX) reuse check socket 9
[2019-09-25 10:53:37.865940] I [MSGID: 101190] [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0
[2019-09-25 10:53:37.866054] I [glusterfsd-mgmt.c:2443:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from remote-host: mdskvm-p01.nix.mds.xyz
[2019-09-25 10:53:37.866043] I [MSGID: 101190] [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2019-09-25 10:53:37.866083] I [glusterfsd-mgmt.c:2463:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
[2019-09-25 10:53:37.866454] W [glusterfsd.c:1570:cleanup_and_exit] (-->/lib64/libgfrpc.so.0(+0xf1d3) [0x7f9680ee91d3] -->/usr/sbin/glusterfsd(+0x12fef) [0x55ca25710fef] -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x55ca2570901b] ) 0-: received signum (1), shutting down
[2019-09-25 10:53:37.872399] I [socket.c:3754:socket_submit_outgoing_msg] 0-glusterfs: not connected (priv->connected = 0)
[2019-09-25 10:53:37.872445] W [rpc-clnt.c:1704:rpc_clnt_submit] 0-glusterfs: failed to submit rpc-request (unique: 0, XID: 0x2 Program: Gluster Portmap, ProgVers: 1, Proc: 5) to rpc-transport (glusterfs)
[2019-09-25 10:53:37.872534] W [glusterfsd.c:1570:cleanup_and_exit] (-->/lib64/libgfrpc.so.0(+0xf1d3) [0x7f9680ee91d3] -->/usr/sbin/glusterfsd(+0x12fef) [0x55ca25710fef] -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x55ca2570901b] ) 0-: received signum (1), shutting down
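The decisive lines here are "disconnected from remote-host" and "Exhausted all volfile servers": the brick process starts, cannot fetch its volfile from glusterd over the management port, and shuts itself down (signum 1). A quick, hedged filter to pull just the tell-tale lines out of a brick log (the path below is the one from this host; adjust for your brick name):

```shell
# Show only the lines that explain why the brick process exited.
log=/var/log/glusterfs/bricks/mnt-p01-d01-glusterv01.log
if [ -r "$log" ]; then
    grep -E 'disconnected from remote-host|Exhausted all volfile servers|cleanup_and_exit' "$log" | tail -n 10
else
    echo "no such log: $log"
fi
```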

which corresponds to this sequence in glusterd's own log:

    /var/log/glusterfs/glusterd.log
    [2019-09-25 05:17:26.615203] D [MSGID: 0] [glusterd-utils.c:1136:glusterd_resolve_brick] 0-management: Returning 0
    [2019-09-25 05:17:26.615555] D [MSGID: 0] [glusterd-mgmt.c:243:gd_mgmt_v3_pre_validate_fn] 0-management: OP = 5. Returning 0
    [2019-09-25 05:17:26.616271] D [MSGID: 0] [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume mdsgv01 found
    [2019-09-25 05:17:26.616305] D [MSGID: 0] [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0
    [2019-09-25 05:17:26.616327] D [MSGID: 0] [glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning 0
    [2019-09-25 05:17:26.617056] I [glusterd-utils.c:6312:glusterd_brick_start] 0-management: starting a fresh brick process for brick /mnt/p01-d01/glusterv01
    [2019-09-25 05:17:26.722717] E [MSGID: 106005] [glusterd-utils.c:6317:glusterd_brick_start] 0-management: Unable to start brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01
    [2019-09-25 05:17:26.722960] D [MSGID: 0] [glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning -107
    [2019-09-25 05:17:26.723006] E [MSGID: 106122] [glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume start commit failed.
    [2019-09-25 05:17:26.723027] D [MSGID: 0] [glusterd-mgmt.c:444:gd_mgmt_v3_commit_fn] 0-management: OP = 5. Returning -107
    [2019-09-25 05:17:26.723045] E [MSGID: 106122] [glusterd-mgmt.c:1696:glusterd_mgmt_v3_commit] 0-management: Commit failed for operation Start on local node
    [2019-09-25 05:17:26.723073] D [MSGID: 0] [glusterd-op-sm.c:5106:glusterd_op_modify_op_ctx] 0-management: op_ctx modification not required
    [2019-09-25 05:17:26.723141] E [MSGID: 106122] [glusterd-mgmt.c:2466:glusterd_mgmt_v3_initiate_all_phases] 0-management: Commit Op Failed
    [2019-09-25 05:17:26.723204] D [MSGID: 0] [glusterd-locks.c:797:glusterd_mgmt_v3_unlock] 0-management: Trying to release lock of vol mdsgv01 for f7336db6-22b4-497d-8c2f-04c833a28546 as mdsgv01_vol
    [2019-09-25 05:17:26.723239] D [MSGID: 0] [glusterd-locks.c:846:glusterd_mgmt_v3_unlock] 0-management: Lock for vol mdsgv01 successfully released
    [2019-09-25 05:17:26.723273] D [MSGID: 0] [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume mdsgv01 found
    [2019-09-25 05:17:26.723326] D [MSGID: 0] [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0
    [2019-09-25 05:17:26.723360] D [MSGID: 0] [glusterd-locks.c:464:glusterd_multiple_mgmt_v3_unlock] 0-management: Returning 0

    ==> /var/log/glusterfs/cmd_history.log <==
    [2019-09-25 05:17:26.723390]  : volume start mdsgv01 : FAILED : Commit failed on localhost. Please check log file for details.

    ==> /var/log/glusterfs/glusterd.log <==
    [2019-09-25 05:17:26.723479] D [MSGID: 0] [glusterd-rpc-ops.c:199:glusterd_op_send_cli_response] 0-management: Returning 0

all of which was triggered by this attempt to start the Gluster volume:

[root@mdskvm-p01 glusterfs]# gluster volume start mdsgv01
volume start: mdsgv01: failed: Commit failed on localhost. Please check log file for details.
[root@mdskvm-p01 glusterfs]#
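The CLI only says "Commit failed on localhost"; the actual reason is the first E-level entry in glusterd.log (here MSGID 106005, "Unable to start brick"). One hedged way to surface just the error lines instead of wading through the DEBUG noise:

```shell
# Show the most recent E-level (error) entries from glusterd's log.
log=/var/log/glusterfs/glusterd.log
if [ -r "$log" ]; then
    grep -E '\] E \[MSGID' "$log" | tail -n 5
else
    echo "no such log: $log"
fi
```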

You can fix it by adding the following option to /etc/glusterfs/glusterd.vol:

[root@mdskvm-p01 glusterfs]# cat /etc/glusterfs/glusterd.vol|grep -Ei transport.socket.listen-port
    option transport.socket.listen-port 24007
[root@mdskvm-p01 glusterfs]# cat /etc/glusterfs/glusterd.vol
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
    option ping-timeout 0
    option event-threads 1
    option rpc-auth-allow-insecure on
    option cluster.server-quorum-type none
    option cluster.quorum-type none
    # option cluster.server-quorum-type server
    # option cluster.quorum-type auto
    option server.event-threads 8
    option client.event-threads 8
    option performance.write-behind-window-size 8MB
    option performance.io-thread-count 16
    option performance.cache-size 1GB
    option nfs.trusted-sync on
    option storage.owner-uid 36
    option storage.owner-uid 36
    option cluster.data-self-heal-algorithm full
    option performance.low-prio-threads 32
    option features.shard-block-size 512MB
    option features.shard on
    option transport.socket.listen-port 24007
end-volume
[root@mdskvm-p01 glusterfs]#
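That edit can be scripted. A hedged sketch that inserts the option just before end-volume when it is missing, demonstrated here on a scratch copy so it is safe to run anywhere (point conf at /etc/glusterfs/glusterd.vol for real use):

```shell
# Demo on a temporary file; substitute conf=/etc/glusterfs/glusterd.vol for real use.
conf=$(mktemp)
cat > "$conf" <<'EOF'
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
end-volume
EOF

if ! grep -q 'transport.socket.listen-port' "$conf"; then
    # GNU sed: splice the option in immediately before the closing end-volume
    sed -i 's/^end-volume$/    option transport.socket.listen-port 24007\nend-volume/' "$conf"
fi
grep 'listen-port' "$conf"
rm -f "$conf"
```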

Then stop and start (not restart) the glusterd daemon.
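A sketch of the stop/start step plus a sanity check, assuming a systemd-managed unit named glusterd and 24007 as the management port (both are the usual defaults, but verify on your distribution):

```shell
# Stop, pause, then start -- deliberately not 'systemctl restart'.
systemctl stop glusterd 2>/dev/null  || echo "stop failed (no glusterd unit here?)"
sleep 2
systemctl start glusterd 2>/dev/null || echo "start failed (check journalctl -u glusterd)"

# Verify something is now answering on the management port.
if timeout 2 bash -c 'exec 3<>/dev/tcp/127.0.0.1/24007' 2>/dev/null; then
    echo "glusterd answering on 24007"
else
    echo "nothing on 24007 yet"
fi
```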

REF: https://bugzilla.redhat.com/show_bug.cgi?id=1702316

Cheers,
TK

  Copyright © 2003 - 2025 Tom Kacperski (microdevsys.com). All rights reserved.

This work is licensed under a Creative Commons Attribution 3.0 Unported License.

 
