

volume delete: VOLUME: failed: Some of the peers are down

Getting this?

[root@mdskvm-p01 ~]# gluster volume delete mdsgv01
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: mdsgv01: failed: Some of the peers are down
[root@mdskvm-p01 ~]#

[root@mdskvm-p01 ~]# gluster volume remove-brick mdsgv01  mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 force
Remove-brick force will not migrate files from the removed bricks, so they will no longer be available on the volume.
Do you want to continue? (y/n) y
volume remove-brick commit force: failed: Deleting all the bricks of the volume is not allowed
[root@mdskvm-p01 ~]#

[root@mdskvm-p01 ~]# gluster volume info

Volume Name: mdsgv01
Type: Distribute
Volume ID: f5b57076-dbd4-4d77-ae13-c1f3ee3adbe0
Status: Stopped
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01
Options Reconfigured:
diagnostics.client-log-level: DEBUG
diagnostics.brick-sys-log-level: INFO
diagnostics.brick-log-level: DEBUG
performance.readdir-ahead: on
server.allow-insecure: on
nfs.trusted-sync: on
performance.cache-size: 1GB
performance.io-thread-count: 16
performance.write-behind-window-size: 8MB
client.event-threads: 8
server.event-threads: 8
cluster.quorum-type: none
cluster.server-quorum-type: none
storage.owner-uid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
storage.owner-gid: 36
[root@mdskvm-p01 ~]#

Solve it by checking the peers and detaching the stale ones first:

[root@mdskvm-p01 ~]# gluster peer status
Number of Peers: 2

Hostname: opennebula01
Uuid: 94d11cc5-2a8b-4583-97f3-5890cbd7d624
State: Peer Rejected (Disconnected)

Hostname: mdskvm-p02
Uuid: ad7d956a-a121-422e-8c5c-56765bdf6a62
State: Peer in Cluster (Connected)
[root@mdskvm-p01 ~]#
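The problem peers can be picked out of that status output mechanically. A minimal sketch, using plain awk over a captured copy of the output above (in practice you would pipe `gluster peer status` straight in), that lists every peer whose state is anything other than the healthy "Peer in Cluster (Connected)":

```shell
# Capture of the `gluster peer status` output shown above; replace the
# here-doc with a live `gluster peer status` pipe on a real cluster.
status=$(cat <<'EOF'
Number of Peers: 2

Hostname: opennebula01
Uuid: 94d11cc5-2a8b-4583-97f3-5890cbd7d624
State: Peer Rejected (Disconnected)

Hostname: mdskvm-p02
Uuid: ad7d956a-a121-422e-8c5c-56765bdf6a62
State: Peer in Cluster (Connected)
EOF
)

# Remember each Hostname line, print it if the following State line is
# not the healthy connected state.
bad_peers=$(printf '%s\n' "$status" | awk '
  /^Hostname:/ { host = $2 }
  /^State:/ && !/Peer in Cluster \(Connected\)/ { print host }
')
echo "$bad_peers"
```

Each hostname it prints is a candidate for `gluster peer detach`.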

Detaching the peers:

[root@mdskvm-p01 ~]# gluster peer detach opennebula01
All clients mounted through the peer which is getting detached need to be remounted using one of the other active peers in the trusted storage pool to ensure client gets notification on any changes done on the gluster configuration and if the same has been done do you want to proceed? (y/n) y
peer detach: success
[root@mdskvm-p01 ~]#

[root@mdskvm-p01 ~]# gluster peer detach mdskvm-p02
All clients mounted through the peer which is getting detached need to be remounted using one of the other active peers in the trusted storage pool to ensure client gets notification on any changes done on the gluster configuration and if the same has been done do you want to proceed? (y/n) y
peer detach: success
[root@mdskvm-p01 ~]#

Now remove the remaining remote brick, then delete the volume:

[root@mdskvm-p01 ~]# gluster volume remove-brick mdsgv01 replica 1 mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02 force
Remove-brick force will not migrate files from the removed bricks, so they will no longer be available on the volume.
Do you want to continue? (y/n) y
volume remove-brick commit force: success
[root@mdskvm-p01 ~]#

[root@mdskvm-p01 ~]# gluster volume delete mdsgv01
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: mdsgv01: success
[root@mdskvm-p01 ~]#
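One follow-up worth knowing about: `gluster volume delete` leaves the brick directory itself in place, still carrying GlusterFS extended attributes (such as trusted.glusterfs.volume-id) and a .glusterfs/ metadata directory, and glusterd will refuse to reuse such a path for a new volume. A hedged sketch of the cleanup, echoing the commands into a variable for review rather than running them (the brick path is the one from this post; only run these once you are sure the data is disposable):

```shell
# Example brick path from the transcripts above; adjust for your host.
BRICK=/mnt/p01-d01/glusterv01

# Build the cleanup plan as text so it can be inspected before running.
cleanup="setfattr -x trusted.glusterfs.volume-id $BRICK
setfattr -x trusted.gfid $BRICK
rm -rf $BRICK/.glusterfs"

printf '%s\n' "$cleanup"
```

Once reviewed, the echoed commands can be run on the brick host itself.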

With the stale peers and the old volume gone, you can now set everything up clean!
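For reference, a sketch of rebuilding a clean two-node replica volume, reusing the hostnames, brick paths, and volume name from this post purely as examples. As above, the commands are echoed into a variable rather than executed, so the plan can be reviewed first:

```shell
# Hypothetical rebuild plan using the hosts/paths from this post;
# substitute your own volume name, peers, and brick directories.
plan=$(cat <<'EOF'
gluster peer probe mdskvm-p02.nix.mds.xyz
gluster volume create mdsgv01 replica 2 \
  mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 \
  mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02
gluster volume start mdsgv01
EOF
)
printf '%s\n' "$plan"
```

Note that a 2-way replica is quorum-prone during node outages; a third replica or an arbiter brick avoids split-brain if you have the hardware for it.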

Cheers,
TK


Copyright © 2003 - 2013 Tom Kacperski (microdevsys.com). All rights reserved.

This work is licensed under a Creative Commons Attribution 3.0 Unported License.