• How do I replace a replica on the same host in GlusterFS?

    @mmmex (question author)
    linux admin
    The only solution I found is to rebuild the replica: remove the failed brick from the volume, then detach the peer from the cluster:
    1. gluster volume remove-brick gv0 replica 2 bx-app03:/data/brick1/gv0 force
    2. gluster peer detach bx-app03
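
    Before removing anything, it is worth confirming which brick is actually offline. A minimal check, assuming the standard gluster CLI run from any healthy node:
    # show brick processes, ports and PIDs; an offline brick reports Online: N
    gluster volume status gv0
    # confirm peer membership and connectivity
    gluster peer status
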
    [root@bx-app02 glusterfs]# gluster volume info
     
    Volume Name: gv0
    Type: Replicate
    Volume ID: 0ce5aeb8-59f0-46a7-8523-7cd2b1cc1d6b
    Status: Started
    Snapshot Count: 0
    Number of Bricks: 1 x 3 = 3
    Transport-type: tcp
    Bricks:
    Brick1: bx-app01:/data/brick1/gv0
    Brick2: bx-app02:/data/brick1/gv0
    Brick3: bx-app03:/data/brick1/gv0
    Options Reconfigured:
    cluster.granular-entry-heal: on
    storage.fips-mode-rchecksum: on
    transport.address-family: inet
    nfs.disable: on
    performance.client-io-threads: off
    [root@bx-app02 glusterfs]# gluster volume remove-brick gv0 replica 2 bx-app03:/data/brick1/gv0 force
    Remove-brick force will not migrate files from the removed bricks, so they will no longer be available on the volume.
    Do you want to continue? (y/n) y
    volume remove-brick commit force: success
    [root@bx-app02 glusterfs]# gluster volume info gv0
     
    Volume Name: gv0
    Type: Replicate
    Volume ID: 0ce5aeb8-59f0-46a7-8523-7cd2b1cc1d6b
    Status: Started
    Snapshot Count: 0
    Number of Bricks: 1 x 2 = 2
    Transport-type: tcp
    Bricks:
    Brick1: bx-app01:/data/brick1/gv0
    Brick2: bx-app02:/data/brick1/gv0
    Options Reconfigured:
    cluster.granular-entry-heal: on
    storage.fips-mode-rchecksum: on
    transport.address-family: inet
    nfs.disable: on
    performance.client-io-threads: off
    [root@bx-app02 glusterfs]# gluster peer status
    Number of Peers: 2
    
    Hostname: bx-app01
    Uuid: bfa41c6c-0357-4846-9b6c-f8704fe61d0a
    State: Peer in Cluster (Connected)
    
    Hostname: bx-app03
    Uuid: 5a9d71f7-d4ab-4945-a1b8-d39c189c3fb2
    State: Peer in Cluster (Connected)
    [root@bx-app02 glusterfs]# gluster peer detach bx-app03
    All clients mounted through the peer which is getting detached need to be remounted using one of the other active peers in the trusted storage pool to ensure client gets notification on any changes done on the gluster configuration and if the same has been done do you want to proceed? (y/n) y
    peer detach: success
    [root@bx-app02 glusterd]# gluster peer status
    Number of Peers: 1
    
    Hostname: bx-app01
    Uuid: bfa41c6c-0357-4846-9b6c-f8704fe61d0a
    State: Peer in Cluster (Connected)
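
    Before re-adding, keep in mind that glusterd typically rejects a brick path that was previously part of a volume. A hedged cleanup sketch for bx-app03, assuming the old data on that brick is disposable and using the path /data/brick1/gv0 from the output above:
    # on bx-app03: strip the old brick metadata so add-brick accepts the path again
    setfattr -x trusted.glusterfs.volume-id /data/brick1/gv0
    setfattr -x trusted.gfid /data/brick1/gv0
    rm -rf /data/brick1/gv0/.glusterfs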

    Adding it back (a heal check is sketched after these steps):
    1. gluster peer probe bx-app03
    2. gluster volume add-brick gv0 replica 3 bx-app03:/data/brick1/gv0
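
    The new brick comes up empty, so it makes sense to trigger and monitor self-heal. A minimal sketch, assuming the standard heal commands of the gluster CLI:
    # push a full self-heal towards the fresh brick
    gluster volume heal gv0 full
    # entries still pending heal should drop to zero over time
    gluster volume heal gv0 info
    # the volume should report 1 x 3 = 3 bricks again
    gluster volume info gv0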