How do I replace a replica on the same GlusterFS host?

    @mmmex, question author
    # systemctl status glusterd
    ● glusterd.service - GlusterFS, a clustered file-system server
       Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
       Active: active (running) since Sat 2023-03-04 16:15:46 UTC; 18min ago
         Docs: man:glusterd(8)
      Process: 4139 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
     Main PID: 4140 (glusterd)
       CGroup: /system.slice/glusterd.service
               ├─4140 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
               └─4162 /usr/sbin/glusterfs -s localhost --volfile-id shd/gv0 -p /var/run/gluster/shd/gv0/gv0-shd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/b94a89dee1a6c620.socket --xlator-option *replicate*.node-uuid=5a9d...
    
    Mar 04 16:15:46 bx-app03.org.test systemd[1]: Starting GlusterFS, a clustered file-system server...
    Mar 04 16:15:46 bx-app03.org.test systemd[1]: Started GlusterFS, a clustered file-system server.
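
    The systemctl output above shows that glusterd itself is running on bx-app03, so the management daemon is available and only the brick process for this host is down. A minimal sketch of replacing the brick on the same host and the same path with the reset-brick command (paths and host names are taken from this setup; the exact behaviour should be checked against your GlusterFS version):

    Take the failed brick out of the volume:
    # gluster volume reset-brick gv0 bx-app03:/data/brick1/gv0 start
    Re-create or clean /data/brick1/gv0 on the new disk, then re-add the same path:
    # gluster volume reset-brick gv0 bx-app03:/data/brick1/gv0 bx-app03:/data/brick1/gv0 commit force

    "commit force" is needed when the re-added brick is empty or has lost its volume-id xattr. If the replacement brick lives at a different path, the analogous command is "gluster volume replace-brick gv0 bx-app03:/data/brick1/gv0 bx-app03:/data/brick2/gv0 commit force" (the path /data/brick2/gv0 here is only an example).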

    @mmmex, question author
    # gluster volume status
    Status of volume: gv0
    Gluster process                             TCP Port  RDMA Port  Online  Pid
    ------------------------------------------------------------------------------
    Brick bx-app01:/data/brick1/gv0             49152     0          Y       20476
    Brick bx-app02:/data/brick1/gv0             49152     0          Y       16598
    Brick bx-app03:/data/brick1/gv0             N/A       N/A        N       N/A  
    Self-heal Daemon on localhost               N/A       N/A        Y       4162 
    Self-heal Daemon on bx-app01                N/A       N/A        Y       20493
    Self-heal Daemon on bx-app02.org.test       N/A       N/A        Y       16615
     
    Task Status of Volume gv0
    ------------------------------------------------------------------------------
    There are no active volume tasks
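
    The volume status above shows that the brick process for bx-app03:/data/brick1/gv0 is not running (Online = N, no TCP port), while both other bricks and all self-heal daemons are online. If the brick directory and its data are still intact, a simple first step (standard GlusterFS commands, nothing specific to this setup) is to ask glusterd to respawn the missing brick process and re-check:

    # gluster volume start gv0 force
    # gluster volume status gv0

    On an already started volume, "start ... force" does not restart the bricks that are running; it only spawns brick processes that are expected to be up but are not.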

    @mmmex, question author
    Log:
    # cat glfsheal-gv0.log 
    [2023-03-04 13:10:26.534541 +0000] I [io-stats.c:3708:ios_sample_buf_size_configure] 0-gv0: Configure ios_sample_buf  size is 1024 because ios_sample_interval is 0
    [2023-03-04 13:10:26.538878 +0000] I [MSGID: 104045] [glfs-primary.c:81:notify] 0-gfapi: New graph coming up [{graph-uuid=62782d61-7070-3033-2e6f-72672e746573}, {id=0}] 
    [2023-03-04 13:10:26.538997 +0000] I [MSGID: 114020] [client.c:2319:notify] 0-gv0-client-0: parent translators are ready, attempting connect on transport [] 
    [2023-03-04 13:10:26.545578 +0000] I [MSGID: 114020] [client.c:2319:notify] 0-gv0-client-1: parent translators are ready, attempting connect on transport [] 
    [2023-03-04 13:10:26.547136 +0000] I [rpc-clnt.c:1972:rpc_clnt_reconfig] 0-gv0-client-0: changing port to 49152 (from 0)
    [2023-03-04 13:10:26.547209 +0000] I [socket.c:848:__socket_shutdown] 0-gv0-client-0: intentional socket shutdown(9)
    [2023-03-04 13:10:26.551421 +0000] I [MSGID: 114020] [client.c:2319:notify] 0-gv0-client-2: parent translators are ready, attempting connect on transport [] 
    [2023-03-04 13:10:26.557803 +0000] I [MSGID: 114057] [client-handshake.c:1128:select_server_supported_programs] 0-gv0-client-0: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}] 
    [2023-03-04 13:10:26.558051 +0000] I [rpc-clnt.c:1972:rpc_clnt_reconfig] 0-gv0-client-1: changing port to 49152 (from 0)
    [2023-03-04 13:10:26.558107 +0000] I [socket.c:848:__socket_shutdown] 0-gv0-client-1: intentional socket shutdown(11)
    Final graph:
    +------------------------------------------------------------------------------+
      1: volume gv0-client-0
      2:     type protocol/client
      3:     option opversion 90000
      4:     option clnt-lk-version 1
      5:     option volfile-checksum 0
      6:     option volfile-key gv0
      7:     option client-version 9.6
      8:     option process-name gfapi.glfsheal
      9:     option process-uuid CTX_ID:73b4cc5e-f3c7-4410-9577-b134bb54133d-GRAPH_ID:0-PID:3776-HOST:bx-app03.org.test-PC_NAME:gv0-client-0-RECON_NO:-0
     10:     option fops-version 1298437
     11:     option ping-timeout 42
     12:     option remote-host bx-app01
     13:     option remote-subvolume /data/brick1/gv0
     14:     option transport-type socket
     15:     option transport.address-family inet
     16:     option username f24811f8-dd3c-49ab-a9f3-0e6cbe5dddbf
     17:     option password ae566c26-30b0-442e-942d-73ad09e7df55
     18:     option transport.socket.ssl-enabled off
     19:     option transport.tcp-user-timeout 0
     20:     option transport.socket.keepalive-time 20
     21:     option transport.socket.keepalive-interval 2
     22:     option transport.socket.keepalive-count 9
     23:     option strict-locks off
     24:     option send-gids true
     25: end-volume
     26:  
     27: volume gv0-client-1
     28:     type protocol/client
     29:     option ping-timeout 42
     30:     option remote-host bx-app02
     31:     option remote-subvolume /data/brick1/gv0
     32:     option transport-type socket
     33:     option transport.address-family inet
     34:     option username f24811f8-dd3c-49ab-a9f3-0e6cbe5dddbf
     35:     option password ae566c26-30b0-442e-942d-73ad09e7df55
     36:     option transport.socket.ssl-enabled off
     37:     option transport.tcp-user-timeout 0
     38:     option transport.socket.keepalive-time 20
     39:     option transport.socket.keepalive-interval 2
     40:     option transport.socket.keepalive-count 9
     41:     option strict-locks off
     42:     option send-gids true
     43: end-volume
     44:  
     45: volume gv0-client-2
     46:     type protocol/client
     47:     option ping-timeout 42
     48:     option remote-host bx-app03
     49:     option remote-subvolume /data/brick1/gv0
     50:     option transport-type socket
     51:     option transport.address-family inet
     52:     option username f24811f8-dd3c-49ab-a9f3-0e6cbe5dddbf
     53:     option password ae566c26-30b0-442e-942d-73ad09e7df55
     54:     option transport.socket.ssl-enabled off
     55:     option transport.tcp-user-timeout 0
     56:     option transport.socket.keepalive-time 20
     57:     option transport.socket.keepalive-interval 2
     58:     option transport.socket.keepalive-count 9
     59:     option strict-locks off
     60:     option send-gids true
     61: end-volume
     62:  
     63: volume gv0-replicate-0
     64:     type cluster/replicate
     65:     option background-self-heal-count 0
     66:     option halo-enabled off
     67:     option afr-pending-xattr gv0-client-0,gv0-client-1,gv0-client-2
     68:     option volume-id 0ce5aeb8-59f0-46a7-8523-7cd2b1cc1d6b
     69:     option granular-entry-heal on
     70:     option use-compound-fops off
     71:     option use-anonymous-inode yes
     72:     subvolumes gv0-client-0 gv0-client-1 gv0-client-2
     73: end-volume
     74:  
     75: volume gv0-dht
     76:     type cluster/distribute
     77:     option lock-migration off
     78:     option force-migration off
     79:     subvolumes gv0-replicate-0
     80: end-volume
     81:  
     82: volume gv0-utime
     83:     type features/utime
     84:     option noatime on
     85:     subvolumes gv0-dht
     86: end-volume
     87:  
     88: volume gv0-write-behind
     89:     type performance/write-behind
     90:     subvolumes gv0-utime
     91: end-volume
     92:  
     93: volume gv0-open-behind
     94:     type performance/open-behind
     95:     subvolumes gv0-write-behind
     96: end-volume
     97:  
     98: volume gv0-quick-read
     99:     type performance/quick-read
    100:     subvolumes gv0-open-behind
    101: end-volume
    102:  
    103: volume gv0-md-cache
    104:     type performance/md-cache
    105:     subvolumes gv0-quick-read
    106: end-volume
    107:  
    108: volume gv0
    109:     type debug/io-stats
    110:     option log-level INFO
    111:     option threads 16
    112:     option latency-measurement off
    113:     option count-fop-hits off
    114:     option global-threading off
    115:     subvolumes gv0-md-cache
    116: end-volume
    117:  
    118: volume meta-autoload
    119:     type meta
    120:     subvolumes gv0
    121: end-volume
    122:  
    +------------------------------------------------------------------------------+
    [2023-03-04 13:10:26.569390 +0000] I [MSGID: 114046] [client-handshake.c:857:client_setvolume_cbk] 0-gv0-client-0: Connected, attached to remote volume [{conn-name=gv0-client-0}, {remote_subvol=/data/brick1/gv0}] 
    [2023-03-04 13:10:26.569441 +0000] I [MSGID: 108005] [afr-common.c:6065:__afr_handle_child_up_event] 0-gv0-replicate-0: Subvolume 'gv0-client-0' came back up; going online. 
    [2023-03-04 13:10:26.570813 +0000] E [MSGID: 114058] [client-handshake.c:1201:client_query_portmap_cbk] 0-gv0-client-2: failed to get the port number for remote subvolume. Please run gluster volume status on server to see if brick process is running [] 
    [2023-03-04 13:10:26.570882 +0000] I [socket.c:848:__socket_shutdown] 0-gv0-client-2: intentional socket shutdown(9)
    [2023-03-04 13:10:26.570967 +0000] I [MSGID: 114018] [client.c:2229:client_rpc_notify] 0-gv0-client-2: disconnected from client, process will keep trying to connect glusterd until brick's port is available [{conn-name=gv0-client-2}] 
    [2023-03-04 13:10:26.573631 +0000] I [MSGID: 114057] [client-handshake.c:1128:select_server_supported_programs] 0-gv0-client-1: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}] 
    [2023-03-04 13:10:26.575275 +0000] I [MSGID: 114046] [client-handshake.c:857:client_setvolume_cbk] 0-gv0-client-1: Connected, attached to remote volume [{conn-name=gv0-client-1}, {remote_subvol=/data/brick1/gv0}] 
    [2023-03-04 13:10:26.575517 +0000] I [MSGID: 108002] [afr-common.c:6435:afr_notify] 0-gv0-replicate-0: Client-quorum is met 
    [2023-03-04 13:10:26.581581 +0000] I [MSGID: 104041] [glfs-resolve.c:974:__glfs_active_subvol] 0-gv0: switched to graph [{subvol=62782d61-7070-3033-2e6f-72672e746573}, {id=0}]
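
    In this log gv0-client-0 (bx-app01) and gv0-client-1 (bx-app02) connect and client-quorum is met, while gv0-client-2 (bx-app03) cannot obtain a brick port from glusterd, which again points at the missing brick process rather than at networking. Once the brick on bx-app03 is back online (via start force, reset-brick or replace-brick as sketched above), healing of the re-added replica can be triggered and watched with the standard heal commands (a generic sketch, only the volume name is taken from this thread):

    # gluster volume heal gv0
    # gluster volume heal gv0 info
    # gluster volume heal gv0 info summary

    The first command triggers an index heal of pending entries; "info" lists files still to be healed per brick, and "info summary" gives per-brick counters. A full heal ("gluster volume heal gv0 full") is only needed if the heal index on the surviving bricks is incomplete.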