We have two nodes (GFS1 and GFS2) running GlusterFS v3.6.2. The two nodes are peers of each other, so both hold the same replicated data.
Recently GFS2 went down, so we formatted it and installed a new OS (Debian 12) under a new node name, GFS3. Our aim is to run a heal on GFS3 so that it syncs the data from GFS1.
The following are the steps I have already run successfully (roughly the commands sketched after this list):
- I installed GlusterFS version 3.6.2 on GFS3.
- I removed the gfs2 brick from gfsvolume (our volume name) and added the gfs3 brick to gfsvolume.
- I added gfs3 as a peer of gfs1.
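In case the exact syntax matters, the brick swap was done roughly with the commands below, run on gfs1. The old gfs2 brick path is my assumption (I believe it matched the other bricks), and the order may not be exact:

# probe the new node, drop the dead gfs2 brick, add the new gfs3 brick
gluster peer probe gfs3
gluster volume remove-brick gfsvolume replica 1 gfs2:/export/sda/brick force
gluster volume add-brick gfsvolume replica 2 gfs3:/export/sda/brick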
However, when I ran gluster volume heal gfsvolume full on GFS1, nothing was synced, even though the command reported "Launching heal operation to perform full self heal on volume gfsvolume has been successful".
Running gluster volume heal gfsvolume info on GFS3 reports "Volume heal failed".
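To summarize, these are the two heal commands and what each one reports (reconstructed from memory, using the standard gluster CLI syntax):

# on gfs1: the full self-heal launches without error
gluster volume heal gfsvolume full
  -> Launching heal operation to perform full self heal on volume gfsvolume has been successful

# on gfs3: querying heal status fails
gluster volume heal gfsvolume info
  -> Volume heal failed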
The following is the output of gluster volume info gfsvolume on GFS1:
Volume Name: gfsvolume
Type: Replicate
Volume ID: a29bd2fb-b1ef-4481-be10-c2f4faf4059b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gfs1:/export/sda/brick
Brick2: gfs3:/export/sda/brick
Options Reconfigured:
nfs.disable: off
performance.quick-read: off
network.ping-timeout: 30
network.frame-timeout: 90
performance.cache-max-file-size: 2MB
cluster.server-quorum-type: none
nfs.addr-namelookup: off
nfs.trusted-write: off
performance.write-behind-window-size: 1MB
cluster.data-self-heal-algorithm: diff
performance.cache-refresh-timeout: 60
performance.cache-size: 1GB
cluster.quorum-type: fixed
auth.allow: 172.*
cluster.quorum-count: 1
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on
performance.io-thread-count: 16
performance.readdir-ahead: enable
performance.read-ahead: disable
performance.client-io-threads: on
cluster.readdir-optimize: on
cluster.server-quorum-ratio: 50%
The following is the output of gluster volume status gfsvolume on GFS1:
Gluster process Port Online Pid
------------------------------------------------------------------------------
Brick gfs1:/export/sda/brick 49155 Y 28553
Brick gfs3:/export/sda/brick 49154 Y 110595
NFS Server on localhost N/A N N/A
Self-heal Daemon on localhost N/A Y 32154
NFS Server on gfs3 2049 Y 110607
Self-heal Daemon on gfs3 N/A Y 110615
Task Status of Volume gfsvolume
------------------------------------------------------------------------------
There are no active volume tasks
Any help would be appreciated. Please let me know if you need to see any logs. Thank you.
To restate the goal: I am trying to perform a GlusterFS heal on our gfs3 node so that it syncs its data with gfs1.