i_chips

Reputation: 36

Synchronization of Data Is Aborted by remove-brick on a GlusterFS Replicated Volume

There is something wrong with my GlusterFS storage cluster, which uses a replicated volume.

I've been trying to fix this for several days in many different ways. I hope someone can help me.

First, I create a distributed volume with only one brick, using the command below:

gluster volume create my_vol transport tcp 192.168.100.80:/opt/my_brick force

Then, I write a large amount of data (about 1 GB) into this volume through the GlusterFS FUSE client.
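
For reference, a typical FUSE mount for this volume might look like the following (the mount point /mnt/my_vol is just an example, not something from my setup):

# Mount the volume via the GlusterFS FUSE client
mount -t glusterfs 192.168.100.80:/my_vol /mnt/my_vol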

After that, I convert this volume from a distributed volume into a replicated volume with the command below:

gluster volume add-brick my_vol replica 2 192.168.100.81:/opt/my_brick force

GlusterFS quickly reports "volume add-brick: success". However, I find that data synchronization is still going on in the background between 192.168.100.80 and 192.168.100.81.

Now, I try to remove the first brick from this volume with the command below:

yes | gluster volume remove-brick my_vol replica 1 192.168.100.80:/opt/my_brick force

GlusterFS then replies: "Removing brick(s) can result in data loss. Do you want to Continue? (y/n) volume remove-brick commit force: success".

As a result, the background synchronization is aborted, and some data is lost permanently!

Is there a command to check whether GlusterFS is still synchronizing data in the background?

And how can I perform the remove-brick operation safely, that is, without any data loss?

Thanks a lot.

Upvotes: 0

Views: 493

Answers (1)

itisravi

Reputation: 3561

You'd have to wait until gluster volume heal <volname> info shows zero entries to be healed before performing remove-brick.
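
For example, a minimal sketch of the safe sequence might look like this, using the volume name my_vol from the question (the 10-second polling interval is arbitrary):

# Optionally trigger a full self-heal so all pending entries are queued
gluster volume heal my_vol full

# Poll until the heal info output shows zero entries for every brick
watch -n 10 gluster volume heal my_vol info

# Only once nothing is left to heal is it safe to drop the replica count
gluster volume remove-brick my_vol replica 1 192.168.100.80:/opt/my_brick force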

Upvotes: 1
