GlusterFS - Part B

Updated: 2014-08-04

Geo-replication:

* continuous, asynchronous, and incremental replication service
* master–slave model

Slave Format:

* A local directory (file:///path/to/dir)
* A GlusterFS volume (gluster://localhost:volname)
* A directory on a remote host over SSH (ssh://root@remote-host:/path/to/dir)
* A GlusterFS volume on a remote host over SSH (ssh://root@remote-host:gluster://localhost:volname, which can also be written as root@remote-host::volname)

 

Differences between Replicated Volumes and Geo-replication

Replicated Volumes:

Synchronous replication
(each and every file operation is sent to all the bricks as it happens)

Geo-replication:

Asynchronous replication
(periodically checks files for changes and syncs them when differences are detected)
(ensures data is backed up for disaster recovery)
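
A rough sketch of how each is set up, with hypothetical server, brick, and volume names:

# Replicated volume: two bricks, every write goes to both synchronously
gluster volume create rep-vol replica 2 server1:/export/brick1 server2:/export/brick1
gluster volume start rep-vol

# Geo-replication: the master volume is synced asynchronously to a remote slave directory
gluster volume geo-replication rep-vol ssh://root@backup-host:/data/backup start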

Preparation:

* NTP (uniform time across the master bricks and the slave)

* Secure access to the slave (IP based Access Control)

* SSH key (password-less login from the master to the slave):

# ssh-keygen -f /var/lib/glusterd/geo-replication/secret.pem

# ssh-copy-id -i /var/lib/glusterd/geo-replication/secret.pem.pub root@slave
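
Once the key has been copied, password-less access can be checked from the master (the slave host name here is a placeholder):

ssh -i /var/lib/glusterd/geo-replication/secret.pem root@slave 'echo ok'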

 

# IP based Access Control:

# This will refuse all requests for spawning slave agents except for requests initiated locally.

gluster volume geo-replication '/*' config allow-network 127.0.0.1
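
Assuming allow-network accepts a comma-separated list of addresses (an assumption, not confirmed by this page), a hypothetical trusted host 192.168.1.10 could additionally be allowed to spawn slave agents:

gluster volume geo-replication '/*' config allow-network 127.0.0.1,192.168.1.10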

 

# Starting

gluster volume geo-replication <MASTER_VOLUME> <SLAVE> start
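
For example, starting a session from a master volume Volume1 to a hypothetical slave host example.com (the same names used in the restore example below):

gluster volume geo-replication Volume1 root@example.com:/data/remote_dir start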

 

# Verifying

gluster volume geo-replication <MASTER_VOLUME> <SLAVE> status      # a specific session

gluster volume geo-replication <MASTER_VOLUME> status              # all sessions of a master volume
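
For example, for the hypothetical Volume1 session started above:

gluster volume geo-replication Volume1 root@example.com:/data/remote_dir status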

 

STATUS: Starting, OK, Faulty, Corrupt

 

# Configuring

gluster volume geo-replication <MASTER_VOLUME> <SLAVE> config [options]
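
For example, for the hypothetical Volume1 session the log level can be raised, and running config with no option prints the current settings (a sketch; option names may differ between releases):

gluster volume geo-replication Volume1 root@example.com:/data/remote_dir config log-level DEBUG

gluster volume geo-replication Volume1 root@example.com:/data/remote_dir config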

 

# Stopping

gluster volume geo-replication <MASTER_VOLUME> <SLAVE> stop
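
For example (stopping only ends the sync session; data already copied to the slave is left in place):

gluster volume geo-replication Volume1 root@example.com:/data/remote_dir stop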

 

 


# Restoring Data

 

gluster volume info
..................
Type: Distribute
Brick1: machine1:/export/dir16
Brick2: machine2:/export/dir16
..................

# gluster volume geo-replication Volume1 status

MASTER     SLAVE                                STATUS
______     _____________________________        ____________
Volume1    root@example.com:/data/remote_dir    OK

After a Faulty status (a brick machine on the master side has failed), the master holds fewer files than the slave:

# machine1
ls /mnt/gluster | wc -l
52

# slave
ls /data/remote_dir/ | wc -l
100

Restoring steps:

# 1. Stop geo-replication
gluster volume geo-replication <MASTER_VOLUME> <SLAVE> stop

# 2. Replace the faulty brick with a new one
gluster volume replace-brick <VOLNAME> <FAULTY_BRICK> <NEW_BRICK> start

gluster volume replace-brick <VOLNAME> <FAULTY_BRICK> <NEW_BRICK> commit force

# 3. Verify that the new brick now appears in the volume
gluster volume info

# 4. Restore data from the slave to the master mount point
rsync -PavhS --xattrs --ignore-existing /data/remote_dir/ client:/mnt/gluster

# 5. Restart geo-replication
gluster volume geo-replication <MASTER_VOLUME> <SLAVE> start
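
Putting the steps together for the Volume1 example above, with a hypothetical machine3 taking over the brick of the failed machine1:

gluster volume geo-replication Volume1 root@example.com:/data/remote_dir stop

gluster volume replace-brick Volume1 machine1:/export/dir16 machine3:/export/dir16 start
gluster volume replace-brick Volume1 machine1:/export/dir16 machine3:/export/dir16 commit force

gluster volume info

# run on the slave; "client" is the machine where the master volume is mounted at /mnt/gluster
rsync -PavhS --xattrs --ignore-existing /data/remote_dir/ client:/mnt/gluster

gluster volume geo-replication Volume1 root@example.com:/data/remote_dir start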

===============================
# Manually Setting Time

* Setting the time backward corrupts the geo-replication index, so geo-replication must be stopped before the clock is adjusted

Solution:

# 1. Stop geo-replication
gluster volume geo-replication <MASTER_VOLUME> <SLAVE> stop

# 2. Turn off geo-replication indexing on the master volume
gluster volume set <VOLNAME> geo-replication.indexing off

# 3. Set a uniform time on all bricks

# 4. Restart geo-replication
gluster volume geo-replication <MASTER_VOLUME> <SLAVE> start
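
A minimal sketch for step 3, assuming the brick hosts are machine1 and machine2 and that ntpdate is available on them:

for host in machine1 machine2; do
    ssh root@$host 'ntpdate -u pool.ntp.org'
done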
 

 

 
