2. lvm - snapshot

Updated: 2020-07-26


How it works

 

LVM snapshots are Copy-On-First-Write (CoFW)

When a snapshot is created for vg0-lv0:

vg0-lv0 -rename-> vg0-lv0-real

When USER reads/writes the original (current) volume:

USER <- vg0-lv0(snapshot-origin) <- vg0-lv0-real

USER -> vg0-lv0(snapshot-origin) -new block-> vg0-lv0-real [vg0-lv0-real -orig block-> vg0-snap1-cow]

[...] = done behind the scenes

 * vg0-snap1-cow holds a copy of the original data if the original data is modified:
    the snapshot contains the old data, while the LV holds the current data.

 * If a snapshot is not big enough to hold all the modified data of the original volume,
   the snapshot is removed and suddenly disappears from the system,
   with all the consequences of removing a mounted device from a live system.
   (Overfilling the snapshot simply means no more old data is saved as it changes;
   see the autoextend sketch after this list.)
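To reduce the risk of this, LVM can grow a snapshot automatically before it fills up. A minimal sketch of the relevant settings in /etc/lvm/lvm.conf (the threshold and percentage are illustrative):

# /etc/lvm/lvm.conf, "activation" section
snapshot_autoextend_threshold = 70    # extend once the snapshot is 70% full
snapshot_autoextend_percent = 20      # grow it by 20% of its size each time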

When USER reads/writes the snapshot:

USER <- vg0-snap1(snapshot) <- vg0-snap1-cow / vg0-lv0-real

USER -> vg0-snap1(snapshot) -> vg0-snap1-cow [vg0-snap1-cow <-orig block- vg0-lv0-real]

 * The whole chunk is copied over to the snapshot first. # even if only 1 byte is modified

If a chunk on a snapshot is changed, that chunk is marked and never gets copied from the original volume.
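The chunk size is fixed when the snapshot is created and can be tuned with lvcreate's -c|--chunksize, a power of 2 between 4k and 512k. A hedged sketch (the 64k value is illustrative):

lvcreate -s -n snap1 -L 512M -c 64k /dev/vg0/lv0    # bigger chunks: fewer, larger copy-ups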

Snapshots are writable

Snapshots are writable. Once a snapshot is mounted at /mnt/tmp, we can write things to it.

However, newly written data takes up "Allocated to snapshot" space.

Deleting the newly created file does not release the "Allocated to snapshot" space (see the sketch below).
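A minimal sketch of this behaviour (the mountpoint and file name are illustrative):

mount /dev/vg0/snap1 /mnt/tmp
dd if=/dev/zero of=/mnt/tmp/junk bs=1M count=50
sync
lvs vg0/snap1        # Data% ("Allocated to snapshot") has grown
rm /mnt/tmp/junk
sync
lvs vg0/snap1        # Data% does not shrink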

Filesystem is slow on writes

 * After taking a snapshot, writes become much slower, so a snapshot should not be kept for long

 * Every write triggers two "writes": one to back up the old version, one to write the new version

Reason:

LVM first makes a copy of the original version, which is stored in the snapshot,
and then the modification is written to the normal Logical Volume.

So the normal Logical Volume always contains the latest version of the data, and
the snapshot only contains a copy of the blocks which have been modified.

Test

dstat -d -D loop0

# First write
# dd if=/dev/zero of=/dev/vg0/lv0 bs=1M count=100

 read  writ
  32M   62M
  32M   63M
  31M   67M

# Write to the same location again
# dd if=/dev/zero of=/dev/vg0/lv0 bs=1M count=100

 read  writ
 516k  100M

# Write to a different location (seek=100 blocks of bs=1M = 100MiB offset)
# dd if=/dev/zero of=/dev/vg0/lv0 seek=100 bs=1M count=100

 read  writ
  33M   63M
  40M   74M
  28M   63M

 * If the snapshot logical volume becomes full, it will be dropped,
    so it is vitally important to allocate enough space !! (a simple polling check is sketched below)
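A simple way to watch a long-lived snapshot is to poll its Data%. A hedged sketch assuming the vg0/snap1 names used below (the 80% threshold is illustrative):

#!/bin/sh
# Warn when the snapshot is more than 80% full
while sleep 60; do
    pct=$(lvs --noheadings -o data_percent vg0/snap1 | tr -d ' ')
    [ "${pct%.*}" -ge 80 ] && echo "WARNING: vg0/snap1 is ${pct}% full" >&2
done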

 


lvm snapshot test

 

Prepare a fake device, then create the PV, VG and LV

dd if=/dev/zero of=dummy.img bs=1M count=4096

losetup /dev/loop0 dummy.img

pvcreate /dev/loop0

vgcreate vg0 /dev/loop0

lvcreate -n lv0 -L 1024M vg0        # create lv0 with size 1GiB

The dm table before taking the snapshot (2097152 × 512-byte sectors = 1 GiB)

dmsetup table

vg0-lv0: 0 2097152 linear 7:0 2048

Create the snapshot (name: snap1, size: 512MiB)

lvcreate -s -n snap1 -L 512M /dev/vg0/lv0

The situation after taking the snapshot

lvs

  LV        VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv0       vg0       owi-a-s---   1.00g
  snap1     vg0       swi-a-s--- 512.00m      lv0    0.00

dmsetup table    # explained below

vg0-lv0: 0 2097152 snapshot-origin 253:3
vg0-lv0-real: 0 2097152 linear 7:0 2048
vg0-snap1: 0 2097152 snapshot 253:3 253:4 P 8
vg0-snap1-cow: 0 1048576 linear 7:0 2099200

The original vg0-lv0 has been renamed to vg0-lv0-real.

The new vg0-lv0 has type snapshot-origin.

In the vg0-snap1 line, 253:3 is the origin device (vg0-lv0-real), 253:4 is the COW device (vg0-snap1-cow), P means the snapshot is persistent (survives reboot), and 8 is the chunk size in 512-byte sectors, i.e. 4 KiB.

lvdisplay

  LV Name                lv0
  ...
  LV snapshot status     source of snap1 [active]

  LV Name                snap1
  ...
  LV snapshot status     active destination for lv0

Delete the snapshot

# This removes vg0-snap1-cow, vg0-lv0 and vg0-snap1, and renames vg0-lv0-real back to vg0-lv0

lvremove vg0/snap1
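To tear down the whole test rig afterwards (a sketch mirroring the setup above):

lvremove -y vg0/lv0
vgremove vg0
pvremove /dev/loop0
losetup -d /dev/loop0
rm dummy.img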

 


block device backup - Snapshot

 

Create a snapshot:

# The 5G of free space is used as a write buffer
# When it becomes full, the snapshot is dropped, and you will then lose the old versions of the files !! (the files inside the snapshot)

lvcreate -L 5G -s -n my_snapshot_name /path/to/lv
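A typical backup flow built on top of this (a hedged sketch; the LV name, mountpoint and destination paths are illustrative):

lvcreate -L 5G -s -n backup_snap /dev/myraidvg/mytestlw
mount -o ro /dev/myraidvg/backup_snap /mnt/snap    # read-only mount of the frozen view
rsync -a /mnt/snap/ /backup/mytestlw/
umount /mnt/snap
lvremove -y /dev/myraidvg/backup_snap              # drop the snapshot ASAP (writes are slow while it exists)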

Inspect the snapshot's details:

lvdisplay [-c]        <--  -c prints each LV as a single colon-separated line

e.g.

lvdisplay /dev/mapper/myraidvg-snap_mytestlw

  --- Logical volume ---
  LV Path                /dev/myraidvg/snap_mytestlw
  LV Name                snap_mytestlw
  VG Name                myraidvg
  LV UUID                yoOJsN-6ASk-fj3c-vU4H-uczN-5uLw-N51z6O
  LV Write Access        read/write
  LV Creation host, time server, 2018-01-03 14:57:32 +0800
  LV snapshot status     active destination for mytestlw
  LV Status              available
  # open                 1
  LV Size                10.00 GiB
  Current LE             2560
  COW-table size         1.00 GiB
  COW-table LE           256
  Allocated to snapshot  0.00%
  Snapshot chunk size    4.00 KiB
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:10

lvs | grep MyBackup.Snap

MyBackup.Snap myraidvg swi-a-s---   1.00t      MyBackup 0.10

 * 'Allocated to snapshot' only updates once a sync has completed (see the sketch below)
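To read an up-to-date figure (a minimal sketch using the LV above):

sync
lvs -o lv_name,data_percent myraidvg/MyBackup.Snap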

When it becomes full:

mount | grep snap

the mount has disappeared

dmesg

[78560.181614] device-mapper: snapshots: Invalidating snapshot: Unable to allocate exception.

lvdisplay /dev/mapper/myraidvg-snap_mytestlw

  LV snapshot status     INACTIVE destination for mytestlw

Re-mounting will also just fail

[78662.630747] Buffer I/O error on dev dm-10, logical block 2621424, async page read
[78662.630951] Buffer I/O error on dev dm-10, logical block 16, async page read
[78662.631055] EXT4-fs (dm-10): unable to read superblock
[78662.631150] EXT4-fs (dm-10): unable to read superblock
[78662.631240] EXT4-fs (dm-10): unable to read superblock
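If the snapshot is caught before it is completely full, it can be grown online instead (a sketch; the +2G is illustrative):

lvextend -L +2G /dev/myraidvg/snap_mytestlw

# or let LVM grow it automatically via the snapshot_autoextend_* settings shown earlier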

Remove the snapshot:

# Note: this simply discards the snapshot's COW data; nothing is written back to the original LV
# (writing the snapshot's contents back into the origin is what lvconvert --merge does, see below)

# -y|--yes    Do not prompt for confirmation interactively

lvremove -y /path/to/snapshot
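To roll the origin back to the snapshot's state instead of discarding the changes, merge it (a sketch; if the origin is in use, the merge is deferred until it is next activated):

lvconvert --merge myraidvg/snap_mytestlw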

 
