Last updated: 2017-05-21
Table of Contents
- Information
- Create a VPS (lxc-create)
- freeze
- create, destroy
- start, stop, shutdown
Information
Version:
lxc-info --version
2.0.7
Check whether the OS supports LXC:
lxc-checkconfig
VPS List:
# lxc-ls --fancy
lxc-ls -f
NAME     STATE    AUTOSTART  GROUPS  IPV4            IPV6
centos5  STOPPED  0          -       -               -
centos6  STOPPED  0          -       -               -
debian6  STOPPED  0          -       -               -
lamp     RUNNING  0          web     192.168.123.11  -
nginx    RUNNING  0          web     192.168.123.14  -
sshgw    STOPPED  0          -       -               -
u14      STOPPED  0          -       -               -
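The table above is easy to post-process in scripts. A minimal sketch, shown here on a canned copy of the listing so it runs anywhere (in practice you would pipe `lxc-ls -f` in instead):

```shell
# Hypothetical helper: extract the names of RUNNING containers from the
# `lxc-ls -f` table by filtering on the STATE column.
sample='NAME STATE AUTOSTART GROUPS IPV4 IPV6
centos5 STOPPED 0 - - -
lamp RUNNING 0 web 192.168.123.11 -
nginx RUNNING 0 web 192.168.123.14 -'

# Skip the header row, keep rows whose 2nd field is RUNNING, print the name.
running=$(printf '%s\n' "$sample" | awk 'NR > 1 && $2 == "RUNNING" { print $1 }')
printf '%s\n' "$running"
# lamp
# nginx
```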
VPS Info: lxc-info
lxc-info -n nginx
CPU use:        1.29 seconds
BlkIO use:      49.96 MiB
Memory use:     28.90 MiB
KMem use:       0 bytes
Link:           nginx
 TX bytes:      1022 bytes
 RX bytes:      1.56 KiB
 Total bytes:   2.56 KiB
lxc-info -i -n nginx
IP: 192.168.123.14
lxc-info -p -n nginx
PID: 3241
Realtime Resource Usage
lxc-top
Container            CPU      CPU      CPU      BlkIO      Mem       KMem
Name                 Used     Sys      User     Total      Used      Used
lamp               331.56    90.62   239.17  181.32 MB  324.93 MB   9.21 MB
nginx               56.61     7.65    48.82  315.19 MB  187.80 MB  26.52 MB
TOTAL 2 of 2       388.16    98.27   287.99  496.50 MB  512.73 MB  35.72 MB
Create a VPS (lxc-create)
lxc-create
lxc-create -n name [-f config_file] [-t template] [-B backingstore] [-- template-options]
Without a config file, the default isolation is used: processes, SysV IPC, and mount points.
Example
lxc-create -n demo -t debian
# The VPS is created under /var/lib/lxc/demo, using the debian template as the rootfs
Options:
- -t template <--- if no template is specified, the rootfs will be empty
All available templates live in /usr/lib/lxc/templates; each is a script that builds a rootfs:
- lxc-busybox
- lxc-fedora
- lxc-sshd
- lxc-ubuntu-cloud
- lxc-debian <-- builds the rootfs via debootstrap (/var/cache/lxc/debian/rootfs-squeeze-i386)
- lxc-opensuse
- lxc-ubuntu
Template options for ubuntu:
lxc-create -t ubuntu -h
template-specific help follows: (these options follow '--')
/usr/lib/lxc/templates/lxc-ubuntu -h|--help [-a|--arch] [-b|--bindhome <user>]
    [--trim] [-d|--debug] [-F | --flush-cache] [-r|--release <release>]
    [ -S | --auth-key <keyfile>]
release: the ubuntu release (e.g. precise): defaults to host release on ubuntu,
    otherwise uses latest LTS
trim: make a minimal (faster, but not upgrade-safe) container
bindhome: bind <user>'s home into the container
    The ubuntu user will not be created, and <user> will have sudo access.
arch: the container architecture (e.g. i386, amd64): defaults to host arch
auth-key: SSH Public key file to inject into container
e.g.
lxc-create -t ubuntu -n nginx -B lvm --vgname=myvg --fssize=10G -- --arch i386
-B backingstore
- none (default) <-- /var/lib/lxc/container/rootfs
- btrfs <-- /var/lib/lxc must be on a btrfs filesystem
- lvm <-- defaults: --vgname=lxc --fstype=ext4 --fssize=1G
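For repeated use, the LVM invocation above can be wrapped in a small shell function. A sketch; the function name and the defaults (`myvg`, `10G`, i386 arch) are illustrative choices, not lxc's own:

```shell
# Hypothetical wrapper around lxc-create for LVM-backed ubuntu containers.
# vg name, fs size and the i386 arch below are example defaults, not lxc's.
create_lvm_container() {
    name=$1
    vg=${2:-myvg}
    size=${3:-10G}
    lxc-create -n "$name" -t ubuntu -B lvm --vgname="$vg" --fssize="$size" -- --arch i386
}

# Usage: create_lvm_container nginx
#        create_lvm_container nginx othervg 20G
```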
Cache location:
/var/cache/lxc/xxx/xxx
Delete a VPS
lxc-destroy
This command does NOT ask for confirmation; it immediately deletes everything belonging to the VPS!!
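Because of that, it can be worth guarding with a confirmation prompt. A hypothetical wrapper (the function name is mine, not part of lxc):

```shell
# Hypothetical safety wrapper: ask before calling lxc-destroy, since
# lxc-destroy itself deletes the container and its data without asking.
safe_destroy() {
    name=$1
    printf 'Really destroy container "%s" and all its data? [y/N] ' "$name" >&2
    read -r answer
    case $answer in
        y|Y) lxc-destroy -n "$name" ;;
        *)   echo "aborted" ;;
    esac
}
```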
start, stop, shutdown
lxc-start
# normal start
lxc-start -n name [-d] [command]
- command: the first program executed inside the new cgroup. Default: /sbin/init
- -d, --daemon
# start with logging
lxc-start -n memlimit -l debug -o debug.out
lxc-stop
lxc-shutdown
Implemented as a script; effectively does kill -PWR $pid
It only takes effect if /etc/init.d/powerfail exists inside the VPS
and /etc/inittab contains the following:
pf::powerwait:/etc/init.d/powerfail start
However, on Debian 6 this file does not exist by default. That is normal: it is provided by powstatd (UPS-related software).
If you have no other requirements, you can simply change the entry to:
pf::powerfail:/sbin/shutdown -h now "Power Failure; System Shutting Down"
Then make init re-read the configuration:
init q
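The inittab edit above can be scripted against the container's rootfs on the host. A sketch, assuming a sysvinit guest with a writable rootfs (the function name is mine):

```shell
# Sketch: point the powerfail entry in a container's /etc/inittab at a plain
# shutdown, so the SIGPWR sent by lxc-shutdown cleanly halts the guest.
setup_powerfail() {
    rootfs=$1                       # e.g. /var/lib/lxc/demo/rootfs
    inittab=$rootfs/etc/inittab
    entry='pf::powerfail:/sbin/shutdown -h now "Power Failure; System Shutting Down"'
    if grep -q '^pf:' "$inittab"; then
        # replace the existing pf: line (e.g. the powerwait/powerfail default)
        sed -i "s|^pf:.*|$entry|" "$inittab"
    else
        echo "$entry" >> "$inittab"
    fi
}
```

Afterwards, run `init q` inside the container to reload /etc/inittab.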
P.S.
IIRC xen's "xm shutdown" command does something like this, which can
be a starting design point:
- check whether the container can handle a clean shutdown, by checking
whether anything on the guest is listening on xenbus. If something is
listening, then it's assumed the guest has PV drivers that can do
clean shutdown.
- if yes, issue clean shutdown command. The shutdown command returns
immediately unless a "-w" is specified
- if no, then it does "xm destroy" (i.e. force kill)
lxc-kill
* binary file
Send a signal to the process 1 of the container.
Usage:
lxc-kill --name=NAME SIGNUM
To send the signal 26 to container 123:
lxc-kill --name=123 26
freeze
freeze
lxc-freeze -n testvps
Equivalent to:
echo FROZEN > /sys/fs/cgroup/freezer/lxc/testvps/freezer.state
P.S.
After freezing, the container still responds to ping on the network!
unfreeze:
lxc-unfreeze -n testvps
Equivalent to:
echo THAWED > freezer.state
While freezing, if it takes a while, you may briefly see FREEZING (partially frozen).
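Since the state lives in the freezer.state file shown above, it can be polled directly. A sketch; the second argument is only an override hook so the function can be exercised without a real cgroup:

```shell
# Sketch: read a container's freezer state straight from the cgroup v1 file
# that lxc-freeze/lxc-unfreeze write (path layout as shown above).
freezer_state() {
    # $1 = container name; $2 (optional) = alternate state file, for testing
    cat "${2:-/sys/fs/cgroup/freezer/lxc/$1/freezer.state}"
}

# Usage: freezer_state testvps   -> FROZEN / FREEZING / THAWED
```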
Backup and restore
lxc-backup
lxc-backup CONTAINER [BACKUP_NUMBER]
With the VPS STOPPED, it uses rsync to copy /var/lib/lxc/$CONTAINER/rootfs
to /var/lib/lxc/$CONTAINER/rootfs.backup$BACKUP_NUMBER.
Consequently, it is of little use with the LVM backend.
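Based on that description, the core of lxc-backup can be sketched as a one-line rsync. The function name and the LXC_PATH override are mine (real containers live under /var/lib/lxc):

```shell
# Sketch of what lxc-backup does per the description above: rsync the
# stopped container's rootfs into a numbered backup directory.
backup_rootfs() {
    container=$1
    number=${2:-1}
    base=${LXC_PATH:-/var/lib/lxc}   # override hook for testing
    rsync -a "$base/$container/rootfs/" "$base/$container/rootfs.backup$number/"
}

# Usage: backup_rootfs demo 2   # -> /var/lib/lxc/demo/rootfs.backup2
```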
lxc-restore
lxc-restore CONTAINER [BACKUP_NUMBER]
lxc-wait
lxc-wait -- wait for a specific container state
exits when 'RUNNING' is reached.
lxc-wait -n foo -s RUNNING
- RUNNING # the state becomes RUNNING as soon as the container starts up
- STOPPED
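A common pattern is to combine lxc-start -d with lxc-wait so a script only proceeds once the container is up. A sketch (the function name is mine):

```shell
# Hypothetical helper: start a container detached, then block until it
# reports RUNNING before continuing.
start_and_wait() {
    name=$1
    lxc-start -n "$name" -d
    lxc-wait -n "$name" -s RUNNING
    echo "$name is running"
}
```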
lxc-monitor
Continuously monitors a VPS's state changes
lxc-monitor -n debian
'debian' changed state to [STARTING]
'debian' changed state to [RUNNING]
'debian' changed state to [STOPPING]
'debian' changed state to [STOPPED]
lxc-execute
lxc-execute -- run an application inside a container.
lxc-execute -n 123 ps -e
lxc-device
lxc-device add -n p1 /dev/ttyUSB0 /dev/ttyS0
lxc-attach
# it is possible to attach to a container's namespaces.
# Spawn bash directly in the container (bypassing the console login), requires a >= 3.8 kernel
We've never yet found a cleaner way to pass a mount into a running container.
There is now a setns() system call, which lxc-attach uses to let you enter a
container's namespaces, but because mount() does not allow mixing different
namespaces between the mount source and target, you must either set up mounts
at container startup or use mount propagation.
e.g.
lxc-attach -n p1
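Beyond an interactive shell, lxc-attach can run a single command inside the container. A small wrapper sketch (the function name is mine; the `--` separator is lxc-attach's own syntax for passing a command):

```shell
# Hypothetical wrapper: run one command inside a running container via
# lxc-attach (bypassing the console login) and return its output.
attach_run() {
    name=$1
    shift
    lxc-attach -n "$name" -- "$@"
}

# Usage: attach_run p1 ps aux
```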
lxc-clone
[Basic]
sudo lxc-clone -o p1 -n p4
[Adv.]
lxc-clone -o p1 -n p1-test -B overlayfs -s
lvm and btrfs
The "-s" option also works with lvm and btrfs (possibly zfs too) containers and
tells lxc-clone to use a snapshot rather than copying the whole rootfs across.
Consoles
# log into console 3
# if the -t N option is not specified, an unused console will be automatically chosen
lxc-console -n container -t 3
Each container console is actually a Unix98 pty in the host's (not the guest's) pty mount,
bind-mounted over the guest's /dev/ttyN and /dev/console.
LXC Home Page
https://linuxcontainers.org/