LXD setup

Last updated: 2020-04-06

Introduction

LXD is a daemon which provides a REST API to drive LXC containers

Supports clustering


LXD vs. Docker

LXD focuses on infrastructure containers.

In contrast, Docker focuses on ephemeral, stateless, minimal containers
that won’t typically get upgraded or re-configured but instead just be replaced entirely.

Security

  • Kernel namespaces
  • Seccomp - To filter some potentially dangerous system calls.
  • AppArmor - To provide additional restrictions on mounts, socket, ptrace and file access.
  • Capabilities - To prevent the container from loading kernel modules, altering the host system time
  • CGroups - To restrict resource usage and prevent DoS attacks against the host.

Contents

 


Install

 

# Ubuntu 18

apt-get install lxcfs lxd-client squashfs-tools

# Install by snap

snap install lxd

# Tell LXD a little bit about your storage and network needs.

lxd init

 * The root user as well as members of the "lxd" group can interact with the local daemon.

 * WARNING: Anyone with access to the LXD socket can fully control LXD,
    which includes the ability to attach host devices and filesystems

 


First time

 

# Answer a few questions

lxd init

Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]: lxd-storage-pool
Name of the storage backend to use (dir, lvm, ceph, btrfs) [default=btrfs]: dir
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: none
Would you like LXD to be available over the network? (yes/no) [default=no]: yes
Address to bind LXD to (not including port) [default=all]:
Port to bind LXD to [default=8443]: 9443
Trust password for new clients:
Again:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: no
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

# In a non-interactive way

# fully configure LXD daemon settings, storage pools, network devices and profiles

lxd init --preseed < input.yaml
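
A minimal preseed file might look like this (a sketch only; the pool, bridge and address values are illustrative, not required names):

# input.yaml
config:
  core.https_address: "[::]:8443"
storage_pools:
- name: default
  driver: dir
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: none
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      network: lxdbr0
      type: nic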

# Dump Config

lxd init --dump

Data Location

# deb package               snap package

/var/lib/lxd        ->        /var/snap/lxd/common/lxd

 


Set the default editor

 

Change the default editor used by LXD snap commands (e.g. "lxc profile edit default")

export EDITOR=vim

echo 'export EDITOR=vim' >> ~/.profile

 


Profiles

 

Profiles can store any configuration that an instance can (key/value or devices) and
any number of profiles can be applied to an instance.

Profiles are applied in the order they are specified so the last profile to specify a specific key wins.

LXD ships with two pre-configured profiles:

    “default”

It is automatically applied to all containers unless an alternative list of profiles is provided by the user.

This profile currently does just one thing: define an “eth0” network device for the container.

    “docker”

It is a profile you can apply to a container which you want to allow to run Docker containers.

It requests LXD load some required kernel modules, turns on container nesting and sets up a few device entries.

Show profile info

lxc profile show default
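
Typical output looks something like this (the exact devices and pool depend on how "lxd init" was answered):

config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by: []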

 


Project

 

Projects are a way to split up your LXD server.

Each project holds its own set of instances and may also have its own images and profiles.

Configuration key types

  • features (What part of the project feature set is in use)
  • restrictions (prevent the project from accessing security-sensitive features)
  • limits (Resource limits applied on containers and VMs belonging to the project)
  • user (free form key/value for user metadata)

e.g.

lxc project set default limits.virtual-machines=0

lxc project info default

+------------------+-----------+---------+
|     RESOURCE     |   LIMIT   |  USAGE  |
+------------------+-----------+---------+
| CONTAINERS       | UNLIMITED | 3       |
+------------------+-----------+---------+
| CPU              | UNLIMITED | 3       |
+------------------+-----------+---------+
| DISK             | UNLIMITED | 0B      |
+------------------+-----------+---------+
| INSTANCES        | UNLIMITED | 3       |
+------------------+-----------+---------+
| MEMORY           | UNLIMITED | 1.45GiB |
+------------------+-----------+---------+
| NETWORKS         | UNLIMITED | 3       |
+------------------+-----------+---------+
| PROCESSES        | UNLIMITED | 0       |
+------------------+-----------+---------+
| VIRTUAL-MACHINES | 0         | 0       |
+------------------+-----------+---------+
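
Projects can be created and switched between from the CLI; for example (the project name "dev" is only an illustration):

lxc project create dev

lxc project switch dev

lxc list          # empty - instances belong to a project

lxc project switch default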

Database

 

Rather than keeping the configuration and state within each instance's directory as is traditionally done by LXC,
LXD has an internal database which stores all of that information.

Since LXD supports clustering, and all members of the cluster must share the same database state,
the database engine is based on a distributed version of SQLite,

which provides replication, fault-tolerance and automatic failover without the need of external database processes.

We refer to this database as the "global" LXD database.

Global database

cluster-specific data (such as profiles, containers, etc)

Path: /var/snap/lxd/common/lxd/database/global

Cluster member “local” DB

(contains member-specific data)

Path: /var/snap/lxd/common/lxd/database/local.db

# Dumping DB content or schema

lxd sql <local|global> [.dump|.schema]

# flush the content of the cluster database (db.bin) to disk

lxd sql global .sync
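
Ad-hoc queries are also possible; for example (table names come from the internal schema and may change between LXD versions):

lxd sql global "SELECT name FROM instances"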

 


Services

 

  • snap
  • lxd.activate
  • lxd.daemon

snap services

Service       Startup  Current   Notes
lxd.activate  enabled  inactive  -
lxd.daemon    enabled  active    socket-activated

lxd.activate service

=> Starting LXD activation
==> Loading snap configuration
==> Checking for socket activation support
==> Setting LXD socket ownership
==> Checking if LXD needs to be activated

systemctl status snap.lxd.activate.service (/etc/systemd/system/snap.lxd.activate.service)

-> /usr/bin/snap run lxd.activate

--> /snap/lxd/current/commands/daemon.activate

lxd.daemon service

By default, LXD is socket activated and configured to listen only on a local UNIX socket.
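
The HTTPS listener can also be enabled after the fact instead of via "lxd init"; a sketch (9443 matches the port chosen in the init example above, the password is an example):

lxc config set core.https_address "[::]:9443"

lxc config set core.trust_password some-secret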

 


Debug

 

cat /var/snap/lxd/common/lxd/logs/lxd.log

Try running:

rm /var/snap/lxd/common/lxd/unix.socket

lxd --debug --group lxd

Maybe it will give us some more details on what's going on.

 


Config lxd & lxfs

 

/var/snap/lxd/common/config

daemon.start reads this file when starting lxd and lxcfs

ceph_builtin=false
criu_enable=false
daemon_debug=false
daemon_group=lxd
lxcfs_loadavg=false          
lxcfs_cfs=false             
openvswitch_builtin=false
shiftfs_enable=auto

# --enable-loadavg => Enable loadavg virtualization (LXCFS 4.0)

# --enable-cfs => /proc/cpuinfo and cpu output in /proc/stat based on cpu shares (LXCFS 4.0)
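
These values are normally changed through snap options rather than by editing the file directly; for example (option names mirror the keys above, and a daemon reload applies them):

snap set lxd lxcfs.loadavg=true

snap set lxd daemon.debug=true

systemctl reload snap.lxd.daemon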

 


Basic CLI Usage

 

List Image

# By default no images are loaded into the image store (local)

lxc image list

  • Architectures: i686 x86 ...

# LXD knows about 3 default image servers (built-in image servers):

  • ubuntu: (for Ubuntu stable images)
  • ubuntu-daily: (for Ubuntu daily images)
  • images: (for a bunch of other distributions)

# The stable Ubuntu images can be listed with:

lxc image list ubuntu:

| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCHITECTURE | TYPE | SIZE | UPLOAD DATE |

# List images on "https://images.linuxcontainers.org"

lxc image list images:

Launch

1. Download image

2. Start it

# To launch a first container called "u18" using the Ubuntu 18.04 image, use:

lxc launch images:ubuntu/18.04 u18

# Create a CentOS 7 container named myCentos

lxc launch images:centos/7 myCentos

Info

lxc info u18

Start/Stop/Delete a Container

# stop & delete

lxc stop first

lxc delete first

Remark: stop & delete in one step

lxc delete --force second

 


"lxc image" CLI

 

lxc image [command]

Available Commands:
  alias       Manage image aliases
  copy        Copy images between servers
  delete      Delete images
  edit        Edit image properties
  export      Export and download images
  import      Import images into the image store
  info        Show useful information about images
  list        List images
  refresh     Refresh images
  show        Show image properties

Export image

lxc image export FINGERPRINT|ALIAS

This exports two files:

  • Full_Fingerprint_ID.squashfs
  • meta-Full_Fingerprint_ID.tar.xz

Delete a "local" image

lxc image delete FINGERPRINT|ALIAS

Import a previously exported image

lxc image import META ROOTFS --alias u22.04

 * An imported image can no longer be updated with "lxc image refresh FINGERPRINT", because it has lost its "Source:"

lxc image info ea031499f8b7

...
Cached: yes
Auto update: enabled
Source:
    Server: https://images.linuxcontainers.org
    Protocol: simplestreams
    Alias: ubuntu/22.04

Set an alias name

# Create aliases for existing images

lxc image alias create u22.04  fdf0d5d10f44

# alias <-> fingerprint

lxc image alias list

+--------+--------------+-----------+-------------+
| ALIAS  | FINGERPRINT  |   TYPE    | DESCRIPTION |
+--------+--------------+-----------+-------------+
| u22.04 | fdf0d5d10f44 | CONTAINER |             |
+--------+--------------+-----------+-------------+

Copy an image from a remote server to local

lxc image copy images:rockylinux/8 local:

Image Info.

lxc image info 6de3bbd44fa9

Fingerprint: 6de3bbd44fa9...
Size: 441.84MB
Architecture: x86_64
Type: container
Public: no
Timestamps:
    Created: 2023/02/10 00:00 UTC
    Uploaded: 2023/02/14 03:48 UTC
    Expires: 2027/04/21 00:00 UTC
    Last used: 2023/02/14 03:58 UTC
Properties:
    serial: 20230210
    type: squashfs
    version: 22.04
    architecture: amd64
    description: ubuntu 22.04 LTS amd64 (release) (20230210)
    label: release
    os: ubuntu
    release: jammy
Aliases:
Cached: no
Auto update: disabled
Source:
    Server: https://cloud-images.ubuntu.com/releases
    Protocol: simplestreams
    Alias: 22.04
Profiles:
    - default

Image Caching

When you create an instance using a remote image, LXD downloads the image and caches it locally.

It is stored in the local image store with the cached flag set.

The image is kept locally as a private image until either:

  • last_used_at + images.remote_cache_expiry
  • expires_at

Notes:

[1] Check the real names of the image settings ("last_used_at" vs. "Last used")

lxc image list 6de3bbd44fa9 -f yaml

[2] Make images never expire

lxc config set images.remote_cache_expiry 0

 


Global Settings

 

lxc config set <key> <value>

lxc config get <key>

e.g.

images.auto_update_cached true     # whether cached images are updated automatically

images.auto_update_interval 6      # update every 6 hours

 


Change the default storage location for images and containers

 

Image

# Image default: /var/snap/lxd/common/lxd/images

storage.images_volume POOL/VOLUME

lxc config set storage.images_volume btrfs-pool/images

Container

# Just set the "pool" property of the "root" device in the "default" profile to point to whatever pool you want.

# That's what "lxd init" does for you.

lxc profile device add default root disk path=/ pool=btrfs-pool

lxc profile device show default

eth0:
  name: eth0
  network: lxdfan0
  type: nic
root:
  path: /
  pool: btrfs-pool
  type: disk

 


Configure Container Resources

 

# Container info

lxc config show u18

 * By default your container comes with no resource limitation and inherits from its parent environment.

# Run CLI inside container

lxc exec u18 -- free -m

# To apply a memory limit to your container, do:

lxc config set first limits.memory 128MB
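
Other limits follow the same pattern (the keys below are standard instance options; "first" is the container from the example above):

lxc config set first limits.cpu 2

lxc config set first limits.cpu.allowance 50%

lxc exec first -- nproc       # verify from inside the container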

 


pull/push a file

 

lxc file pull second/etc/hosts .

lxc file push hosts second/etc/hosts

e.g.

# view a container's log

lxc file pull second/var/log/syslog - | less

 


Snapshot

 

# Take

# --reuse       If the snapshot name already exists, delete and create a new one

lxc snapshot u18 my-clean-snap

# Restore

lxc restore u18 my-clean-snap

# info

lxc info <instance_name>

lxc config show <instance_name>/<snapshot_name>

# Delete

lxc delete <instance_name>/<snapshot_name>

# Start a new container 'u18ii' from the image 'clean-ubuntu'

lxc launch clean-ubuntu u18ii

# Schedule instance snapshots

lxc config set <instance_name> snapshots.schedule @daily

lxc config set <instance_name> snapshots.schedule "0 6 * * *"

 


Copy & Move Files to a Container

 

lxc file

Available Commands:

  delete      Delete files in instances
  edit        Edit files in instances
  mount       Mount files from instances
  pull        Pull files from instances
  push        Push files into instances

* pull/push works even when the container is not started

e.g.

lxc file push test.txt tim-test/root

Error: sftp: "open /root: is a directory" (SSH_FX_FAILURE)

lxc file push test.txt tim-test/root/test.txt

 


Alias

 

Define an alias

lxc list -c nsN4tSL

N - Number of Processes
S - Number of snapshots

lxc alias add list "list -c nsN4tSL"

 


Creating images

 

# snapshot -> image

lxc publish u18/clean --alias clean-ubuntu-snap

# Manually importing an image

lxc image import <file> --alias my-alias

 


lxd-benchmark

 

ls -l /snap/bin

...
lrwxrwxrwx 1 root root 13 Apr  3 17:58 lxd.benchmark -> /usr/bin/snap
lrwxrwxrwx 1 root root 13 Apr  3 17:58 lxd.check-kernel -> /usr/bin/snap

ls -1 /snap/lxd/current/commands

...
lxd-benchmark
lxd-check-kernel

# Start benchmark: Spawn 20 Ubuntu containers in batches of 4

lxd-benchmark launch --count 20 --parallel 4 [images:alpine/edge]

# Delete all benchmark containers

lxd-benchmark delete

 


Migration

 

Syntax

lxc move [<source_remote>:]<source_instance_name> <target_remote>:[<target_instance_name>] [--mode]

 * If target_instance_name is omitted, source_instance_name is used

--mode flag

  • pull (default)
  • push
  • relay       # data is relayed through the client machine to the target
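
For example, moving a container to another LXD server (assuming a remote named "lxd2" has already been added with "lxc remote add", and the container is stopped first):

lxc stop u18

lxc move u18 lxd2: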

 


Storage pools & volumes

 

Choose which storage pool a container is created in

[Method 1]

lxc launch <image> <instance_name> --storage <storage_pool>

[Method 2]

lxc profile device add <profile_name> root disk path=/ pool=<storage_pool>

lxc launch <image> <instance_name> --profile <profile_name>

Create a storage pool

lxc storage create <pool_name> <driver> [configuration_options...]

i.e.

lxc storage create dir-pool dir source=/lxd

lxc storage create btrfs-pool btrfs source=/lxd

lxc storage create lvm-pool lvm source=my-vg

lxc storage create pool4 lvm source=my-vg lvm.thinpool_name=my-pool

 * This command automatically creates a set of folders (containers, images, custom, ...)

 * Volumes you create yourself live under custom

Notes

The snap's default pools live under /var/snap/lxd/common/lxd/

  • images
  • storage-pools

List storage

lxc storage list

+------------+--------+-------------+---------+---------+
|    NAME    | DRIVER | DESCRIPTION | USED BY |  STATE  |
+------------+--------+-------------+---------+---------+
| btrfs-pool | btrfs  |             | 0       | CREATED |
+------------+--------+-------------+---------+---------+

Volume Operations

lxc storage volume create <POOL> <VOL>

lxc storage volume list <POOL>

lxc storage volume info <POOL> <VOL>

lxc storage volume show <POOL> <VOL>/<snapshot_name>

lxc storage volume edit <POOL> <VOL>/<snapshot_name>

lxc storage volume delete <POOL> <VOL>/<snapshot_name>

lxc storage volume snapshot <POOL> <VOL> [<snapshot_name>]

e.g.

lxc storage volume create btrfs-pool images

lxc storage volume list btrfs-pool                     # stored under custom/images

+--------+--------+-------------+--------------+---------+-------------+
|  TYPE  |  NAME  | DESCRIPTION | CONTENT-TYPE | USED BY |  LOCATION   |
+--------+--------+-------------+--------------+---------+-------------+
| custom | images |             | filesystem   | 0       | lxd-i.local |
+--------+--------+-------------+--------------+---------+-------------+

lxc storage volume show btrfs-pool images

config: {}
description: ""
name: images
type: custom
used_by: []
location: lxd-i.local
content_type: filesystem
project: default
created_at: 0001-01-01T00:00:00Z
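
A custom volume only becomes visible inside an instance once it is attached; a sketch reusing the "images" volume and the "u18" container from earlier (the mount path is arbitrary):

lxc storage volume attach btrfs-pool images u18 /mnt/images

lxc storage volume detach btrfs-pool images u18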

Move instance storage volumes to another pool

lxc move <instance_name> --storage <target_pool_name>

Copy or move between LXD servers

lxc storage volume copy \
    <source_remote>:<source_pool_name>/<source_volume_name> \
    <target_remote>:<target_pool_name>/<target_volume_name>

lxc storage volume move \
    <source_remote>:<source_pool_name>/<source_volume_name> \
    <target_remote>:<target_pool_name>/<target_volume_name>

Schedule snapshots

lxc storage volume set <pool_name> <volume_name> snapshots.schedule @daily

lxc storage volume set <pool_name> <volume_name> snapshots.schedule "0 6 * * *"

 


Backup Containers

 

Container

Backup by "lxc export"

It takes a snapshot of the container and then exports it

Syntax

lxc export container-name [/path/to/file] [flags]

If /path/to/file is not given, backup.tar.gz is created in the current directory

flags

  • --instance-only         # export without snapshots
  • --optimized-storage     # use the storage driver's optimized format
  • --compression           # gzip (default) | bzip2 | none

e.g.

lxc export MyU22 --instance-only --optimized-storage

Notes

While the export is running, "lxc info MyU22" shows a backup0 entry under Backups:

...
Backups:
+---------+----------------------+----------------------+---------------+-------------------+
|  NAME   |       TAKEN AT       |      EXPIRES AT      | INSTANCE ONLY | OPTIMIZED STORAGE |
+---------+----------------------+----------------------+---------------+-------------------+
| backup0 | 2023/02/15 18:05 HKT | 2023/02/16 18:05 HKT | NO            | NO                |
+---------+----------------------+----------------------+---------------+-------------------+

Snapshot

A snapshot cannot be exported directly; publish it as an image first, then use "lxc image export"

lxc publish [<remote>:]<instance>[/<snapshot>] [<remote>:] [flags]

flags

--alias              New alias to define at target

e.g.

lxc info tim-test

Snapshots:
+-------+----------------------+------------+----------+
| NAME  |       TAKEN AT       | EXPIRES AT | STATEFUL |
+-------+----------------------+------------+----------+
| test1 | 2023/02/14 17:34 HKT |            | NO       |
+-------+----------------------+------------+----------+

lxc publish tim-test/test1 --alias tim-test

Publishing instance: Image pack: 50% (14.20MB/s)
... a moment later ...
Instance published with fingerprint: 7833..

 

Storage

Backup (export)

Export custom storage volume

lxc storage volume export <pool_name> <volume_name> [<file_path>] [flag]

flag

--volume-only

By default, the export file contains all snapshots of the storage volume.

Add this flag to export the volume without its snapshots.

--optimized-storage

With btrfs / zfs the export is a binary blob rather than plain files (faster)

Restore backup (import)

lxc storage volume import <pool_name> <file_path> [<volume_name>]
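
For example, round-tripping the custom "images" volume created earlier (the file name is arbitrary):

lxc storage volume export btrfs-pool images images-vol.tar.gz

lxc storage volume import btrfs-pool images-vol.tar.gz images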

 


snap and lxd

 

  • ~/snap/lxd/current/.config/    # per-user configuration
  • /snap/lxd                      # LXD installation files
  • /var/snap/lxd                  # LXD data files

 


Unprivileged containers

 

unprivileged containers (the default in LXD)

 => UID 0 in the container may be 100000 on the host

cat /etc/subuid /etc/subgid

Remark

root:100000:65536
root:100000:65536

# If they are not set

usermod -v 100000-165535 -w 100000-165535 root         # run as root

 * If none of those files can be found, then LXD will assume a 1000000000 UID/GID range

 * Containers with security.idmap.isolated will have a unique ID range computed for them
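
To give one container its own isolated range as mentioned above (a restart is needed for the remap to take effect; "u18" is just an example container):

lxc config set u18 security.idmap.isolated true

lxc restart u18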

Checking

lxc info lxd-dashboard | grep PID

PID: 10654

ps -F --pid 10654

UID          PID    PPID  C    SZ   RSS PSR STIME TTY          TIME CMD
1000000    10654   10643  0 24970 10888   0 05:12 ?        00:00:00 /sbin/init

# Check whether a container runs without a UID mapping (privileged container)

lxc config get lxd-dashboard security.privileged

 


Backup

 

# Dump the current daemon configuration as preseed YAML (keep a copy as a config backup)

lxd init --dump

 


Tuning

 

/etc/sysctl.conf

net.ipv4.ip_forward=1
kernel.dmesg_restrict = 1
net.core.netdev_max_backlog = 182757

sysctl -p

NIC

br0 is the host's outward-facing NIC

ip link set br0 txqueuelen 10000

ip link set eth0 txqueuelen 10000

 


Cheat Notes

 

  • snap start lxd
  • snap services

 

 
