5 LXD Cluster

 

Introduction

A LXD cluster consists of one bootstrap server and at least two further cluster members.

It stores its state in a distributed dqlite database.

dqlite = distributed SQLite (https://dqlite.io/)

When you create the cluster, the Dqlite database runs on only the bootstrap server

until a third member joins the cluster.

Then both the second and the third server receive a replica of the database.

At any given time, there is an elected cluster leader that monitors the health of the other members.

Each member that replicates the database has either the role of a voter or of a stand-by.

Roles: database-leader, database, database-standby, event-hub, ovn-chassis

The default number of voter members (cluster.max_voters) is 3

The default number of stand-by members (cluster.max_standby) is 2
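
Both defaults can be changed with the usual config commands; for example (the value 5 is only illustrative, and cluster.max_voters must be an odd number of at least 3):

# Allow up to five voting database members
lxc config set cluster.max_voters 5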

Cluster certificate

In a LXD cluster, the API on all servers responds with the same shared certificate

/var/snap/lxd/common/lxd/cluster.crt

lxc cluster update-certificate
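
To replace the shared certificate, a sketch of the workflow, assuming you have generated a new certificate/key pair saved as cluster.crt and cluster.key:

# Push the new shared certificate and key to every cluster member
lxc cluster update-certificate cluster.crt cluster.key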

 


Member configuration

 

LXD cluster members are generally assumed to be identical systems.

This means that all LXD servers joining a cluster must have an identical configuration to the bootstrap server, in terms of storage pools and networks.

 * All members must be upgraded to the same version of LXD.

 

1. bootstrap server

lxd init

Would you like to use LXD clustering? Select yes.
Are you joining an existing cluster? Select no.
Setup password authentication on the cluster? Select yes to use a trust password
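
The same bootstrap can also be scripted with a preseed file; below is a minimal sketch (the address, server name and password are placeholders):

cat <<EOF | lxd init --preseed
config:
  core.https_address: 192.168.1.10:8443
  core.trust_password: some-password
cluster:
  server_name: node1
  enabled: true
EOF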

 

 

 


Storage pools & volumes

 

All nodes must have identical storage pools.

# Define the pool as pending on each member, with member-specific config
lxc storage create --target node1 data zfs source=/dev/vdb1
lxc storage create --target node2 data zfs source=/dev/vdc1
# Instantiate the pool on all members
lxc storage create data zfs

Each volume lives on a specific node.

Different volumes can have the same name as long as they live on different nodes
(for example image volumes).
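
For example, a custom volume is created on a given member with --target (the pool, volume and member names here are placeholders):

# Create a custom storage volume on a specific cluster member
lxc storage volume create data web-data --target node1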

 


Networks

 

All nodes must have identical networks defined.

The only difference between the networks on different members might be their bridge.external_interfaces option.

# Define the network as pending on each member
lxc network create --target node1 my-network
lxc network create --target node2 my-network
# The network is listed as PENDING until it has been created on all members
lxc network list
# Instantiate the network on all members
lxc network create my-network
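
A sketch of the same workflow with a member-specific uplink interface (the interface names eth1/eth2 are assumptions):

lxc network create --target node1 my-network bridge.external_interfaces=eth1
lxc network create --target node2 my-network bridge.external_interfaces=eth2
lxc network create my-network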

 


# Generate a join token on an existing cluster member with the following command.

lxc cluster add <new member name>
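
For example (the member name node3 is a placeholder):

# Prints a one-time join token for the new member
lxc cluster add node3
# Paste the token into `lxd init` on the joining member when it asks
# for the cluster join token.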

 

# Show the configuration of a cluster member
lxc cluster show <member_name>

# Show current state information about a cluster member
lxc cluster info <member_name>

 

Launching a container on a specific member

# Launch on whichever member LXD picks
lxc launch ubuntu:20.04 C1

# Launch on a specific member
lxc launch --target node2 ubuntu:22.04 C2
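
Instances can also be moved to another member afterwards (a sketch; the instance and member names are placeholders):

# Move a stopped instance to a different cluster member
lxc move C2 --target node3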

Non-responding members

You can tweak the number of seconds after which a non-responding member is considered offline by setting the cluster.offline_threshold configuration option.

The default value is 20 seconds; the minimum value is 10 seconds.
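
For example:

# Consider a member offline after 30 seconds without contact
lxc config set cluster.offline_threshold 30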

 


Images

 

By default, LXD will replicate images on as many cluster members as you have database members.

# You can disable image replication in the cluster by setting the count down to 1.
# A value of "-1" copies the image to all members.
lxc config set cluster.images_minimal_replica 1
 

 

 


Remote API authentication

 

Remote communications with the LXD daemon happen using JSON over HTTPS.

Trusted TLS clients

Clients contact the server over HTTPS.

lxc remote add SERVER

If the client certificate is not in the server’s trust store,
the server prompts the user for a token or the trust password.
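
A minimal sketch (the remote name and address are placeholders):

# Add a remote LXD server and trust it
lxc remote add my-cluster 192.168.1.10
# You are asked to confirm the server's certificate fingerprint and then
# prompted for a trust token or the trust password.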

Adding trusted certificates to the server

# List the certificates in the server's trust store

lxc config trust list

# Add a trusted certificate to the server

lxc config trust add file.crt

# Remove a trusted certificate by its fingerprint

lxc config trust remove FINGERPRINT

 

Forming a cluster with a join token

lxc cluster add hostY    # run this on an existing cluster member
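
On the joining member, run lxd init and paste the token when prompted (a sketch of the prompt flow):

lxd init
# Would you like to use LXD clustering? -> yes
# Are you joining an existing cluster? -> yes
# ... paste the join token generated on the existing member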

Trust password (core.trust_password)

Clients can then add their own certificate to the server’s trust store by providing the trust password when prompted.

lxc config get core.trust_password
true    # the password itself is never printed; "true" only means it is set

In a production setup, unset core.trust_password after all clients have been added.
This prevents brute-force attacks trying to guess the password.
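
For example:

# Remove the trust password once all clients have been added
lxc config unset core.trust_password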

 

 
