Xen - info

used by enterprise customers, cloud computing providers, and virtualization solution vendors

toolstack (libxenlight / libxl library)
Credit2 scheduler
each CPU pool runs its own scheduler

Support for x86, x86-64, Itanium, PowerPC, and ARM processors

Privileged Domain (Dom0) <-- communicates with the hardware via the hypervisor
Unprivileged Domain Guests (DomU)

Xen has a "thin hypervisor" model (Operating System Neutrality)

Pass-through technology


(dom0-min-mem 1024)
(enable-dom0-ballooning no) <-- give dom0 a fixed amount of RAM
use "xm list" to verify the amount of memory dom0 has
use "xm info" to verify the amount of free memory in the Xen hypervisor

Linux kernel:
calculates various network-related parameters based on the amount of memory present at boot time.
allocates memory for memory-management metadata (per-page info structures), sized to the boot-time amount of memory.
(this is why dom0 should get a fixed amount of memory at boot instead of being ballooned down later)

make sure dom0 always gets enough CPU time to process and serve the I/O requests of guest VMs.

By default Xen gives every guest (including dom0) a weight of 256.

use "xm sched-credit -d Domain-0" to check the current Xen credit scheduler parameters for Dom0.
use "xm sched-credit -d Domain-0 -w 512" to give dom0 a weight of 512, giving it twice the CPU share of the guests.

dedicate (pin) a CPU core only for dom0 use
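A sketch of one way to do this, using Xen's hypervisor boot options (dom0_max_vcpus, dom0_vcpus_pin) plus runtime pinning with xm; the guest name and CPU numbers below are placeholders:

```shell
# /etc/default/grub -- cap dom0 to one vCPU and pin it to a pCPU at boot
GRUB_CMDLINE_XEN="dom0_max_vcpus=1 dom0_vcpus_pin"

# At runtime you can also pin manually:
#   xm vcpu-pin Domain-0 0 0        # dom0's vCPU 0 -> physical CPU 0
#   xm vcpu-pin <domU-name> all 1-3 # keep a guest's vCPUs off pCPU 0
# Verify the result with:
#   xm vcpu-list
```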

Edit /etc/default/grub and add:

GRUB_SERIAL_COMMAND="serial --unit=0 --speed=9600 --word=8 --parity=no --stop=1"
GRUB_TERMINAL="console serial"
GRUB_CMDLINE_XEN="com1=9600,8n1 console=com1,vga"
GRUB_CMDLINE_LINUX="console=tty0 console=hvc0"

In /etc/inittab you need at least these lines:

1:2345:respawn:/sbin/getty 38400 hvc0
2:23:respawn:/sbin/getty 38400 tty1
# NO getty on ttyS0!

DomU (guests)

xen-create-image --hostname <hostname> --ip <ip> --vcpus 2 --pygrub --dist <lenny|maverick|whatever>

2.6.32 kernel images have paravirt_ops-based Xen dom0 and domU support.

if Xen crashes and the host reboots automatically, add "noreboot" to the Xen command line so the crash output stays visible (ideally on the serial console)


Network Layout

Hosting Switch
xenbr0                      xenbr1
 | - PubIP - dom0 - RFC1918 - |
 | - PubIP - domU - RFC1918 - |

dom0 (xend)

I run Debian as dom0 and chose an RFC1918 range for the backend network, so I did the following (as root):

modprobe dummy

echo dummy >> /etc/modules

cat <<EOF >> /etc/network/interfaces
# Xen Backend
auto dummy0
iface dummy0 inet static
 address <RFC1918 address>
 netmask <netmask>
EOF

ifup dummy0

cat <<'EOF' > /etc/xen/scripts/my_network_script
#!/bin/sh
dir=$(dirname "$0")
"$dir/network-bridge" "$@" vifnum=0 netdev=eth0 bridge=xenbr0
"$dir/network-bridge" "$@" vifnum=1 netdev=dummy0 bridge=xenbr1
EOF

Make the new script executable:

chmod +x /etc/xen/scripts/my_network_script

Then edit /etc/xen/xend-config.sxp to have the following two lines of config:

(network-script my_network_script)
(vif-script vif-bridge)

And restart xend

/etc/init.d/xend restart

If you run ifconfig now you should see the following interfaces:

    eth0 (your public IP address)
    dummy0 (your RFC1918 address)


Configure your domU as normal - but edit the cfg file to have the following entry (substitute your public IP and an available private IP):

vif  = [ 'ip=<PublicIP>,bridge=xenbr0','ip=<PrivateIP>,bridge=xenbr1' ]

When you boot it up - set the two interfaces up in /etc/network/interfaces:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
 address x.y.z.2
 netmask <netmask>
 gateway x.y.z.1

# The backend network interface
auto eth1
iface eth1 inet static
 address <PrivateIP>
 netmask <netmask>
Verify the bridge setup with the "brctl show" command:

 bridge name     bridge id               STP enabled     interfaces
 xenbr0          8000.000e0cb30550       yes             eth0

An internal bridge with no external connectivity. Note that $IFACE here can be entered literally; it is substituted automatically by ifupdown:

iface xenbr0 inet manual
        pre-up brctl addbr $IFACE
        up ip link set $IFACE up
        down ip link set $IFACE down
        post-down brctl delbr $IFACE

Some other useful options to use in any stanza in a virtualised environment are:

        bridge_stp off          # disable Spanning Tree Protocol
        bridge_waitport 0       # no delay before a port becomes available
        bridge_fd 0             # no forwarding delay
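The stanza above never touches a physical NIC. For comparison, a sketch of an externally connected bridge using the same bridge-utils options; eth0 and the x.y.z addresses are placeholders, as elsewhere in these notes:

```text
# /etc/network/interfaces -- bridge enslaving the physical NIC
auto xenbr0
iface xenbr0 inet static
        address x.y.z.2          # your public IP
        netmask 255.255.255.0
        gateway x.y.z.1
        bridge_ports eth0        # enslave the physical interface
        bridge_stp off           # disable Spanning Tree Protocol
        bridge_waitport 0        # no delay before a port becomes available
        bridge_fd 0              # no forwarding delay
```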
paravirt_ops (pv-ops for short) is a piece of Linux kernel infrastructure that allows the kernel to run paravirtualized on a hypervisor. It currently supports VMware's VMI, Rusty's lguest, and most interestingly, Xen.

The infrastructure allows you to compile a single kernel binary which will either boot native on bare hardware (or in HVM mode under Xen), or boot paravirtualized under a supported hypervisor.

Dom0 is the first guest that Xen boots. It is usually (almost always) the one that has the driver support. The other guests that are booted (HVM or PV) are called DomU.

Xen itself is a type-1 hypervisor, built independent of any operating system.


  • backend driver = driver required in the Xen dom0 kernel
  • frontend driver = driver required in the Xen domU guest kernel
  • pciback and pcifront = drivers required for PCI passthrough. These drivers are not related to using PCI devices in dom0!
  • usbback and usbfront = drivers required for USB passthrough. These drivers are not related to using physical USB devices in dom0!
  • scsiback and scsifront = drivers required for PVSCSI passthrough. These drivers are not related to using SCSI devices in dom0!


xen-utils-4.0                    Xen administrative tools
(many of these are .py files)



  • xen-create-image  
  • xen-create-nfs    
  • xen-delete-image  
  • xen-list-images   
  • xen-update-image