Last updated: 2014/10/07
Table of Contents
- About packages
- Controller node setup
- Compute node setup
- To enable KVM explicitly
- Nested KVM
- Setting
- Commands
- Quota
About packages
# Compute node:
- nova-api-metadata - used when you run in multi-host mode with nova-network installations
- nova-compute - A worker daemon that creates and terminates virtual machine instances through hypervisor APIs ( libvirt for KVM or QEMU)
- nova-network - it accepts networking tasks from the queue and performs tasks to manipulate the network
# Controller node:
- nova-api service - Accepts and responds to end user compute API calls (OpenStack Compute API, the Amazon EC2 API, and a special Admin API )
- nova-scheduler - Takes a VM instance request from the queue and determines on which nova-compute host it should run
- nova-conductor - Mediates interactions between nova-compute and the database
- nova-dhcpbridge script - Tracks IP address leases and records them in the database by using the dnsmasq dhcp-script facility
# Browser-based noVNC clients
- nova-consoleauth - Authorizes tokens for users that console proxies provide. ( M node )
- nova-novncproxy - Provides a proxy for accessing running instances through a VNC connection. ( M node )
* uses noVNC to provide VNC support through a web browser.
* The proxies rely on nova-consoleauth to validate tokens
- nova-xvpvncproxy - A proxy for accessing running instances through a VNC connection. ( M node )
- nova-cert - Manages x509 certificates. ( M node )
# Command-line clients and other interfaces
- nova client. Enables users to submit commands as a tenant administrator or end user.
- nova-manage client. Enables cloud administrators to submit commands.
# Other components
- The queue service
- SQL database
P.S.
They are all written in Python.
Controller node setup
# Install the required packages
yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler \
python-novaclient
# Set up the database
mysql -u root -p
mysql> CREATE DATABASE nova;
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
mysql> flush privileges;
mysql> exit
# Create the DB tables
openstack-config --set /etc/nova/nova.conf \
database connection mysql://nova:NOVA_DBPASS@controller_pri/nova
su -s /bin/sh -c "nova-manage db sync" nova
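A quick way to confirm the sync worked (an optional check of my own, using the nova credentials created above):
mysql -u nova -pNOVA_DBPASS nova -e "SHOW TABLES;"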
# Set up the queue server (Qpid)
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller_pri
# Set up VNC
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip controller_pub
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen controller_pub
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address controller_pub
# keystone
keystone user-create --name=nova --pass=NOVA_PASS [email protected]
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
| email    | [email protected]                 |
| enabled  | True                             |
| id       | b43b32b5a69c4b92a4da525eec73d612 |
| name     | nova                             |
| username | nova                             |
+----------+----------------------------------+
keystone user-role-add --user=nova --tenant=service --role=admin
keystone service-create --name=nova --type=compute --description="OpenStack Compute"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | 15e5e2fbb97546759da99be4f66d0ccc |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ compute / {print $2}') \
--publicurl=http://controller_pub:8774/v2/%\(tenant_id\)s \
--internalurl=http://controller_pri:8774/v2/%\(tenant_id\)s \
--adminurl=http://controller_pri:8774/v2/%\(tenant_id\)s
+-------------+---------------------------------------------+
|   Property  |                    Value                    |
+-------------+---------------------------------------------+
| adminurl    | http://controller_pri:8774/v2/%(tenant_id)s |
| id          | 70a44eee093444929cef2c6212b75e39            |
| internalurl | http://controller_pri:8774/v2/%(tenant_id)s |
| publicurl   | http://controller_pub:8774/v2/%(tenant_id)s |
| region      | regionOne                                   |
| service_id  | 15e5e2fbb97546759da99be4f66d0ccc            |
+-------------+---------------------------------------------+
# Configure nova
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller_pri:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host controller_pri
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password NOVA_PASS
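For reference, the openstack-config commands above end up writing roughly the following into /etc/nova/nova.conf (a sketch of the resulting sections, same values as above):
[DEFAULT]
auth_strategy=keystone
[keystone_authtoken]
auth_uri=http://controller_pri:5000
auth_host=controller_pri
auth_protocol=http
auth_port=35357
admin_user=nova
admin_tenant_name=service
admin_password=NOVA_PASS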
# Start the related services
service openstack-nova-api start
service openstack-nova-cert start
service openstack-nova-consoleauth start
service openstack-nova-scheduler start
service openstack-nova-conductor start
service openstack-nova-novncproxy start
chkconfig openstack-nova-api on
chkconfig openstack-nova-cert on
chkconfig openstack-nova-consoleauth on
chkconfig openstack-nova-scheduler on
chkconfig openstack-nova-conductor on
chkconfig openstack-nova-novncproxy on
# Verify your configuration
nova image-list
+--------------------------------------+------------------------------+--------+--------+
| ID                                   | Name                         | Status | Server |
+--------------------------------------+------------------------------+--------+--------+
| dff9056a-81d8-4a21-97ae-f8b6eb20302b | precise-server-cloudimg-i386 | ACTIVE |        |
+--------------------------------------+------------------------------+--------+--------+
Compute node setup
Installation
yum install openstack-nova-compute
Configuration
# Configure the connection to keystone
openstack-config --set /etc/nova/nova.conf database connection mysql://nova:NOVA_DBPASS@controller_pri/nova
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller_pri:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host controller_pri
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password NOVA_PASS
# Set up the queue (Qpid)
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller_pri
# Set up VNC
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.123.72
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
# default value
# vnc_enabled=true
# vnc_keymap=en-us
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
# The instance's XML
<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' keymap='en-us'>
<listen type='address' address='0.0.0.0'/>
</graphics>
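Because port='-1' with autoport='yes' lets libvirt pick the VNC port at boot, the actual port can be checked on the compute node, e.g. (domain name taken from "virsh list"):
virsh vncdisplay instance-00000001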
# compute node management_ip
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.0.2
# 行 "nova get-vnc-console <instance> novnc" 時獲得的 link
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://controller_pub:6080/vnc_auto.html
Proxy node:
The nova-novncproxy service supports browser-based noVNC clients.
It typically runs on the same machine as nova-api
(because it operates as a proxy between the public network and the private compute host network).
ps aux will show:
/usr/bin/python /usr/bin/nova-novncproxy --web /usr/share/novnc/
# By default
nova.conf:
novncproxy_host=0.0.0.0
novncproxy_port=6080
After changing the settings, restart it:
/etc/init.d/openstack-nova-novncproxy restart
# Configure the image service
openstack-config --set /etc/nova/nova.conf DEFAULT glance_host controller_pri
# Configure libvirt to use qemu (when testing inside a VM)
egrep -c '(vmx|svm)' /proc/cpuinfo
0
openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
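The check and the setting can be combined; a minimal sketch that falls back to qemu only when the vmx/svm flags are missing:
# use qemu when no HW virtualization flags are present, otherwise kvm
if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -eq 0 ]; then
    openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
else
    openstack-config --set /etc/nova/nova.conf libvirt virt_type kvm
fi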
# Start the services
service libvirtd start
service messagebus start
service openstack-nova-compute start
chkconfig libvirtd on
chkconfig messagebus on
chkconfig openstack-nova-compute on
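Once openstack-nova-compute is running, the new node should register itself with the controller; an easy way to check (assuming admin credentials are loaded on the controller):
# the compute host should appear in the list with an up state
nova service-list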
To enable KVM explicitly
/etc/nova/nova.conf
#
# Options defined in nova.virt.driver
#
compute_driver=libvirt.LibvirtDriver

[libvirt]
#virt_type=qemu
libvirt_type=kvm
inject_password=true
inject_key=true
/etc/init.d/openstack-nova-compute restart
/etc/init.d/openstack-nova-api restart
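To confirm that KVM is really being used after the restart (my own check, not from the guide):
# the kvm module (plus kvm_intel or kvm_amd) should be loaded
lsmod | grep kvm
# running instances should show up as qemu-kvm processes
ps aux | grep qemu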
Nested KVM
http://datahunter.org/nested_kvm
Setting
#
# Options defined in nova.virt.driver
#
compute_driver=nova.virt.libvirt.LibvirtDriver
# none. No storage provisioning occurs up front.
# space. Storage is fully allocated at instance start.
preallocate_images=none
# For per-compute-host cached images
image_cache_subdirectory_name=_base
# Should unused base images be removed?
# When set to True, the interval at which base images are removed is set with the following two settings.
# If set to False base images are never removed by Compute.
remove_unused_base_images=false
# how often (in seconds) the image cache manager runs
image_cache_manager_interval=2400
# Unused unresized base images younger than this are not removed.
remove_unused_original_minimum_age_seconds=86400
# Unused resized base images younger than this are not removed.
remove_unused_resized_minimum_age_seconds=3600
# To see how the settings affect the deletion of a running instance
# check the directory where the images are stored:
ls -lash /var/lib/nova/instances/_base/
# restart it after changing the settings
/etc/init.d/openstack-nova-compute restart
Commands
http://datahunter.org/cmd_nova
flavors
Virtual hardware templates are called flavors.
( the default install provides five flavors )
# Disk
amount of disk space in gigabytes
# Ephemeral
* 0 by default
Ephemeral disks offer machine local disk storage linked to the life cycle of a VM instance. When a VM is terminated, all data on the ephemeral disk is lost. Ephemeral disks are not included in any snapshots.
nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
nova flavor-show 1
+----------------------------+---------+
| Property                   | Value   |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled   | False   |
| OS-FLV-EXT-DATA:ephemeral  | 0       |
| disk                       | 1       |
| extra_specs                | {}      |
| id                         | 1       |
| name                       | m1.tiny |
| os-flavor-access:is_public | True    |
| ram                        | 512     |
| rxtx_factor                | 1.0     |
| swap                       |         |
| vcpus                      | 1       |
+----------------------------+---------+
Usage:
nova flavor-create <name> <id> <ram> <disk> <vcpus>
Default:
--is-public yes
--rxtx-factor 1
--ephemeral 0
e.g.
nova flavor-create m1.test 11 512 10 1
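The same example with the optional flags written out explicitly (the values shown are just the defaults listed above):
nova flavor-create --ephemeral 0 --swap 0 --rxtx-factor 1.0 --is-public true m1.test 11 512 10 1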
Quota
# Options defined in nova.quota
quota_instances=10
quota_cores=4
quota_ram=51200
quota_floating_ips=3
quota_security_group_rules=20
quota_security_groups=10
quota_key_pairs=100
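These are only the global defaults; per-tenant quotas can be viewed and adjusted with the nova client, e.g.:
# show the quota of one tenant
nova quota-show --tenant <tenant_id>
# raise the instance quota for that tenant
nova quota-update --instances 20 <tenant_id>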