Manual Deployment and Installation on a Test Machine

[toc]

1. OpenStack Installation Environment Setup

1. Security

Disable the firewall and SELinux: systemctl disable firewalld.service; systemctl stop firewalld.service; setenforce 0. Then edit the configuration file /etc/selinux/config and set SELINUX to disabled, as sketched below.
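Equivalently, the persistent SELinux change can be scripted rather than hand-edited; a minimal sketch:

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # disable SELinux on next boot
setenforce 0                                                   # disable immediately for the current session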

2. Hosts

10.58.10.19 controller
10.58.10.95 computer

2. Base Environment Configuration

1. Enable the OpenStack Repository

yum install centos-release-openstack-ocata. Then complete the installation: install the OpenStack client with yum install python-openstackclient, and the SELinux policy package with yum install openstack-selinux.

2. Map the Internal/External IPs to Hostnames

Edit the configuration file /etc/hosts, then log out and back in for the change to take effect. A sketch follows below.
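A minimal sketch, assuming the management IPs from the Hosts section above:

cat >> /etc/hosts <<'EOF'
10.58.10.19 controller
10.58.10.95 computer
EOF
ping -c 1 controller   # confirm the name resolves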

3. MySQL Database Installation and Configuration

  1. Install the required packages: yum install mariadb-server python2-PyMySQL

  2. Edit the configuration file: vim /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 10.58.10.19
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
  3. Start the database service: systemctl enable mariadb.service; systemctl start mariadb.service
  4. Set the database password: mysql_secure_installation # set the password to 'Td@123', answer y and Enter to the remaining prompts
  5. Test the login: mysql -uroot -pTd@123
  6. Remote access: allow the root user to connect to the MySQL database remotely: GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'Td@123' WITH GRANT OPTION; FLUSH PRIVILEGES;
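To confirm the remote grant works, a quick check from another host (a sketch; assumes the MySQL client is installed there):

mysql -h 10.58.10.19 -uroot -pTd@123 -e "SELECT 1;"   # should print 1, not an access-denied error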

4. Message Queue: RabbitMQ Installation and Configuration

  1. Install the package: yum install rabbitmq-server
  2. Enable and start the message queue service: systemctl enable rabbitmq-server.service; systemctl start rabbitmq-server.service
  3. Add the openstack user: rabbitmqctl add_user openstack openstack # add the user and its password
  4. Set permissions: rabbitmqctl set_permissions openstack ".*" ".*" ".*" # allow configure, write, and read access; rabbitmq-plugins list # list the available plugins
  5. Use the management plugin for web administration: rabbitmq-plugins enable rabbitmq_management # enable the plugin; systemctl restart rabbitmq-server.service
  6. Check the port: lsof -i:15672
  7. Access RabbitMQ at http://controller:15672/. The default username and password are both guest. Add the openstack user to the administrator group via the web UI and test logging in; if you cannot connect, it is usually because the firewall was not disabled. A CLI way to grant that access is sketched below.
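The guest account can only log in from localhost by default. To let the openstack user created above into the management UI, it needs an administrator tag; this step is not in the original notes, so treat it as a sketch:

rabbitmqctl set_user_tags openstack administrator   # allow the openstack user to log in to the web UI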

5. Memcached Installation and Configuration

Memcached is used to cache tokens. Install the required packages: yum install memcached python-memcached. Then edit the configuration file: vim /etc/sysconfig/memcached

PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
#OPTIONS="-l 127.0.0.1,::1,controller,computer"
OPTIONS="-vv >> /var/log/memcache/memcached.log 2>&1"
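The steps above do not show starting the service; presumably the usual pair applies:

systemctl enable memcached.service
systemctl start memcached.service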

3. Keystone: Identity Service

Cloud security needs to consider four areas: data security, identity and access management security, virtualization security, and infrastructure security. Keystone is OpenStack's standalone security authentication module; it is responsible for authenticating OpenStack users, managing tokens, providing a catalog of the service endpoints a user may access, and enforcing role-based access control. Within the overall OpenStack architecture, Keystone acts like a service bus: every other service registers its endpoints with Keystone, where an endpoint is the access point or URL of a service. Some basic Keystone concepts:

  1. User: a person, system, or service that accesses OpenStack services through Keystone; Keystone verifies the legitimacy of requests against the user's credentials.
  2. Role: a role held by a user, representing the permissions granted to it.
  3. Service: an OpenStack service such as Nova or Glance.
  4. Endpoint: a network address that can be used to access a specific service.
  5. Token: a credential issued after authentication and presented with subsequent requests.
  6. Catalog: the lookup catalog of services and their endpoints.
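Once the admin environment from the subsections below is loaded, each of these concepts can be inspected directly from the CLI (a sketch):

openstack user list        # Users
openstack role list        # Roles
openstack service list     # Services
openstack endpoint list    # Endpoints
openstack catalog list     # the Catalog, grouped by service
openstack token issue      # request a Token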

1. Pre-installation Preparation

  1. Create the keystone database: CREATE DATABASE keystone;
  2. Grant database access: GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone'; GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone'; FLUSH PRIVILEGES;

2. Keystone Component Installation and Configuration

  1. Install the required packages: yum install openstack-keystone httpd mod_wsgi

  2. Edit the configuration file: vim /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:keystone@controller/keystone
[token]
provider = fernet
  3. Populate the Identity service database: su -s /bin/sh -c "keystone-manage db_sync" keystone
  4. Initialize the Fernet key repositories: keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone; keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
  5. Bootstrap the Identity service: keystone-manage bootstrap --bootstrap-password admin --bootstrap-admin-url http://controller:35357/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne

3. Apache HTTP Server Configuration

  1. Edit the configuration file vim /etc/httpd/conf/httpd.conf and set: ServerName controller

  2. Create a symlink: ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

  3. Start the service: systemctl enable httpd.service; systemctl start httpd.service

  4. Configure the admin account environment variables:

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

  5. Verify keystone: openstack token issue

4. Create Domains/Projects/Users/Roles

1. Create the admin project: openstack project create --domain default --description "Admin Project" admin

Create the admin project, then the admin user (password admin; don't do this in production), then the admin role, and finally add the admin user to the admin project with the admin role (the three admins are: project, user, role).

openstack role add --project admin --user admin admin

2. Create the demo project: openstack project create --domain default --description "Demo Project" demo
3. Create the demo user: openstack user create --domain default --password-prompt demo (prompts for a password; mine is "demo")
4. Create the role for the demo user: openstack role create user
5. Create the service project, used for managing the other services: openstack project create --domain default --description "Service Project" service

5. Verify Keystone

  1. List projects: openstack project list
  2. List users: openstack user list
  3. List roles: openstack role list

6. Keystone Functional Verification

  1. Disable the temporary admin-token authentication mechanism: vim /etc/keystone/keystone-paste.ini and remove admin_token_auth from the three pipeline sections ([pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3])
  2. Unset the temporary environment variables: unset OS_AUTH_URL OS_PASSWORD
  3. admin user token authentication: openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue (enter the password admin)
  4. demo user token authentication: openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name demo --os-username demo token issue (enter the password demo)

7. Create Client Authentication Scripts

  1. Create the file admin-openrc.sh:

echo "
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
" > admin-openrc.sh

  2. Create the file demo-openrc.sh:

echo "
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
" > demo-openrc.sh

  3. Test the scripts: source admin-openrc.sh; openstack token issue; source demo-openrc.sh; openstack token issue

4. Glance: Image Service

Glance provides the virtual machine image service for OpenStack and consists of two services: glance-api and glance-registry. glance-api is the entry point into Glance; it receives users' RESTful requests and then stores and retrieves images through the backend storage system.

1. Pre-installation Preparation

  1. Create the glance database and grants: CREATE DATABASE glance; GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance'; GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance'; FLUSH PRIVILEGES;
  2. Authenticate as admin: source admin-openrc.sh
  3. Create the glance user: openstack user create --domain default --password-prompt glance (enter the password: glance)
  4. Add the admin role to the glance user and service project: openstack role add --project service --user glance admin
  5. Create the glance service entity: openstack service create --name glance --description "OpenStack Image" image
  6. Create the Image service API endpoints: openstack endpoint create --region RegionOne image public http://controller:9292; openstack endpoint create --region RegionOne image internal http://controller:9292; openstack endpoint create --region RegionOne image admin http://controller:9292

2. Glance Component Installation and Configuration

  1. Install the package: yum install openstack-glance

  2. Edit the file: vim /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:glance@controller/glance
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
[paste_deploy]
# ...
flavor = keystone
[glance_store]
# ...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
  3. Edit the file: vim /etc/glance/glance-registry.conf
[database]
connection = mysql+pymysql://glance:glance@controller/glance
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
[paste_deploy]
# ...
flavor = keystone

To view the effective configuration: cat /etc/glance/glance-registry.conf | grep -v "^#" | grep -v "^$" and cat /etc/glance/glance-api.conf | grep -v "^#" | grep -v "^$"

  4. Populate the glance database: su -s /bin/sh -c "glance-manage db_sync" glance

  5. Start glance: systemctl enable openstack-glance-api.service openstack-glance-registry.service; systemctl start openstack-glance-api.service openstack-glance-registry.service; netstat -lnutp | grep 9191 # registry

tcp 0 0 0.0.0.0:9191 0.0.0.0:* LISTEN 24890/python2

netstat -lnutp |grep 9292 #api

tcp 0 0 0.0.0.0:9292 0.0.0.0:* LISTEN 24877/python2

3. Verify Glance

  1. Authenticate as admin: source admin-openrc.sh
  2. Download an image: wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
  3. Upload the image to the server: openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
  4. Check that the image uploaded successfully: openstack image list
  5. Check that the image directory exists: ll /var/lib/glance/images/

To use an ISO-format image you need the Oz tool to build an OpenStack image; for the specific steps see this other blog post (in a real production environment you will certainly be building from ISO images): http://www.cnblogs.com/kevingrace/p/5821823.html. Alternatively, download a CentOS qcow2 image and upload it directly: qcow2 images work in OpenStack as-is, with no format conversion needed. Download from http://cloud.centos.org/centos, which has qcow2 images for CentOS 5/6/7.
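If an image arrives in some other format (raw, vmdk, and so on), qemu-img can convert it to qcow2 before uploading; a sketch with placeholder filenames:

qemu-img convert -f raw -O qcow2 input.img output.qcow2   # convert a raw image to qcow2
qemu-img info output.qcow2                                # confirm the resulting format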

5. Nova: Compute Service

Nova is OpenStack's compute component, made up of four core services: API, Compute, Conductor, and Scheduler, which communicate with each other over the AMQP message queue. API is the HTTP entry point into Nova; Compute talks to the VMM to run virtual machines and manage their life cycle; Scheduler picks the most suitable compute node from the available resource pool for a new virtual machine instance; Conductor provides a layer of security around database access. The VM creation flow: the user runs the novaclient command for creating a virtual machine; the API service receives the HTTP request from novaclient, converts it into an AMQP message, and calls the Conductor service through the message queue (Queue); once Conductor picks up the task, it does some preparatory work and then asks Scheduler, again via the queue, to choose a host that satisfies the VM's requirements; with the target host from Scheduler in hand, Conductor asks the Compute service to create the virtual machine.
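The whole flow above is set off by one client call. Once an image, flavor, and network exist, something like the following starts it (a sketch; the flavor, image, network ID, and server name are placeholders):

openstack server create --flavor m1.tiny --image cirros --nic net-id=SELFSERVICE_NET_ID demo-vm
openstack server list   # watch the instance go from BUILD to ACTIVE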

1. Pre-installation Preparation

  1. Create the nova databases:

CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
CREATE DATABASE nova_api;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';
FLUSH PRIVILEGES;

  2. Authenticate as admin: source admin-openrc.sh

  3. Create the nova user: openstack user create --domain default --password-prompt nova

  4. Add the admin role to the nova user: openstack role add --project service --user nova admin

  5. Create the nova service entity: openstack service create --name nova --description "OpenStack Compute" compute

  6. Create the Compute API service endpoints: openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1; openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1; openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

  7. Create the placement user: openstack user create --domain default --password-prompt placement

  8. Add the placement user to the service project with the admin role: openstack role add --project service --user placement admin

  9. Create the Placement API service entity: openstack service create --name placement --description "Placement API" placement

  10. Create the Placement API service endpoints: openstack endpoint create --region RegionOne placement public http://controller:8778; openstack endpoint create --region RegionOne placement internal http://controller:8778; openstack endpoint create --region RegionOne placement admin http://controller:8778

2. Component Installation and Configuration (controller)

  1. Install the nova packages: yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api

  2. Edit the configuration file: vim /etc/nova/nova.conf

[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
[api_database]
# ...
connection = mysql+pymysql://nova:nova@controller/nova_api
[database]
# ...
connection = mysql+pymysql://nova:nova@controller/nova
[DEFAULT]
# ...
transport_url = rabbit://openstack:openstack@controller
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[DEFAULT]
# ...
my_ip = 10.58.10.19
[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[glance]
# ...
api_servers = http://controller:9292
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = placement
  3. Edit the configuration file vim /etc/httpd/conf.d/00-nova-placement-api.conf and append at the end:
<Directory /usr/bin>
    <IfVersion >= 2.4>
        Require all granted
    </IfVersion>
    <IfVersion < 2.4>
        Order allow,deny
        Allow from all
    </IfVersion>
</Directory>

  4. Restart the httpd service: systemctl restart httpd.service

  5. Populate the nova-api database: su -s /bin/sh -c "nova-manage api_db sync" nova

  6. Register the cell0 database: su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

  7. Create the cell1 cell: su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

  8. Populate the nova database: su -s /bin/sh -c "nova-manage db sync" nova

  9. Verify cell0 and cell1: nova-manage cell_v2 list_cells

  10. Start the services:

systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

3. Component Installation and Configuration (computer)

1. Install the compute component: yum install openstack-nova-compute 2. Edit the file: vim /etc/nova/nova.conf

[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
[DEFAULT]
# ...
transport_url = rabbit://openstack:openstack@controller
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[DEFAULT]
# ...
my_ip = 10.58.10.95
[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[vnc]
# ...
enabled = True
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html # a compute-node setting; if the controller also acts as a compute node (single-machine deployment), add this line there too, using the controller's public IP
[glance]
# ...
api_servers = http://controller:9292
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = placement
  3. Check hardware virtualization support: egrep -c '(vmx|svm)' /proc/cpuinfo (see the note after the next block if this prints 0)
  4. Edit the configuration file: vim /etc/nova/nova.conf
[libvirt]
# ...
virt_type=kvm                 # a compute-node setting; if the controller also acts as a compute node (single-machine deployment), add this line there too
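If the egrep check above printed 0, the node has no hardware virtualization support and the usual fallback is software emulation (a sketch):

[libvirt]
virt_type = qemu   # use QEMU emulation when vmx/svm is unavailable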

5. Start the services: systemctl enable libvirtd.service openstack-nova-compute.service; systemctl start libvirtd.service openstack-nova-compute.service 6. Add the compute node to the cell database: source admin-openrc.sh; openstack hypervisor list; su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova (or automate discovery as sketched below)
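Rather than running discover_hosts by hand after every new node, the controller's nova.conf can tell the scheduler to do it periodically (a sketch; the interval is in seconds):

[scheduler]
discover_hosts_in_cells_interval = 300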

4. Nova Functional Verification

  1. Check the service list; four services should be up (conductor, consoleauth, scheduler, compute): openstack compute service list
  2. openstack catalog list
  3. nova-status upgrade check

6. Neutron: Networking Service

In Neutron, the entire physical network that OpenStack sits on is generalized into a pool of network resources, and Neutron can give each tenant on the same physical network its own independent virtual network environment. The usual setup: an administrator-created external network object handles connectivity between the OpenStack environment and the Internet, while a private network is given to tenants for their own virtual machines. For machines on an internal network to reach the Internet, a router must be created to connect the internal network to the external one. For this, Neutron provides an L3 (layer-3) abstraction, the router, and an L2 (layer-2) abstraction, the network: the router corresponds to a real-world router and provides routing, NAT, and similar services, while the network corresponds to a layer-2 LAN in a real physical network. The other important concept is the subnet, which attaches to a layer-2 network and specifies the range of IP addresses available to the virtual machines on that network. A CLI sketch of this topology appears below.
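As a concrete sketch of the network/subnet/router concepts above (the names provider and selfservice and the address ranges are placeholders, and the external provider network is assumed to already exist):

openstack network create selfservice
openstack subnet create --network selfservice --subnet-range 172.16.1.0/24 --gateway 172.16.1.1 --dns-nameserver 8.8.8.8 selfservice
openstack router create router
openstack router add subnet router selfservice             # attach the internal subnet to the router
openstack router set router --external-gateway provider    # NAT out through the external network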

1. Controller Node Installation and Configuration

1. Create the neutron database: CREATE DATABASE neutron; GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron'; GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
2. Authenticate as admin: source admin-openrc.sh
3. Create the neutron user: openstack user create --domain default --password-prompt neutron
4. Add the admin role to the neutron user: openstack role add --project service --user neutron admin
5. Create the neutron service entity: openstack service create --name neutron --description "OpenStack Networking" network
6. Create the Networking service API endpoints: openstack endpoint create --region RegionOne network public http://controller:9696; openstack endpoint create --region RegionOne network internal http://controller:9696; openstack endpoint create --region RegionOne network admin http://controller:9696

2. Network Type Configuration: Self-service Network

  1. Install the Neutron network components: yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
  2. Edit the configuration file: cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.back; vim /etc/neutron/neutron.conf
[database]
# ...
connection = mysql+pymysql://neutron:neutron@controller/neutron
Enable the Modular Layer 2 (ML2) plugin, the router service, and overlapping IPs:
[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[nova]
# ...
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
  3. Edit the Modular Layer 2 plugin configuration file to enable the flat/vlan/vxlan types: cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.back; vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
# ...
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
# ...
flat_networks = provider
[ml2_type_vxlan]
# ...
vni_ranges = 1:1000
[securitygroup]
# ...
enable_ipset = true
  4. Edit the Linux bridge agent configuration file: cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.back; vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth0
[vxlan]
enable_vxlan = true
local_ip = 10.58.10.19
l2_population = true
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
  5. Edit the layer-3 agent configuration file: cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.back; vim /etc/neutron/l3_agent.ini
[DEFAULT]
# ...
interface_driver = linuxbridge
  6. Edit the DHCP agent configuration file: cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.back; vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
  7. Edit the metadata agent configuration file: cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.back

vim /etc/neutron/metadata_agent.ini

[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = neutron # the shared secret for Nova Metadata to Neutron communication; it just has to match on both sides
  8. Add the Neutron network configuration to the compute service's configuration file nova.conf: vim /etc/nova/nova.conf
[neutron]
# ...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = neutron
  9. Create the symlink: ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
  10. Populate the neutron database: su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
  11. Restart the nova-api service: systemctl restart openstack-nova-api.service
  12. Start the services:

systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

4. Neutron Functional Verification

  1. openstack extension list --network
  2. openstack network agent list

7. Horizon: Web Interface

Horizon is a modular, web-based graphical interface accessed through a browser. It is built on Django, an open-source web application framework written in Python.

1. Horizon Installation and Configuration

  1. Install the horizon package: yum install openstack-dashboard
  2. Edit the configuration file: cp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.back; vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']
SESSION_ENGINE = 'django.contrib.sessions.backends.file'
CACHES = {
   'default': {
      'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
      'LOCATION': 'controller:11211',
   }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
   "identity": 3,
   "image": 2,
   "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
TIME_ZONE = "TIME_ZONE"  # replace with your time zone identifier, e.g. "Asia/Shanghai"
  3. Restart the services: systemctl restart httpd.service memcached.service

2. Horizon Functional Verification

  1. Visit http://controller/dashboard. Domain: default, username: admin, password: admin.
  2. The login should succeed.

8. Adding a New Node

1. Install OpenStack-compute

  1. Install with yum: vim /etc/yum.conf, then yum install openstack-nova-compute openstack-neutron-linuxbridge ebtables ipset
  2. Copy nova.conf from another compute node and adjust it: scp root@compute1:/etc/nova/nova.conf /etc/nova/nova.conf; vim /etc/nova/nova.conf
state_path=/var/lib/docker/nova
my_ip = 10.58.10.95
  3. Start compute: systemctl enable libvirtd.service openstack-nova-compute.service; systemctl start libvirtd.service openstack-nova-compute.service

2. Sync the New Compute Node on the Controller

  1. Load the environment variables: source admin-openrc.sh
  2. Discover the new compute node: su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Found 2 cell mappings. Skipping cell0 since it does not contain hosts. Getting compute nodes from cell 'cell1': 007243d4-2b22-4939-811f-1f8f9dc41020 Found 1 computes in cell: 007243d4-2b22-4939-811f-1f8f9dc41020 Checking host mapping for compute host 'controller': e33268d7-1382-4347-b8b9-deccf2a38cdd

  3. Check the node list: openstack hypervisor list
  4. Edit the configuration files: scp root@compute1:/etc/neutron/neutron.conf /etc/neutron/neutron.conf; cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.back; vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth0
[vxlan]
enable_vxlan = true
local_ip = 10.58.10.95
l2_population = true
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
  5. Start the network services: systemctl restart openstack-nova-compute.service

systemctl enable neutron-linuxbridge-agent.service; systemctl restart neutron-linuxbridge-agent.service

Errors

1. Error when populating the keystone database:

/usr/lib/python2.7/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.24.1

Solution:

pip uninstall urllib3
pip uninstall chardet
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple requests

2. openstack token issue

Discovering versions from the identity service failed when creating the password plugin. Attempting to determine version from URL. Service Unavailable (HTTP 503)

Solution:

The machine had a proxy enabled for Internet access, which broke requests to http://controller:35357/v3. The fix is to disable the proxy with unset http_proxy, then retest with curl http://controller:35357/v3. Problem solved.
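If the proxy has to stay on for other traffic, an alternative sketch is to exempt the OpenStack endpoints from it:

export no_proxy=controller,10.58.10.19   # bypass the proxy for OpenStack endpoints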

3. openstack token issue

The request you have made requires authentication. (HTTP 401) (Request-ID: req-719deb9d-9123-4489-b6d5-456fe64b7f58)

Solution:

Reinitialize the Fernet key repositories: keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone; keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

4. neutron-server fails to start

/var/log/neutron/server.log NoMatchingPlugin: The plugin password  could not be found

Solution:

In /etc/neutron/neutron.conf, the auth_type = password setting had a trailing space after "password". A quick way to spot this class of mistake is sketched below.
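A sketch for hunting stray trailing whitespace in a config file:

grep -nE ' +$' /etc/neutron/neutron.conf   # list lines ending in stray spaces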

5. neutron-linuxbridge-agent fails to start

/var/log/neutron/linuxbridge-agent.log 2019-05-24 01:17:36.587 15183 ERROR neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] Interface INTERFACE for physical network provider does not exist. Agent terminated!

Solution:

vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini. The broken setting was physical_interface_mappings = provider:INTERFACE; the correct setting is physical_interface_mappings = provider:eth0

6. After installing and starting the dashboard, logging in fails:

/var/log/httpd/error_log RuntimeError: Unable to create a new session key. It is likely that the cache is unavailable.

Solution:

vim /etc/openstack-dashboard/local_settings and change the SESSION_ENGINE value from SESSION_ENGINE = 'django.contrib.sessions.backends.cache' to SESSION_ENGINE = 'django.contrib.sessions.backends.file', then restart: systemctl restart httpd.service memcached.service

7. Could not access KVM kernel module: Permission denied

failed to initialize KVM: Permission denied No accelerator found!

Solution:

This is a permissions problem. Handle it as follows: chown root:kvm /dev/kvm. Then edit /etc/libvirt/qemu.conf and set (uncommenting if necessary) user = "root" and group = "root". Restart the service with service libvirtd restart, and the problem is solved.