
Chuck's Hands-On Guide to OpenStack
Source: original content
Part 1: A First Look at OpenStack
1.1 Introduction to OpenStack
  OpenStack is a collection of open-source software projects that lets enterprises and service providers build and run their own cloud compute and storage infrastructure. Rackspace and NASA were the two major initial contributors: Rackspace contributed the "Cloud Files" platform code, which became the OpenStack Object Storage component, while NASA brought in the "Nebula" platform, which formed the rest of OpenStack. Today the OpenStack Foundation has more than 150 members, including well-known companies such as Canonical, Dell, and Citrix.
1.2 The Major OpenStack Components
1.2.1 How the components relate to each other
1.2.2 A look at each component
OpenStack Identity (Keystone)
  Keystone provides authentication and access-policy services for all OpenStack components. It exposes its own REST interface (based on the Identity API) and handles authentication and authorization for, among others, Swift, Glance, and Nova. In practice, authorization means verifying that the originator of an action request is legitimate.
  Keystone supports two ways of authenticating: username/password and token-based. Beyond that, Keystone provides three services:
a. Token service: carries the authorization information of an authenticated user
b. Catalog service: lists the services the user may legitimately use
c. Policy service: lets Keystone grant specific access rights to a user or group
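As a rough illustration of the username/password method, the request body a Keystone v3 token request expects can be sketched like this (a minimal sketch only; the credentials below are the admin/admin values used later in this guide, and the helper name is made up):

```python
import json

def v3_password_auth_body(username, password, project, domain="default"):
    """Build the JSON body Keystone v3 expects for a password-based
    token request (POST /v3/auth/tokens). Illustrative helper only."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"id": domain},
                        "password": password,
                    }
                },
            },
            # Scope the token to one project (tenant).
            "scope": {"project": {"name": project, "domain": {"id": domain}}},
        }
    }

body = v3_password_auth_body("admin", "admin", "admin")
print(json.dumps(body, indent=2))
```

On success Keystone returns a token that the client then presents to the other services instead of the password.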
Identity service concepts
1) Keystone by analogy with a hotel
A hotel guest -> User
What the guest presents at check-in (an ID card) -> Credentials
The front desk verifying that ID -> Authentication
The categories of service the hotel offers, e.g. dining, recreation -> Service catalog
One concrete service, e.g. a barbecue or a badminton court -> Endpoint
VIP tier: the higher the tier, the more you are allowed to do -> Role
2) Keystone concepts in detail
a. Endpoint: every OpenStack service (Nova, Swift, Glance, ...) listens on a dedicated port with its own URL; these entry points are called endpoints.
b. User: anyone authorized through Keystone.
Note: a user represents an individual; OpenStack grants services to users. A user holds credentials and may be assigned to one or more tenants. After authentication, the user receives a separate token for each tenant.
c. Service: broadly, any component that connects to or is managed through Keystone is called a service. For example, Glance can be called a service of Keystone.
d. Role: to maintain security boundaries, the roles associated with a user determine which operations that user may perform.
Note: a role is a set of rights within a tenant that allows a given user to access or perform particular operations. Roles are logical groupings of permissions, so common permission sets can be bundled and bound to the users of a given tenant.
e. Tenant (project): a project with a full set of service endpoints and members holding specific roles.
Note: a tenant maps to a Nova "project-id"; in Object Storage a tenant can own several containers. Depending on the deployment, a tenant may represent a customer, an account, an organization, or a project.
OpenStack Dashboard (Horizon)
  Horizon is a web control panel for managing and operating the OpenStack services. It can manage instances and images, create key pairs, attach volumes to instances, work with Swift containers, and so on. Users can also open a terminal (console) or a VNC session to an instance directly from the dashboard. In summary, Horizon offers:
a. Instance management: create and terminate instances, view console logs, connect via VNC, attach volumes, etc.
b. Access and security: create security groups, manage key pairs, assign floating IPs, etc.
c. Flavors: configure different virtual-hardware templates
d. Image management: edit or delete images
e. View the service catalog
f. Manage users, quotas, and project usage
g. User management: create users, etc.
h. Volume management: create volumes and snapshots
i. Object storage: create and delete containers and objects
j. Download environment variables for a project
OpenStack Compute (Nova)
API: receives and responds to external requests; supports both the OpenStack API and the EC2 API.
The nova-api component implements the RESTful API and is the only entry point into Nova from outside. It accepts external requests and forwards them to the other components over the message queue. Because it is also EC2-API compatible, EC2 management tools can be used for day-to-day Nova administration.
Cert: handles certificate/identity management.
Scheduler: schedules virtual machines onto hosts.
The Nova Scheduler decides which host (compute node) a virtual machine is created on. In general it filters the candidate compute nodes and then weighs the survivors to pick one:
First it obtains the unfiltered host list, then applies the filter properties to keep only the compute nodes that satisfy the request.
After filtering, each remaining host is assigned a weight, and one host is chosen (per VM to be created) according to the weighing policy.
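The filter-then-weigh flow above can be sketched as follows (a simplified illustration of the idea, not Nova's actual filter and weigher classes; the host data is made up):

```python
# Toy filter scheduler: keep hosts that fit the request, then pick
# the one with the most free RAM (a common weigher strategy).
hosts = [
    {"name": "node1", "free_ram_mb": 2048, "free_disk_gb": 40},
    {"name": "node2", "free_ram_mb": 8192, "free_disk_gb": 10},
    {"name": "node3", "free_ram_mb": 4096, "free_disk_gb": 80},
]

def schedule(hosts, ram_mb, disk_gb):
    # Filtering phase: drop hosts that cannot satisfy the request.
    candidates = [h for h in hosts
                  if h["free_ram_mb"] >= ram_mb and h["free_disk_gb"] >= disk_gb]
    if not candidates:
        raise RuntimeError("No valid host found")
    # Weighing phase: prefer the host with the most free RAM.
    return max(candidates, key=lambda h: h["free_ram_mb"])["name"]

print(schedule(hosts, ram_mb=1024, disk_gb=20))  # → node3
```

Here node2 is filtered out (too little free disk), and node3 wins the weighing over node1 because it has more free RAM.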
Note: by default OpenStack does not support creating a VM on an explicitly specified compute node.
Conductor: middleware between the compute nodes and the database.
Consoleauth: authorization for console access.
Novncproxy: the VNC proxy.
OpenStack Object Storage (Swift)
  Swift provides distributed, persistent virtual object storage for OpenStack, similar in spirit to Amazon Web Services' S3. Swift can store very large numbers of objects across many nodes, has built-in redundancy and failover management, and can handle archiving and media streaming. It is particularly efficient at large data (multi-gigabyte objects) and large capacity (very many objects).
Swift features
Massive object storage
Storage of large files (objects)
Data redundancy management
Archiving: handling large data sets
Data containers for virtual machines and cloud applications
Media streaming
Secure object storage
Backup and archiving
Good scalability
Swift components
The Swift ring
The Swift proxy server
  Users interact with Swift through the Swift API, which is served by the proxy server. The proxy server is the gatekeeper that receives incoming requests: it looks up the correct location of the entity involved and routes the request there.
The proxy server also handles failover: when an entity fails and is relocated, it re-routes repeated requests accordingly.
The Swift object server
  The object server is a simple blob store that handles the storage, retrieval, and deletion of object data on local disks. Objects are stored as ordinary binary files on the filesystem, with metadata kept in extended file attributes (xattr). Note: xattr is supported on Linux by ext3/ext4, XFS, Btrfs, JFS, and ReiserFS; XFS is generally considered the best choice.
The Swift container server
  The container server lists the objects in a container. The object lists are stored by default as SQLite files (they can also be kept in MySQL; the installation described here uses MySQL as the example). The container server also tracks the number of objects in a container and the container's storage usage.
The Swift account server
  The account server is similar to the container server, except that it lists the containers in an account.
The ring (index ring)
  The ring records the locations of the physical objects stored in Swift. It is a virtual mapping from entity names to real physical storage locations, like an index service for finding where an entity actually lives across the cluster. "Entity" here means an account, container, or object, and each of the three has its own ring.
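The idea of mapping an entity name to a storage device can be sketched with a ring-style consistent hash (a toy illustration of the concept only; the real Swift ring additionally handles partitions, replicas, and zones, and the device names here are invented):

```python
import hashlib
from bisect import bisect

# Toy hash ring: each device gets several points on the ring so the
# key space spreads evenly; a name maps to the next point clockwise.
def build_ring(devices, points_per_device=64):
    ring = []
    for dev in devices:
        for i in range(points_per_device):
            h = int(hashlib.md5(f"{dev}-{i}".encode()).hexdigest(), 16)
            ring.append((h, dev))
    ring.sort()
    return ring

def locate(ring, name):
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)
    idx = bisect(ring, (h,)) % len(ring)   # wrap around at the top
    return ring[idx][1]

ring = build_ring(["sdb1", "sdc1", "sdd1"])
print(locate(ring, "/account/container/object"))
```

Because the mapping depends only on the hash, every proxy server computes the same location for the same name without consulting a central database.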
OpenStack Block Storage (Cinder)
  API service: accepts and handles REST requests and places them on the RabbitMQ queue. Cinder provides the Volume API v2.
  Scheduler service: responds to requests, reads and writes the block-storage database to maintain state, and interacts with other processes through the message queue. Through its driver architecture it can talk directly to the hardware or software of many different storage providers.
  Volume service: runs on the storage nodes and manages the storage space. Every storage node runs a Volume service, and a number of such nodes together form a storage resource pool. The driver architecture is what allows different types and models of storage to be supported.
OpenStack Image service (Glance)
  Glance consists of three main parts: glance-api, glance-registry, and the image store.
glance-api: accepts requests to create, delete, and read the cloud's images.
glance-registry: the cloud's image registry service.
OpenStack Networking (Neutron)
This is not covered in detail here; a detailed walkthrough comes later.
Part 2: Preparing the Environment
2.1 Machines
  The lab uses VMware virtual machines, as follows.
Controller node:
hostname: linux-node1.oldboyedu.com
IP address: 192.168.56.11, NIC in NAT mode
OS and hardware: CentOS 7.1, 2 GB RAM, 50 GB disk
Compute node:
hostname: linux-node2.oldboyedu.com
IP address: 192.168.56.12
OS and hardware: CentOS 7.1, 2 GB RAM, 50 GB disk
2.2 OpenStack releases
This article uses the then-latest L (Liberty) release; other releases are shown in the figure below.
2.3 Installing the packages
2.3.1 Controller node
yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm -y
yum install centos-release-openstack-liberty -y
yum install python-openstackclient -y
yum install mariadb mariadb-server MySQL-python -y
yum install rabbitmq-server -y
yum install openstack-keystone httpd mod_wsgi memcached python-memcached -y
yum install openstack-glance python-glance python-glanceclient -y
yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient -y
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient ebtables ipset -y
yum install openstack-dashboard -y
2.3.2 Compute node
yum install -y http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
yum install centos-release-openstack-liberty -y
yum install python-openstackclient -y
yum install openstack-nova-compute sysfsutils -y
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset -y
Part 3: Hands-On OpenStack, the Controller Node
3.1 Time synchronization on CentOS 7 with chrony
Install chrony:
[root@linux-node1 ~]# yum install -y chrony
Edit its configuration file:
[root@linux-node1 ~]# vim /etc/chrony.conf
allow 192.168/16
Enable chrony at boot and start it:
[root@linux-node1 ~]# systemctl enable chronyd.service
[root@linux-node1 ~]# systemctl start chronyd.service
Set the CentOS 7 time zone:
[root@linux-node1 ~]# timedatectl set-timezone Asia/Shanghai
Check the time zone and time:
[root@linux-node1 ~]# timedatectl status
Local time: Tue 2015-12-15 12:19:55 CST
Universal time: Tue 2015-12-15 04:19:55 UTC
RTC time: Sun 2015-12-13 15:35:33
Timezone: Asia/Shanghai (CST, +0800)
NTP enabled: yes
NTP synchronized: no
RTC in local TZ: no
DST active: n/a
[root@linux-node1 ~]# date
Tue Dec 15 12:19:57 CST 2015
3.2 Getting started with MySQL
Every OpenStack component except Horizon needs a database. This article uses MySQL, which on CentOS 7 ships as MariaDB by default.
Copy the sample configuration file:
[root@linux-node1 ~]# cp /usr/share/mysql/my-medium.cnf /etc/my.cnf
Edit the MySQL configuration (add the following under the [mysqld] section):
[root@linux-node1 ~]# vim /etc/my.cnf
[mysqld]
default-storage-engine = innodb    # default storage engine
innodb_file_per_table              # one tablespace file per table
collation-server = utf8_general_ci # collation
init-connect = 'SET NAMES utf8'    # connection character set
character-set-server = utf8        # default character set for new databases
Enable MariaDB at boot and start it:
[root@linux-node1 ~]# systemctl enable mariadb.service
ln -s '/usr/lib/systemd/system/mariadb.service' '/etc/systemd/system/multi-user.target.wants/mariadb.service'
[root@linux-node1 ~]# systemctl start mariadb.service
Set the MySQL root password:
[root@linux-node1 ~]# mysql_secure_installation
Create the databases for all the components and grant privileges:
[root@linux-node1 ~]# mysql -uroot -p123456
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
3.3 The RabbitMQ message queue
  SOA architecture: service-oriented architecture is a component model that connects an application's functional units ("services") through well-defined interfaces and contracts. The interfaces are defined in a neutral way, independent of the hardware platform, operating system, and programming language the services are implemented in, so services built on very different systems can interact in a uniform, generic way.
OpenStack follows this SOA approach. Thanks to the loose coupling, each component can be deployed independently; components may act as both consumers and providers for one another, communicating over a message queue (OpenStack supports RabbitMQ, ZeroMQ, and Qpid). That way, if one service goes down, it does not drag the others down with it.
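The decoupling effect can be sketched with an in-process queue (a toy stand-in for RabbitMQ: the producer only knows the queue, not the consumer, so either side can be restarted or replaced independently):

```python
import queue
import threading

# A producer (say, nova-api) puts requests on a queue; a consumer (say,
# nova-scheduler) picks them up. Neither side calls the other directly.
q = queue.Queue()
handled = []

def consumer():
    while True:
        msg = q.get()
        if msg is None:          # shutdown signal
            break
        handled.append(f"scheduled {msg}")
        q.task_done()

t = threading.Thread(target=consumer)
t.start()
q.put("instance-1")
q.put("instance-2")
q.put(None)
t.join()
print(handled)  # → ['scheduled instance-1', 'scheduled instance-2']
```

RabbitMQ plays the role of `q` here, but across hosts and with persistence and acknowledgements.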
Enable RabbitMQ at boot and start it:
[root@linux-node1 ~]# systemctl enable rabbitmq-server.service
ln -s '/usr/lib/systemd/system/rabbitmq-server.service' '/etc/systemd/system/multi-user.target.wants/rabbitmq-server.service'
[root@linux-node1 ~]# systemctl start rabbitmq-server.service
Create a RabbitMQ user and grant it permissions:
[root@linux-node1 ~]# rabbitmqctl add_user openstack openstack
[root@linux-node1 ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Enable the RabbitMQ web management plugin:
[root@linux-node1 ~]# rabbitmq-plugins enable rabbitmq_management
Restart RabbitMQ:
[root@linux-node1 ~]# systemctl restart rabbitmq-server.service
Check the RabbitMQ ports: 5672 is the service port, 15672 is the web management port, and 25672 is the clustering port.
[root@linux-node1 ~]# netstat -lntup | grep 5672
tcp   0   0 0.0.0.0:25672   0.0.0.0:*   LISTEN   52448/beam
tcp   0   0 0.0.0.0:15672   0.0.0.0:*   LISTEN   52448/beam
tcp   0   0 0.0.0.0:5672    0.0.0.0:*   LISTEN   52448/beam
In the web UI, add the openstack user and set its permissions; the first login must use guest/guest.
Set the openstack user's role to administrator and set its password.
If you want to monitor RabbitMQ, you can use the API shown in the figure below.
3.4 The Keystone component
Edit the Keystone configuration file:
[root@linux-node1 opt]# vim /etc/keystone/keystone.conf
admin_token = 863de846d9
(the bootstrap token: it lets you connect and create users while no users exist yet; generate the value randomly with openssl)
connection = mysql://keystone:keystone@192.168.56.11/keystone
(the database connection; the three "keystone"s are the Keystone component, the keystone MySQL user, and the keystone database name)
Switch to the keystone user and import the keystone database:
[root@linux-node1 opt]# su -s /bin/sh -c "keystone-manage db_sync" keystone
[root@linux-node1 keystone]# cd /var/log/keystone/
[root@linux-node1 keystone]# ll
total 8
-rw-r--r-- 1 keystone keystone 7064 Dec 15 14:43 keystone.log
(Run db_sync as the keystone user so the log file gets the right ownership; if you run the import as root, Keystone will later fail to start as the keystone user because it cannot write this log.)
Other settings to change in keystone.conf:
31:   verbose = true                  # enable verbose output
1229: servers = 192.168.57.11:11211   # memcached server address
1634: driver = sql                    # enable the default SQL driver
1827: provider = uuid                 # use the UUID token provider
1832: driver = memcache               # store issued tokens in memcached for fast access
Review the changes:
[root@linux-node1 keystone]# grep -n "^[a-Z]" /etc/keystone/keystone.conf
12:admin_token = 863de846d9
31:verbose = true
419:connection = mysql://keystone:keystone@192.168.56.11/keystone
1229:servers = 192.168.57.11:11211
1634:driver = sql
1827:provider = uuid
1832:driver = memcache
Check the imported tables:
MariaDB [keystone]> show tables;
+------------------------+
| Tables_in_keystone     |
+------------------------+
| access_token           |
| assignment             |
| config_register        |
| consumer               |
| credential             |
| endpoint               |
| endpoint_group         |
| federation_protocol    |
| id_mapping             |
| identity_provider      |
| idp_remote_ids         |
| mapping                |
| migrate_version        |
| policy_association     |
| project                |
| project_endpoint       |
| project_endpoint_group |
| region                 |
| request_token          |
| revocation_event       |
| sensitive_config       |
| service                |
| service_provider       |
| trust_role             |
| user_group_membership  |
| whitelisted_config     |
+------------------------+
33 rows in set (0.00 sec)
Add an Apache wsgi-keystone configuration file; port 5000 serves the public API and port 35357 serves the admin API:
[root@linux-node1 keystone]# cat /etc/httpd/conf.d/wsgi-keystone.conf
Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>
Set Apache's ServerName; leaving it unset will affect the keystone service:
[root@linux-node1 httpd]# vim conf/httpd.conf
ServerName 192.168.56.11:80
Enable memcached and httpd at boot and start them (httpd in turn serves keystone):
[root@linux-node1 httpd]# systemctl enable memcached httpd
ln -s '/usr/lib/systemd/system/memcached.service' '/etc/systemd/system/multi-user.target.wants/memcached.service'
ln -s '/usr/lib/systemd/system/httpd.service' '/etc/systemd/system/multi-user.target.wants/httpd.service'
[root@linux-node1 httpd]# systemctl start memcached httpd
Check the ports httpd is using:
[root@linux-node1 httpd]# netstat -lntup | grep httpd
tcp6   0   0 :::5000    :::*   LISTEN   70482/httpd
tcp6   0   0 :::80      :::*   LISTEN   70482/httpd
tcp6   0   0 :::35357   :::*   LISTEN   70482/httpd
Now create users and connect to Keystone. There are two ways: pass flags on the command line (see openstack --help), or use environment variables (env). Below we use environment variables, setting the token, the API endpoint, and the API version (a pattern that suits an SOA well):
[root@linux-node1 ~]# export OS_TOKEN=863de846d9
[root@linux-node1 ~]# export OS_URL=http://192.168.56.11:35357/v3
[root@linux-node1 ~]# export OS_IDENTITY_API_VERSION=3
Create the admin project:
[root@linux-node1 httpd]# openstack project create --domain default --description "Admin Project" admin
+-------------+---------------+
| Field       | Value         |
+-------------+---------------+
| description | Admin Project |
| domain_id   | default       |
| enabled     | True          |
| id          | 45ec9fd0f7d   |
| is_domain   | False         |
| parent_id   | None          |
+-------------+---------------+
Create the admin user and set its password (use a complex one in production):
[root@linux-node1 httpd]# openstack user create --domain default --password-prompt admin
User Password:
Repeat User Password:
+-----------+-----------------------+
| Field     | Value                 |
+-----------+-----------------------+
| domain_id | default               |
| enabled   | True                  |
| id        | bb6d73c0b025bb72c06a1 |
| name      | admin                 |
+-----------+-----------------------+
Create the admin role:
[root@linux-node1 httpd]# openstack role create admin
+-------+--------------------------+
| Field | Value                    |
+-------+--------------------------+
| id    | b0bd00e6164243ceaa794dbe |
| name  | admin                    |
+-------+--------------------------+
Add the admin user to the admin project with the admin role, tying role, project, and user together:
[root@linux-node1 httpd]# openstack role add --project admin --user admin admin
Create an ordinary user demo, a demo project, and a user role, and link them up:
[root@linux-node1 httpd]# openstack project create --domain default --description "Demo Project" demo
[root@linux-node1 httpd]# openstack user create --domain default --password=demo demo
[root@linux-node1 httpd]# openstack role create user
[root@linux-node1 httpd]# openstack role add --project demo --user demo user
Create a service project; it holds the service users of Nova, Neutron, Glance, and the other components:
[root@linux-node1 httpd]# openstack project create --domain default --description "Service Project" service
List the users, projects, and roles just created:
[root@linux-node1 httpd]# openstack user list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| bb6d73c0b025bb72c06a1            | admin |
| eb29c091e0ec490cbfa5d11dc2388766 | demo  |
+----------------------------------+-------+
[root@linux-node1 httpd]# openstack project list
+----------------------+---------+
| ID                   | Name    |
+----------------------+---------+
| d8dc92073            | service |
| 45ec9fd0f7d          | admin   |
| 4a213e53e9ff1dcb559f | demo    |
+----------------------+---------+
[root@linux-node1 httpd]# openstack role list
+--------------------------+-------+
| ID                       | Name  |
+--------------------------+-------+
| 4b3daaf67feb19a8a55cf    | user  |
| b0bd00e6164243ceaa794dbe | admin |
+--------------------------+-------+
Register the Keystone service itself. Keystone is the registrar, but it still needs a registration of its own.
Create the identity service entry:
[root@linux-node1 httpd]# openstack service create --name keystone --description "OpenStack Identity" identity
+-------------+----------------------+
| Field       | Value                |
+-------------+----------------------+
| description | OpenStack Identity   |
| enabled     | True                 |
| id          | 46228b6dae0bbde371c3 |
| name        | keystone             |
| type        | identity             |
+-------------+----------------------+
Create the three endpoint types: public (externally visible), internal (for internal use), and admin (for administration).
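Why three types? A client asks the service catalog for a URL by service, region, and interface. A toy lookup (illustrative only; real clients read the catalog returned alongside the token, and the URLs are the identity endpoints from this deployment):

```python
# Minimal service catalog: one service, three interfaces.
catalog = {
    ("identity", "RegionOne", "public"):   "http://192.168.56.11:5000/v2.0",
    ("identity", "RegionOne", "internal"): "http://192.168.56.11:5000/v2.0",
    ("identity", "RegionOne", "admin"):    "http://192.168.56.11:35357/v2.0",
}

def endpoint_for(service, region="RegionOne", interface="public"):
    # Pick the URL matching the caller's role: external users get
    # public, other services get internal, operators get admin.
    return catalog[(service, region, interface)]

print(endpoint_for("identity", interface="admin"))  # → http://192.168.56.11:35357/v2.0
```

This separation lets the admin API live on a different port (35357) from the API exposed to ordinary users (5000).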
[root@linux-node1 httpd]# openstack endpoint create --region RegionOne identity public http://192.168.56.11:5000/v2.0
[root@linux-node1 httpd]# openstack endpoint create --region RegionOne identity internal http://192.168.56.11:5000/v2.0
[root@linux-node1 httpd]# openstack endpoint create --region RegionOne identity admin http://192.168.56.11:35357/v2.0
List the endpoints just created:
[root@linux-node1 httpd]# openstack endpoint list
+-------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
| ID                      | Region    | Service Name | Service Type | Enabled | Interface | URL                             |
+-------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
| 1143dcd58bf2b9bf101d5   | RegionOne | keystone     | identity     | True    | public    | http://192.168.56.11:5000/v2.0  |
| 28283cbf90b0fac9308df   | RegionOne | keystone     | identity     | True    | admin     | http://192.168.56.11:35357/v2.0 |
| 496fe5fbe99b62ed8a76acd | RegionOne | keystone     | identity     | True    | internal  | http://192.168.56.11:5000/v2.0  |
+-------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
Connect to Keystone and request a token. Now that users and passwords exist, we no longer use the bootstrap token, so the token environment variables must be unset:
[root@linux-node1 httpd]# unset OS_TOKEN
[root@linux-node1 httpd]# unset OS_URL
[root@linux-node1 httpd]# openstack --os-auth-url http://192.168.56.11:35357/v3 --os-project-domain-id default --os-user-domain-id default --os-project-name admin --os-username admin --os-auth-type password token issue
Password:
+------------+-----------------------------+
| Field      | Value                       |
+------------+-----------------------------+
| expires    | 2015-12-16T17:45:52.926050Z |
| id         | ba1d3c403bfb                |
| project_id | 45ec9fd0f7d                 |
| user_id    | bb6d73c0b025bb72c06a1       |
+------------+-----------------------------+
Write environment files for the admin and demo users and make them executable; from then on, just source the file before running commands:
[root@linux-node1 ~]# cat admin-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://192.168.56.11:35357/v3
export OS_IDENTITY_API_VERSION=3
[root@linux-node1 ~]# cat demo-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=demo
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://192.168.56.11:5000/v3
export OS_IDENTITY_API_VERSION=3
[root@linux-node1 ~]# chmod +x demo-openrc.sh
[root@linux-node1 ~]# chmod +x admin-openrc.sh
[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]# openstack token issue
+------------+-----------------------------+
| Field      | Value                       |
+------------+-----------------------------+
| expires    | 2015-12-16T17:54:06.632906Z |
| id         | ade4b0c451b936555db75       |
| project_id | 45ec9fd0f7d                 |
| user_id    | bb6d73c0b025bb72c06a1       |
+------------+-----------------------------+
3.5 Deploying Glance
Edit the glance-api and glance-registry configuration files and sync the database:
[root@linux-node1 glance]# vim glance-api.conf
538 connection=mysql://glance:glance@192.168.56.11/glance
[root@linux-node1 glance]# vim glance-registry.conf
363 connection=mysql://glance:glance@192.168.56.11/glance
[root@linux-node1 glance]# su -s /bin/sh -c "glance-manage db_sync" glance
No handlers could be found for logger "oslo_config.cfg"   (this warning can be ignored)
Check the tables imported into the glance database:
MariaDB [(none)]> use glance;
Database changed
MariaDB [glance]> show tables;
+----------------------------------+
| Tables_in_glance                 |
+----------------------------------+
| artifact_blob_locations          |
| artifact_blobs                   |
| artifact_dependencies            |
| artifact_properties              |
| artifact_tags                    |
| artifacts                        |
| image_locations                  |
| image_members                    |
| image_properties                 |
| image_tags                       |
| metadef_namespace_resource_types |
| metadef_namespaces               |
| metadef_objects                  |
| metadef_properties               |
| metadef_resource_types           |
| metadef_tags                     |
| migrate_version                  |
| task_info                        |
+----------------------------------+
20 rows in set (0.00 sec)
Connect Glance to Keystone. As with every service, Glance needs its own user for talking to Keystone:
[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]# openstack user create --domain default --password=glance glance
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | f4c340ba02bf44bf83d5c3ccfec77359 |
| name      | glance                           |
+-----------+----------------------------------+
[root@linux-node1 ~]# openstack role add --project service --user glance admin
Edit the glance-api configuration to hook up Keystone and MySQL:
[root@linux-node1 glance]# vim glance-api.conf
363  verbose=True                 # verbose output
491  notification_driver = noop   # the image service does not need the message queue
538  connection=mysql://glance:glance@192.168.56.11/glance
642  default_store=file           # store images as files
701  filesystem_store_datadir=/var/lib/glance/images/   # where image files are kept
978  auth_uri = http://192.168.56.11:5000
979  auth_url = http://192.168.56.11:35357
980  auth_plugin = password
981  project_domain_id = default
982  user_domain_id = default
983  project_name = service
984  username = glance
985  password = glance
1485 flavor=keystone
Edit the glance-registry configuration to hook up Keystone and MySQL:
[root@linux-node1 glance]# vim glance-registry.conf
188  verbose=True
316  notification_driver = noop
363  connection=mysql://glance:glance@192.168.56.11/glance
767  auth_uri = http://192.168.56.11:5000
768  auth_url = http://192.168.56.11:35357
769  auth_plugin = password
770  project_domain_id = default
771  user_domain_id = default
772  project_name = service
773  username = glance
774  password = glance
1256 flavor=keystone
Double-check the modified settings:
[root@linux-node1 ~]# grep -n '^[a-z]' /etc/glance/glance-api.conf
[root@linux-node1 ~]# grep -n '^[a-z]' /etc/glance/glance-registry.conf
Enable Glance at boot and start it:
[root@linux-node1 ~]# systemctl enable openstack-glance-api
ln -s '/usr/lib/systemd/system/openstack-glance-api.service' '/etc/systemd/system/multi-user.target.wants/openstack-glance-api.service'
[root@linux-node1 ~]# systemctl enable openstack-glance-registry
ln -s '/usr/lib/systemd/system/openstack-glance-registry.service' '/etc/systemd/system/multi-user.target.wants/openstack-glance-registry.service'
[root@linux-node1 ~]# systemctl start openstack-glance-api
[root@linux-node1 ~]# systemctl start openstack-glance-registry
Check the ports Glance uses: 9191 is glance-registry, 9292 is glance-api.
[root@linux-node1 ~]# netstat -lntup | egrep "9191|9292"
tcp   0   0 0.0.0.0:9191   0.0.0.0:*   LISTEN   13180/python2
tcp   0   0 0.0.0.0:9292   0.0.0.0:*   LISTEN   13162/python2
Register the Glance service in Keystone so that other services are allowed to call it:
[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]# openstack service create --name glance --description "OpenStack Image service" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image service          |
| enabled     | True                             |
| id          | cc8b4b4c712f47aa86e2d484c20a65c8 |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne image public http://192.168.56.11:9292
[root@linux-node1 ~]# openstack endpoint create --region RegionOne image internal http://192.168.56.11:9292
[root@linux-node1 ~]# openstack endpoint create --region RegionOne image admin http://192.168.56.11:9292
Add the Glance API version to the admin and demo environment files so other services know which version Glance uses; be sure to run this in the directory holding admin-openrc.sh:
[root@linux-node1 ~]# echo "export OS_IMAGE_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh
export OS_IMAGE_API_VERSION=2
[root@linux-node1 ~]# tail -1 admin-openrc.sh
export OS_IMAGE_API_VERSION=2
[root@linux-node1 ~]# tail -1 demo-openrc.sh
export OS_IMAGE_API_VERSION=2
If glance image-list runs like this, Glance is configured correctly; the list is empty only because no images exist yet:
[root@linux-node1 ~]# glance image-list
+----+------+
| ID | Name |
+----+------+
+----+------+
Download an image:
[root@linux-node1 ~]# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
--2015-12-17 02:12:55--  http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
Resolving download.cirros-cloud.net (download.cirros-cloud.net)... 69.163.241.114
Connecting to download.cirros-cloud.net (download.cirros-cloud.net)|69.163.241.114|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 13287936 (13M) [text/plain]
Saving to: ‘cirros-0.3.4-x86_64-disk.img’
100%[======================================>] 13,287,936
2015-12-17 02:14:08 (183 KB/s) - ‘cirros-0.3.4-x86_64-disk.img’ saved
Upload the image to Glance; run this in the directory the image was downloaded to:
[root@linux-node1 ~]# glance image-create --name "cirros" \
  --file cirros-0.3.4-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --visibility public --progress
[=============================>] 100%
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ee1eca47dc88fcc70a07c6               |
| container_format | bare                                 |
| created_at       | 2015-12-16T18:16:46Z                 |
| disk_format      | qcow2                                |
| id               | 4b36361f-1946-4026-b0cb-0f7073d48ade |
| owner            | 45ec9fd0f7d                          |
| updated_at       | 2015-12-16T18:16:47Z                 |
+------------------+--------------------------------------+
Check the uploaded image:
[root@linux-node1 ~]# glance image-list
+--------------------------------------+--------+
| ID                                   | Name   |
+--------------------------------------+--------+
| 4b36361f-1946-4026-b0cb-0f7073d48ade | cirros |
+--------------------------------------+--------+
[root@linux-node1 ~]# cd /var/lib/glance/images/
[root@linux-node1 images]# ls
4b36361f-1946-4026-b0cb-0f7073d48ade    (matches the image ID above)
3.6 Deploying Nova on the controller node
Create the nova user, add it to the service project, and grant it the admin role:
[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]# openstack user create --domain default --password=nova nova
+-----------+-------------+
| Field     | Value       |
+-----------+-------------+
| domain_id | default     |
| enabled   | True        |
| id        | a842dc1e7b9 |
| name      | nova        |
+-----------+-------------+
[root@linux-node1 ~]# openstack role add --project service --user nova admin
Edit the Nova configuration file; the end result should look like this:
[root@linux-node1 ~]# grep -n "^[a-Z]" /etc/nova/nova.conf
61:rpc_backend=rabbit           # use the RabbitMQ message queue
124:my_ip=192.168.56.11         # a variable, for convenient reuse below
268:enabled_apis=osapi_compute,metadata   # disables the EC2 API
425:auth_strategy=keystone      # authenticate via Keystone (note: this one is in the [DEFAULT] section)
1053:network_api_class=nova.network.neutronv2.api.API   # use Neutron for networking; the dots mirror the module path
1171:linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver   # the class formerly named LinuxBridgeInterfaceDriver is now NeutronLinuxBridgeInterfaceDriver
1331:security_group_api=neutron # security groups are handled by Neutron
1370:debug=true
1374:verbose=True
1760:firewall_driver = nova.virt.firewall.NoopFirewallDriver   # disable Nova's own firewall
1828:vncserver_listen= $my_ip   # VNC listen address
1832:vncserver_proxyclient_address= $my_ip   # proxy client address
2213:connection=mysql://nova:nova@192.168.56.11/nova
2334:host=$my_ip                # Glance address
2546:auth_uri = http://192.168.56.11:5000
2547:auth_url = http://192.168.56.11:35357
2548:auth_plugin = password
2549:project_domain_id = default
2550:user_domain_id = default
2551:project_name = service     # use the service project
2552:username = nova
2553:password = nova
3807:lock_path=/var/lib/nova/tmp   # lock path
3970:rabbit_host=192.168.56.11  # RabbitMQ host
3974:rabbit_port=5672           # RabbitMQ port
3986:rabbit_userid=openstack    # RabbitMQ user
3990:rabbit_password=openstack  # RabbitMQ password
Sync the database and check the result:
[root@linux-node1 ~]# su -s /bin/sh -c "nova-manage db sync" nova
MariaDB [(none)]> use nova;
Database changed
MariaDB [nova]> show tables;
+--------------------------------------------+
| Tables_in_nova                             |
+--------------------------------------------+
| agent_builds                               |
| aggregate_hosts                            |
| aggregate_metadata                         |
| aggregates                                 |
| block_device_mapping                       |
| bw_usage_cache                             |
| certificates                               |
| compute_nodes                              |
| console_pools                              |
| consoles                                   |
| dns_domains                                |
| fixed_ips                                  |
| floating_ips                               |
| instance_actions                           |
| instance_actions_events                    |
| instance_extra                             |
| instance_faults                            |
| instance_group_member                      |
| instance_group_policy                      |
| instance_groups                            |
| instance_id_mappings                       |
| instance_info_caches                       |
| instance_metadata                          |
| instance_system_metadata                   |
| instance_type_extra_specs                  |
| instance_type_projects                     |
| instance_types                             |
| instances                                  |
| key_pairs                                  |
| migrate_version                            |
| migrations                                 |
| networks                                   |
| pci_devices                                |
| project_user_quotas                        |
| provider_fw_rules                          |
| quota_classes                              |
| quota_usages                               |
| reservations                               |
| s3_images                                  |
| security_group_default_rules               |
| security_group_instance_association        |
| security_group_rules                       |
| security_groups                            |
| services                                   |
| shadow_agent_builds                        |
| shadow_aggregate_hosts                     |
| shadow_aggregate_metadata                  |
| shadow_aggregates                          |
| shadow_block_device_mapping                |
| shadow_bw_usage_cache                      |
| shadow_cells                               |
| shadow_certificates                        |
| shadow_compute_nodes                       |
| shadow_console_pools                       |
| shadow_consoles                            |
| shadow_dns_domains                         |
| shadow_fixed_ips                           |
| shadow_floating_ips                        |
| shadow_instance_actions                    |
| shadow_instance_actions_events             |
| shadow_instance_extra                      |
| shadow_instance_faults                     |
| shadow_instance_group_member               |
| shadow_instance_group_policy               |
| shadow_instance_groups                     |
| shadow_instance_id_mappings                |
| shadow_instance_info_caches                |
| shadow_instance_metadata                   |
| shadow_instance_system_metadata            |
| shadow_instance_type_extra_specs           |
| shadow_instance_type_projects              |
| shadow_instance_types                      |
| shadow_instances                           |
| shadow_key_pairs                           |
| shadow_migrate_version                     |
| shadow_migrations                          |
| shadow_networks                            |
| shadow_pci_devices                         |
| shadow_project_user_quotas                 |
| shadow_provider_fw_rules                   |
| shadow_quota_classes                       |
| shadow_quota_usages                        |
| shadow_quotas                              |
| shadow_reservations                        |
| shadow_s3_images                           |
| shadow_security_group_default_rules        |
| shadow_security_group_instance_association |
| shadow_security_group_rules                |
| shadow_security_groups                     |
| shadow_services                            |
| shadow_snapshot_id_mappings                |
| shadow_snapshots                           |
| shadow_task_log                            |
| shadow_virtual_interfaces                  |
| shadow_volume_id_mappings                  |
| shadow_volume_usage_cache                  |
| snapshot_id_mappings                       |
| snapshots                                  |
| task_log                                   |
| virtual_interfaces                         |
| volume_id_mappings                         |
| volume_usage_cache                         |
+--------------------------------------------+
105 rows in set (0.01 sec)
Start all of the Nova services
```bash
[root@linux-node1 ~]# systemctl enable openstack-nova-api.service \
  openstack-nova-cert.service openstack-nova-consoleauth.service \
  openstack-nova-scheduler.service openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
[root@linux-node1 ~]# systemctl start openstack-nova-api.service \
  openstack-nova-cert.service openstack-nova-consoleauth.service \
  openstack-nova-scheduler.service openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
```
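Once the services are up, it is worth checking that nova-api is actually listening. A minimal sketch of such a check (a hypothetical helper, not part of the official install steps; the host and port are the control-node values used throughout this guide):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds.

    Hypothetical sanity check, not an official OpenStack tool."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# On the control node from this guide you would check nova-api with:
#   port_open("192.168.56.11", 8774)
```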
Register Nova in Keystone, then check that the control node's Nova services are configured correctly
```bash
[root@linux-node1 ~]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | f94da6e28d55                     |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne compute public http://192.168.56.11:8774/v2/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | 23e9132aeb3a4dcbad7301                     |
| interface    | public                                     |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | f94da6e28d55                               |
| service_name | nova                                       |
| service_type | compute                                    |
| url          | http://192.168.56.11:8774/v2/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne compute internal http://192.168.56.11:8774/v2/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | 1d67fe9d6ff53bcc657fb6                     |
| interface    | internal                                   |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | f94da6e28d55                               |
| service_name | nova                                       |
| service_type | compute                                    |
| url          | http://192.168.56.11:8774/v2/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne compute admin http://192.168.56.11:8774/v2/%\(tenant_id\)s
+--------------+--------------------------------------------+
| Field        | Value                                      |
+--------------+--------------------------------------------+
| enabled      | True                                       |
| id           | b7f7c210becc4e54b76bb                      |
| interface    | admin                                      |
| region       | RegionOne                                  |
| region_id    | RegionOne                                  |
| service_id   | f94da6e28d55                               |
| service_name | nova                                       |
| service_type | compute                                    |
| url          | http://192.168.56.11:8774/v2/%(tenant_id)s |
+--------------+--------------------------------------------+
[root@linux-node1 ~]# openstack host list
+---------------------------+-------------+----------+
| Host Name                 | Service     | Zone     |
+---------------------------+-------------+----------+
| linux-node1.oldboyedu.com | conductor   | internal |
| linux-node1.oldboyedu.com | consoleauth | internal |
| linux-node1.oldboyedu.com | cert        | internal |
| linux-node1.oldboyedu.com | scheduler   | internal |
+---------------------------+-------------+----------+
```
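The `%\(tenant_id\)s` in the endpoint URL is a shell-escaped Python interpolation placeholder: when a client looks up the compute endpoint in the service catalog, the placeholder is expanded with that client's project (tenant) ID. A quick illustration of the expansion (the project ID is the truncated one that appears later in the `nova endpoints` output of this guide):

```python
# The endpoint URL registered above, with the shell escapes removed.
ENDPOINT_TEMPLATE = "http://192.168.56.11:8774/v2/%(tenant_id)s"

def compute_endpoint(tenant_id: str) -> str:
    """Expand the tenant placeholder the way the service catalog does."""
    return ENDPOINT_TEMPLATE % {"tenant_id": tenant_id}

print(compute_endpoint("45ec9fd0f7d"))  # http://192.168.56.11:8774/v2/45ec9fd0f7d
```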
3.7 Deploying the Nova compute node
Diagram: Nova compute
nova-compute usually runs on the compute nodes; it receives requests through the Message Queue and manages the VM life cycle.
nova-compute manages KVM through libvirt, Xen through the XenAPI, and so on.
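Note that nova-api never calls nova-compute directly: it drops a request onto the message queue, and the compute service subscribed to that queue picks it up and drives the hypervisor. That flow can be sketched with a plain in-process queue (a toy illustration with invented names; the real path goes through RabbitMQ and the scheduler):

```python
import queue
import threading

# Toy stand-in for RabbitMQ: nova-api enqueues requests,
# nova-compute consumes them and would drive the hypervisor.
message_queue: "queue.Queue" = queue.Queue()
booted = []

def nova_api(name: str, flavor: str) -> None:
    """Accept the external request and hand it off via the queue."""
    message_queue.put({"action": "boot", "name": name, "flavor": flavor})

def nova_compute() -> None:
    """Consume requests; in real life this is where libvirt/KVM is called."""
    while True:
        msg = message_queue.get()
        if msg is None:          # sentinel: shut the toy worker down
            break
        booted.append(msg["name"])
        message_queue.task_done()

worker = threading.Thread(target=nova_compute)
worker.start()
nova_api("hello-instance", "m1.tiny")
message_queue.join()             # wait until the compute side has handled it
message_queue.put(None)
worker.join()
print(booted)                    # ['hello-instance']
```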
Configure time synchronization
Edit the chrony configuration file:
```bash
[root@linux-node1 ~]# vim /etc/chrony.conf
server 192.168.56.11 iburst    # keep only this server line, pointing at the control node
```
Enable chronyd at boot and start it:
```bash
[root@linux-node1 ~]# systemctl enable chronyd.service
[root@linux-node1 ~]# systemctl start chronyd.service
```
Set the CentOS 7 time zone:
```bash
[root@linux-node1 ~]# timedatectl set-timezone Asia/Shanghai
```
Check the time zone and current time:
```bash
[root@linux-node ~]# timedatectl status
      Local time: Fri 2015-12-18 00:12:26 CST
  Universal time: Thu 2015-12-17 16:12:26 UTC
        RTC time: Sun 2015-12-13 15:32:36
        Timezone: Asia/Shanghai (CST, +0800)
     NTP enabled: yes
NTP synchronized: no
 RTC in local TZ: no
      DST active: n/a
[root@linux-node1 ~]# date
Fri Dec 18 00:12:43 CST 2015
```
Deploy the compute node
Reuse the control node's configuration file on the compute node:
```bash
[root@linux-node1 ~]# scp /etc/nova/nova.conf 192.168.56.12:/etc/nova/    # run on the control node
```
After editing, the filtered configuration looks like this:
```bash
[root@linux-node ~]# grep -n '^[a-Z]' /etc/nova/nova.conf
61:rpc_backend=rabbit
124:my_ip=192.168.56.12                 # change to this node's IP
268:enabled_apis=osapi_compute,metadata
425:auth_strategy=keystone
1053:network_api_class=nova.network.neutronv2.api.API
1171:linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
1331:security_group_api=neutron
1370:debug=true
1374:verbose=True
1760:firewall_driver = nova.virt.firewall.NoopFirewallDriver
1820:novncproxy_base_url=http://192.168.56.11:6080/vnc_auto.html   # novncproxy IP and port
1828:vncserver_listen=0.0.0.0           # VNC listens on 0.0.0.0
1832:vncserver_proxyclient_address=$my_ip
1835:vnc_enabled=true                   # enable VNC
1838:vnc_keymap=en-us                   # English keyboard layout
2213:connection=mysql://nova:nova@192.168.56.11/nova
2334:host=192.168.56.11
2546:auth_uri = http://192.168.56.11:5000
2547:auth_url = http://192.168.56.11:35357
2548:auth_plugin = password
2549:project_domain_id = default
2550:user_domain_id = default
2551:project_name = service
2552:username = nova
2553:password = nova
2727:virt_type=kvm                      # KVM needs CPU support; check with: grep "vmx" /proc/cpuinfo
3807:lock_path=/var/lib/nova/tmp
3970:rabbit_host=192.168.56.11
3974:rabbit_port=5672
3986:rabbit_userid=openstack
3990:rabbit_password=openstack
```
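The grep output above is just numbered `key = value` pairs, so it is easy to turn into a dictionary and compare between nodes. A small hypothetical helper (not part of the deployment, just a sketch for sanity-checking configs):

```python
import re

def parse_grep_output(text: str) -> dict:
    """Parse `grep -n '^[a-Z]' nova.conf` output into {option: value}.

    Hypothetical helper for comparing two nodes' settings; not an
    official OpenStack tool."""
    options = {}
    for line in text.splitlines():
        # grep -n prefixes each matching line with "LINENO:"
        m = re.match(r"\s*\d+:([A-Za-z_]+)\s*=\s*(\S+)", line)
        if m:
            options[m.group(1)] = m.group(2)
    return options

sample = """\
61:rpc_backend=rabbit
124:my_ip=192.168.56.12
2727:virt_type=kvm
"""
print(parse_grep_output(sample)["my_ip"])  # 192.168.56.12
```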
Start libvirt and nova-compute on the compute node:
```bash
[root@linux-node ~]# systemctl enable libvirtd openstack-nova-compute
ln -s '/usr/lib/systemd/system/openstack-nova-compute.service' '/etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service'
[root@linux-node ~]# systemctl start libvirtd openstack-nova-compute
```
On the control node, list the registered hosts; the compute entry at the end is the newly registered compute node:
```bash
[root@linux-node1 ~]# openstack host list
+---------------------------+-------------+----------+
| Host Name                 | Service     | Zone     |
+---------------------------+-------------+----------+
| linux-node1.oldboyedu.com | conductor   | internal |
| linux-node1.oldboyedu.com | consoleauth | internal |
| linux-node1.oldboyedu.com | cert        | internal |
| linux-node1.oldboyedu.com | scheduler   | internal |
| linux-node.oldboyedu.com  | compute     | nova     |
+---------------------------+-------------+----------+
```
On the control node, test that Nova can reach Glance and that Nova can reach Keystone:
```bash
[root@linux-node1 ~]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID                                   | Name   | Status | Server |
+--------------------------------------+--------+--------+--------+
| 4b36361f-1946-4026-b0cb-0f7073d48ade | cirros | ACTIVE |        |
+--------------------------------------+--------+--------+--------+
[root@linux-node1 ~]# nova endpoints
WARNING: keystone has no endpoint in ! Available endpoints for this service:
+-----------+----------------------------------+
| keystone  | Value                            |
+-----------+----------------------------------+
| id        | 1143dcd58bf2b9bf101d5            |
| interface | public                           |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://192.168.56.11:5000/v2.0   |
+-----------+----------------------------------+
+-----------+----------------------------------+
| keystone  | Value                            |
+-----------+----------------------------------+
| id        | 28283cbf90b0fac9308df            |
| interface | admin                            |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://192.168.56.11:35357/v2.0  |
+-----------+----------------------------------+
+-----------+----------------------------------+
| keystone  | Value                            |
+-----------+----------------------------------+
| id        | 496fe5fbe99b62ed8a76acd          |
| interface | internal                         |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://192.168.56.11:5000/v2.0   |
+-----------+----------------------------------+
WARNING: nova has no endpoint in ! Available endpoints for this service:
+-----------+------------------------------------------+
| nova      | Value                                    |
+-----------+------------------------------------------+
| id        | 1d67fe9d6ff53bcc657fb6                   |
| interface | internal                                 |
| region    | RegionOne                                |
| region_id | RegionOne                                |
| url       | http://192.168.56.11:8774/v2/45ec9fd0f7d |
+-----------+------------------------------------------+
+-----------+------------------------------------------+
| nova      | Value                                    |
+-----------+------------------------------------------+
| id        | 23e9132aeb3a4dcbad7301                   |
| interface | public                                   |
| region    | RegionOne                                |
| region_id | RegionOne                                |
| url       | http://192.168.56.11:8774/v2/45ec9fd0f7d |
+-----------+------------------------------------------+
+-----------+------------------------------------------+
| nova      | Value                                    |
+-----------+------------------------------------------+
| id        | b7f7c210becc4e54b76bb                    |
| interface | admin                                    |
| region    | RegionOne                                |
| region_id | RegionOne                                |
| url       | http://192.168.56.11:8774/v2/45ec9fd0f7d |
+-----------+------------------------------------------+
WARNING: glance has no endpoint in ! Available endpoints for this service:
+-----------+----------------------------------+
| glance    | Value                            |
+-----------+----------------------------------+
| id        | 2b55d6db62eb47e9b11e0            |
| interface | admin                            |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://192.168.56.11:9292        |
+-----------+----------------------------------+
+-----------+----------------------------------+
| glance    | Value                            |
+-----------+----------------------------------+
| id        | 56cf6132fef14bfaa01ca6           |
| interface | public                           |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://192.168.56.11:9292        |
+-----------+----------------------------------+
+-----------+----------------------------------+
| glance    | Value                            |
+-----------+----------------------------------+
| id        | 8005e8fcd85f4ea281eb             |
| interface | internal                         |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://192.168.56.11:9292        |
+-----------+----------------------------------+
```
3.8 Neutron service deployment
Register the Neutron service:
```bash
[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | e698fcb250e9fdd8205565           |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne network public http://192.168.56.11:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 3cf4a13ec1b94e66a47e27bfccd95318 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e698fcb250e9fdd8205565           |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://192.168.56.11:9696        |
+--------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne network internal http://192.168.56.11:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 5cd1e54d14f046dda2f7bf45b418f54c |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e698fcb250e9fdd8205565           |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://192.168.56.11:9696        |
+--------------+----------------------------------+
[root@linux-node1 ~]# openstack endpoint create --region RegionOne network admin http://192.168.56.11:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 2c68cbe6a3f0656eff03             |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e698fcb250e9fdd8205565           |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://192.168.56.11:9696        |
+--------------+----------------------------------+
```
Create the neutron user, add it to the service project, and grant it the admin role:
```bash
[root@linux-node1 config]# openstack user create --domain default --password=neutron neutron
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | 541d68efb8bba8b2539fc            |
| name      | neutron                          |
+-----------+----------------------------------+
[root@linux-node1 config]# openstack role add --project service --user neutron admin
```
Edit the Neutron configuration file:
```bash
[root@linux-node1 ~]# grep -n "^[a-Z]" /etc/neutron/neutron.conf
20:state_path = /var/lib/neutron
60:core_plugin = ml2                          # core plugin is ML2
77:service_plugins = router                   # service plugin is router
92:auth_strategy = keystone
360:notify_nova_on_port_status_changes = True # notify Nova when port status changes
364:notify_nova_on_port_data_changes = True
367:nova_url = http://192.168.56.11:8774/v2
573:rpc_backend=rabbit
717:auth_uri = http://192.168.56.11:5000
718:auth_url = http://192.168.56.11:35357
719:auth_plugin = password
720:project_domain_id = default
721:user_domain_id = default
722:project_name = service
723:username = neutron
724:password = neutron
737:connection = mysql://neutron:neutron@192.168.56.11:3306/neutron
780:auth_url = http://192.168.56.11:35357
781:auth_plugin = password
782:project_domain_id = default
783:user_domain_id = default
784:region_name = RegionOne
785:project_name = service
786:username = nova
787:password = nova
818:lock_path = $state_path/lock
998:rabbit_host = 192.168.56.11
1002:rabbit_port = 5672
1014:rabbit_userid = openstack
1018:rabbit_password = openstack
```
Edit the ML2 configuration file (ML2 is covered in more detail later):
```bash
[root@linux-node1 ~]# grep "^[a-Z]" /etc/neutron/plugins/ml2/ml2_conf.ini
type_drivers = flat,vlan,gre,vxlan,geneve      # available type drivers
tenant_network_types = vlan,gre,vxlan,geneve   # tenant network types
mechanism_drivers = openvswitch,linuxbridge    # supported mechanism drivers
extension_drivers = port_security              # port security
flat_networks = physnet1                       # a single flat network (same network as the host)
enable_ipset = True
```
Edit the Linux bridge agent configuration file:
```bash
[root@linux-node1 ~]# grep -n "^[a-Z]" /etc/neutron/plugins/ml2/linuxbridge_agent.ini
9:physical_interface_mappings = physnet1:eth0  # map physnet1 to the eth0 interface
16:enable_vxlan = false                        # disable VXLAN
51:prevent_arp_spoofing = True
57:firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
61:enable_security_group = True
```
Edit the DHCP agent configuration file:
```bash
[root@linux-node1 ~]# grep -n "^[a-Z]" /etc/neutron/dhcp_agent.ini
27:interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
31:dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq   # use dnsmasq as the DHCP service
52:enable_isolated_metadata = true
```
Edit the metadata_agent.ini configuration file:
```bash
[root@linux-node1 config]# grep -n "^[a-Z]" /etc/neutron/metadata_agent.ini
4:auth_uri = http://192.168.56.11:5000
5:auth_url = http://192.168.56.11:35357
6:auth_region = RegionOne
7:auth_plugin = password
8:project_domain_id = default
9:user_domain_id = default
10:project_name = service
11:username = neutron
12:password = neutron
29:nova_metadata_ip = 192.168.56.11
52:metadata_proxy_shared_secret = neutron
```
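The `metadata_proxy_shared_secret` must match on the Neutron and Nova sides because the metadata proxy signs each proxied request with it: roughly, an HMAC-SHA256 of the instance ID keyed with the shared secret, which nova-api recomputes to verify the request. A minimal sketch of that signing scheme (illustrative only; the exact header names and details in the real Neutron/Nova code may differ):

```python
import hashlib
import hmac

def sign_instance_id(shared_secret: str, instance_id: str) -> str:
    """Illustrative sketch of metadata-proxy request signing:
    HMAC-SHA256 of the instance ID, keyed with the shared secret."""
    return hmac.new(shared_secret.encode(), instance_id.encode(),
                    hashlib.sha256).hexdigest()

# Both sides derive the same signature from the shared secret, so the
# receiver can verify the request without an extra round trip.
sig = sign_instance_id("neutron", "4b36361f-1946-4026-b0cb-0f7073d48ade")
assert sig == sign_instance_id("neutron", "4b36361f-1946-4026-b0cb-0f7073d48ade")
```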
On the control node, add the following Neutron settings to the [neutron] section of nova.conf:
```bash
3033:url = http://192.168.56.11:9696
3034:auth_url = http://192.168.56.11:35357
3035:auth_plugin = password
3036:project_domain_id = default
3037:user_domain_id = default
3038:region_name = RegionOne
3039:project_name = service
3040:username = neutron
3041:password = neutron
3043:service_metadata_proxy = True
3044:metadata_proxy_shared_secret = neutron
```
Create the ML2 symlink:
```bash
[root@linux-node1 config]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```
Sync the Neutron database and check the result:
```bash
[root@linux-node1 config]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
MariaDB [(none)]> use neutron;
Database changed
MariaDB [neutron]> show tables;
+-----------------------------------------+
| Tables_in_neutron                       |
+-----------------------------------------+
| address_scopes                          |
| alembic_version                         |
| allowedaddresspairs                     |
| default_security_group                  |
| dnsnameservers                          |
| externalnetworks                        |
| floatingips                             |
| ipallocationpools                       |
| ipallocations                           |
| ml2_flat_allocations                    |
| ml2_port_bindings                       |
| ml2_vlan_allocations                    |
| networks                                |
| ports                                   |
| routers                                 |
| securitygrouprules                      |
| securitygroups                          |
| ...                                     |
| subnets                                 |
| vpnservices                             |
+-----------------------------------------+
155 rows in set (0.00 sec)
```
Restart nova-api, then enable and start the Neutron services:
```bash
[root@linux-node1 config]# systemctl restart openstack-nova-api
[root@linux-node1 config]# systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
[root@linux-node1 config]# systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
```
Check the Neutron agents:
```bash
[root@linux-node1 config]# neutron agent-list
+--------------------------------------+--------------------+---------------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                      | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+---------------------------+-------+----------------+---------------------------+
| 5a9a522f-e2dc-42dc-ab37-b26da0bfe416 | Metadata agent     | linux-node1.oldboyedu.com | :-)   | True           | neutron-metadata-agent    |
| 8ba06bd7-896c-47aa-a733-8a9a9822361c | DHCP agent         | linux-node1.oldboyedu.com | :-)   | True           | neutron-dhcp-agent        |
| f16eef03-4592-4352-8d5e-c08fb91dc983 | Linux bridge agent | linux-node1.oldboyedu.com | :-)   | True           | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+---------------------------+-------+----------------+---------------------------+
```
Now deploy Neutron on the compute node; simply scp the configuration files over, no changes are needed:
```bash
[root@linux-node1 config]# scp /etc/neutron/neutron.conf 192.168.56.12:/etc/neutron/
[root@linux-node1 config]# scp /etc/neutron/plugins/ml2/linuxbridge_agent.ini 192.168.56.12:/etc/neutron/plugins/ml2/
```
Add the same settings to the [neutron] section of the compute node's nova.conf:
```bash
3033:url = http://192.168.56.11:9696
3034:auth_url = http://192.168.56.11:35357
3035:auth_plugin = password
3036:project_domain_id = default
3037:user_domain_id = default
3038:region_name = RegionOne
3039:project_name = service
3040:username = neutron
3041:password = neutron
3043:service_metadata_proxy = True
3044:metadata_proxy_shared_secret = neutron
```
Copy the linuxbridge_agent file (no changes needed) and create the ML2 symlink:
```bash
[root@linux-node1 ~]# scp /etc/neutron/plugins/ml2/linuxbridge_agent.ini 192.168.56.12:/etc/neutron/plugins/ml2/
[root@linux-node ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```
Restart nova-compute on the compute node:
```bash
[root@linux-node ml2]# systemctl restart openstack-nova-compute.service
```
Start the Linux bridge agent on the compute node:
```bash
[root@linux-node ml2]# systemctl enable neutron-linuxbridge-agent.service
ln -s '/usr/lib/systemd/system/neutron-linuxbridge-agent.service' '/etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service'
[root@linux-node ml2]# systemctl start neutron-linuxbridge-agent.service
```
Check the Neutron agents again; four agents in total (three on the control node and one on the compute node) means everything is working:
```bash
[root@linux-node1 config]# neutron agent-list
+--------------------------------------+--------------------+---------------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                      | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+---------------------------+-------+----------------+---------------------------+
| 5a9a522f-e2dc-42dc-ab37-b26da0bfe416 | Metadata agent     | linux-node1.oldboyedu.com | :-)   | True           | neutron-metadata-agent    |
| 7d81019e-ca3b-4b32-ae32-c3de9452ef9d | Linux bridge agent | linux-node.oldboyedu.com  | :-)   | True           | neutron-linuxbridge-agent |
| 8ba06bd7-896c-47aa-a733-8a9a9822361c | DHCP agent         | linux-node1.oldboyedu.com | :-)   | True           | neutron-dhcp-agent        |
| f16eef03-4592-4352-8d5e-c08fb91dc983 | Linux bridge agent | linux-node1.oldboyedu.com | :-)   | True           | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+---------------------------+-------+----------------+---------------------------+
```
四、Creating a virtual machine
Diagram the network, then create a real bridged network.
Create a single flat network named flat: the network type is flat, the network is shared, and the provider is physnet1, which is bound to eth0.
```bash
[root@linux-node1 ~]# source admin-openrc.sh
[root@linux-node1 ~]# neutron net-create flat --shared --provider:physical_network physnet1 --provider:network_type flat
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 7a3c7391-cea7-47eb-a0ef-f7b          |
| mtu                       | 0                                    |
| port_security_enabled     | True                                 |
| provider:network_type     | flat                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  |                                      |
| router:external           | False                                |
| subnets                   |                                      |
| tenant_id                 | 45ec9fd0f7d                          |
+---------------------------+--------------------------------------+
```
Create a subnet, named flat-subnet, on the network from the previous step, and set its DNS server and gateway:
```bash
[root@linux-node1 ~]# neutron subnet-create flat 192.168.56.0/24 --name flat-subnet --allocation-pool start=192.168.56.100,end=192.168.56.200 --dns-nameserver 192.168.56.2 --gateway 192.168.56.2
Created a new subnet:
+-------------------+------------------------------------------------------+
| Field             | Value                                                |
+-------------------+------------------------------------------------------+
| allocation_pools  | {"start": "192.168.56.100", "end": "192.168.56.200"} |
| cidr              | 192.168.56.0/24                                      |
| dns_nameservers   | 192.168.56.2                                         |
| enable_dhcp       | True                                                 |
| gateway_ip        | 192.168.56.2                                         |
| host_routes       |                                                      |
| id                | 6841c8ae-78f6-44e2-ab74-c2                           |
| ip_version        | 4                                                    |
| ipv6_address_mode |                                                      |
| ipv6_ra_mode      |                                                      |
| name              | flat-subnet                                          |
| network_id        | 7a3c7391-cea7-47eb-a0ef-f7b                          |
| subnetpool_id     |                                                      |
| tenant_id         | 45ec9fd0f7d                                          |
+-------------------+------------------------------------------------------+
```
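The allocation pool above is what instances draw their addresses from, so it must lie inside the subnet's CIDR. With Python's `ipaddress` module you can sanity-check a pool before creating it (an illustrative helper, not an OpenStack API):

```python
import ipaddress

def pool_size(cidr: str, start: str, end: str) -> int:
    """Return how many addresses an allocation pool contains, after
    checking that the pool fits inside the subnet's CIDR."""
    net = ipaddress.ip_network(cidr)
    lo, hi = ipaddress.ip_address(start), ipaddress.ip_address(end)
    if lo not in net or hi not in net or lo > hi:
        raise ValueError("pool does not fit inside the subnet")
    return int(hi) - int(lo) + 1

# The flat-subnet pool from this guide: 192.168.56.100 - 192.168.56.200
print(pool_size("192.168.56.0/24", "192.168.56.100", "192.168.56.200"))  # 101
```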
View the created network and subnet:
```bash
[root@linux-node1 ~]# neutron net-list
+--------------------------------------+------+------------------------------------------------------+
| id                                   | name | subnets                                              |
+--------------------------------------+------+------------------------------------------------------+
| 7a3c7391-cea7-47eb-a0ef-f7b          | flat | 6841c8ae-78f6-44e2-ab74-c2 192.168.56.0/24           |
+--------------------------------------+------+------------------------------------------------------+
```
Note: a network can only have one DHCP server, so before creating the VM, make sure any other DHCP service on this network is turned off.
Now create the virtual machine. To be able to connect to it, first generate a key pair and add it to OpenStack:
```bash
[root@linux-node1 ~]# source demo-openrc.sh
[root@linux-node1 ~]# ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa):
[root@linux-node1 ~]# nova keypair-add --pub-key .ssh/id_rsa.pub mykey
[root@linux-node1 ~]# nova keypair-list
+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| mykey | 9f:25:57:44:45:a3:6d:0d:4b:e7:ca:3a:9c:67:32:6f |
+-------+-------------------------------------------------+
[root@linux-node1 ~]# ls .ssh/
id_rsa  id_rsa.pub  known_hosts
```
Add rules to the default security group to allow ICMP and open TCP port 22:
```bash
[root@linux-node1 ~]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
[root@linux-node1 ~]# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
```
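Each rule is a (protocol, port range, source CIDR) triple, and traffic is allowed if any rule matches. A toy model of that matching logic (purely illustrative; this is not Nova's actual data model):

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class SecGroupRule:
    """Toy model of the two rules added above; not Nova's implementation."""
    protocol: str    # "tcp", "udp" or "icmp"
    from_port: int   # -1 means "any" (used for ICMP)
    to_port: int
    cidr: str

    def allows(self, protocol: str, port: int, source_ip: str) -> bool:
        if protocol != self.protocol:
            return False
        if ipaddress.ip_address(source_ip) not in ipaddress.ip_network(self.cidr):
            return False
        return self.from_port == -1 or self.from_port <= port <= self.to_port

rules = [SecGroupRule("icmp", -1, -1, "0.0.0.0/0"),
         SecGroupRule("tcp", 22, 22, "0.0.0.0/0")]

# SSH from anywhere is allowed; any other TCP port is not.
print(any(r.allows("tcp", 22, "10.0.0.5") for r in rules))   # True
print(any(r.allows("tcp", 80, "10.0.0.5") for r in rules))   # False
```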
Before booting, confirm the flavor (the equivalent of an EC2 instance type), the image (like an EC2 AMI), the network (like an EC2 VPC), and the security group (like an EC2 security group):
```bash
[root@linux-node1 ~]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
[root@linux-node1 ~]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID                                   | Name   | Status | Server |
+--------------------------------------+--------+--------+--------+
| 4b36361f-1946-4026-b0cb-0f7073d48ade | cirros | ACTIVE |        |
+--------------------------------------+--------+--------+--------+
[root@linux-node1 ~]# neutron net-list
+--------------------------------------+------+------------------------------------------------------+
| id                                   | name | subnets                                              |
+--------------------------------------+------+------------------------------------------------------+
| 7a3c7391-cea7-47eb-a0ef-f7b          | flat | 6841c8ae-78f6-44e2-ab74-c2 192.168.56.0/24           |
+--------------------------------------+------+------------------------------------------------------+
[root@linux-node1 ~]# nova secgroup-list
+--------------------------------------+---------+------------------------+
| Id                                   | Name    | Description            |
+--------------------------------------+---------+------------------------+
| 2946cecd-0933-45d0-a6e2-0606abe418ee | default | Default security group |
+--------------------------------------+---------+------------------------+
```
Boot a virtual machine with flavor m1.tiny, the cirros image fetched earlier with wget, the network ID from `neutron net-list`, the default security group, the key pair created above, and hello-instance as the instance name:
```bash
[root@linux-node1 ~]# nova boot --flavor m1.tiny --image
```
