
Compute1 Node Installation and Configuration

Author: sapbcs | Published 2018-03-29 14:19

I. Preliminary Preparation

1. Install and configure the NTP client

  • Install the chrony package
    ~]# yum install chrony -y
  • Edit the chrony configuration file
~]# vim /etc/chrony.conf
# Comment out all of the existing "server ... iburst" lines, then add:
server controller iburst
  • Start the chronyd service and enable it to start at boot
    ~]# systemctl enable chronyd.service
    ~]# systemctl start chronyd.service
  • Verify the operation
~]# chronyc sources 
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* controller                    3   6    17     0   +356ns[+5471ns] +/- 4476us

Make sure the * marker appears; only then is synchronization with the time server complete.
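
If the * marker takes a while to appear, chronyc can report the synchronization state in more detail. This is just an optional sanity check, not part of the original procedure:

~]# chronyc tracking

The Leap status line should read Normal once the node is synchronized with the controller.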

2. Package preparation

  • Install the OpenStack Pike repository
    ~]# yum install centos-release-openstack-pike -y
  • Upgrade the installed packages (if the upgrade pulls in a new kernel, reboot the host afterwards)
    ~]# yum upgrade
  • Install the OpenStack client package
    ~]# yum install python-openstackclient -y
  • Install the openstack-selinux package
    ~]# yum install openstack-selinux -y
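
As an optional check (not in the original steps), confirm that the Pike repository is enabled and that the client was installed correctly:

~]# yum repolist enabled | grep -i openstack
~]# openstack --version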

II. Installing and Configuring Nova

1. Install the package

~]# yum -y install openstack-nova-compute

2. Edit the configuration file /etc/nova/nova.conf

~]# cp /etc/nova/nova.conf{,.bak}
~]# vim /etc/nova/nova.conf

[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:pike@controller
my_ip = 10.6.10.2
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = pike

[vnc]
# ...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
# ...
api_servers = http://controller:9292

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = pike
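
Before finalizing, you can verify the effective (non-comment) settings with the same grep pattern this article uses later for the Neutron files; the output should list exactly the options configured above:

~]# grep "^[^#]" /etc/nova/nova.conf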

3. Finalize the installation

(1) Verify whether the compute1 host supports hardware acceleration.
If the command below returns a value greater than or equal to 1, hardware acceleration is supported and no further action is needed;
if it returns 0, it is not supported, and you must set the [libvirt] section of /etc/nova/nova.conf as follows:

[libvirt]
# ...
virt_type = qemu

In this deployment the result was as follows, so no further action is needed.
~]# egrep -c '(vmx|svm)' /proc/cpuinfo
32
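
As an additional optional check, you can confirm that the KVM kernel modules are loaded, since hardware acceleration also depends on them; on Intel hardware this should list kvm_intel and kvm (kvm_amd on AMD):

~]# lsmod | grep kvm
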
(2) Start the libvirtd.service and openstack-nova-compute.service services and enable them to start at boot
~]# systemctl enable libvirtd.service openstack-nova-compute.service
~]# systemctl start libvirtd.service openstack-nova-compute.service
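
If openstack-nova-compute fails to start, the compute log is the first place to look; a common cause is the controller's firewall blocking the AMQP port 5672. Assuming the default log path:

~]# systemctl status openstack-nova-compute.service
~]# tail /var/log/nova/nova-compute.log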

4. Add the compute node to the cell database

Important:
The following steps must be executed on the controller node, NOT on compute1!

(1) Source the admin-openrc environment file to enable the admin CLI, then confirm that the compute host exists in MariaDB

~]# . admin-openrc
~]# openstack compute service list --service nova-compute
+----+--------------+----------+------+---------+-------+----------------------------+
| ID | Binary       | Host     | Zone | Status  | State | Updated At                 |
+----+--------------+----------+------+---------+-------+----------------------------+
|  6 | nova-compute | compute1 | nova | enabled | up    | 2018-02-08T06:07:19.000000 |
+----+--------------+----------+------+---------+-------+----------------------------+

(2) Discover the compute host

~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': c645de34-d50f-4134-b50d-15855cd0e105
Found 1 unmapped computes in cell: c645de34-d50f-4134-b50d-15855cd0e105
Checking host mapping for compute host 'compute1': 5d7e8f0e-1239-4a61-a2a5-9c71aabd9ec4
Creating host mapping for compute host 'compute1': 5d7e8f0e-1239-4a61-a2a5-9c71aabd9ec4
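
At this point the new node should also appear as a hypervisor. As a quick cross-check (still on the controller, with the admin credentials loaded):

~]# openstack hypervisor list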

Note:
Whenever you add more compute nodes later, you must run ~]# nova-manage cell_v2 discover_hosts on the controller node to register the new nodes. Alternatively, set an appropriate discovery interval in /etc/nova/nova.conf, for example:

[scheduler]
discover_hosts_in_cells_interval = 300
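
Note that this option belongs in the controller's /etc/nova/nova.conf, and the nova-scheduler service must be restarted for the new interval to take effect (assuming the standard CentOS service name):

~]# systemctl restart openstack-nova-scheduler.service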

III. Installing and Configuring Neutron

1. Install the packages

~]# yum install openstack-neutron-linuxbridge ebtables ipset -y

2. Configure the common components

~]# cp /etc/neutron/neutron.conf{,.bak}
~]# vim /etc/neutron/neutron.conf

# In the [database] section, comment out any connection lines, because compute nodes do not access the database directly

[DEFAULT]
# ...
transport_url = rabbit://openstack:pike@controller
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = pike

[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp

Note:
Make sure that all other options in the [keystone_authtoken] section are commented out!

Verify the configuration

~]# grep "^[^#]" /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:pike@controller
auth_strategy = keystone
[agent]
[cors]
[database]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = pike
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
[ssl]

3. Configure the Linux bridge agent

~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:ens192
# physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
# Replace PROVIDER_INTERFACE_NAME with the name of the compute node's provider (external) interface; here it is ens192

[vxlan]
enable_vxlan = true
local_ip = 10.6.10.2
l2_population = true
# local_ip = OVERLAY_INTERFACE_IP_ADDRESS
# Replace OVERLAY_INTERFACE_IP_ADDRESS with the IP address of the compute node's management interface; here it is 10.6.10.2

[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Verify the configuration

~]# grep "^[^#]" /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:ens192
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = true
local_ip = 10.6.10.2
l2_population = true
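
The upstream install guide also requires the kernel's bridge netfilter support for security groups to work. It is worth verifying on the compute node that both sysctl values below are 1, loading the br_netfilter module first if the keys are missing:

~]# modprobe br_netfilter
~]# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables

If either value is 0, set it to 1 in /etc/sysctl.conf and apply the change with sysctl -p.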

4. Configure the Compute service to use the Networking service

~]# cp /etc/nova/nova.conf{,.neutron.bak}
~]# vim /etc/nova/nova.conf

[neutron]
# ...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = pike

5. Finalize the installation

  • Restart the openstack-nova-compute service
    ~]# systemctl restart openstack-nova-compute.service
  • Start the neutron-linuxbridge-agent service and enable it to start at boot
    ~]# systemctl enable neutron-linuxbridge-agent.service
    ~]# systemctl start neutron-linuxbridge-agent.service
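
To confirm that the new agent registered successfully, you can list the network agents on the controller node (with the admin credentials loaded); the output should include a Linux bridge agent whose Host column is compute1, marked alive and UP:

~]# . admin-openrc
~]# openstack network agent list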

