Things to Do After Deploying OpenStack with RDO

Author: 魔哈Moha | Published 2017-05-18 22:14

Red Hat's RDO deployment tool has improved over several releases and is now quick and convenient to use. For production, though, the configuration still needs some tailoring.
Below are the configuration changes I made for my own VLAN environment.

  • MariaDB configuration
  • Enable the RabbitMQ web UI
  • Update the QEMU and libvirt versions
  • Enable the metadata service
  • Add Neutron QoS policy support
  • Configure nova.conf for cold migration and live block migration
  • Kernel tuning

MariaDB

MariaDB's open-files limit defaults to 1024, and as the OpenStack cluster scales out the database will become a bottleneck, so raise LimitNOFILE in its systemd unit:

$ cat /usr/lib/systemd/system/mariadb.service

[Service]

...

LimitNOFILE=65535
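
Editing the packaged unit file directly can be undone by a package update; a systemd drop-in override achieves the same thing. A minimal sketch (the drop-in file name is my own choice):

$ mkdir -p /etc/systemd/system/mariadb.service.d
$ cat /etc/systemd/system/mariadb.service.d/limits.conf
[Service]
LimitNOFILE=65535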

If resources allow, MariaDB can also be tuned along the following lines:

$ cat /etc/my.cnf

...

[mysqld]

symbolic-links=0
skip-name-resolve
max_allowed_packet = 4M
max_connections = 4096
max_connect_errors = 1024
open_files_limit = 65535
table_open_cache = 1024
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
innodb_flush_log_at_trx_commit=2
innodb_buffer_pool_size = 2048M
innodb_log_file_size= 512M
innodb_log_buffer_size=16M
innodb_additional_mem_pool_size=64M
query_cache_size = 0
query_cache_type=0
thread_cache_size = 80

!includedir /etc/my.cnf.d

Then reload systemd and restart MariaDB:

$ systemctl daemon-reload && systemctl restart mariadb
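
To confirm the new limit is in effect, check the variable inside MariaDB (assuming root access over the local socket):

$ mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'open_files_limit';"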

Enable the RabbitMQ Web UI

It is sometimes handy to log into RabbitMQ's management UI for more intuitive administration and an overall view of the message broker's health.

$ rabbitmq-plugins enable rabbitmq_management  && systemctl restart rabbitmq-server
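
The management UI then listens on port 15672. Since the built-in guest account only works from localhost, it can be useful to create a dedicated administrator; a sketch with placeholder credentials:

$ rabbitmqctl add_user rdoadmin secretpass
$ rabbitmqctl set_user_tags rdoadmin administrator
$ rabbitmqctl set_permissions -p / rdoadmin ".*" ".*" ".*"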

Update the QEMU and libvirt Versions

Recent RDO releases add the qemu-kvm-ev (CentOS Virt SIG) repository by default; if it is missing, add the repository manually and update the libvirt stack.

Tip: in my earlier tests on Mitaka, block migration of instances required updating the virtualization stack, with QEMU at least 1.3+. My environment was upgraded to libvirt 2.0.0 and qemu-kvm-ev 2.6.0.

$ cat /etc/yum.repos.d/CentOS-QEMU-EV.repo
# CentOS-QEMU-EV.repo
#
# Please see http://wiki.centos.org/SpecialInterestGroup/Virtualization for more
# information

[centos-qemu-ev]
name=CentOS-$releasever - QEMU EV
baseurl=http://mirror.centos.org/centos/$releasever/virt/$basearch/kvm-common/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Virtualization

[centos-qemu-ev-test]
name=CentOS-$releasever - QEMU EV Testing
baseurl=http://buildlogs.centos.org/centos/$releasever/virt/$basearch/kvm-common/
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Virtualization
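
With the repository in place, a sketch of the update and a version check (package names per the CentOS Virt SIG):

$ yum makecache fast
$ yum install -y qemu-kvm-ev
$ yum update -y libvirt
$ systemctl restart libvirtd
$ virsh version        # confirm the upgraded libvirt/QEMU versions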

Enable the Metadata Service

By default the neutron-metadata service relies on the neutron-ns-metadata-proxy spawned by the L3 agent. My environment is an L2 VLAN setup, so there is no router port to serve the metadata IP; the DHCP port has to proxy metadata instead, which means adjusting the DHCP agent configuration.

$ cat /etc/neutron/dhcp_agent.ini

...

enable_isolated_metadata = True
force_metadata = True

...

Restart the DHCP agent:

$ systemctl restart neutron-dhcp-agent

After a successful restart a /bin/neutron-ns-metadata-proxy process appears on the node, and once the network service inside an instance is restarted its routing table picks up a route to 169.254.169.254.
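
A quick check from inside an instance (a sketch; the exact route output depends on the image and subnet):

$ ip route | grep 169.254
$ curl http://169.254.169.254/latest/meta-data/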

Add Neutron QoS Policy Support

A default RDO installation of Neutron does not enable QoS policies; the feature has to be switched on by editing the Neutron configuration manually.

$ cat /etc/neutron/neutron.conf

service_plugins=neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2,router,metering,firewall,neutron.services.qos.qos_plugin.QoSPlugin

The following agent settings need to be changed on the network node and on every compute node:

$ cat /etc/neutron/plugins/ml2/ml2_conf.ini   
[ml2] 
extension_drivers = port_security,qos

$ cat /etc/neutron/plugins/ml2/openvswitch_agent.ini

[agent]
extensions = qos

To let tenants create QoS policies, grant the corresponding permissions in Neutron's policy file (/etc/neutron/policy.json); the default entries are listed below, and rule:admin_only can be changed to rule:regular_user wherever tenants should be allowed to manage the rules.

To enable bandwidth limit rule:

"get_policy_bandwidth_limit_rule": "rule:regular_user",
"create_policy_bandwidth_limit_rule": "rule:admin_only",
"delete_policy_bandwidth_limit_rule": "rule:admin_only",
"update_policy_bandwidth_limit_rule": "rule:admin_only",
"get_rule_type": "rule:regular_user",

To enable DSCP marking rule:

"get_policy_dscp_marking_rule": "rule:regular_user",
"create_dscp_marking_rule": "rule:admin_only",
"delete_dscp_marking_rule": "rule:admin_only",
"update_dscp_marking_rule": "rule:admin_only",
"get_rule_type": "rule:regular_user",

Restart the Neutron services (all of them on the network node; only the openvswitch agent on each compute node):

$ systemctl restart neutron-server neutron-dhcp-agent neutron-metadata-agent neutron-openvswitch-agent
$ systemctl restart neutron-openvswitch-agent
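
With QoS enabled, a bandwidth-limit policy can be created and attached to a port. A sketch using the Mitaka-era neutron CLI (the policy name and port ID are placeholders):

$ neutron qos-policy-create bw-limit-10m
$ neutron qos-bandwidth-limit-rule-create bw-limit-10m --max-kbps 10000 --max-burst-kbps 1000
$ neutron port-update <port-id> --qos-policy bw-limit-10m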

Configure nova.conf for Cold Migration and Live Block Migration

Instance migration in OpenStack transfers disk files as the nova user, so the nova account needs passwordless SSH access between hosts.

Enabling migration in Nova takes the following steps:

1. Give the nova account a login shell and a password

usermod -s /bin/bash nova
echo "nova:novamigrate" | chpasswd

2. Generate an SSH key pair and distribute the public/private keys to /var/lib/nova/.ssh on every node (a sketch follows)
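
A minimal sketch of step 2, run as the nova user on one node and then copied to the other compute hosts (the host name compute02 is a placeholder):

$ su - nova
$ ssh-keygen -t rsa -N '' -f /var/lib/nova/.ssh/id_rsa
$ cat /var/lib/nova/.ssh/id_rsa.pub >> /var/lib/nova/.ssh/authorized_keys
$ chmod 700 /var/lib/nova/.ssh && chmod 600 /var/lib/nova/.ssh/authorized_keys
$ scp -r /var/lib/nova/.ssh nova@compute02:/var/lib/nova/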

3. Edit nova.conf

allow_resize_to_same_host=true
inject_password=false

# live-migration flags (Mitaka only)
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_LIVE
block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_LIVE, VIR_MIGRATE_NON_SHARED_INC
cpu_mode=custom
cpu_model=kvm64

Tip: in my tests, live block migration only succeeded after removing VIR_MIGRATE_TUNNELLED from live_migration_flag.

4. If sshd listens on a non-default port, create an ssh config under /var/lib/nova/.ssh that sets the port (example below)
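
For example, assuming sshd was moved to port 2222 (the port number is a placeholder):

$ cat /var/lib/nova/.ssh/config
Host *
    Port 2222
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null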

5. Update /etc/hosts on every node in the cluster

6. Restart the nova-compute service and test a migration (see below)
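
A sketch of the restart plus a quick block-migration smoke test (the instance ID and target host are placeholders):

$ systemctl restart openstack-nova-compute
$ nova live-migration --block-migrate <instance-id> <target-host>
$ nova migration-list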

Kernel Tuning

I will not explain these in detail. They are pushed out with Ansible, so adapt them to your own situation; a thousand servers call for a thousand tuning profiles, and what follows is only a suggestion (a sketch of applying the rendered file follows the template).

{%- set mem_total_bytes = ansible_memtotal_mb * 1024 * 1024 -%}
# See http://www.cyberciti.biz/faq/linux-tcp-tuning/
kernel.panic = 10
kernel.panic_on_oops = 10
kernel.sysrq=0
kernel.ctrl-alt-del=1
kernel.core_pattern=/dev/null
kernel.shmmax = {{ mem_total_bytes//2 }}
kernel.shmall = {{ mem_total_bytes//2//4096 }}
kernel.msgmnb = 65536
kernel.msgmax = 65536
fs.file-max = 819200

vm.swappiness = 0
vm.dirty_ratio = 60
vm.dirty_background_ratio = 5
vm.min_free_kbytes = {{ mem_total_bytes//1024//100*5 }}

net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_dsack = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_fack = 1
net.ipv4.conf.all.forwarding = 0
net.ipv4.conf.default.forwarding = 0
net.ipv4.ip_forward = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
net.ipv4.conf.all.bootp_relay = 0
net.ipv4.conf.all.proxy_arp = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1

net.ipv4.tcp_rfc1337 = 1
net.ipv4.tcp_congestion_control = cubic
net.ipv4.tcp_ecn = 2
net.ipv4.tcp_reordering = 3
net.ipv4.tcp_retries2 = 8
net.ipv4.tcp_retries1 = 3
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.ip_local_port_range = 8192 65535

net.core.netdev_max_backlog = 16384
net.ipv4.tcp_max_syn_backlog = 65535
net.core.somaxconn = 16384
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_adv_win_scale = 2
net.ipv4.tcp_rmem = 4096  524287 16777216
net.core.rmem_default = 524287
net.core.rmem_max = 16777216
net.ipv4.tcp_wmem = 4096 524287 16777216
net.core.wmem_default = 524287
net.core.wmem_max = 16777216
net.core.optmem_max = 524287
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_tw_buckets = 360000

net.ipv4.neigh.default.gc_thresh3 = 2048
net.ipv4.neigh.default.gc_thresh2 = 1024
net.ipv4.neigh.default.gc_thresh1 = 128
net.ipv4.neigh.default.gc_interval = 120
net.ipv4.route.flush = 1

#net.netfilter.nf_conntrack_max = 4194304
#net.netfilter.nf_conntrack_tcp_timeout_established = 86400
#net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 60
#net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 13
#net.netfilter.nf_conntrack_tcp_timeout_time_wait = 60

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
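
However the rendered file is delivered, it ends up under /etc/sysctl.d/ and is applied with sysctl; a sketch assuming a hypothetical file name of 90-openstack.conf:

$ sysctl --system                 # reload every file under /etc/sysctl.d/
$ sysctl net.core.somaxconn       # spot-check that a value took effect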
