HAProxy performance-testing reference: https://www.xilinx.com/publications/onload/sf-onload-haproxy-cookbook.pdf
R640, 100G NIC bond, hugepages enabled, 32 CPUs, 192 GB RAM: 2,000,000 concurrent HTTP requests/s.
32 CPUs performed better than 40 CPUs.
Question: how does an R620 (1G NIC, 32 CPUs, 256 GB RAM) perform, and how much does merely increasing the CPU count improve things?
Judging purely from the NIC comparison, the theoretical capacity is only around maxconn 10000.
Test scenario: connect to MySQL, hold each connection for 120 s, then disconnect.
Bind HAProxy to 1, 2, 4, 8, and 16 CPUs.
Record network I/O, CPU usage, memory footprint, and the number of stable connections HAProxy sustains.
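The hold-and-release scenario above can be sketched roughly as follows. This is my own illustration, not the script used in the tests: it opens N TCP connections, holds them, then disconnects. A local dummy listener stands in for the HAProxy VIP so the sketch is self-contained; in a real run, point TARGET at the VIP (e.g. 10.164.0.20:3306) and raise N_CONNS and HOLD_SECONDS toward the tested values.

```python
import socket
import threading
import time

N_CONNS = 50        # scale toward the tested concurrency in a real run
HOLD_SECONDS = 1    # the test held connections for 120 s

def dummy_server(port, ready):
    # Stand-in for the proxy: accept and hold the server side of each connection.
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(128)
    ready.set()
    held = []
    for _ in range(N_CONNS):
        conn, _ = srv.accept()
        held.append(conn)
    time.sleep(HOLD_SECONDS + 1)

ready = threading.Event()
threading.Thread(target=dummy_server, args=(3307, ready), daemon=True).start()
ready.wait()

TARGET = ("127.0.0.1", 3307)    # replace with the VIP in an actual test
conns = [socket.create_connection(TARGET) for _ in range(N_CONNS)]
print(f"established {len(conns)} connections")
time.sleep(HOLD_SECONDS)        # hold phase
for c in conns:
    c.close()                   # disconnect phase
print("released all connections")
```

While the connections are held, network I/O, CPU, and memory can be sampled on the HAProxy host.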
Configuration changes
frontend Mariadbfrontend
mode tcp
maxconn 40000
timeout client 3600s
timeout server 3600s
option tcplog
option tcpka
option mysql-check user haproxy post-41
bind 10.164.0.20:3306 process 1
bind 10.164.0.20:3306 process 2
bind 10.164.0.20:3306 process 3
bind 10.164.0.20:3306 process 4
default_backend Mariadbbackend
backend Mariadbbackend
mode tcp
maxconn 40000
timeout client 3600s
timeout server 3600s
option tcplog
option tcpka
server ccontroller01 10.164.0.21:3306 check inter 2000 rise 2 fall 5 maxconn 40000
server ccontroller02 10.164.0.22:3306 check inter 2000 rise 2 fall 5 maxconn 40000
server ccontroller03 10.164.0.23:3306 check inter 2000 rise 2 fall 5 maxconn 40000
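One thing worth checking about the four `bind ... process N` lines above: with HAProxy's default `nbproc 1`, binds assigned to processes 2-4 have no process to run on. For the binds to actually spread across cores in 1.8, the global section would need a matching `nbproc`, optionally pinned with `cpu-map`. A sketch of what that global section might look like (my assumption, not the tested configuration):

```
global
    nbproc 4
    cpu-map 1 0
    cpu-map 2 1
    cpu-map 3 2
    cpu-map 4 3
```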
Merely adding the maxconn 40000 parameter raised concurrent connections from 2041 to 5781.
Increasing the number of bound CPUs appears to make no difference: even once the concurrent-connection count stops growing, CPU usage stays below 10%.
Whether the three backend controllers run multi-active or master-slave also has no real effect on maxconn.
The HAProxy stats page shows:
frontend limit 40000
server limit ccontroller01 40000
server limit ccontroller02 40000
server limit ccontroller03 40000
backend limit 4000 (this value looks suspicious, yet the million-connection writeups online never configure it)
HAProxy version in use:
/usr/sbin/haproxy --version
HA-Proxy version 1.8.8-1ubuntu0.9 2019/12/02
Moreover, in 1.8 maxconn cannot be configured in a backend section:
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html
Version 2.3 does not support it either (screenshots omitted).
The backend limit of 4000 is probably just 1/10 of the peak concurrency (40000) and does not actually cap concurrent connections.
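This matches HAProxy's `fullconn` behavior: when unset, `fullconn` defaults to 10% of the maxconn of the frontends that can route to the backend, i.e. 40000 / 10 = 4000 here. It is a threshold used for dynamic `minconn` scaling, not a hard limit. If the displayed value matters, it can be pinned explicitly; a sketch:

```
backend Mariadbbackend
    fullconn 40000
```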
Second test:
ulimit -n 65535
Using a script, attempt to open 65535 connections against the VIP on port 3306.
CPU: 1
NIC: 1G
concmax: 25K
(screenshots omitted)
Effect of changing the CPU count on performance:
CPU: 2
NIC: 1G
Because load across the 2 CPUs is unbalanced, concmax dropped from 25K to 15K.
How HAProxy load-balances across multiple CPUs: https://medium.com/@dubiety/multiprocess-haproxy-%E8%AA%BF%E6%95%99%E7%B6%93%E9%A9%97-253804148ea2
Follow-up tests:
The change is needed both inside the Docker container and on the host:
sysctl -w net.core.somaxconn=32768
Alternatively, roll the test script out with Ansible:
ansible -i /root/ha 'compute' -m copy -a "src=kolla-ansible/tools/corp/test.py dest=/root/test.py"
ansible compute -i /root/ha -m shell -a "ulimit -n 65535; ulimit -n; python /root/test.py"
To make the setting persistent:
echo "net.core.somaxconn=32768" >> /etc/sysctl.conf
sysctl -p
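Since a container can see a different value from the host, it may help to verify the effective setting programmatically after applying the sysctl change. A minimal check (my sketch, not from the notes):

```python
# Read the effective net.core.somaxconn as seen by this process
# (inside a container this reflects the container's namespace).
def somaxconn():
    with open("/proc/sys/net/core/somaxconn") as f:
        return int(f.read())

print("net.core.somaxconn =", somaxconn())
```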
https://blog.csdn.net/mawming/article/details/51952411#commentBox
In the current environment a single core saturates at 28K connections, versus 25K in the earlier test. The difference is small, and even that gain may be related to raising innodb_buffer_pool_size from 410M to 20480M.
Summary:
48 CPUs, 256 GB RAM
MariaDB (Galera multi-master)
Maintained in Docker
3 nodes fronted by a VIP
Main HAProxy configuration:
vim /etc/kolla/haproxy/haproxy.cfg
global
chroot /var/lib/haproxy
user haproxy
group haproxy
daemon
log 10.164.0.23:5140 local1
maxconn 40960
nbproc 1
listen mariadb
mode tcp
timeout client 3600s
timeout server 3600s
maxconn 40960
option tcplog
option tcpka
option mysql-check user haproxy post-41
bind 10.164.0.20:3306
server ccontroller01 10.164.0.21:3306 check inter 2000 rise 2 fall 5 maxconn 40960
server ccontroller02 10.164.0.22:3306 check inter 2000 rise 2 fall 5 maxconn 40960 backup
server ccontroller03 10.164.0.23:3306 check inter 2000 rise 2 fall 5 maxconn 40960 backup
vim /etc/kolla/mariadb/galera.cnf
wsrep_slave_threads = 4
wsrep_notify_cmd = /usr/local/bin/wsrep-notify.sh
wsrep_on = ON
max_connections = 40960
key_buffer_size = 64M
max_heap_table_size = 64M
tmp_table_size = 64M
innodb_buffer_pool_size = 65536M  # 64G
innodb_log_file_size = 8G  # an earlier 4096M entry is superseded; the last value wins
query_cache_size = 0
query_cache_type = 0
query_cache_limit = 2M
join_buffer_size = 512K
thread_cache_size = 4
performance_schema = ON
innodb_buffer_pool_instances = 64
innodb_log_buffer_size = 16M
innodb_flush_log_at_trx_commit = 0
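A quick check of the buffer-pool split implied by the values above (my own arithmetic, not from the notes): 65536 MB across 64 instances gives 1 GB per instance, in line with the common guideline that each buffer-pool instance should be at least 1 GB.

```python
# Arithmetic implied by the galera.cnf values above.
pool_mb = 65536                      # innodb_buffer_pool_size = 65536M (64G)
instances = 64                       # innodb_buffer_pool_instances = 64
per_instance_mb = pool_mb // instances
print(per_instance_mb)               # MB per buffer-pool instance
```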
Kernel parameters:
Modifying them only inside the Docker container also works:
echo "net.core.somaxconn=32768" >> /etc/sysctl.conf
sysctl -p
Data: (charts omitted)
References:
- HAProxy install and ab test: https://fabianlee.org/2017/10/16/haproxy-zero-downtime-reloads-with-haproxy-1-8-on-ubuntu-16-04-with-systemd/
- HAProxy config: https://www.cnblogs.com/yinzhengjie/p/12121468.html