MySQL Master-Slave MHA

Author: 挑战_bae7 | Published 2020-11-27 10:02

How It Works

Failover workflow when the master goes down
1. Monitoring (node information comes from the configuration file)
   System, network, and SSH connectivity
   Master/slave status, with the master as the focus

2. Master election
(1) If the slaves' data differs (by position or GTID), the slave closest to the master becomes the candidate master.
(2) If the slaves' data is identical, the candidate is chosen in configuration-file order.
(3) If a weight is set (candidate_master=1), that node is forced to be the candidate master.
    1. By default, if a slave is more than 100MB of relay logs behind the master, the weight is ignored.
    2. With check_repl_delay=0, the node is forced to be the candidate even if it lags far behind.
3. Data compensation
(1) If SSH is reachable, each slave's GTID or position is compared with the master's, and the missing binary log events are immediately saved to each slave and applied (save_binary_logs).
(2) If SSH is unreachable, the slaves' relay logs are compared against each other for differences (apply_diff_relay_logs).
4. Failover
The candidate master is promoted and begins serving traffic.
The remaining slaves are repointed to the new master (change master to).
5. Application transparency (VIP)
6. Failover notification (send_report)
7. Secondary data compensation (binlog_server)
8. Self-healing (not yet implemented...)
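The election rules above can be sketched as a toy shell function -- this is NOT MHA's actual code, just an illustration. Input lines are "host lag_bytes candidate_flag" in configuration-file order; a candidate_master node wins outright (modeling check_repl_delay=0), otherwise the first node within the ~100MB relay-log threshold is picked:

```shell
# Toy sketch of MHA's master election, for illustration only.
pick_new_master() {
  local limit=104857600 best=""   # 100MB default relay-log lag threshold
  while read -r host lag cand; do
    # a weighted node (candidate_master=1) is chosen immediately,
    # as it would be with check_repl_delay=0
    if [ "$cand" = "1" ]; then echo "$host"; return; fi
    # otherwise remember the first node under the lag threshold
    if [ -z "$best" ] && [ "$lag" -lt "$limit" ]; then best="$host"; fi
  done
  echo "$best"
}
```

For example, `printf 'db02 0 0\ndb03 0 1\n' | pick_new_master` picks db03 because of its candidate flag, even though db02 comes first in the file.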

Environment preparation:

1. Symlinks (the MHA scripts use absolute paths)
ln -s /usr/local/mysql/bin/mysqlbinlog    /usr/bin/mysqlbinlog
ln -s /usr/local/mysql/bin/mysql  /usr/bin/mysql
2. Set up SSH mutual trust
db01:
rm -rf /root/.ssh 
ssh-keygen
cd /root/.ssh 
mv id_rsa.pub authorized_keys
scp  -r  /root/.ssh  192.168.122.104:/root 
scp  -r  /root/.ssh  192.168.122.105:/root 
Verify from every node
db01:
ssh 192.168.122.103 date
ssh 192.168.122.104 date
ssh 192.168.122.105 date
db02:
ssh 192.168.122.103 date
ssh 192.168.122.104 date
ssh 192.168.122.105 date
db03:
ssh 192.168.122.103 date
ssh 192.168.122.104 date
ssh 192.168.122.105 date
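The nine checks above all follow one pattern, so a small helper can generate them (print the commands and eyeball them, or pipe the output into sh to run them; BatchMode makes a broken trust fail fast instead of prompting for a password):

```shell
# Generate one non-interactive SSH reachability check per host.
gen_ssh_checks() {
  for host in "$@"; do
    echo "ssh -o BatchMode=yes -o ConnectTimeout=5 root@$host date"
  done
}
gen_ssh_checks 192.168.122.103 192.168.122.104 192.168.122.105
```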
3. Downloads
MHA official site: https://code.google.com/archive/p/mysql-master-ha/
GitHub download page: https://github.com/yoshinorim/mha4mysql-manager/wiki/Downloads
http://www.mysql.gr.jp/frame/modules/bwiki/index.php?plugin=attach&pcmd=open&file=mha4mysql-manager-0.56-0.el6.noarch.rpm&refer=matsunobu
http://www.mysql.gr.jp/frame/modules/bwiki/index.php?plugin=attach&pcmd=open&file=mha4mysql-node-0.56-0.el6.noarch.rpm&refer=matsunobu
Node package: install on all three nodes
yum install perl-DBD-MySQL -y
yum install -y  mha4mysql-node-0.56-0.el6.noarch.rpm
Manager package: install on one node; here we use db03
yum install -y perl-Config-Tiny epel-release perl-Log-Dispatch perl-Parallel-ForkManager perl-Time-HiRes
yum install -y mha4mysql-manager-0.56-0.el6.noarch.rpm
4. On the master, create the MHA user
grant all privileges on *.* to mha@'192.168.122.%' identified by 'mha';
Confirm the user exists on all three nodes:
select user,host from mysql.user;
5. MHA configuration file
Manager-node configuration (db03)
Create the configuration directory
 mkdir -p /etc/mha
Create the log directory
 mkdir -p /var/log/mha/app1
Write the MHA configuration file
cat > /etc/mha/app1.cnf <<EOF
[server default]
manager_log=/var/log/mha/app1/manager        
manager_workdir=/var/log/mha/app1            
master_binlog_dir=/data/binlog       ## must match the master's binlog directory
user=mha                                   
password=mha                               
ping_interval=2    ## ping the master every 2 seconds; failover after 3 failed pings
repl_password=123
repl_user=repl
ssh_user=root      ## user for SSH mutual trust
[server1]                                   
hostname=192.168.122.103
port=3306                                  
[server2]            
hostname=192.168.122.104
port=3306
[server3]
hostname=192.168.122.105
port=3306
EOF
6. MHA checks
-rw-r--r--. 1 root root 545 Nov 27 09:41 /etc/mha/app1.cnf
[root@db03 ~]# masterha_check_ssh --conf=/etc/mha/app1.cnf
Fri Nov 27 09:45:35 2020 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri Nov 27 09:45:35 2020 - [info] Reading application default configuration from /etc/mha/app1.cnf..
Fri Nov 27 09:45:35 2020 - [info] Reading server configuration from /etc/mha/app1.cnf..
Fri Nov 27 09:45:35 2020 - [info] Starting SSH connection tests..
Fri Nov 27 09:45:37 2020 - [debug] 
Fri Nov 27 09:45:35 2020 - [debug]  Connecting via SSH from root@192.168.122.103(192.168.122.103:22) to root@192.168.122.104(192.168.122.104:22)..
Fri Nov 27 09:45:36 2020 - [debug]   ok.
Fri Nov 27 09:45:36 2020 - [debug]  Connecting via SSH from root@192.168.122.103(192.168.122.103:22) to root@192.168.122.105(192.168.122.105:22)..
Fri Nov 27 09:45:36 2020 - [debug]   ok.
Fri Nov 27 09:45:38 2020 - [debug] 
Fri Nov 27 09:45:36 2020 - [debug]  Connecting via SSH from root@192.168.122.104(192.168.122.104:22) to root@192.168.122.103(192.168.122.103:22)..
Fri Nov 27 09:45:36 2020 - [debug]   ok.
Fri Nov 27 09:45:36 2020 - [debug]  Connecting via SSH from root@192.168.122.104(192.168.122.104:22) to root@192.168.122.105(192.168.122.105:22)..
Fri Nov 27 09:45:37 2020 - [debug]   ok.
Fri Nov 27 09:45:38 2020 - [debug] 
Fri Nov 27 09:45:36 2020 - [debug]  Connecting via SSH from root@192.168.122.105(192.168.122.105:22) to root@192.168.122.103(192.168.122.103:22)..
Fri Nov 27 09:45:37 2020 - [debug]   ok.
Fri Nov 27 09:45:37 2020 - [debug]  Connecting via SSH from root@192.168.122.105(192.168.122.105:22) to root@192.168.122.104(192.168.122.104:22)..
Fri Nov 27 09:45:37 2020 - [debug]   ok.
Fri Nov 27 09:45:38 2020 - [info] All SSH connection tests passed successfully.

[root@db03 ~]# masterha_check_repl --conf=/etc/mha/app1.cnf
Fri Nov 27 09:46:36 2020 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri Nov 27 09:46:36 2020 - [info] Reading application default configuration from /etc/mha/app1.cnf..
Fri Nov 27 09:46:36 2020 - [info] Reading server configuration from /etc/mha/app1.cnf..
Fri Nov 27 09:46:36 2020 - [info] MHA::MasterMonitor version 0.56.
Fri Nov 27 09:46:37 2020 - [info] GTID failover mode = 1
Fri Nov 27 09:46:37 2020 - [info] Dead Servers:
Fri Nov 27 09:46:37 2020 - [info] Alive Servers:
Fri Nov 27 09:46:37 2020 - [info]   192.168.122.103(192.168.122.103:3306)
Fri Nov 27 09:46:37 2020 - [info]   192.168.122.104(192.168.122.104:3306)
Fri Nov 27 09:46:37 2020 - [info]   192.168.122.105(192.168.122.105:3306)
Fri Nov 27 09:46:37 2020 - [info] Alive Slaves:
Fri Nov 27 09:46:37 2020 - [info]   192.168.122.104(192.168.122.104:3306)  Version=5.7.32-log (oldest major version between slaves) log-bin:enabled
Fri Nov 27 09:46:37 2020 - [info]     GTID ON
Fri Nov 27 09:46:37 2020 - [info]     Replicating from 192.168.122.103(192.168.122.103:3306)
Fri Nov 27 09:46:37 2020 - [info]   192.168.122.105(192.168.122.105:3306)  Version=5.7.32-log (oldest major version between slaves) log-bin:enabled
Fri Nov 27 09:46:37 2020 - [info]     GTID ON
Fri Nov 27 09:46:37 2020 - [info]     Replicating from 192.168.122.103(192.168.122.103:3306)
Fri Nov 27 09:46:37 2020 - [info] Current Alive Master: 192.168.122.103(192.168.122.103:3306)
Fri Nov 27 09:46:37 2020 - [info] Checking slave configurations..
Fri Nov 27 09:46:37 2020 - [info]  read_only=1 is not set on slave 192.168.122.104(192.168.122.104:3306).
Fri Nov 27 09:46:37 2020 - [info]  read_only=1 is not set on slave 192.168.122.105(192.168.122.105:3306).
Fri Nov 27 09:46:37 2020 - [info] Checking replication filtering settings..
Fri Nov 27 09:46:37 2020 - [info]  binlog_do_db= , binlog_ignore_db= 
Fri Nov 27 09:46:37 2020 - [info]  Replication filtering check ok.
Fri Nov 27 09:46:37 2020 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Fri Nov 27 09:46:37 2020 - [info] Checking SSH publickey authentication settings on the current master..
Fri Nov 27 09:46:37 2020 - [info] HealthCheck: SSH to 192.168.122.103 is reachable.
Fri Nov 27 09:46:37 2020 - [info] 
192.168.122.103(192.168.122.103:3306) (current master)
 +--192.168.122.104(192.168.122.104:3306)
 +--192.168.122.105(192.168.122.105:3306)

Fri Nov 27 09:46:37 2020 - [info] Checking replication health on 192.168.122.104..
Fri Nov 27 09:46:37 2020 - [info]  ok.
Fri Nov 27 09:46:37 2020 - [info] Checking replication health on 192.168.122.105..
Fri Nov 27 09:46:37 2020 - [info]  ok.
Fri Nov 27 09:46:37 2020 - [warning] master_ip_failover_script is not defined.
Fri Nov 27 09:46:37 2020 - [warning] shutdown_script is not defined.
Fri Nov 27 09:46:37 2020 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

7. Start the manager
nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover  < /dev/null> /var/log/mha/app1/manager.log 2>&1 &
Check the status
masterha_check_status --conf=/etc/mha/app1.cnf  
mysql -umha -pmha -h 192.168.122.103 -e "show variables like 'server_id'" ## sanity check that the mha user can log in remotely
Tooling reference
The Manager package mainly includes the following tools:
masterha_manager            start MHA
masterha_check_ssh          check the MHA SSH configuration
masterha_check_repl         check MySQL replication health
masterha_master_monitor     detect whether the master is down
masterha_check_status       check the current MHA running status
masterha_master_switch      control failover (automatic or manual)
masterha_conf_host          add or remove configured server entries
masterha_stop               stop MHA

The Node package mainly includes the following tools:
(these are normally triggered by MHA Manager scripts and need no manual use)
save_binary_logs            save and copy the master's binary logs
apply_diff_relay_logs       identify differing relay log events and apply the difference to the other slaves
purge_relay_logs            purge relay logs (without blocking the SQL thread)
Extra parameters
Notes:
Who takes over when the master dies?
1. If all slaves have identical logs, a new master is chosen in configuration-file order by default.
2. If the slaves' logs differ, the slave closest to the master is chosen automatically.
3. If a node is given a weight (candidate_master=1), it is preferred.
However, if that node is more than 100MB of logs behind the master, it still will not be chosen. Combine it with check_repl_delay=0 to disable the lag check and force the candidate node.

(1) ping_interval=1
# Interval in seconds between pings sent to monitor the master; failover is triggered after three missed responses.
(2) candidate_master=1
# Marks the node as a candidate master: on failover this slave is promoted, even if it is not the slave with the newest events in the cluster.
(3) check_repl_delay=0
# By default, if a slave is more than 100MB of relay logs behind the master, MHA will not choose it as the new master, because recovering it would take a long time. With check_repl_delay=0, MHA ignores replication delay when choosing the new master. This is very useful together with candidate_master=1, because it guarantees the candidate becomes the new master during the switch.
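As a concrete example (a sketch; pick whichever node suits your topology), the two parameters go into that server's section of /etc/mha/app1.cnf, here forcing 192.168.122.104 to always be the candidate:

```ini
[server2]
hostname=192.168.122.104
port=3306
candidate_master=1
check_repl_delay=0
```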
Simulating failure and recovery
pkill mysqld  ## stop MySQL on the master
MHA switches to a new master automatically, but only once: the manager process exits after a failover.
Stopping MySQL on a slave has no effect and the manager keeps running, but if the master then also fails, no switch is possible.
Recovery: rebuild replication on the recovered node, then run:
CHANGE MASTER TO 
MASTER_HOST='192.168.122.104',
MASTER_PORT=3306, 
MASTER_AUTO_POSITION=1, 
MASTER_USER='repl', 
MASTER_PASSWORD='123';
start slave;
tailf  /var/log/mha/app1/manager  ## the log contains the ready-made statement; just copy and paste it
CHANGE MASTER TO MASTER_HOST='192.168.122.105', MASTER_PORT=3306, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';  ## the replication password is shown as xxx in the log
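Instead of scrolling the log by hand, the ready-made statement can be fished out with grep; a small hypothetical helper (the default path matches manager_log in app1.cnf above; remember the password is masked as xxx and must be filled in):

```shell
# Print the most recent CHANGE MASTER statement from the MHA manager log.
last_change_master() {
  grep 'CHANGE MASTER TO' "${1:-/var/log/mha/app1/manager}" | tail -1
}
```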
MHA VIP failover
[root@db03 ~]# cat >/usr/local/bin/master_ip_failover  <<EOF
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';

use Getopt::Long;

my (
$command, $ssh_user, $orig_master_host, $orig_master_ip,
$orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
#############################添加内容部分#########################################
my $vip = '192.168.122.122';
my $brdc = '192.168.122.255';
my $ifdev = 'eth0';
my $key = '1';
my $ssh_start_vip = "/usr/sbin/ip addr add $vip/24 brd $brdc dev $ifdev label $ifdev:$key;/usr/sbin/arping -q -A -c 1 -I $ifdev $vip;iptables -F;";
my $ssh_stop_vip = "/usr/sbin/ip addr del $vip/24 dev $ifdev label $ifdev:$key";
##################################################################################
GetOptions(
'command=s' => \$command,
'ssh_user=s' => \$ssh_user,
'orig_master_host=s' => \$orig_master_host,
'orig_master_ip=s' => \$orig_master_ip,
'orig_master_port=i' => \$orig_master_port,
'new_master_host=s' => \$new_master_host,
'new_master_ip=s' => \$new_master_ip,
'new_master_port=i' => \$new_master_port,
);

exit &main();

sub main {

print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";

if ( $command eq "stop" || $command eq "stopssh" ) {

my $exit_code = 1;
eval {
print "Disabling the VIP on old master: $orig_master_host \n";
&stop_vip();
$exit_code = 0;
};
if ($@) {
warn "Got Error: $@\n";
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "start" ) {

my $exit_code = 10;
eval {
print "Enabling the VIP - $vip on the new master - $new_master_host \n";
&start_vip();
$exit_code = 0;
};
if ($@) {
warn $@;
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "status" ) {
print "Checking the Status of the script.. OK \n";
exit 0;
}
else {
&usage();
exit 1;
}
}
sub start_vip() {
`ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disable the VIP on the old_master
sub stop_vip() {
`ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
EOF
vi /etc/mha/app1.cnf
Add:
master_ip_failover_script=/usr/local/bin/master_ip_failover
Note: convert line endings and make the script executable:
[root@db03 ~]# dos2unix /usr/local/bin/master_ip_failover 
dos2unix: converting file /usr/local/bin/master_ip_failover to Unix format ...
[root@db03 ~]# chmod +x /usr/local/bin/master_ip_failover 
Restart the manager
masterha_stop --conf=/etc/mha/app1.cnf
nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover  < /dev/null> /var/log/mha/app1/manager.log 2>&1 &
Create the VIP manually on the master the first time (either command works):
ip addr add 192.168.122.122/24 dev eth0 label eth0:1
ifconfig eth0:1 192.168.122.122/24
ifconfig eth0:1 down  ## remove the VIP
Alerting
vim /etc/mha/app1.cnf
report_script=/usr/local/bin/wx.sh  ## e.g. a WeChat alerting script
masterha_stop --conf=/etc/mha/app1.cnf
nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover  < /dev/null> /var/log/mha/app1/manager.log 2>&1 &
MHA binlog server (db03)
MySQL must be installed, preferably the same version as the master.
Binlog server configuration:
Use a separate machine running MySQL 5.6 or later with GTID enabled; here we simply reuse the second slave (db03).
vim /etc/mha/app1.cnf 
[binlog1]
no_master=1
hostname=192.168.122.105
master_binlog_dir=/data/mysql/binlog

mkdir -p /data/mysql/binlog
chown -R mysql.mysql /data/*
After the change, pull the master's binlogs over (once started, subsequent binlogs arrive automatically in order)

db03 [(none)]>show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.122.103   
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000006  <-- file to start pulling from
          Read_Master_Log_Pos: 234               <-- checkpoint

cd /data/mysql/binlog     ## must cd into the directory created above
mysqlbinlog  -R --host=192.168.122.103 --user=mha --password=mha --raw  --stop-never mysql-bin.000006 &
Note:
The pull must start from the binary log file the slave has currently reached (Master_Log_File above).
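To avoid hand-copying that starting file name, a hedged little helper can read it from SHOW SLAVE STATUS output (pipe `mysql -e 'show slave status\G'` into it; it prints only the Master_Log_File value, skipping Relay_Master_Log_File):

```shell
# Extract the Master_Log_File value from "show slave status\G" output.
current_master_log() {
  awk -F': ' '$0 ~ /^ *Master_Log_File:/ {print $2; exit}'
}
```

Example: `mysql -umha -pmha -h192.168.122.105 -e 'show slave status\G' | current_master_log`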
masterha_stop --conf=/etc/mha/app1.cnf
nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &

pkill mysql  ## kill the master again
The binlog pull stops:
[root@db03 binlog]# 
[1]-  Done                  mysqlbinlog -R --host=192.168.122.103 --user=mha --password=mha --raw --stop-never mysql-bin.000006
The WeChat alert is sent
MHA stops and the master/slave switch happens:
[root@db03 binlog]# 
[2]+  Done                  nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1
The VIP floats to the new master
Switch complete
When the master dies, the binlog server stops automatically and so does the manager.
Recovery steps:
1. Pull the new master's binlogs into the binlog server again
2. Update the binlog server section of the configuration file
3. Finally, restart MHA
Responsibilities:
1. Build: MHA + VIP + SendReport + BinlogServer
2. Monitoring and failure handling
3. High-availability architecture tuning
   The key: minimize master/slave replication lag so MHA spends as little time as possible on data compensation.
   On MySQL 5.7, enable GTID mode and parallel replication on the slaves.
