1. Introduction to MHA
MHA (Master High Availability) is currently a relatively mature solution for MySQL high availability. It was developed by youshimaton of the Japanese company DeNA (now at Facebook) and is an excellent piece of software for failover and slave promotion in MySQL high-availability environments. During a MySQL failover, MHA can complete the database switchover automatically within 0 to 30 seconds, and while doing so it preserves data consistency as far as possible, achieving high availability in the true sense.
2. MHA architecture
The software consists of two parts: MHA Manager (the management node) and MHA Node (the data node). MHA Manager can be deployed on a dedicated machine to manage several master-slave clusters, or on one of the slave nodes. MHA Node runs on every MySQL server. MHA Manager periodically probes the master of the cluster; when the master fails, it automatically promotes the slave holding the most recent data to be the new master and repoints all other slaves at it. The whole failover process is completely transparent to applications.
During an automatic failover, MHA tries to save the binary logs from the crashed master so as to lose as little data as possible (this works even better combined with MySQL semi-synchronous replication), but that is not always feasible. For example, if the master's hardware has failed or it is unreachable over SSH, MHA cannot save the binary logs and performs the failover anyway, losing the most recent data. Semi-synchronous replication, available since MySQL 5.5, greatly reduces this risk, and MHA can be combined with it: as long as at least one slave has received the latest binary log events, MHA can apply them to all the other slaves, keeping every node consistent.
Note: MHA currently supports mainly one-master-multiple-slaves topologies. Building an MHA cluster requires at least three database servers in one replication group: one master, one standby master, and one slave. Because of this three-server minimum, Taobao adapted MHA to reduce machine cost; their TMHA fork already supports a one-master-one-slave setup.
3. MHA failover process
(1) Save the binary log events (binlog events) from the crashed master;
(2) Identify the slave with the most recent updates;
(3) Apply the differential relay logs to the other slaves;
(4) Apply the binlog events saved from the master;
(5) Promote one slave to be the new master;
(6) Point the other slaves at the new master and resume replication;
(7) Bring up the VIP on the new master so that client requests reach it.
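The promotion logic of steps (2) and (5) can be sketched as follows. This is an illustrative simulation only, not MHA's actual implementation; the data structure and function names are our own:

```python
# Illustrative sketch of new-master selection (steps 2 and 5 above).
# Not MHA's real code; slaves are ranked by how far they have read
# the master's binlog: (binlog file sequence number, position).

def binlog_coord(slave):
    """Sort key: (binlog file sequence, read position)."""
    seq = int(slave["master_log_file"].rsplit(".", 1)[1])  # mysql-logbin.000014 -> 14
    return (seq, slave["read_pos"])

def pick_new_master(slaves):
    # Step (2): the slave that has read the most binlog events wins.
    return max(slaves, key=binlog_coord)

slaves = [
    {"host": "192.168.1.127", "master_log_file": "mysql-logbin.000014", "read_pos": 120},
    {"host": "192.168.1.210", "master_log_file": "mysql-logbin.000014", "read_pos": 98},
]
print(pick_new_master(slaves)["host"])  # 192.168.1.127
```

In the real failover, MHA additionally copies the missing binlog events to the slaves that are behind (steps 3 and 4) before the promotion.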
Environment setup
node1 192.168.1.196 CentOS 7.5.18
node2 192.168.1.127 CentOS 7.5.18
node3 192.168.1.210 CentOS 7.5.18
node4 192.168.1.46 CentOS 7.5.18
1. Preparation
1) Set up passwordless SSH login between all nodes
2) Synchronize time on all nodes
3) Disable the firewall and SELinux
2. Install the MySQL database
1) Install the database (node1, node2, node3)
wget https://repo.mysql.com//mysql80-community-release-el7-1.noarch.rpm
rpm -ivh mysql80-community-release-el7-1.noarch.rpm
# Disable the 8.0 repo and enable the 5.6 repo in /etc/yum.repos.d/mysql-community.repo:
# change enabled=1 to enabled=0 under [mysql80-community], and enabled=0 to enabled=1 under [mysql56-community]
yum install mysql-community-client mysql-community-server
2) Modify the configuration
Master (node1):
server-id=1
log-bin=mysql-logbin
Slaves (node2, node3):
log-bin=mysql-logbin
relay-log = relay-bin
server-id = 2
slave-skip-errors = all
relay_log_purge = 0
3) Start and initialize the database
systemctl start mysqld; systemctl enable mysqld
mysql_secure_installation
3. MySQL master-slave replication
1) Create the replication account on the master (node1)
grant replication client,replication slave on *.* to 'repluser'@'%' identified by 'replpass';
FLUSH PRIVILEGES;
2) Configure the slaves (node2, node3)
mysql> CHANGE MASTER TO
    MASTER_HOST='192.168.1.196',
    MASTER_USER='repluser',
    MASTER_PASSWORD='replpass',
    MASTER_LOG_FILE='mysql-logbin.000014',
    MASTER_LOG_POS=120;
mysql> START SLAVE;
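The MASTER_LOG_FILE and MASTER_LOG_POS values come from the File and Position columns of `SHOW MASTER STATUS` run on the master. As a sketch, a small helper (our own illustrative code, not part of MHA) can assemble the statement from those values:

```python
# Illustrative helper: build a CHANGE MASTER TO statement from the
# File/Position columns of SHOW MASTER STATUS. Names are our own.

def change_master_sql(host, user, password, log_file, log_pos):
    return (
        "CHANGE MASTER TO "
        f"MASTER_HOST='{host}',"
        f"MASTER_USER='{user}',"
        f"MASTER_PASSWORD='{password}',"
        f"MASTER_LOG_FILE='{log_file}',"
        f"MASTER_LOG_POS={log_pos};"
    )

sql = change_master_sql("192.168.1.196", "repluser", "replpass",
                        "mysql-logbin.000014", 120)
print(sql)
```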
4. MHA configuration
4.1 Installation
Software download: https://downloads.mariadb.com/MHA/
1) Install mha4mysql-node on all nodes (node1, node2, node3, node4)
yum install -y perl-DBD-MySQL
rpm -ivh mha4mysql-node-0.54-0.el6.noarch.rpm
2) Install mha4mysql-manager on the management node (node4)
yum -y install mha4mysql-manager-0.55-0.el6.noarch.rpm
4.2 Database account setup
1) Create the MHA management account (node1, node2, node3)
mysql> grant all privileges on *.* TO mha@'%' IDENTIFIED BY 'test';
mysql> flush privileges;
2) Create the replication user on the slaves, identical to the one on the master
mysql> grant replication client,replication slave on *.* to 'repluser'@'%' identified by 'replpass';
mysql> FLUSH PRIVILEGES;
4.3 Configuration file
[root@Centos7-4 ~]# mkdir /etc/mha
[root@Centos7-4 ~]# mkdir /var/log/mha/app1/ -p
[root@Centos7-4 ~]# vim /etc/mha/app1.conf
[server default]
# Manager log file
manager_log=/var/log/mha/app1/manager.log
# Manager working directory
manager_workdir=/var/log/mha/app1
# Script run to switch the VIP during automatic failover
master_ip_failover_script=/data/perl/master_ip_failover
# Script run during manual/online switchover (the same script is reused here)
master_ip_online_change_script=/data/perl/master_ip_failover
# MySQL credentials of the manager account
user=mha
password=test
# Interval in seconds between pings to the master (default 3); after three
# unanswered attempts MHA starts an automatic failover
ping_interval=1
# Directory on the remote MySQL hosts where binlogs are saved during a switch
remote_workdir=/var/log/mha/app1
# Replication user credentials
repl_password=replpass
repl_user=repluser
# Script that sends an alert after a switchover
report_script=/data/perl/send_report
# SSH login user
ssh_user=root
[server01]
hostname=192.168.1.196
port=3306
master_binlog_dir=/var/lib/mysql
candidate_master=1 # Candidate master: on failover this slave is promoted to master even if it is not the most up-to-date slave in the cluster
check_repl_delay=0 # By default MHA will not choose a slave that lags the master by more than 100MB of relay logs as the new master, because recovering it would take too long; check_repl_delay=0 makes MHA ignore replication delay when choosing a new master. This is useful together with candidate_master=1, since that host must become the new master during the switch
[server02]
candidate_master=1
check_repl_delay=0
hostname=192.168.1.127
master_binlog_dir=/var/lib/mysql
port=3306
[server03]
hostname=192.168.1.210
ignore_fail=1
master_binlog_dir=/var/lib/mysql
no_master=1
port=3306
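How the per-server flags above influence promotion can be sketched as follows. This is a simplified illustration of the behavior the comments describe, not MHA's actual algorithm, and the threshold and field names are our own:

```python
# Illustrative sketch of how candidate_master / no_master / check_repl_delay
# affect new-master selection. A simplification, not MHA's real code.

RELAY_DELAY_LIMIT = 100 * 1024 * 1024  # ~100MB of relay logs behind the master

def eligible(slave):
    if slave.get("no_master"):                 # no_master=1: never promote
        return False
    # check_repl_delay=0 disables the "too far behind" exclusion
    if slave.get("check_repl_delay", 1) and slave["relay_delay_bytes"] > RELAY_DELAY_LIMIT:
        return False
    return True

def choose(slaves):
    ok = [s for s in slaves if eligible(s)]
    # candidate_master=1 hosts are preferred over ordinary slaves
    ok.sort(key=lambda s: s.get("candidate_master", 0), reverse=True)
    return ok[0]["host"] if ok else None

slaves = [
    {"host": "192.168.1.127", "candidate_master": 1, "check_repl_delay": 0,
     "relay_delay_bytes": 0},
    {"host": "192.168.1.210", "no_master": 1, "relay_delay_bytes": 0},
]
print(choose(slaves))  # 192.168.1.127
```

With this configuration only server02 (192.168.1.127) can be promoted, which matches the "Primary candidate" / "Not candidate" lines in the masterha_check_repl output later in this article.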
Failover VIP switchover script:
[root@Centos7-4 ~]# cat /data/perl/master_ip_failover
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;
my (
$command, $ssh_user, $orig_master_host, $orig_master_ip,
$orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
my $vip = '192.168.1.248/24'; # virtual IP address
my $key = "1";
# the interface name must match the master host's NIC (ens160 on these CentOS 7 nodes)
my $ssh_start_vip = "/sbin/ifconfig ens160:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig ens160:$key down";
$ssh_user = "root";
GetOptions(
'command=s' => \$command,
'ssh_user=s' => \$ssh_user,
'orig_master_host=s' => \$orig_master_host,
'orig_master_ip=s' => \$orig_master_ip,
'orig_master_port=i' => \$orig_master_port,
'new_master_host=s' => \$new_master_host,
'new_master_ip=s' => \$new_master_ip,
'new_master_port=i' => \$new_master_port,
);
exit &main();
sub main {
print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
if ( $command eq "stop" || $command eq "stopssh" ) {
# $orig_master_host, $orig_master_ip, $orig_master_port are passed.
# If you manage master ip address at global catalog database,
# invalidate orig_master_ip here.
my $exit_code = 1;
eval {
print "Disabling the VIP on old master: $orig_master_host \n";
&stop_vip();
$exit_code = 0;
};
if ($@) {
warn "Got Error: $@\n";
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "start" ) {
# all arguments are passed.
# If you manage master ip address at global catalog database,
# activate new_master_ip here.
# You can also grant write access (create user, set read_only=0, etc) here.
my $exit_code = 10;
eval {
print "Enabling the VIP - $vip on the new master - $new_master_host \n";
&start_vip();
$exit_code = 0;
};
if ($@) {
warn $@;
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "status" ) {
print "Checking the Status of the script.. OK \n";
# NOTE: the host name "cluster1" must be resolvable, otherwise this check logs a DNS error
`ssh $ssh_user\@cluster1 \" $ssh_start_vip \"`;
exit 0;
}
else {
&usage();
exit 1;
}
}
# A simple system call that enable the VIP on the new master
sub start_vip() {
`ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disable the VIP on the old_master
sub stop_vip() {
`ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}
sub usage {
print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
Email alert script for master-slave switchover:
[root@Centos7-4 ~]# cat /data/perl/send_report
#!/usr/bin/perl
use strict;
use warnings FATAL => 'all';
use Mail::Sender;
use Getopt::Long;
#new_master_host and new_slave_hosts are set only when recovering master succeeded
my ( $dead_master_host, $new_master_host, $new_slave_hosts, $subject, $body );
my $smtp='your SMTP server address';
my $mail_from='sender address';
my $mail_user='mailbox login user';
my $mail_pass='mailbox login password';
my $mail_to=['recipient address'];
GetOptions(
'orig_master_host=s' => \$dead_master_host,
'new_master_host=s' => \$new_master_host,
'new_slave_hosts=s' => \$new_slave_hosts,
'subject=s' => \$subject,
'body=s' => \$body,
);
mailToContacts($smtp,$mail_from,$mail_user,$mail_pass,$mail_to,$subject,$body);
sub mailToContacts {
my ( $smtp, $mail_from, $user, $passwd, $mail_to, $subject, $msg ) = @_;
open my $DEBUG, "> /tmp/monitormail.log"
or die "Can't open the debug file:$!\n";
my $sender = new Mail::Sender {
ctype => 'text/plain; charset=utf-8',
encoding => 'utf-8',
smtp => $smtp,
from => $mail_from,
auth => 'LOGIN',
TLS_allowed => '0',
authid => $user,
authpwd => $passwd,
to => $mail_to,
subject => $subject,
debug => $DEBUG
};
$sender->MailMsg(
{ msg => $msg,
debug => $DEBUG
}
) or print $Mail::Sender::Error;
return 1;
}
# Do whatever you want here
exit 0;
4.4 Checking the configuration
1) Check SSH connectivity
[root@Centos7-4 mha]# masterha_check_ssh --conf=/etc/mha/app1.conf
Thu Jun 21 16:28:59 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Jun 21 16:28:59 2018 - [info] Reading application default configurations from /etc/mha/app1.conf..
Thu Jun 21 16:28:59 2018 - [info] Reading server configurations from /etc/mha/app1.conf..
Thu Jun 21 16:28:59 2018 - [info] Starting SSH connection tests..
Thu Jun 21 16:29:00 2018 - [debug]
Thu Jun 21 16:28:59 2018 - [debug] Connecting via SSH from root@192.168.1.196(192.168.1.196:22) to root@192.168.1.127(192.168.1.127:22)..
Thu Jun 21 16:29:00 2018 - [debug] ok.
Thu Jun 21 16:29:00 2018 - [debug] Connecting via SSH from root@192.168.1.196(192.168.1.196:22) to root@192.168.1.210(192.168.1.210:22)..
Thu Jun 21 16:29:00 2018 - [debug] ok.
Thu Jun 21 16:29:01 2018 - [debug]
Thu Jun 21 16:29:00 2018 - [debug] Connecting via SSH from root@192.168.1.127(192.168.1.127:22) to root@192.168.1.196(192.168.1.196:22)..
Thu Jun 21 16:29:00 2018 - [debug] ok.
Thu Jun 21 16:29:00 2018 - [debug] Connecting via SSH from root@192.168.1.127(192.168.1.127:22) to root@192.168.1.210(192.168.1.210:22)..
Thu Jun 21 16:29:01 2018 - [debug] ok.
Thu Jun 21 16:29:02 2018 - [debug]
Thu Jun 21 16:29:00 2018 - [debug] Connecting via SSH from root@192.168.1.210(192.168.1.210:22) to root@192.168.1.196(192.168.1.196:22)..
Thu Jun 21 16:29:01 2018 - [debug] ok.
Thu Jun 21 16:29:01 2018 - [debug] Connecting via SSH from root@192.168.1.210(192.168.1.210:22) to root@192.168.1.127(192.168.1.127:22)..
Thu Jun 21 16:29:01 2018 - [debug] ok.
Thu Jun 21 16:29:02 2018 - [info] All SSH connection tests passed successfully.
2) Check that MySQL replication is working
[root@Centos7-4 mha]# masterha_check_repl --conf=/etc/mha/app1.conf
Thu Jun 21 16:30:03 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Jun 21 16:30:03 2018 - [info] Reading application default configurations from /etc/mha/app1.conf..
Thu Jun 21 16:30:03 2018 - [info] Reading server configurations from /etc/mha/app1.conf..
Thu Jun 21 16:30:03 2018 - [info] MHA::MasterMonitor version 0.55.
Thu Jun 21 16:30:04 2018 - [info] Dead Servers:
Thu Jun 21 16:30:04 2018 - [info] Alive Servers:
Thu Jun 21 16:30:04 2018 - [info] 192.168.1.196(192.168.1.196:3306)
Thu Jun 21 16:30:04 2018 - [info] 192.168.1.127(192.168.1.127:3306)
Thu Jun 21 16:30:04 2018 - [info] 192.168.1.210(192.168.1.210:3306)
Thu Jun 21 16:30:04 2018 - [info] Alive Slaves:
Thu Jun 21 16:30:04 2018 - [info] 192.168.1.127(192.168.1.127:3306) Version=5.6.40-log (oldest major version between slaves) log-bin:enabled
Thu Jun 21 16:30:04 2018 - [info] Replicating from 192.168.1.196(192.168.1.196:3306)
Thu Jun 21 16:30:04 2018 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Jun 21 16:30:04 2018 - [info] 192.168.1.210(192.168.1.210:3306) Version=5.6.40-log (oldest major version between slaves) log-bin:enabled
Thu Jun 21 16:30:04 2018 - [info] Replicating from 192.168.1.196(192.168.1.196:3306)
Thu Jun 21 16:30:04 2018 - [info] Not candidate for the new Master (no_master is set)
Thu Jun 21 16:30:04 2018 - [info] Current Alive Master: 192.168.1.196(192.168.1.196:3306)
Thu Jun 21 16:30:04 2018 - [info] Checking slave configurations..
Thu Jun 21 16:30:04 2018 - [info] read_only=1 is not set on slave 192.168.1.127(192.168.1.127:3306).
Thu Jun 21 16:30:04 2018 - [info] read_only=1 is not set on slave 192.168.1.210(192.168.1.210:3306).
Thu Jun 21 16:30:04 2018 - [info] Checking replication filtering settings..
Thu Jun 21 16:30:04 2018 - [info] binlog_do_db= , binlog_ignore_db=
Thu Jun 21 16:30:04 2018 - [info] Replication filtering check ok.
Thu Jun 21 16:30:04 2018 - [info] Starting SSH connection tests..
Thu Jun 21 16:30:06 2018 - [info] All SSH connection tests passed successfully.
Thu Jun 21 16:30:06 2018 - [info] Checking MHA Node version..
Thu Jun 21 16:30:07 2018 - [info] Version check ok.
Thu Jun 21 16:30:07 2018 - [info] Checking SSH publickey authentication settings on the current master..
Thu Jun 21 16:30:07 2018 - [info] HealthCheck: SSH to 192.168.1.196 is reachable.
Thu Jun 21 16:30:07 2018 - [info] Master MHA Node version is 0.54.
Thu Jun 21 16:30:07 2018 - [info] Checking recovery script configurations on the current master..
Thu Jun 21 16:30:07 2018 - [info] Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/var/lib/mysql --output_file=/var/log/mha/app1/save_binary_logs_test --manager_version=0.55 --start_file=mysql-logbin.000013
Thu Jun 21 16:30:07 2018 - [info] Connecting to root@192.168.1.196(192.168.1.196)..
Creating /var/log/mha/app1 if not exists.. ok.
Checking output directory is accessible or not..
ok.
Binlog found at /var/lib/mysql, up to mysql-logbin.000013
Thu Jun 21 16:30:07 2018 - [info] Master setting check done.
Thu Jun 21 16:30:07 2018 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Thu Jun 21 16:30:07 2018 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user='mha' --slave_host=192.168.1.127 --slave_ip=192.168.1.127 --slave_port=3306 --workdir=/var/log/mha/app1 --target_version=5.6.40-log --manager_version=0.55 --relay_log_info=/var/lib/mysql/relay-log.info --relay_dir=/var/lib/mysql/ --slave_pass=xxx
Thu Jun 21 16:30:07 2018 - [info] Connecting to root@192.168.1.127(192.168.1.127:22)..
Checking slave recovery environment settings..
Opening /var/lib/mysql/relay-log.info ... ok.
Relay log found at /var/lib/mysql, up to relay-bin.000002
Temporary relay log file is /var/lib/mysql/relay-bin.000002
Testing mysql connection and privileges..Warning: Using a password on the command line interface can be insecure.
done.
Testing mysqlbinlog output.. done.
Cleaning up test file(s).. done.
Thu Jun 21 16:30:08 2018 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user='mha' --slave_host=192.168.1.210 --slave_ip=192.168.1.210 --slave_port=3306 --workdir=/var/log/mha/app1 --target_version=5.6.40-log --manager_version=0.55 --relay_log_info=/var/lib/mysql/relay-log.info --relay_dir=/var/lib/mysql/ --slave_pass=xxx
Thu Jun 21 16:30:08 2018 - [info] Connecting to root@192.168.1.210(192.168.1.210:22)..
Checking slave recovery environment settings..
Opening /var/lib/mysql/relay-log.info ... ok.
Relay log found at /var/lib/mysql, up to relay-bin.000002
Temporary relay log file is /var/lib/mysql/relay-bin.000002
Testing mysql connection and privileges..Warning: Using a password on the command line interface can be insecure.
done.
Testing mysqlbinlog output.. done.
Cleaning up test file(s).. done.
Thu Jun 21 16:30:08 2018 - [info] Slaves settings check done.
Thu Jun 21 16:30:08 2018 - [info]
192.168.1.196 (current master)
+--192.168.1.127
+--192.168.1.210
Thu Jun 21 16:30:08 2018 - [info] Checking replication health on 192.168.1.127..
Thu Jun 21 16:30:08 2018 - [info] ok.
Thu Jun 21 16:30:08 2018 - [info] Checking replication health on 192.168.1.210..
Thu Jun 21 16:30:08 2018 - [info] ok.
Thu Jun 21 16:30:08 2018 - [info] Checking master_ip_failover_script status:
Thu Jun 21 16:30:08 2018 - [info] /data/perl/master_ip_failover --command=status --ssh_user=root --orig_master_host=192.168.1.196 --orig_master_ip=192.168.1.196 --orig_master_port=3306
IN SCRIPT TEST====/sbin/ifconfig ens160:1 down==/sbin/ifconfig ens160:1 192.168.1.248/24===
Checking the Status of the script.. OK
ssh: Could not resolve hostname cluster1: Name or service not known
Thu Jun 21 16:30:08 2018 - [info] OK.
Thu Jun 21 16:30:08 2018 - [warning] shutdown_script is not defined.
Thu Jun 21 16:30:08 2018 - [info] Got exit code 0 (Not master dead).
MySQL Replication Health is OK.
[root@Centos7-4 mha]#
4.5 Configure the VIP on node1
ifconfig ens160:1 192.168.1.248/24
4.6 Start monitoring
[root@node4 ~]# nohup masterha_manager --conf=/etc/mha/app1.conf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
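Once started, the manager pings the master every `ping_interval` seconds and triggers a failover after three consecutive failures, as described in the configuration section. A hedged sketch of that loop (a simulation with our own names, not the Perl code inside masterha_manager):

```python
# Illustrative simulation of the manager's health-check loop:
# ping the master every ping_interval seconds; after three consecutive
# failures, failover would begin. Not masterha_manager's real code.

def monitor(ping, ping_interval=1, max_failures=3):
    """ping() returns True if the master answered.
    Returns the number of checks performed before failover is triggered."""
    failures = 0
    checks = 0
    while failures < max_failures:
        checks += 1
        if ping():
            failures = 0      # any answer resets the failure counter
        else:
            failures += 1
        # the real manager sleeps ping_interval seconds here
    return checks             # failover starts at this point

# Master answers twice, then dies: failover after the 3rd consecutive miss.
responses = iter([True, True, False, False, False])
print(monitor(lambda: next(responses)))  # 5
```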
5. Testing
Check node1's IP address
[root@Centos7-1 ~]# ifconfig
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.196 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::cbb7:ffd1:4ec4:d6f8 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:b2:dd:01 txqueuelen 1000 (Ethernet)
RX packets 12689799 bytes 869238676 (828.9 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 70354 bytes 6211829 (5.9 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens160:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.248 netmask 255.255.255.0 broadcast 192.168.1.255
ether 00:0c:29:b2:dd:01 txqueuelen 1000 (Ethernet)
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 144 bytes 80810 (78.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 144 bytes 80810 (78.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Stop the MySQL service on node1
[root@Centos7-1 ~]# systemctl stop mysqld
[root@Centos7-1 ~]# ifconfig
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.196 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::cbb7:ffd1:4ec4:d6f8 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:b2:dd:01 txqueuelen 1000 (Ethernet)
RX packets 12702730 bytes 870050235 (829.7 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 70636 bytes 6253009 (5.9 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 144 bytes 80810 (78.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 144 bytes 80810 (78.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Check node2's IP address
The VIP has moved to node2
[root@Centos7-2 mysql]# ifconfig
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.127 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::b4fe:6e12:6cf4:ac3b prefixlen 64 scopeid 0x20<link>
inet6 fe80::cbb7:ffd1:4ec4:d6f8 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:30:03:8c txqueuelen 1000 (Ethernet)
RX packets 12703278 bytes 869444462 (829.1 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 71994 bytes 6709615 (6.3 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens160:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.248 netmask 255.255.255.0 broadcast 192.168.1.255
ether 00:0c:29:30:03:8c txqueuelen 1000 (Ethernet)
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 994 bytes 73874 (72.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 994 bytes 73874 (72.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Check the slave status on node3
The master now points to node2
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.127
Master_User: repluser
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-logbin.000004
Read_Master_Log_Pos: 120
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 286
Relay_Master_Log_File: mysql-logbin.000004
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 120
Relay_Log_Space: 453
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 2
Master_UUID: 3a839d6b-7513-11e8-98f3-000c2930038c
Master_Info_File: /var/lib/mysql/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
1 row in set (0.00 sec)
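A quick way to confirm the slave is healthy after the switch is to check that both replication threads are running in the `show slave status\G` output. As an illustration (the parsing helper below is our own, not an MHA tool):

```python
# Illustrative check: parse "show slave status\G"-style output and
# confirm both replication threads are running.

def parse_slave_status(text):
    """Turn 'Key: value' lines into a dict."""
    status = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            status[key.strip()] = value.strip()
    return status

sample = """
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.127
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0
"""

s = parse_slave_status(sample)
healthy = s["Slave_IO_Running"] == "Yes" and s["Slave_SQL_Running"] == "Yes"
print(s["Master_Host"], "healthy" if healthy else "broken")  # 192.168.1.127 healthy
```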
MHA + Keepalived
1. Install keepalived (node1, node2)
[root@Centos7-1 ~]# yum install keepalived
2. Modify the configuration files
1) node1
! Configuration File for keepalived
global_defs {
notification_email {
saltstack@163.com
}
notification_email_from dba@dbserver.com
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id MySQL-HA
}
vrrp_instance VI_1 {
state BACKUP
interface ens160
virtual_router_id 51
priority 150
advert_int 1
nopreempt
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.1.248
}
}
2) node2
! Configuration File for keepalived
global_defs {
notification_email {
saltstack@163.com
}
notification_email_from dba@dbserver.com
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id MySQL-HA
}
vrrp_instance VI_1 {
state BACKUP
interface ens160
virtual_router_id 51
priority 120
advert_int 1
nopreempt
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.1.248
}
}
3. Start the keepalived service (node1, node2)
systemctl restart keepalived;systemctl enable keepalived
4. Modify the failover script on the MHA manager
[root@Centos7-4 mha]# cat /data/perl/master_ip_failover
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;
my (
$command, $ssh_user, $orig_master_host, $orig_master_ip,
$orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
my $vip = '192.168.1.248/24'; # virtual IP address (now managed by keepalived)
my $ssh_start_vip = "systemctl start keepalived";
my $ssh_stop_vip = "systemctl stop keepalived";
$ssh_user = "root";
GetOptions(
'command=s' => \$command,
'ssh_user=s' => \$ssh_user,
'orig_master_host=s' => \$orig_master_host,
'orig_master_ip=s' => \$orig_master_ip,
'orig_master_port=i' => \$orig_master_port,
'new_master_host=s' => \$new_master_host,
'new_master_ip=s' => \$new_master_ip,
'new_master_port=i' => \$new_master_port,
);
exit &main();
sub main {
print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
if ( $command eq "stop" || $command eq "stopssh" ) {
# $orig_master_host, $orig_master_ip, $orig_master_port are passed.
# If you manage master ip address at global catalog database,
# invalidate orig_master_ip here.
my $exit_code = 1;
eval {
print "Disabling the VIP on old master: $orig_master_host \n";
&stop_vip();
$exit_code = 0;
};
if ($@) {
warn "Got Error: $@\n";
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "start" ) {
# all arguments are passed.
# If you manage master ip address at global catalog database,
# activate new_master_ip here.
# You can also grant write access (create user, set read_only=0, etc) here.
my $exit_code = 10;
eval {
print "Enabling the VIP - $vip on the new master - $new_master_host \n";
&start_vip();
$exit_code = 0;
};
if ($@) {
warn $@;
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "status" ) {
print "Checking the Status of the script.. OK \n";
# NOTE: the host name "cluster1" must be resolvable, otherwise this check logs a DNS error
`ssh $ssh_user\@cluster1 \" $ssh_start_vip \"`;
exit 0;
}
else {
&usage();
exit 1;
}
}
# A simple system call that enable the VIP on the new master
sub start_vip() {
`ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disable the VIP on the old_master
sub stop_vip() {
`ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}
sub usage {
print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}