2019-04-19: Evolution of Master-Slave Replication Architectures and MHA High Availability

Author: 王亚飞1992 | Source: published 2019-04-22 20:42

    1. Evolution of Master-Slave Replication Architectures

    1.1 Basic structures

    (1) One master, one slave
    (2) One master, multiple slaves
    (3) Multi-level (cascading) replication
    (4) Dual master
    (5) Circular replication
    

    1.2 Evolved application architectures

    1.2.1 High-performance architectures

    (1) Read/write splitting and related products (higher read performance)

    At the application-code level, or via middleware:
    MySQL proxies (Atlas, MySQL Router, ProxySQL (Percona), MaxScale),
    Amoeba (Taobao),
    xx-dbproxy, and so on.
    

    (2) Distributed (sharding) architectures and products (higher read and write performance):

    Mycat ---> DBLE, in-house development, etc.
    TiDB
    

    1.2.2 High-availability architectures

    (3) Single-active: MMM architecture (mysql-mmm, Google)
    (4) Single-active: MHA architecture (mysql-master-ha, DeNA, Japan); T-MHA
    (5) Multi-active: MGR, the MySQL Group Replication feature new in 5.7 (5.7.17) ---> InnoDB Cluster
    (6) Multi-active: MariaDB Galera Cluster, Percona XtraDB Cluster (PXC), MySQL Cluster (cf. Oracle RAC)
    

    3. MHA High Availability *****

    3.1 What MHA does

    ====== monitor node ======
    (1) Monitors all nodes, the master above all
    (2) Detects a master outage, distinguishing an instance failure (SSH still reachable) from a host failure (SSH unreachable)
    (3) Monitors replication state
    ====== failover ======
    (4) Compares the GTID positions of all nodes.
    (5) Data compensation, phase 1: if SSH is reachable, the slaves immediately save the binary-log events they are missing
    (6) Master election: compare GTID positions, pick the slave closest to the master's data, apply its missing log events, then promote it (stop slave; reset slave all)
    (7) Data compensation, phase 2: if SSH is unreachable, compute the relay-log difference between the two slaves and apply it to the slave that is behind.
    (8) The remaining slave runs CHANGE MASTER TO against the new master, establishing the new replication topology
    ====== application transparency ======
    (9) A VIP mechanism keeps the failover transparent to applications
    ====== extras ======
    (10) Automatic repair of the failed master (rejoining it to the cluster): still to be developed...
    (11) Secondary data compensation (binlog server)
    (12) Notifications (send_report)
    (13) Candidate weighting
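    Step (6), electing the slave closest to the master, can be sketched in shell. This is a toy illustration rather than MHA's real implementation: it assumes each node's gtid_executed value (as returned by SELECT @@GLOBAL.gtid_executed) contains a single server UUID with a single interval, and promotes the slave with the larger transaction count.

```shell
# Toy transaction counter for a gtid_executed value of the form
# "<server-uuid>:1-N" or "<server-uuid>:N" -- an assumed simplification;
# real GTID sets may hold several UUIDs, each with several intervals.
gtid_count() {
    echo "$1" | awk -F: '{ n = split($2, r, "-"); print (n == 2 ? r[2] - r[1] + 1 : $2) }'
}

# Compare two slaves and name the one to promote
a=$(gtid_count "b7f1c1c2-0000-0000-0000-000000000001:1-120")
b=$(gtid_count "b7f1c1c2-0000-0000-0000-000000000001:1-97")
if [ "$a" -ge "$b" ]; then echo "promote slave A"; else echo "promote slave B"; fi
# prints "promote slave A"
```

    On a live cluster the inputs could be collected per node with: mysql -umha -pmha -h <host> -N -e "SELECT @@GLOBAL.gtid_executed;"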
    

    3.2 Architecture

    One master, two slaves (master: db01; slaves: db02, db03)
    The MHA high-availability solution consists of two packages:
    Manager: install on one chosen slave
    Node: install on every node
    

    3.3 MHA components

    The Manager package provides the following tools:

    masterha_manager            start MHA
    masterha_check_ssh          check MHA's SSH configuration
    masterha_check_repl         check MySQL replication status
    masterha_master_monitor     detect whether the master is down
    masterha_check_status       check the current MHA run state
    masterha_master_switch      control failover (automatic or manual)
    masterha_conf_host          add or remove configured server entries
    

    The Node package provides the following tools, normally invoked by the Manager's scripts with no manual intervention:

    save_binary_logs            save and copy the master's binary logs
    apply_diff_relay_logs       identify differing relay-log events and apply them to the other slaves
    purge_relay_logs            purge relay logs (without blocking the SQL thread)
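    Of the Node tools, purge_relay_logs is the one typically also scheduled by hand, because MHA needs relay logs kept around (relay_log_purge=0) and something has to clean them up periodically. A hypothetical cron entry follows; the 04:00 schedule, the /usr/bin install path, and the /tmp workdir are assumptions, and the mha user is the one created in 3.4.5:

```
# /etc/cron.d/purge_relay_logs -- hypothetical schedule; run on every node.
# --disable_relay_log_purge leaves relay_log_purge=0 afterwards, as MHA requires.
0 4 * * * root /usr/bin/purge_relay_logs --user=mha --password=mha --disable_relay_log_purge --workdir=/tmp >> /var/log/purge_relay_logs.log 2>&1
```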
    

    3.4 Building an MHA environment

    3.4.1 Layout:

    Master: 51    node
    Slaves:
    52      node
    53      node    manager
    

    3.4.2 Prepare the environment (omitted: one master, two slaves, GTID-based replication)

    3.4.3 Create symlinks for the key binaries

    ln -s /data/mysql/bin/mysqlbinlog    /usr/bin/mysqlbinlog
    ln -s /data/mysql/bin/mysql          /usr/bin/mysql
    

    3.4.4 Set up SSH trust between the nodes

    db01 (generate one keypair and copy it to every node):
    rm -rf /root/.ssh 
    ssh-keygen
    cd /root/.ssh 
    mv id_rsa.pub authorized_keys
    scp  -r  /root/.ssh  10.0.0.52:/root 
    scp  -r  /root/.ssh  10.0.0.53:/root 
    

    Verify from each node:

    db01:
    ssh 10.0.0.51 date
    ssh 10.0.0.52 date
    ssh 10.0.0.53 date
    

    db02:

    ssh 10.0.0.51 date
    ssh 10.0.0.52 date
    ssh 10.0.0.53 date
    

    db03:

    ssh 10.0.0.51 date
    ssh 10.0.0.52 date
    ssh 10.0.0.53 date
    

    3.4.5 Install the software

    Download MHA

    Official site: https://code.google.com/archive/p/mysql-master-ha/
    GitHub downloads: https://github.com/yoshinorim/mha4mysql-manager/wiki/Downloads
    

    Install the Node package and its dependency on all nodes

    yum install perl-DBD-MySQL -y
    rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm
    

    Create the user MHA needs, on the master (db01)

    grant all privileges on *.* to mha@'10.0.0.%' identified by 'mha';
    

    Install the Manager package (db03)

    yum install -y perl-Config-Tiny epel-release perl-Log-Dispatch perl-Parallel-ForkManager perl-Time-HiRes
    rpm -ivh mha4mysql-manager-0.56-0.el6.noarch.rpm
    

    3.4.6 Prepare the configuration file (db03)

    Create the configuration directory

    mkdir -p /etc/mha
    

    Create the log directory

    mkdir -p /var/log/mha/app1
    

    Edit the MHA configuration file

    vim /etc/mha/app1.cnf
    [server default]
    manager_log=/var/log/mha/app1/manager        
    manager_workdir=/var/log/mha/app1            
    master_binlog_dir=/data/binlog       
    user=mha                                   
    password=mha                               
    ping_interval=2
    repl_password=123
    repl_user=repl
    ssh_user=root                               
    [server1]                                   
    hostname=10.0.0.51
    port=3306                                  
    [server2]            
    hostname=10.0.0.52
    port=3306
    [server3]
    hostname=10.0.0.53
    port=3306
    

    3.4.7 Health checks

    SSH trust check

    masterha_check_ssh  --conf=/etc/mha/app1.cnf 
    Fri Apr 19 16:39:34 2019 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
    Fri Apr 19 16:39:34 2019 - [info] Reading application default configuration from /etc/mha/app1.cnf..
    Fri Apr 19 16:39:34 2019 - [info] Reading server configuration from /etc/mha/app1.cnf..
    Fri Apr 19 16:39:34 2019 - [info] Starting SSH connection tests..
    Fri Apr 19 16:39:35 2019 - [debug] 
    Fri Apr 19 16:39:34 2019 - [debug]  Connecting via SSH from root@10.0.0.51(10.0.0.51:22) to root@10.0.0.52(10.0.0.52:22)..
    Fri Apr 19 16:39:34 2019 - [debug]   ok.
    Fri Apr 19 16:39:34 2019 - [debug]  Connecting via SSH from root@10.0.0.51(10.0.0.51:22) to root@10.0.0.53(10.0.0.53:22)..
    Fri Apr 19 16:39:35 2019 - [debug]   ok.
    Fri Apr 19 16:39:36 2019 - [debug] 
    Fri Apr 19 16:39:35 2019 - [debug]  Connecting via SSH from root@10.0.0.52(10.0.0.52:22) to root@10.0.0.51(10.0.0.51:22)..
    Fri Apr 19 16:39:35 2019 - [debug]   ok.
    Fri Apr 19 16:39:35 2019 - [debug]  Connecting via SSH from root@10.0.0.52(10.0.0.52:22) to root@10.0.0.53(10.0.0.53:22)..
    Fri Apr 19 16:39:35 2019 - [debug]   ok.
    Fri Apr 19 16:39:37 2019 - [debug] 
    Fri Apr 19 16:39:35 2019 - [debug]  Connecting via SSH from root@10.0.0.53(10.0.0.53:22) to root@10.0.0.51(10.0.0.51:22)..
    Fri Apr 19 16:39:35 2019 - [debug]   ok.
    Fri Apr 19 16:39:35 2019 - [debug]  Connecting via SSH from root@10.0.0.53(10.0.0.53:22) to root@10.0.0.52(10.0.0.52:22)..
    Fri Apr 19 16:39:36 2019 - [debug]   ok.
    Fri Apr 19 16:39:37 2019 - [info] All SSH connection tests passed successfully.
    

    Replication status check

    [root@db03 ~]# masterha_check_repl  --conf=/etc/mha/app1.cnf 
    Fri Apr 19 16:40:50 2019 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
    Fri Apr 19 16:40:50 2019 - [info] Reading application default configuration from /etc/mha/app1.cnf..
    Fri Apr 19 16:40:50 2019 - [info] Reading server configuration from /etc/mha/app1.cnf..
    Fri Apr 19 16:40:50 2019 - [info] MHA::MasterMonitor version 0.56.
    Fri Apr 19 16:40:51 2019 - [info] GTID failover mode = 1
    Fri Apr 19 16:40:51 2019 - [info] Dead Servers:
    Fri Apr 19 16:40:51 2019 - [info] Alive Servers:
    Fri Apr 19 16:40:51 2019 - [info]   10.0.0.51(10.0.0.51:3306)
    Fri Apr 19 16:40:51 2019 - [info]   10.0.0.52(10.0.0.52:3306)
    Fri Apr 19 16:40:51 2019 - [info]   10.0.0.53(10.0.0.53:3306)
    Fri Apr 19 16:40:51 2019 - [info] Alive Slaves:
    Fri Apr 19 16:40:51 2019 - [info]   10.0.0.52(10.0.0.52:3306)  Version=5.7.20-log (oldest major version between slaves) log-bin:enabled
    Fri Apr 19 16:40:51 2019 - [info]     GTID ON
    Fri Apr 19 16:40:51 2019 - [info]     Replicating from 10.0.0.51(10.0.0.51:3306)
    Fri Apr 19 16:40:51 2019 - [info]   10.0.0.53(10.0.0.53:3306)  Version=5.7.20-log (oldest major version between slaves) log-bin:enabled
    Fri Apr 19 16:40:51 2019 - [info]     GTID ON
    Fri Apr 19 16:40:51 2019 - [info]     Replicating from 10.0.0.51(10.0.0.51:3306)
    Fri Apr 19 16:40:51 2019 - [info] Current Alive Master: 10.0.0.51(10.0.0.51:3306)
    Fri Apr 19 16:40:51 2019 - [info] Checking slave configurations..
    Fri Apr 19 16:40:51 2019 - [info]  read_only=1 is not set on slave 10.0.0.52(10.0.0.52:3306).
    Fri Apr 19 16:40:51 2019 - [info]  read_only=1 is not set on slave 10.0.0.53(10.0.0.53:3306).
    Fri Apr 19 16:40:51 2019 - [info] Checking replication filtering settings..
    Fri Apr 19 16:40:51 2019 - [info]  binlog_do_db= , binlog_ignore_db= 
    Fri Apr 19 16:40:51 2019 - [info]  Replication filtering check ok.
    Fri Apr 19 16:40:51 2019 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
    Fri Apr 19 16:40:51 2019 - [info] Checking SSH publickey authentication settings on the current master..
    Fri Apr 19 16:40:51 2019 - [info] HealthCheck: SSH to 10.0.0.51 is reachable.
    Fri Apr 19 16:40:51 2019 - [info] 
    10.0.0.51(10.0.0.51:3306) (current master)
     +--10.0.0.52(10.0.0.52:3306)
     +--10.0.0.53(10.0.0.53:3306)
    
    Fri Apr 19 16:40:51 2019 - [info] Checking replication health on 10.0.0.52..
    Fri Apr 19 16:40:51 2019 - [info]  ok.
    Fri Apr 19 16:40:51 2019 - [info] Checking replication health on 10.0.0.53..
    Fri Apr 19 16:40:51 2019 - [info]  ok.
    Fri Apr 19 16:40:51 2019 - [warning] master_ip_failover_script is not defined.
    Fri Apr 19 16:40:51 2019 - [warning] shutdown_script is not defined.
    Fri Apr 19 16:40:51 2019 - [info] Got exit code 0 (Not master dead).
    MySQL Replication Health is OK.
    

    3.4.8 Start MHA (db03):

    nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover  < /dev/null> /var/log/mha/app1/manager.log 2>&1 &

    (--remove_dead_master_conf drops the failed master's entry from app1.cnf after a failover; --ignore_last_failover skips the check that otherwise refuses a second failover within a short interval.)
    

    3.4.9 Check MHA status

    [root@db03 ~]# masterha_check_status --conf=/etc/mha/app1.cnf
    app1 (pid:4719) is running(0:PING_OK), master:10.0.0.51
    [root@db03 ~]# mysql -umha -pmha -h 10.0.0.51 -e "show variables like 'server_id'"
    mysql: [Warning] Using a password on the command line interface can be insecure.
    +---------------+-------+
    | Variable_name | Value |
    +---------------+-------+
    | server_id     | 51    |
    +---------------+-------+
    [root@db03 ~]# mysql -umha -pmha -h 10.0.0.52 -e "show variables like 'server_id'"
    mysql: [Warning] Using a password on the command line interface can be insecure.
    +---------------+-------+
    | Variable_name | Value |
    +---------------+-------+
    | server_id     | 52    |
    +---------------+-------+
    [root@db03 ~]# mysql -umha -pmha -h 10.0.0.53 -e "show variables like 'server_id'"
    mysql: [Warning] Using a password on the command line interface can be insecure.
    +---------------+-------+
    | Variable_name | Value |
    +---------------+-------+
    | server_id     | 53    |
    +---------------+-------+
    

    3.4.10 Simulating and handling a failure

    (1) Stop the master, db01:

    /etc/init.d/mysqld stop
    

    Watch the manager log: tail -f /var/log/mha/app1/manager
    The switchover counts as successful only if the log ends with "completed successfully".
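    That check can be wrapped in a tiny helper. The quoted log line is the marker MHA normally prints at the end of a successful failover; treat its exact wording as an assumption and compare it against your own manager log.

```shell
# Succeeds when the given manager log records a completed failover.
# MHA ends a successful failover with a line like:
#   Master failover to 10.0.0.52(10.0.0.52:3306) completed successfully.
failover_ok() {
    grep -q "completed successfully" "$1"
}

# Usage on db03:
#   failover_ok /var/log/mha/app1/manager && echo "switchover OK"
```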

    (2) Repair the failed master

    [root@db01 ~]# /etc/init.d/mysqld start
    

    (3) Restore the replication topology (run on the repaired db01; 10.0.0.52 is the new master after the failover)

    CHANGE MASTER TO 
    MASTER_HOST='10.0.0.52',
    MASTER_PORT=3306, 
    MASTER_AUTO_POSITION=1, 
    MASTER_USER='repl', 
    MASTER_PASSWORD='123';
    start slave ;
    

    (4) Add the repaired node back to the configuration file (its entry was removed by --remove_dead_master_conf)

    [server1]
    hostname=10.0.0.51
    port=3306
    

    (5) Start MHA

    nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover  < /dev/null> /var/log/mha/app1/manager.log 2>&1 &
    

    3.4.11 Additional Manager parameters

    Who takes over when the master fails?

    1. If all slaves have identical logs, the new master is chosen in configuration-file order by default.
    2. If the slaves' logs differ, the slave closest to the master's data is chosen automatically.
    3. If a node has been given priority (candidate_master=1), it is preferred. However, if its logs lag the master by 100MB or more, it will still not be chosen; combine it with check_repl_delay=0 to disable the lag check and force selection of the candidate.
    
    (1) ping_interval=1
    
    # Interval, in seconds, between the ping probes sent to the master; after three probes with no response, failover is triggered automatically
    
    (2) candidate_master=1
    # Marks this node as a candidate master: on failover, this slave is promoted to master even if it does not hold the newest events in the cluster
    
    (3) check_repl_delay=0
    # By default, MHA will not select a slave as the new master if it is more than 100MB of relay logs behind, because recovering such a slave takes a long time. Setting check_repl_delay=0 makes MHA ignore replication delay when electing the new master. It is very useful together with candidate_master=1, since it guarantees the candidate becomes the new master during the switchover
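    Put together, a sketch of how these parameters would sit in app1.cnf; here db02 is the assumed preferred candidate (ping_interval belongs in [server default], the weighting in the candidate's own block):

```
[server default]
ping_interval=2

[server2]
hostname=10.0.0.52
port=3306
candidate_master=1       # prefer this node as the new master
check_repl_delay=0       # ignore its replication delay during election
```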
    

    3.4.12 MHA's VIP feature

    MHA calls an external script through a hook:
    1. Parameter:
    master_ip_failover_script=/usr/local/bin/master_ip_failover
    2. Note: the script /usr/local/bin/master_ip_failover must be prepared in advance
    3. Upload script.tar.gz to /usr/local/bin and extract it
    4. Edit the script:
    vi  /usr/local/bin/master_ip_failover
    my $vip = '10.0.0.55/24';
    my $key = '1';
    my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";
    my $ssh_stop_vip = "/sbin/ifconfig eth0:$key down";
    5. Update the manager configuration:
    vi /etc/mha/app1.cnf
    add:
    master_ip_failover_script=/usr/local/bin/master_ip_failover
    

    Note: convert the script to Unix line endings and make it executable:

    [root@db03 ~]# dos2unix /usr/local/bin/master_ip_failover 
    dos2unix: converting file /usr/local/bin/master_ip_failover to Unix format ...
    [root@db03 ~]# chmod +x /usr/local/bin/master_ip_failover 
    

    On the master, bring up the first VIP by hand

    Bind the VIP manually on the master; the interface must match the ethN in the script. Here that is eth0:1 (1 being the value of $key).
    ifconfig eth0:1 10.0.0.55/24
    

    Restart MHA

    masterha_stop --conf=/etc/mha/app1.cnf
    nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
    

    3.4.13 Email alerts

    1. Parameter:
    report_script=/usr/local/bin/send
    

    2. Prepare the mail script

    send_report
    (1) Prepare the mail-sending script (provided in script.tar.gz)
        and extract it to /usr/local/bin
    (2) Add the script to the MHA configuration file so MHA can call it
    

    3. Edit the manager configuration to call the mail script

    vi /etc/mha/app1.cnf
    report_script=/usr/local/bin/send
    

    4. Stop MHA

    masterha_stop --conf=/etc/mha/app1.cnf
    

    5. Start MHA

    nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
    

    6. Stop the master and watch for the alert email

    Failure recovery:

    1. Restore the failed node
    (1) Instance down:
    /etc/init.d/mysqld start 
    (2) Host damaged (the data may be damaged too):
    take a backup and restore the failed node.
    2. Restore replication
    Find the CHANGE MASTER statement in the manager log:
    CHANGE MASTER TO MASTER_HOST='10.0.0.52', MASTER_PORT=3306, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';
    start slave ;
    3. Restore the manager
    3.1 Add the repaired node's entry back to the configuration file
    [server1]
    hostname=10.0.0.51
    port=3306
    3.2 Start the manager
    nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
    

    3.4.14 binlog server (db03)

    Parameter:

    binlogserver configuration:

    Use a separate machine running MySQL 5.6 or later with GTID support enabled; here we simply reuse the second slave (db03)

    vim /etc/mha/app1.cnf 
    [binlog1]
    no_master=1
    hostname=10.0.0.53
    master_binlog_dir=/data/mysql/binlog
    

    Create the required directory
    Create it in advance; it must not be the same directory as the instance's own binlogs

    mkdir -p /data/mysql/binlog
    chown -R mysql.mysql /data/*
    

    Once configured, pull the master's binlogs over (start from 000001; later binlogs follow automatically in order)

    Pull the master's binlogs

    cd /data/mysql/binlog     # you must be inside the directory created above
    mysqlbinlog  -R --host=10.0.0.52 --user=mha --password=mha --raw  --stop-never mysql-bin.000001 &
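    A quick sanity check on the pulled files: the raw binlogs should form an unbroken numeric sequence. This helper is an illustration, not part of MHA; feed it the file names found in the pull directory.

```shell
# Print any missing sequence numbers in a list of binlog file names.
# Illustration only; e.g. pass the output of `ls /data/mysql/binlog`.
binlog_gaps() {
    printf '%s\n' "$@" | sed 's/.*\.//' | sort -n | awk '
        NR > 1 && $1 != prev + 1 { for (i = prev + 1; i < $1; i++) printf "missing %06d\n", i }
        { prev = $1 + 0 }'
}

binlog_gaps mysql-bin.000001 mysql-bin.000002 mysql-bin.000004
# prints "missing 000003"
```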
    

    Note:
    pulling binlogs has nothing to do with MHA itself, but once a binlogserver section is added to the MHA configuration, the pull must be configured and running before MHA starts, or MHA will not come up

    Restart MHA

    masterha_stop --conf=/etc/mha/app1.cnf
    nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
    

    Failure handling

    When the master fails, the binlog server stops automatically, and so does the manager.
    Approach:
    1. Pull the new master's binlogs into the binlog server again
    2. Update the binlog server section of the configuration file
    3. Only then start MHA again

    4. Our work
    
    Build: MHA + VIP + send_report + binlog server
    Monitoring and failure handling
    Optimizing the high-availability architecture
    The core goal is to keep replication lag as low as possible, so MHA spends as little time as possible on data compensation.
    On 5.7, enable GTID mode and enable parallel SQL-thread replication on the slaves.
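    In my.cnf on the slaves, that last recommendation would look roughly like this (a sketch; the worker count of 8 is a placeholder to tune against your workload):

```
# Slave my.cnf additions -- GTID plus multi-threaded replication (MySQL 5.7)
gtid_mode = ON
enforce_gtid_consistency = ON
slave_parallel_type = LOGICAL_CLOCK    # schedule by the master's group commits
slave_parallel_workers = 8             # placeholder; tune to the workload
master_info_repository = TABLE         # crash-safe replication metadata
relay_log_info_repository = TABLE
```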
    

    ====================================


    Original post: https://www.haomeiwen.com/subject/ietpgqtx.html