The InnoDB Storage Engine: Physical Backup and Its Implementation

Author: 胖熊猫l | Published 2017-02-06 17:21

    0. Summary

    1. Download and Installation
    2. Backup
    3. Restore
    4. How Consistent Backups Are Achieved
    5. Differences from mysqldump & mydumper
    6. On Incremental Backups
    

    1. Download and Installation

    #### Download ####

    wget https://www.percona.com/downloads/XtraBackup/Percona-XtraBackup-2.4.5/binary/tarball/percona-xtrabackup-2.4.5-Linux-x86_64.tar.gz
    

    #### Extract ####

    tar -zxf percona-xtrabackup-2.4.5-Linux-x86_64.tar.gz
    

    #### Set environment variables ####

    export PATH=/usr/local/mysql/bin:/usr/local/mydumper:/mdata/mysql/percona-xtrabackup-2.4.5-Linux-x86_64/bin:$PATH
    
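    #### Verify the installation ####

    A quick sanity check (a minimal sketch; it assumes the export above is in effect in the current shell):

    which innobackupex xbstream
    xtrabackup --version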

    2. Backup

    [root@test-1 percona-xtrabackup-2.4.5-Linux-x86_64]# ls -ltr bin
    total 225988
    -rwxr-xr-x. 1 root root      3020 Nov 25 18:20 xbcloud_osenv
    -rwxr-xr-x. 1 root root   5071619 Nov 25 18:21 xbstream
    -rwxr-xr-x. 1 root root   5001985 Nov 25 18:21 xbcrypt
    -rwxr-xr-x. 1 root root   5179300 Nov 25 18:21 xbcloud
    -rwxr-xr-x. 1 root root 216142648 Nov 25 18:26 xtrabackup
    lrwxrwxrwx. 1 root root        10 Nov 25 18:26 innobackupex -> xtrabackup
    

    innobackupex is essentially a wrapper around xtrabackup (in 2.4 it is just a symlink to the xtrabackup binary, as the listing above shows). Commonly used backup commands look like this:

    innobackupex --compress --stream=xbstream --parallel=4 ./ | ssh user@otherhost "xbstream -x"
    innobackupex --databases=dbt3 --compress --compress-threads=8 --stream=xbstream --parallel=4 ./ >backup.xbstream
    

    Explanation of the relevant options:

    --compress ---- compress the backed-up files
    --compress-threads ---- degree of parallelism for the compression
    --stream ---- streaming mode
    --parallel ---- with --stream=xbstream you usually add --parallel=4; the other stream format (tar) is much slower

    Run a backup:

    [root@test-1 mdata]# innobackupex --user=root --password=mysql --databases=dbt3 --compress --compress-threads=8 --stream=xbstream --parallel=4 ./ >backup.xbstream
    170202 03:01:02 innobackupex: Starting the backup operation
    
    IMPORTANT: Please check that the backup run completes successfully.
               At the end of a successful backup run innobackupex
               prints "completed OK!".
    
    170202 03:01:02  version_check Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup;port=3306' as 'root'  (using password: YES).
    Failed to connect to MySQL server as DBD::mysql module is not installed at - line 1327.
    170202 03:01:02 Connecting to MySQL server host: localhost, user: root, password: set, port: 3306, socket: (null)
    Using server version 5.7.14-log
    mysql/percona-xtrabackup-2.4.5-Linux-x86_64/bin/innobackupex version 2.4.5 based on MySQL server 5.7.13 Linux (x86_64) (revision id: e41c0be)
    xtrabackup: uses posix_fadvise().
    xtrabackup: cd to /mdata/mysql_data
    xtrabackup: open files limit requested 0, set to 16384
    xtrabackup: using the following InnoDB configuration:
    xtrabackup:   innodb_data_home_dir = .
    xtrabackup:   innodb_data_file_path = ibdata1:12M:autoextend
    xtrabackup:   innodb_log_group_home_dir = ./
    xtrabackup:   innodb_log_files_in_group = 2
    xtrabackup:   innodb_log_file_size = 536870912
    xtrabackup: using O_DIRECT
    InnoDB: Number of pools: 1
    ......
    170202 03:01:27 Executing UNLOCK TABLES
    170202 03:01:27 All tables unlocked
    170202 03:01:27 [00] Compressing and streaming ib_buffer_pool to <STDOUT>
    170202 03:01:27 [00]        ...done
    170202 03:01:27 Backup created in directory '/mdata/'
    MySQL binlog position: filename 'bin.000087', position '1279', GTID of the last change '713a7f7f-6f53-11e6-b7a9-000c29de5d8b:1-284353'
    170202 03:01:27 [00] Compressing and streaming backup-my.cnf
    170202 03:01:27 [00]        ...done
    170202 03:01:27 [00] Compressing and streaming xtrabackup_info
    170202 03:01:27 [00]        ...done
    xtrabackup: Transaction log of lsn (7552061121) to (7552061130) was copied.
    170202 03:01:28 completed OK!
    

    In a production environment you can create a dedicated backup user:

    create user 'bkpuser'@'%' identified by 'mysql';
    grant reload, lock tables, replication client on *.* to 'bkpuser'@'%';
    flush privileges;
    
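    A hedged example of running the same backup as this dedicated user instead of root (depending on the XtraBackup version, additional privileges such as PROCESS may also be needed):

    innobackupex --user=bkpuser --password=mysql --databases=dbt3 --compress --compress-threads=8 --stream=xbstream --parallel=4 ./ >backup.xbstream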

    3. Restore

    mkdir test_backup
    xbstream -x < backup.xbstream -C ./test_backup
    

    A compressed backup still has to be decompressed afterwards, which requires downloading qpress:

    wget http://www.quicklz.com/qpress-11-linux-x64.tar
    

    Here I extracted it into /mdata/mysql/percona-xtrabackup-2.4.5-Linux-x86_64/bin as well.

    With a compressed backup, every backed-up file is a .qp file.

    [root@test-1 mdata]# ls -ltr test_backup
    total 73184
    -rw-r-----. 1 root root   421240 Feb  2 03:13 ibdata1.qp
    -rw-r-----. 1 root root  1567134 Feb  2 03:13 undo002.qp
    -rw-r-----. 1 root root 14767086 Feb  2 03:13 undo001.qp
    -rw-r-----. 1 root root 58150380 Feb  2 03:13 undo003.qp
    -rw-r-----. 1 root root      152 Feb  2 03:13 xtrabackup_binlog_info.qp
    drwxr-x---. 2 root root     4096 Feb  2 03:13 dbt3
    -rw-r-----. 1 root root      432 Feb  2 03:13 xtrabackup_logfile.qp
    -rw-r-----. 1 root root      119 Feb  2 03:13 xtrabackup_checkpoints
    -rw-r-----. 1 root root     1064 Feb  2 03:13 ib_buffer_pool.qp
    -rw-r-----. 1 root root      414 Feb  2 03:13 backup-my.cnf.qp
    -rw-r-----. 1 root root      587 Feb  2 03:13 xtrabackup_info.qp
    [root@test-1 mdata]# ls -ltr test_backup/dbt3/
    total 1164288
    -rw-r-----. 1 root root      3168 Feb  2 03:13 nation.ibd.qp
    -rw-r-----. 1 root root  16187863 Feb  2 03:13 customer.ibd.qp
    -rw-r-----. 1 root root  12360677 Feb  2 03:13 part.ibd.qp
    -rw-r-----. 1 root root      1496 Feb  2 03:13 region.ibd.qp
    -rw-r-----. 1 root root   1174543 Feb  2 03:13 supplier.ibd.qp
    -rw-r-----. 1 root root      1472 Feb  2 03:13 time_statistics.ibd.qp
    -rw-r-----. 1 root root  72410287 Feb  2 03:13 partsupp.ibd.qp
    -rw-r-----. 1 root root 117605195 Feb  2 03:13 orders.ibd.qp
    -rw-r-----. 1 root root 972424981 Feb  2 03:13 lineitem.ibd.qp
    -rw-r-----. 1 root root       130 Feb  2 03:13 db.opt.qp
    -rw-r-----. 1 root root       954 Feb  2 03:13 lineitem.frm.qp
    -rw-r-----. 1 root root       627 Feb  2 03:13 customer.frm.qp
    -rw-r-----. 1 root root       506 Feb  2 03:13 nation.frm.qp
    -rw-r-----. 1 root root       678 Feb  2 03:13 orders.frm.qp
    -rw-r-----. 1 root root       632 Feb  2 03:13 part.frm.qp
    -rw-r-----. 1 root root       543 Feb  2 03:13 partsupp.frm.qp
    -rw-r-----. 1 root root       459 Feb  2 03:13 region.frm.qp
    -rw-r-----. 1 root root       600 Feb  2 03:13 supplier.frm.qp
    -rw-r-----. 1 root root       401 Feb  2 03:13 time_statistics.frm.qp
    

    Decompress every file ending in .qp under the current directory:

    [root@test-1 test_backup]# for f in `find ./ -iname "*\.qp"`; do qpress -dT2 $f $(dirname $f) && rm -rf $f; done        ---- -dT2 means decompress with 2 threads
    [root@test-1 test_backup]# ls
    backup-my.cnf  dbt3  ib_buffer_pool  ibdata1  undo001  undo002  undo003  xtrabackup_binlog_info  xtrabackup_checkpoints  xtrabackup_info  xtrabackup_logfile
    [root@test-1 test_backup]# ls dbt3
    customer.frm  db.opt        lineitem.ibd  nation.ibd  orders.ibd  part.ibd      partsupp.ibd  region.ibd    supplier.ibd         time_statistics.ibd
    customer.ibd  lineitem.frm  nation.frm    orders.frm  part.frm    partsupp.frm  region.frm    supplier.frm  time_statistics.frm
    
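    Alternatively, innobackupex can drive the decompression itself; qpress still has to be on the PATH, and the backup directory below is the one created in this example:

    innobackupex --decompress /mdata/test_backup

    Note that --decompress leaves the original .qp files in place, so remove them afterwards if they are no longer needed.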

    The option file is backed up as well:

    [root@test-1 test_backup]# cat backup-my.cnf 
    # This MySQL options file was generated by innobackupex.
    
    # The MySQL server
    [mysqld]
    innodb_checksum_algorithm=innodb
    innodb_log_checksum_algorithm=strict_crc32
    innodb_data_file_path=ibdata1:12M:autoextend
    innodb_log_files_in_group=2
    innodb_log_file_size=536870912
    innodb_fast_checksum=false
    innodb_page_size=8192
    innodb_log_block_size=512
    innodb_undo_directory=./
    innodb_undo_tablespaces=3
    server_id=11
    
    redo_log_version=1
    

    The binary log file name, position, and GTID corresponding to the backup are recorded as well:

    [root@test-1 test_backup]# cat xtrabackup_binlog_info
    bin.000087  1279    713a7f7f-6f53-11e6-b7a9-000c29de5d8b:1-284353
    

    As you can see from the above, a physical backup backs up the tablespaces. Of course, my backup here is not complete; the system databases would need to be backed up as well.

    Restore commands (the backup must be prepared with --apply-log before it can be copied back):

    innobackupex --apply-log /path/to/BACKUP-DIR        ---- prepare: replay the redo log (run this first)
    innobackupex --copy-back /path/to/BACKUP-DIR        ---- copy the prepared files back into the datadir
    
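    A minimal end-to-end restore sketch; the service name and paths are assumptions, except for the datadir /mdata/mysql_data, which comes from the backup log above:

    service mysql stop
    innobackupex --apply-log /mdata/test_backup
    rm -rf /mdata/mysql_data/*                     # --copy-back requires an empty datadir
    innobackupex --copy-back /mdata/test_backup    # the datadir is read from my.cnf
    chown -R mysql:mysql /mdata/mysql_data
    service mysql start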

    4. How Consistent Backups Are Achieved

    #### Session 1: run sysbench ####

    [root@test-1 db]# pwd
    /mdata/sysbench/sysbench/tests/db
    [root@test-1 db]# sysbench --test=./oltp.lua --oltp-table-size=1000000 --oltp-tables-count=4 --mysql-user=root --mysql-password=mysql --mysql-socket=/tmp/mysql.sock --mysql-host=192.168.6.11 --max-requests=0 --max-time=3600 --num-threads=50 --report-interval=3 run
    ......
    

    #### Session 2: run the backup ####

    [root@test-1 mdata]# innobackupex --user=root --password=mysql --databases=dbt3 --compress --compress-threads=8 --stream=xbstream --parallel=4 ./ >backup.xbstream
    170202 15:28:57 innobackupex: Starting the backup operation
    
    IMPORTANT: Please check that the backup run completes successfully.
               At the end of a successful backup run innobackupex
               prints "completed OK!".
    
    170202 15:28:58  version_check Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup;port=3306' as 'root'  (using password: YES).
    Failed to connect to MySQL server as DBD::mysql module is not installed at - line 1327.
    170202 15:28:58 Connecting to MySQL server host: localhost, user: root, password: set, port: 3306, socket: (null)
    Using server version 5.7.14-log
    innobackupex version 2.4.5 based on MySQL server 5.7.13 Linux (x86_64) (revision id: e41c0be)
    xtrabackup: uses posix_fadvise().
    xtrabackup: cd to /mdata/mysql_data
    xtrabackup: open files limit requested 0, set to 16384
    xtrabackup: using the following InnoDB configuration:
    xtrabackup:   innodb_data_home_dir = .
    xtrabackup:   innodb_data_file_path = ibdata1:12M:autoextend
    xtrabackup:   innodb_log_group_home_dir = ./
    xtrabackup:   innodb_log_files_in_group = 2
    xtrabackup:   innodb_log_file_size = 536870912
    xtrabackup: using O_DIRECT
    InnoDB: Number of pools: 1
    170202 15:29:01 >> log scanned up to (7593548930)
    InnoDB: Opened 3 undo tablespaces
    InnoDB: 0 undo tablespaces made active
    xtrabackup: Generating a list of tablespaces
    InnoDB: Allocated tablespace ID 38 for dbt3/customer, old maximum was 3
    xtrabackup: Starting 4 threads for parallel data files transfer
    170202 15:29:01 [01] Compressing and streaming ./ibdata1
    170202 15:29:01 [02] Compressing and streaming .//undo001
    170202 15:29:01 [04] Compressing and streaming .//undo003
    [01] xtrabackup: Page 215 is a doublewrite buffer page, skipping.
    [01] xtrabackup: Page 216 is a doublewrite buffer page, skipping.
    [01] xtrabackup: Page 223 is a doublewrite buffer page, skipping.
    [01] xtrabackup: Page 227 is a doublewrite buffer page, skipping.
    [01] xtrabackup: Page 228 is a doublewrite buffer page, skipping.
    [01] xtrabackup: Page 229 is a doublewrite buffer page, skipping.
    [01] xtrabackup: Page 233 is a doublewrite buffer page, skipping.
    [01] xtrabackup: Page 235 is a doublewrite buffer page, skipping.
    [01] xtrabackup: Page 246 is a doublewrite buffer page, skipping.
    [01] xtrabackup: Page 249 is a doublewrite buffer page, skipping.
    170202 15:29:01 [03] Compressing and streaming .//undo002
    170202 15:29:02 [03]        ...done
    170202 15:29:02 [01]        ...done
    170202 15:29:02 [01] Compressing and streaming ./dbt3/customer.ibd
    170202 15:29:02 [03] Compressing and streaming ./dbt3/lineitem.ibd
    170202 15:29:02 >> log scanned up to (7594429459)           ---- this continuous redo-log copying is why a consistent backup is possible
    170202 15:29:02 [02]        ...done
    170202 15:29:02 [02] Compressing and streaming ./dbt3/nation.ibd
    170202 15:29:02 [02]        ...done
    170202 15:29:02 [02] Compressing and streaming ./dbt3/orders.ibd
    170202 15:29:03 >> log scanned up to (7595544151)
    170202 15:29:04 [01]        ...done
    170202 15:29:04 [01] Compressing and streaming ./dbt3/part.ibd
    170202 15:29:04 >> log scanned up to (7596354737)
    170202 15:29:05 [01]        ...done
    170202 15:29:05 [01] Compressing and streaming ./dbt3/partsupp.ibd
    170202 15:29:05 >> log scanned up to (7597130352)
    170202 15:29:07 [04]        ...done
    170202 15:29:07 >> log scanned up to (7598143074)
    170202 15:29:07 [04] Compressing and streaming ./dbt3/region.ibd
    170202 15:29:07 [04]        ...done
    170202 15:29:07 [04] Compressing and streaming ./dbt3/supplier.ibd
    170202 15:29:07 [04]        ...done
    170202 15:29:07 [04] Compressing and streaming ./dbt3/time_statistics.ibd
    170202 15:29:07 [04]        ...done
    170202 15:29:08 >> log scanned up to (7599157935)
    170202 15:29:09 >> log scanned up to (7599996438)
    170202 15:29:10 >> log scanned up to (7600949836)
    170202 15:29:11 >> log scanned up to (7601781796)
    170202 15:29:12 [01]        ...done
    170202 15:29:12 >> log scanned up to (7602618940)
    170202 15:29:13 [02]        ...done
    170202 15:29:13 >> log scanned up to (7603528064)
    170202 15:29:14 >> log scanned up to (7604472806)
    170202 15:29:15 >> log scanned up to (7605335291)
    170202 15:29:16 >> log scanned up to (7606280534)
    ......
    170202 15:29:43 >> log scanned up to (7629791636)
    170202 15:29:44 >> log scanned up to (7630685560)
    170202 15:29:45 >> log scanned up to (7631723498)
    170202 15:29:46 [03]        ...done
    170202 15:29:46 >> log scanned up to (7632816165)
    170202 15:29:46 Executing FLUSH NO_WRITE_TO_BINLOG TABLES...
    170202 15:29:46 Executing FLUSH TABLES WITH READ LOCK...
    170202 15:29:46 Starting to backup non-InnoDB tables and files
    170202 15:29:46 [01] Skipping ./mysql/db.opt.
    ......
    170202 15:29:46 [01] Skipping ./employees/salaries.ibd.
    170202 15:29:46 [01] Compressing and streaming ./dbt3/db.opt to <STDOUT>
    170202 15:29:46 [01]        ...done
    170202 15:29:46 [01] Compressing and streaming ./dbt3/customer.frm to <STDOUT>
    170202 15:29:46 [01]        ...done
    170202 15:29:46 [01] Compressing and streaming ./dbt3/lineitem.frm to <STDOUT>
    170202 15:29:46 [01]        ...done
    170202 15:29:46 [01] Compressing and streaming ./dbt3/nation.frm to <STDOUT>
    170202 15:29:46 [01]        ...done
    170202 15:29:46 [01] Compressing and streaming ./dbt3/orders.frm to <STDOUT>
    170202 15:29:46 [01]        ...done
    170202 15:29:46 [01] Compressing and streaming ./dbt3/part.frm to <STDOUT>
    170202 15:29:46 [01]        ...done
    170202 15:29:46 [01] Compressing and streaming ./dbt3/partsupp.frm to <STDOUT>
    170202 15:29:46 [01]        ...done
    170202 15:29:46 [01] Compressing and streaming ./dbt3/region.frm to <STDOUT>
    170202 15:29:46 [01]        ...done
    170202 15:29:46 [01] Compressing and streaming ./dbt3/supplier.frm to <STDOUT>
    170202 15:29:46 [01]        ...done
    170202 15:29:46 [01] Compressing and streaming ./dbt3/time_statistics.frm to <STDOUT>
    170202 15:29:46 [01]        ...done
    170202 15:29:46 [01] Skipping ./tpcc/db.opt.
    ......
    170202 15:29:46 [01] Skipping ./sbtest/sbtest1.ibd.
    170202 15:29:46 Finished backing up non-InnoDB tables and files
    170202 15:29:46 [00] Compressing and streaming xtrabackup_binlog_info
    170202 15:29:46 [00]        ...done
    170202 15:29:46 Executing FLUSH NO_WRITE_TO_BINLOG ENGINE LOGS...
    xtrabackup: The latest check point (for incremental): '7553506465'
    xtrabackup: Stopping log copying thread.
    .170202 15:29:46 >> log scanned up to (7633113624)
    
    170202 15:29:47 Executing UNLOCK TABLES
    170202 15:29:47 All tables unlocked
    170202 15:29:47 [00] Compressing and streaming ib_buffer_pool to <STDOUT>
    170202 15:29:47 [00]        ...done
    170202 15:29:47 Backup created in directory '/mdata/'
    MySQL binlog position: filename 'bin.000087', position '47439281', GTID of the last change '713a7f7f-6f53-11e6-b7a9-000c29de5d8b:1-341791'
    170202 15:29:47 [00] Compressing and streaming backup-my.cnf
    170202 15:29:47 [00]        ...done
    170202 15:29:47 [00] Compressing and streaming xtrabackup_info
    170202 15:29:47 [00]        ...done
    xtrabackup: Transaction log of lsn (7552104905) to (7633113624) was copied.
    170202 15:29:47 completed OK!               ---- backup finished
    

    #### Session 1 ####

    [   3s] threads: 50, tps: 644.48, reads: 9229.01, writes: 2596.25, response time: 217.91ms (95%), errors: 0.00, reconnects:  0.00
    [   6s] threads: 50, tps: 797.14, reads: 11151.01, writes: 3178.24, response time: 145.87ms (95%), errors: 0.00, reconnects:  0.00
    [   9s] threads: 50, tps: 760.29, reads: 10619.68, writes: 3041.48, response time: 127.44ms (95%), errors: 0.00, reconnects:  0.00
    [  12s] threads: 50, tps: 765.38, reads: 10732.61, writes: 3057.17, response time: 119.18ms (95%), errors: 0.00, reconnects:  0.00
    [  15s] threads: 50, tps: 665.71, reads: 9312.65, writes: 2670.52, response time: 126.99ms (95%), errors: 0.00, reconnects:  0.00
    [  18s] threads: 50, tps: 726.62, reads: 10176.30, writes: 2907.80, response time: 107.87ms (95%), errors: 0.00, reconnects:  0.00
    [  21s] threads: 50, tps: 723.67, reads: 10133.72, writes: 2886.35, response time: 118.22ms (95%), errors: 0.00, reconnects:  0.00
    ......
    [  48s] threads: 50, tps: 630.06, reads: 8851.47, writes: 2513.90, response time: 123.39ms (95%), errors: 0.00, reconnects:  0.00
    [  51s] threads: 50, tps: 502.01, reads: 7043.44, writes: 2048.37, response time: 172.64ms (95%), errors: 0.00, reconnects:  0.00
    [  54s] threads: 50, tps: 506.27, reads: 7053.85, writes: 2002.43, response time: 167.90ms (95%), errors: 0.00, reconnects:  0.00
    [  57s] threads: 50, tps: 513.29, reads: 7214.36, writes: 2064.48, response time: 160.24ms (95%), errors: 0.00, reconnects:  0.00
    [  60s] threads: 50, tps: 516.09, reads: 7232.28, writes: 2074.03, response time: 164.22ms (95%), errors: 0.00, reconnects:  0.00
    [  63s] threads: 50, tps: 537.65, reads: 7474.45, writes: 2138.94, response time: 154.03ms (95%), errors: 0.00, reconnects:  0.00
    [  66s] threads: 50, tps: 560.98, reads: 7879.12, writes: 2228.94, response time: 145.17ms (95%), errors: 0.00, reconnects:  0.00
    [  69s] threads: 50, tps: 470.98, reads: 6606.37, writes: 1891.91, response time: 177.35ms (95%), errors: 0.00, reconnects:  0.00
    [  72s] threads: 50, tps: 575.12, reads: 7994.35, writes: 2276.48, response time: 145.60ms (95%), errors: 0.00, reconnects:  0.00
    [  75s] threads: 50, tps: 497.92, reads: 7032.18, writes: 2035.68, response time: 165.95ms (95%), errors: 0.00, reconnects:  0.00
    [  78s] threads: 50, tps: 541.93, reads: 7553.64, writes: 2126.04, response time: 155.10ms (95%), errors: 0.00, reconnects:  0.00
    [  81s] threads: 50, tps: 504.72, reads: 7052.04, writes: 2038.54, response time: 172.85ms (95%), errors: 0.00, reconnects:  0.00
    [  84s] threads: 50, tps: 544.88, reads: 7657.58, writes: 2166.50, response time: 159.95ms (95%), errors: 0.00, reconnects:  0.00
    [  87s] threads: 50, tps: 567.68, reads: 7927.55, writes: 2277.73, response time: 143.57ms (95%), errors: 0.00, reconnects:  0.00
    [  90s] threads: 50, tps: 561.39, reads: 7896.83, writes: 2259.90, response time: 139.75ms (95%), errors: 0.00, reconnects:  0.00
    [  93s] threads: 50, tps: 566.74, reads: 7925.74, writes: 2249.31, response time: 139.71ms (95%), errors: 0.00, reconnects:  0.00
    [  96s] threads: 50, tps: 589.14, reads: 8236.26, writes: 2346.22, response time: 138.42ms (95%), errors: 0.00, reconnects:  0.00
    [  99s] threads: 50, tps: 640.86, reads: 8912.35, writes: 2563.77, response time: 120.18ms (95%), errors: 0.00, reconnects:  0.00
    [ 102s] threads: 50, tps: 541.98, reads: 7632.44, writes: 2179.27, response time: 159.43ms (95%), errors: 0.00, reconnects:  0.00
    [ 105s] threads: 50, tps: 562.04, reads: 7901.79, writes: 2258.13, response time: 139.09ms (95%), errors: 0.00, reconnects:  0.00
    [ 108s] threads: 50, tps: 589.68, reads: 8242.47, writes: 2339.69, response time: 137.02ms (95%), errors: 0.00, reconnects:  0.00
    ......
    

    From the sysbench output you can see that the backup does have a noticeable impact on throughput.

    To summarize how xtrabackup works (a minimal sketch of the lock window follows this list):

    • Record the LSN of the current redo log
    • Copy the tablespace files & keep copying the redo log that is generated in the meantime
    • FLUSH TABLES WITH READ LOCK; record the current binlog filename, position, and GTID
    • Copy the remaining redo log. Before this step FLUSH NO_WRITE_TO_BINLOG ENGINE LOGS is issued so that any redo still sitting in the server's buffers is fsync'ed to disk, which guarantees that all redo generated during the backup has been captured.
    • Backup finished
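
    The statements in this window can be seen in the Session 2 log above. A minimal sketch of the same sequence, run through the mysql client rather than by xtrabackup itself (the credentials are placeholders):

    mysql -uroot -pmysql -e "
        FLUSH NO_WRITE_TO_BINLOG TABLES;
        FLUSH TABLES WITH READ LOCK;
        SHOW MASTER STATUS;                     -- binlog filename, position and GTID set
        FLUSH NO_WRITE_TO_BINLOG ENGINE LOGS;   -- fsync the redo log before the final log copy
        UNLOCK TABLES;"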

    5. Differences from mysqldump & mydumper

    • flush tables with read lock; record the current filename, pos, gtid
    • start transaction with consistent snapshot;
    • unlock tables;
    • select * from table1, table2, ...

    That is roughly how mysqldump/mydumper work. To see the difference, assume the following:

    The backup starts at 21:00 and finishes at 22:00.

    1. mysqldump: data as of 21:00 => binary log => 22:00 => future
    2. xtrabackup: data as of 22:00 => binary log => future

    In other words, a mysqldump/mydumper backup is consistent as of the moment the backup starts, so point-in-time recovery has to replay the binary log from 21:00 onward, while an xtrabackup backup is consistent as of the moment it finishes, so replay only needs to start from 22:00.
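
    For comparison, a hedged example of how mysqldump implements the steps listed above in a single command (both options are standard mysqldump flags; credentials and the output file are placeholders):

    mysqldump --single-transaction --master-data=2 --user=root --password=mysql dbt3 >dbt3_dump.sql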

    6. On Incremental Backups

    Incremental backups can be taken with xtrabackup, but stock MySQL has no good way of tracking which pages have changed, so xtrabackup has to scan every page of the tablespaces and pick out those whose LSN is newer than the checkpoint of the previous backup. Percona Server has a changed-page tracking feature similar to Oracle's Block Change Tracking. A hedged sketch of the incremental workflow follows.
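
    The directory names below are assumptions, and --no-timestamp is used only to keep the paths predictable:

    # full backup, then an incremental against it
    innobackupex --user=root --password=mysql --no-timestamp /mdata/backup/full
    innobackupex --user=root --password=mysql --no-timestamp --incremental /mdata/backup/inc1 --incremental-basedir=/mdata/backup/full

    # prepare: roll the redo forward into the full backup, then apply the incremental
    innobackupex --apply-log --redo-only /mdata/backup/full
    innobackupex --apply-log /mdata/backup/full --incremental-dir=/mdata/backup/inc1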

    Another way to take incremental backups is to run FLUSH LOGS or FLUSH BINARY LOGS and then back up the binary log files. The difference from a physical incremental backup is that recovery takes much longer, because replaying binary logs is a logical process (see the sketch below).
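
    A sketch of that binlog-based approach; the copy destination is an assumption, while the binlog file name and start position come from xtrabackup_binlog_info above:

    mysql -uroot -pmysql -e "FLUSH BINARY LOGS;"
    cp /mdata/mysql_data/bin.0000* /backup/binlogs/

    # recovery is a logical replay, which is why it is much slower:
    mysqlbinlog --start-position=1279 /backup/binlogs/bin.000087 | mysql -uroot -pmysql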

    In practice there are relatively few scenarios that call for incremental backups with MySQL, since most deployments run one master with one or more slaves and put a higher premium on high availability; full backups are kept mainly to recover from operator mistakes.
