Linux Assignment (6): Redis


Author: 羰基生物 | Published 2020-10-26 15:27

    1. Advantages and disadvantages of RDB and AOF

    RDB advantages:

    • RDB snapshots can be taken at different points in time, so after a failure the service can be restored to any of those points.

    • RDB maximizes Redis performance: the parent process only has to fork a child, the child does all of the saving work, and the parent performs no disk I/O at all.

    • Restoring a large dataset is faster than with AOF.

    • A single compact file that is easy to transfer over the network, well suited to disaster recovery.

    RDB disadvantages:

    • Data written after the last completed snapshot is lost if a failure occurs before the next one finishes; it cannot be recovered.

    • Every snapshot requires the parent process to fork a child. With a large dataset the fork can take a while and may stop the CPU from serving clients for a moment; on a busy CPU this pause can last up to a second.

    AOF advantages:

    • AOF persists updates more frequently than RDB. The default AOF policy is to fsync once per second; with this setting Redis still performs well, and even if the server crashes at most one second of writes is lost.

    • A rewrite mechanism compacts and optimizes the AOF file.

    • After an accidental FLUSHALL, as long as the AOF has not yet been rewritten you can stop the service, remove the FLUSHALL command from the end of the AOF file, and restart Redis to restore the dataset to the state it was in before the FLUSHALL.

    AOF disadvantages:

    • For the same dataset, the AOF file is much larger than the RDB file.

    • Restoring a large dataset is slower than with RDB.
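
    The behaviour above is controlled by a handful of redis.conf directives. A minimal persistence sketch (the save thresholds and file names shown here are the stock defaults, not values required by this assignment):

    # RDB: write a snapshot to dump.rdb when any of these thresholds is reached
    save 900 1          # after 900 s if at least 1 key changed
    save 300 10         # after 300 s if at least 10 keys changed
    save 60 10000       # after 60 s if at least 10000 keys changed
    dbfilename dump.rdb
    dir /apps/redis/data/

    # AOF: append every write command and fsync once per second (the default policy)
    appendonly yes
    appendfilename "appendonly.aof"
    appendfsync everysec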

    2. The master/slave synchronization process

    • Full synchronization

      The first synchronization is a full one. The slave sends a PSYNC command (SYNC before version 2.8); the master forks a child process that runs bgsave in the background while newly written data is recorded in a buffer. When bgsave finishes, the master sends the RDB file to the slave and then sends the buffered data in Redis protocol format. The slave first discards its old data, loads the new RDB file into memory, then applies the buffered data it received, completing the sync.

    • Incremental synchronization

      When synchronization is needed again after a full sync, the slave only has to send its current offset (similar to a binlog position in MySQL) to the master; the master sends everything after that position (including the buffer) to the slave, and the slave loads it into memory, completing the sync.

    • Main steps of master/slave synchronization

      • The slave sends PSYNC to the master.

      • The master receives PSYNC, runs bgsave to generate an RDB file, and records every write command executed from then on in a buffer.

      • When bgsave finishes, the master sends the RDB file to the slave and keeps recording write commands in the buffer.

      • The slave receives the RDB file, discards its old data, and loads the new file into memory.

      • After sending the RDB file, the master starts sending the write commands recorded in the buffer.

      • Once the slave has finished loading the RDB file, it starts receiving and executing the write commands sent by the master.

      • On later syncs after the first one, the slave sends its slave_repl_offset position to the master and only the new data is synchronized; no full sync is performed again.
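
    A quick way to observe these offsets and the replication state is INFO replication. A small sketch for a plain (non-cluster) master/slave pair, reusing the lab IPs and password from section 4 below:

    # on the master: shows role:master, master_repl_offset and each connected slave's offset
    redis-cli -a 123456 -h 10.0.0.107 INFO replication
    # on the slave: shows role:slave, master_link_status and slave_repl_offset
    redis-cli -a 123456 -h 10.0.0.108 INFO replication
    # attaching a slave by hand (SLAVEOF in Redis 4.0; REPLICAOF in 5.0 and later)
    redis-cli -a 123456 -h 10.0.0.108 SLAVEOF 10.0.0.107 6379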

    3. Using Sentinel and how it works

    To explain what sentinel mode is, start from Redis' master/slave mode: a sentinel deployment is simply a master/slave replication setup with sentinel processes added on top, and removing all of the sentinels leaves you with plain master/slave mode. In master/slave mode the master handles write requests and replicates them asynchronously to the slaves, which serve read requests, so every node holds the same data apart from a small replication delay. The problem is high availability: if the master goes down there is no mechanism for automatically electing a new one, an administrator has to designate a node as master by hand and then rebuild the master/slave topology, so this setup cannot really be called highly available. This is where separate machines can be used to monitor the master and slave nodes and recover from failures automatically, which is exactly what sentinel mode does.

    Each sentinel node runs three periodic tasks: heartbeat detection, information exchange, and discovery of cluster topology changes.

    • Task 1, heartbeat detection: every 1 s each sentinel sends a PING command to all masters, slaves, and other sentinels to check whether they respond.

    • Task 2, information exchange: using Redis' pub/sub feature, every 2 s each sentinel publishes what it knows to the master's __sentinel__:hello channel.

    • Task 3, topology discovery: every 10 s each sentinel sends an INFO command to the masters and slaves to discover slave nodes and confirm the master/slave relationships.

    Deploy an odd number of sentinel nodes to prevent split-brain. For example, with three sentinels: when one sentinel's PING checks conclude that the master is down, this is called SDOWN (subjective down); no slave is promoted yet, because the master might not actually be broken. When another sentinel's PING checks also conclude that the master is down, so that more than half of the sentinels agree, the master is objectively down (ODOWN), i.e. it really is considered failed. The sentinels then elect a "leader" among themselves to handle the failover. When promoting a new master, the elected sentinel first excludes slaves that do not qualify, for example slaves that are already offline or that have not replied to the sentinel's INFO command for 5 s. Among the remaining candidates, the replica-priority value from the sentinel configuration is considered (lower wins), then the slave with the larger offset wins, then the one with the smaller run id. After the new master is chosen, the sentinel leader tells the other slaves about it so they start replicating from the new master.
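
    A minimal sentinel.conf sketch for such a three-sentinel deployment (the master name mymaster, the address, and the quorum of 2 are illustrative values, not part of the lab below; each sentinel is started with redis-sentinel /path/to/sentinel.conf):

    port 26379
    # monitor the master named "mymaster"; 2 sentinels must agree before it is marked ODOWN
    sentinel monitor mymaster 10.0.0.107 6379 2
    sentinel auth-pass mymaster 123456
    # how long without a PING reply before this sentinel marks the node SDOWN
    sentinel down-after-milliseconds mymaster 30000
    # how many slaves may resync with the new master at the same time after a failover
    sentinel parallel-syncs mymaster 1
    # give up on a failover that takes longer than this (in ms)
    sentinel failover-timeout mymaster 180000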

    4. Creating and using a Redis Cluster

    • Cluster creation based on Redis 4.0.14

      • Basic setup

        • Every Redis node uses the same hardware configuration, the same password, and the same Redis version.

        • None of the Redis servers may hold any existing data.

        • Three CentOS 7 hosts, each with Redis 4.0.14 compiled and installed and each running two instances on ports 6379 and 6380, simulating six Redis servers:

          10.0.0.107:6379|6380 10.0.0.108:6379|6380 10.0.0.109:6379|6380

    • Installation

      • Install directory: /apps/redis

      • Source package: /root/redis-4.0.14.tar.gz

      • Compile and install

      [root@c7_107 ~]# cd
      [root@c7_107 ~]# tar xf redis-4.0.14.tar.gz
      [root@c7_107 ~]# cd redis-4.0.14
      [root@c7_107 redis-4.0.14]# make PREFIX=/apps/redis install
      
    • Prepare the required directories and files

      [root@c7_107 redis-4.0.14]# ln -s /apps/redis/bin/redis-* /usr/bin/
      [root@c7_107 redis-4.0.14]# mkdir -p /apps/redis/{etc,run,log,data}
      [root@c7_107 redis-4.0.14]# cp redis.conf /apps/redis/etc/
      
      
    • Create the redis service user

      [root@c7_107 redis-4.0.14]# useradd -r -s /sbin/nologin redis
      
      
    • Fix ownership and tune kernel parameters

    [root@c7_107 redis-4.0.14]# chown -R redis.redis /apps/redis
    [root@c7_107 redis-4.0.14]# cat >> /etc/sysctl.conf <<EOF
    net.core.somaxconn = 1024
    vm.overcommit_memory = 1
    EOF
    [root@c7_107 redis-4.0.14]# sysctl -p
    net.core.somaxconn = 1024
    vm.overcommit_memory = 1
    [root@c7_107 redis-4.0.14]# echo 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' >> /etc/rc.d/rc.local
    [root@c7_107 redis-4.0.14]# chmod +x /etc/rc.d/rc.local
    [root@c7_107 redis-4.0.14]# /etc/rc.d/rc.local
    
    • Create the systemd service file
    [root@c7_107 redis-4.0.14]# vim /usr/lib/systemd/system/redis.service 
    
    [Unit]
    Description=Redis persistent key-value database
    After=network.target
    
    [Service]
    ExecStart=/apps/redis/bin/redis-server /apps/redis/etc/redis.conf --supervised systemd
    ExecStop=/bin/kill -s QUIT $MAINPID
    Type=notify
    User=redis
    Group=redis
    RuntimeDirectory=redis
    RuntimeDirectoryMode=0755
    [Install]
    WantedBy=multi-user.target
    
    [root@c7_107 redis-4.0.14]# systemctl daemon-reload
    [root@c7_107 redis-4.0.14]# systemctl enable --now redis
    
    • Edit the Redis instance configuration files (in the chosen install directory, /apps/redis/etc)

      • redis.conf
    [root@c7_107 ~]# systemctl stop redis
    [root@c7_107 ~]# vim /apps/redis/etc/redis.conf
    ## change these options; redis.conf is the default file, listening on port 6379, used by the 6379 instance
    masterauth 123456
    port 6379
    bind 0.0.0.0
    pidfile /apps/redis/run/redis_6379.pid
    logfile /apps/redis/log/redis-6379.log
    dbfilename dump.rdb
    dir /apps/redis/data/
    requirepass 123456
    appendonly no
    appendfilename 'appendonly.aof'
    cluster-enabled yes
    cluster-config-file nodes-6379.conf
    cluster-require-full-coverage no    ## changed from yes to no
    
      • redis6380.conf
    [root@c7_107 ~]# cp /apps/redis/etc/redis.conf /apps/redis/etc/redis6380.conf
    ## copy redis.conf and name the copy redis6380.conf
    [root@c7_107 ~]# vim /apps/redis/etc/redis6380.conf
    masterauth 123456
    port 6380
    bind 0.0.0.0
    pidfile /apps/redis/run/redis_6380.pid
    logfile /apps/redis/log/redis-6380.log
    dbfilename dump6380.rdb
    dir /apps/redis/data/
    requirepass 123456
    appendonly no
    appendfilename 'appendonly6380.aof'
    cluster-enabled yes
    cluster-config-file nodes-6380.conf
    cluster-require-full-coverage no    ## changed from yes to no
    
    • Create the service file for the port-6380 instance

      [root@c7_107 ~]# cp /lib/systemd/system/redis.service /lib/systemd/system/redis6380.service
      [root@c7_107 ~]# sed -i 's/redis.conf/redis6380.conf/' /lib/systemd/system/redis6380.service
      
      
    • Start the services

      [root@c7_107 ~]# systemctl daemon-reload
      [root@c7_107 ~]# systemctl enable --now redis redis6380       
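
      A quick sanity check that both instances are up (the password is the requirepass value set above); each command should reply PONG:

      [root@c7_107 ~]# redis-cli -a 123456 -p 6379 ping
      [root@c7_107 ~]# redis-cli -a 123456 -p 6380 ping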
      
      
    • Prepare the cluster management tool redis-trib.rb

      [root@c7_107 ~]# find / -name redis-trib.rb
      /root/redis-4.0.14/src/redis-trib.rb
      [root@c7_107 ~]# cp /root/redis-4.0.14/src/redis-trib.rb /usr/bin/
      ## copy it to /usr/bin
      [root@c7_107 ~]# redis-trib.rb    ### fails to run, the ruby redis library is missing
      [root@c7_107 ~]# yum install -y rubygems
      [root@c7_107 ~]# gem install redis 
      ...
      ...
      ...
      
      ### work around the version problem: the system Ruby is too old for the redis gem
      # download the Ruby source package
      [root@c7_107 ~]# wget https://cache.ruby-lang.org/pub/ruby/2.5/ruby-2.5.5.tar.gz
      # install the build dependencies
      [root@c7_107 ~]# yum -y install openssl-devel zlib-devel
      # compile and install Ruby
      [root@c7_107 ~]# tar xf ruby-2.5.5.tar.gz
      [root@c7_107 ~]# cd ruby-2.5.5
      [root@c7_107 ruby-2.5.5]# ./configure
      [root@c7_107 ruby-2.5.5]# make -j 4 && make install
      ### log out and log back in
      [root@c7_107 ~]# exit
      ### after logging back in, running it still reports an error
      [root@c7_107 ~]# redis-trib.rb -h 
      ...
      ...
      ...
      ### fix the error
      [root@c7_107 ~]# gem install redis -v 4.1.3 
      ### when the installation finishes it prints
      1 gem installed
      
      
      
    • Change the password redis-trib.rb uses to log in to Redis

    [root@c7_109 ~]# vim /usr/local/lib/ruby/gems/2.5.0/gems/redis-4.1.3/lib/redis/client.rb
    
    # frozen_string_literal: true
    
    require_relative "errors"
    require "socket"
    require "cgi"
    
    class Redis
      class Client
    
        DEFAULTS = {
          :url => lambda { ENV["REDIS_URL"] },
          :scheme => "redis",
          :host => "127.0.0.1",
          :port => 6379,
          :path => nil,
          :timeout => 5.0,
          :password => "123456",   ### changed from nil to "123456"
          :db => 0,
          :driver => nil,
          :id => nil,
          :tcp_keepalive => 0,
          :reconnect_attempts => 1,
          :reconnect_delay => 0,
          :reconnect_delay_max => 0.5,
          :inherit_socket => false
        }
    ...
    ...
    
    • Check the status of all nodes
    [root@c7_107 ~]# systemctl is-active redis redis6380
    active
    active
    [root@c7_108 ~]# systemctl is-active redis redis6380
    active
    active
    [root@c7_109 ~]# systemctl is-active redis redis6380
    active
    active
    
    • Create the cluster (run from any one node)
    [root@c7_109 etc]# redis-trib.rb create --replicas 1 10.0.0.107:6379 10.0.0.108:6379 10.0.0.109:6379 10.0.0.107:6380 10.0.0.108:6380 10.0.0.109:6380
    >>> Creating cluster
    >>> Performing hash slots allocation on 6 nodes...
    Using 3 masters:
    10.0.0.107:6379
    10.0.0.108:6379
    10.0.0.109:6379
    Adding replica 10.0.0.108:6380 to 10.0.0.107:6379
    Adding replica 10.0.0.109:6380 to 10.0.0.108:6379
    Adding replica 10.0.0.107:6380 to 10.0.0.109:6379
    M: f4dd979d98b5a2caac38889f4df7c0c68d792ee6 10.0.0.107:6379
       slots:0-5460 (5461 slots) master
    M: 1ee69217ad5b96d6bc4b79dbb6fbb27dceebd8b1 10.0.0.108:6379
       slots:5461-10922 (5462 slots) master
    M: 3ec1d22efc11c3071d70ca3931b94073f53dcfe9 10.0.0.109:6379
       slots:10923-16383 (5461 slots) master
    S: e64350237902d920468c5b6bb4ad1a7437962cd2 10.0.0.107:6380
       replicates 3ec1d22efc11c3071d70ca3931b94073f53dcfe9
    S: ba12418239b36741005160a733d16bc7386e08d2 10.0.0.108:6380
       replicates f4dd979d98b5a2caac38889f4df7c0c68d792ee6
    S: b82d67c5b9b98a5ae2b05087e1ca59ca52e6a8d5 10.0.0.109:6380
       replicates 1ee69217ad5b96d6bc4b79dbb6fbb27dceebd8b1
    Can I set the above configuration? (type 'yes' to accept): yes
    >>> Nodes configuration updated
    >>> Assign a different config epoch to each node
    >>> Sending CLUSTER MEET messages to join the cluster
    Waiting for the cluster to join.....
    >>> Performing Cluster Check (using node 10.0.0.107:6379)
    M: f4dd979d98b5a2caac38889f4df7c0c68d792ee6 10.0.0.107:6379
       slots:0-5460 (5461 slots) master
       1 additional replica(s)
    S: b82d67c5b9b98a5ae2b05087e1ca59ca52e6a8d5 10.0.0.109:6380
       slots: (0 slots) slave
       replicates 1ee69217ad5b96d6bc4b79dbb6fbb27dceebd8b1
    M: 3ec1d22efc11c3071d70ca3931b94073f53dcfe9 10.0.0.109:6379
       slots:10923-16383 (5461 slots) master
       1 additional replica(s)
    S: e64350237902d920468c5b6bb4ad1a7437962cd2 10.0.0.107:6380
       slots: (0 slots) slave
       replicates 3ec1d22efc11c3071d70ca3931b94073f53dcfe9
    M: 1ee69217ad5b96d6bc4b79dbb6fbb27dceebd8b1 10.0.0.108:6379
       slots:5461-10922 (5462 slots) master
       1 additional replica(s)
    S: ba12418239b36741005160a733d16bc7386e08d2 10.0.0.108:6380
       slots: (0 slots) slave
       replicates f4dd979d98b5a2caac38889f4df7c0c68d792ee6
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    #### you can see the role of each host's 6379 and 6380 instances
    
    [root@c7_107 ~]# systemctl stop redis
    ### after stopping the redis service, host 107's 6379 instance (previously a master) disappears and only its 6380 instance remains; host 108's 6380 instance is promoted to master
    [root@c7_107 ~]# redis-trib.rb check 10.0.0.109:6379
    >>> Performing Cluster Check (using node 10.0.0.109:6379)
    M: 3ec1d22efc11c3071d70ca3931b94073f53dcfe9 10.0.0.109:6379
       slots:10923-16383 (5461 slots) master
       1 additional replica(s)
    M: ba12418239b36741005160a733d16bc7386e08d2 10.0.0.108:6380
       slots:0-5460 (5461 slots) master
       0 additional replica(s)
    S: b82d67c5b9b98a5ae2b05087e1ca59ca52e6a8d5 10.0.0.109:6380
       slots: (0 slots) slave
       replicates 1ee69217ad5b96d6bc4b79dbb6fbb27dceebd8b1
    S: e64350237902d920468c5b6bb4ad1a7437962cd2 10.0.0.107:6380
       slots: (0 slots) slave
       replicates 3ec1d22efc11c3071d70ca3931b94073f53dcfe9
    M: 1ee69217ad5b96d6bc4b79dbb6fbb27dceebd8b1 10.0.0.108:6379
       slots:5461-10922 (5462 slots) master
       1 additional replica(s)
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    
    ### restart the redis instance on host 107: the service comes back, but its role has changed from the original master to slave
    
    [root@c7_107 ~]# systemctl start redis
    [root@c7_107 ~]# redis-trib.rb check 10.0.0.109:6379
    >>> Performing Cluster Check (using node 10.0.0.109:6379)
    M: 3ec1d22efc11c3071d70ca3931b94073f53dcfe9 10.0.0.109:6379
       slots:10923-16383 (5461 slots) master
       1 additional replica(s)
    M: ba12418239b36741005160a733d16bc7386e08d2 10.0.0.108:6380
       slots:0-5460 (5461 slots) master
       1 additional replica(s)
    S: b82d67c5b9b98a5ae2b05087e1ca59ca52e6a8d5 10.0.0.109:6380
       slots: (0 slots) slave
       replicates 1ee69217ad5b96d6bc4b79dbb6fbb27dceebd8b1
    S: e64350237902d920468c5b6bb4ad1a7437962cd2 10.0.0.107:6380
       slots: (0 slots) slave
       replicates 3ec1d22efc11c3071d70ca3931b94073f53dcfe9
    M: 1ee69217ad5b96d6bc4b79dbb6fbb27dceebd8b1 10.0.0.108:6379
       slots:5461-10922 (5462 slots) master
       1 additional replica(s)
    S: f4dd979d98b5a2caac38889f4df7c0c68d792ee6 10.0.0.107:6379
       slots: (0 slots) slave
       replicates ba12418239b36741005160a733d16bc7386e08d2
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    
    
    
    • Verify data writes
    #!/usr/bin/env python3
    # requires the redis-py-cluster package: pip3 install redis-py-cluster
    from rediscluster import RedisCluster

    # any reachable node can serve as a startup node; listing them all adds redundancy
    startup_nodes = [
            {"host": "10.0.0.107", "port": 6379},
            {"host": "10.0.0.107", "port": 6380},
            {"host": "10.0.0.108", "port": 6379},
            {"host": "10.0.0.108", "port": 6380},
            {"host": "10.0.0.109", "port": 6379},
            {"host": "10.0.0.109", "port": 6380}
    ]
    redis_conn = RedisCluster(startup_nodes=startup_nodes, password='123456', decode_responses=True)

    # write 10000 keys; the cluster client routes each key to the master owning its hash slot
    for i in range(0, 10000):
        redis_conn.set('key' + str(i), 'value' + str(i))
        print('key' + str(i) + ':', redis_conn.get('key' + str(i)))
    
    [root@c7_107 ~]# redis-cli -a 123456 -h 10.0.0.108 DBSIZE
    Warning: Using a password with '-a' option on the command line interface may not be safe.
    (integer) 3340
    [root@c7_107 ~]# redis-cli -a 123456 -h 10.0.0.109 DBSIZE
    Warning: Using a password with '-a' option on the command line interface may not be safe.
    (integer) 3329
    [root@c7_107 ~]# redis-cli -a 123456 -h 10.0.0.107 DBSIZE
    Warning: Using a password with '-a' option on the command line interface may not be safe.
    (integer) 3331
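
    The three DBSIZE values add up to 10000, so the keys are spread roughly evenly across the three masters. To fetch an individual key, use redis-cli in cluster mode; whether a given key (key1000 here is just one of the keys written above) lives on the node you connect to depends on its hash slot:

    # show which of the 16384 slots the key hashes to
    [root@c7_107 ~]# redis-cli -a 123456 -h 10.0.0.107 CLUSTER KEYSLOT key1000
    # -c enables cluster mode, so redis-cli follows MOVED redirections to the owning node
    [root@c7_107 ~]# redis-cli -c -a 123456 -h 10.0.0.107 GET key1000
    # without -c, a key stored on another node returns a MOVED error instead of the value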
    
