22 Redis Backup, Security, Performance Testing, Client Connections, and Partitioning


Author: 笑Skr人啊 | Published 2017-11-28 17:10
    • 1 Backup

    The Redis SAVE command creates a backup of the current database.
    
    # Syntax
    redis 127.0.0.1:6379> SAVE 
    
    # Example
    redis 127.0.0.1:6379> SAVE    # This command creates a dump.rdb file in the Redis installation directory.
    OK
    
    
    
    (Figure: Redis default backup settings)

    The save directive specifies that if at least the given number of write operations occur within the given number of seconds, the data is synced to the data file. Several conditions can be combined.
    save <seconds> <changes>
    The default Redis configuration file provides three conditions:
    save 900 1
    save 300 10
    save 60 10000
    These mean: at least 1 change within 900 seconds (15 minutes), at least 10 changes within 300 seconds (5 minutes), or at least 10000 changes within 60 seconds.
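
    If you want to inspect or adjust these thresholds from application code instead of editing redis.conf, the same CONFIG GET/CONFIG SET commands are available through Java clients. Below is a minimal sketch using the Jedis client (the host, port, and the choice of Jedis are assumptions of this example, not part of the original tutorial; in Jedis 3.x CONFIG GET returns a flat name/value list):

    import redis.clients.jedis.Jedis;
    import java.util.List;

    public class SaveConfigExample {
        public static void main(String[] args) {
            // Assumes a local Redis instance listening on the default port.
            try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
                // CONFIG GET save -> e.g. ["save", "900 1 300 10 60 10000"]
                List<String> current = jedis.configGet("save");
                System.out.println("current save rules: " + current);

                // CONFIG SET save: same thresholds as the defaults shown above.
                jedis.configSet("save", "900 1 300 10 60 10000");
            }
        }
    }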

    • 2 Redis Restore

    To restore the data, simply move the backup file (dump.rdb) into the Redis installation directory and start the server. You can find the Redis directory with the CONFIG command, as shown below:
    
    redis 127.0.0.1:6379> CONFIG GET dir  # CONFIG GET dir shows the Redis installation directory, here /usr/local/redis/bin.
    1) "dir"
    2) "/usr/local/redis/bin"
    
    
    # BGSAVE
    You can also create a Redis backup with the BGSAVE command, which performs the save in the background.
    
    # Example
    127.0.0.1:6379> BGSAVE
    Background saving started
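
    The same backup commands can be issued from a Java application through a client library. The sketch below uses the Jedis client (an assumption of this example, as are the host and port); it triggers a background save and then reads the directory where dump.rdb is written:

    import redis.clients.jedis.Jedis;
    import java.util.List;

    public class BackupExample {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
                // BGSAVE: Redis forks a child process and writes dump.rdb in the background.
                String status = jedis.bgsave();
                System.out.println(status);              // "Background saving started"

                // CONFIG GET dir tells us where dump.rdb is (or will be) written,
                // which is also where a backup file should be placed before a restore.
                List<String> dir = jedis.configGet("dir");
                System.out.println(dir);
            }
        }
    }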
    
    
    • 3 Redis Security

    You can require a password in the Redis configuration file, so that clients must authenticate before the Redis service accepts their commands. This makes your Redis service more secure.
    
    # Example
    You can check whether password authentication is enabled with the following command:
    127.0.0.1:6379> CONFIG get requirepass
    1) "requirepass"
    2) ""
    
    By default the requirepass parameter is empty, which means you can connect to the Redis service without a password.
    You can change the parameter with the following command:
    127.0.0.1:6379> CONFIG set requirepass "runoob"
    OK
    127.0.0.1:6379> CONFIG get requirepass
    1) "requirepass"
    2) "runoob"
    Once a password is set, clients must authenticate before they can execute commands on the Redis service.
    
    Authenticating with the password
    
    # Syntax
    The basic syntax of the AUTH command is:
    127.0.0.1:6379> AUTH password
    
    # Example
    127.0.0.1:6379> AUTH "runoob"
    OK
    127.0.0.1:6379> SET mykey "Test value"
    OK
    127.0.0.1:6379> GET mykey
    "Test value"
    
    • 4 Redis Performance Testing

    Redis performance is tested by issuing many commands concurrently, using the redis-benchmark tool.
    
    # Syntax
    redis-benchmark [option] [option value]
    
    
    # Example
    The following example issues 10000 requests of each type to measure performance:
    [root@root src]# redis-benchmark -n 10000
    ====== PING_INLINE ======
      10000 requests completed in 1.09 seconds
      50 parallel clients
      3 bytes payload
      keep alive: 1
    
    0.01% <= 1 milliseconds
    3.22% <= 2 milliseconds
    41.76% <= 3 milliseconds
    70.39% <= 4 milliseconds
    83.98% <= 5 milliseconds
    91.31% <= 6 milliseconds
    96.25% <= 7 milliseconds
    98.52% <= 8 milliseconds
    99.29% <= 9 milliseconds
    99.51% <= 60 milliseconds
    99.55% <= 61 milliseconds
    99.59% <= 62 milliseconds
    99.65% <= 63 milliseconds
    99.76% <= 65 milliseconds
    99.81% <= 66 milliseconds
    99.84% <= 67 milliseconds
    99.89% <= 68 milliseconds
    99.96% <= 69 milliseconds
    100.00% <= 69 milliseconds
    9149.13 requests per second
    
    ====== PING_BULK ======
      10000 requests completed in 1.01 seconds
      50 parallel clients
      3 bytes payload
      keep alive: 1
    
    0.01% <= 1 milliseconds
    1.98% <= 2 milliseconds
    36.14% <= 3 milliseconds
    72.15% <= 4 milliseconds
    88.62% <= 5 milliseconds
    96.20% <= 6 milliseconds
    98.72% <= 7 milliseconds
    99.62% <= 8 milliseconds
    100.00% <= 8 milliseconds
    9920.63 requests per second
    
    ====== SET ======
      10000 requests completed in 1.82 seconds
      50 parallel clients
      3 bytes payload
      keep alive: 1
    
    0.01% <= 1 milliseconds
    1.08% <= 2 milliseconds
    20.53% <= 3 milliseconds
    44.32% <= 4 milliseconds
    59.14% <= 5 milliseconds
    70.00% <= 6 milliseconds
    78.01% <= 7 milliseconds
    83.98% <= 8 milliseconds
    88.31% <= 9 milliseconds
    91.20% <= 10 milliseconds
    92.88% <= 11 milliseconds
    93.91% <= 12 milliseconds
    94.48% <= 13 milliseconds
    94.94% <= 14 milliseconds
    95.31% <= 15 milliseconds
    95.83% <= 16 milliseconds
    96.14% <= 17 milliseconds
    96.35% <= 18 milliseconds
    96.65% <= 19 milliseconds
    96.82% <= 20 milliseconds
    97.01% <= 21 milliseconds
    97.14% <= 22 milliseconds
    97.27% <= 23 milliseconds
    97.36% <= 24 milliseconds
    97.37% <= 25 milliseconds
    97.38% <= 26 milliseconds
    97.39% <= 28 milliseconds
    97.50% <= 29 milliseconds
    97.70% <= 30 milliseconds
    97.94% <= 31 milliseconds
    98.04% <= 34 milliseconds
    98.06% <= 35 milliseconds
    98.21% <= 36 milliseconds
    98.35% <= 37 milliseconds
    98.60% <= 38 milliseconds
    98.70% <= 39 milliseconds
    98.89% <= 40 milliseconds
    98.95% <= 58 milliseconds
    99.01% <= 63 milliseconds
    99.05% <= 64 milliseconds
    99.27% <= 65 milliseconds
    99.41% <= 66 milliseconds
    99.46% <= 67 milliseconds
    99.51% <= 182 milliseconds
    99.58% <= 183 milliseconds
    99.60% <= 196 milliseconds
    99.66% <= 197 milliseconds
    99.75% <= 198 milliseconds
    99.83% <= 199 milliseconds
    99.97% <= 200 milliseconds
    100.00% <= 200 milliseconds
    5491.49 requests per second
    
    ====== GET ======
      10000 requests completed in 1.13 seconds
      50 parallel clients
      3 bytes payload
      keep alive: 1
    
    0.01% <= 1 milliseconds
    1.02% <= 2 milliseconds
    27.99% <= 3 milliseconds
    62.86% <= 4 milliseconds
    79.92% <= 5 milliseconds
    90.47% <= 6 milliseconds
    94.97% <= 7 milliseconds
    96.82% <= 8 milliseconds
    98.04% <= 9 milliseconds
    98.91% <= 10 milliseconds
    99.29% <= 11 milliseconds
    99.49% <= 12 milliseconds
    99.51% <= 17 milliseconds
    99.58% <= 18 milliseconds
    99.67% <= 19 milliseconds
    99.77% <= 20 milliseconds
    99.87% <= 21 milliseconds
    99.95% <= 22 milliseconds
    100.00% <= 22 milliseconds
    8841.73 requests per second
    
    ====== INCR ======
      10000 requests completed in 1.16 seconds
      50 parallel clients
      3 bytes payload
      keep alive: 1
    
    0.01% <= 1 milliseconds
    1.53% <= 2 milliseconds
    34.20% <= 3 milliseconds
    65.18% <= 4 milliseconds
    79.12% <= 5 milliseconds
    86.80% <= 6 milliseconds
    91.70% <= 7 milliseconds
    95.03% <= 8 milliseconds
    96.74% <= 9 milliseconds
    97.67% <= 10 milliseconds
    98.12% <= 11 milliseconds
    98.38% <= 12 milliseconds
    98.62% <= 13 milliseconds
    98.82% <= 14 milliseconds
    99.05% <= 15 milliseconds
    99.23% <= 16 milliseconds
    99.33% <= 17 milliseconds
    99.35% <= 18 milliseconds
    99.42% <= 19 milliseconds
    99.51% <= 20 milliseconds
    99.54% <= 21 milliseconds
    99.64% <= 23 milliseconds
    99.84% <= 24 milliseconds
    99.92% <= 25 milliseconds
    100.00% <= 26 milliseconds
    8620.69 requests per second
    
    ====== LPUSH ======
      10000 requests completed in 1.14 seconds
      50 parallel clients
      3 bytes payload
      keep alive: 1
    
    0.01% <= 1 milliseconds
    1.31% <= 2 milliseconds
    26.22% <= 3 milliseconds
    62.00% <= 4 milliseconds
    80.99% <= 5 milliseconds
    90.09% <= 6 milliseconds
    94.49% <= 7 milliseconds
    96.50% <= 8 milliseconds
    97.63% <= 9 milliseconds
    98.48% <= 10 milliseconds
    98.87% <= 11 milliseconds
    99.01% <= 28 milliseconds
    99.06% <= 29 milliseconds
    99.31% <= 30 milliseconds
    99.56% <= 31 milliseconds
    99.65% <= 32 milliseconds
    99.85% <= 33 milliseconds
    99.91% <= 34 milliseconds
    99.97% <= 35 milliseconds
    100.00% <= 35 milliseconds
    8771.93 requests per second
    
    ====== RPUSH ======
      10000 requests completed in 1.26 seconds
      50 parallel clients
      3 bytes payload
      keep alive: 1
    
    0.01% <= 1 milliseconds
    1.34% <= 2 milliseconds
    27.12% <= 3 milliseconds
    60.09% <= 4 milliseconds
    77.87% <= 5 milliseconds
    88.10% <= 6 milliseconds
    94.10% <= 7 milliseconds
    96.81% <= 8 milliseconds
    97.92% <= 9 milliseconds
    98.44% <= 10 milliseconds
    98.51% <= 12 milliseconds
    98.74% <= 13 milliseconds
    98.82% <= 14 milliseconds
    98.96% <= 16 milliseconds
    99.01% <= 23 milliseconds
    99.03% <= 24 milliseconds
    99.18% <= 25 milliseconds
    99.32% <= 26 milliseconds
    99.44% <= 27 milliseconds
    99.51% <= 142 milliseconds
    99.56% <= 143 milliseconds
    99.79% <= 144 milliseconds
    99.87% <= 145 milliseconds
    99.98% <= 146 milliseconds
    100.00% <= 146 milliseconds
    7923.93 requests per second
    
    ====== LPOP ======
      10000 requests completed in 1.03 seconds
      50 parallel clients
      3 bytes payload
      keep alive: 1
    
    0.01% <= 1 milliseconds
    1.19% <= 2 milliseconds
    29.29% <= 3 milliseconds
    66.46% <= 4 milliseconds
    83.18% <= 5 milliseconds
    93.52% <= 6 milliseconds
    97.92% <= 7 milliseconds
    99.41% <= 8 milliseconds
    99.81% <= 9 milliseconds
    99.95% <= 10 milliseconds
    100.00% <= 10 milliseconds
    9708.74 requests per second
    
    ====== RPOP ======
      10000 requests completed in 1.02 seconds
      50 parallel clients
      3 bytes payload
      keep alive: 1
    
    0.01% <= 1 milliseconds
    2.05% <= 2 milliseconds
    36.07% <= 3 milliseconds
    71.22% <= 4 milliseconds
    86.80% <= 5 milliseconds
    95.47% <= 6 milliseconds
    98.80% <= 7 milliseconds
    99.58% <= 8 milliseconds
    99.89% <= 9 milliseconds
    100.00% <= 9 milliseconds
    9823.18 requests per second
    
    ====== SADD ======
      10000 requests completed in 1.00 seconds
      50 parallel clients
      3 bytes payload
      keep alive: 1
    
    0.01% <= 1 milliseconds
    1.91% <= 2 milliseconds
    42.61% <= 3 milliseconds
    74.22% <= 4 milliseconds
    89.63% <= 5 milliseconds
    97.37% <= 6 milliseconds
    99.48% <= 7 milliseconds
    99.85% <= 8 milliseconds
    99.88% <= 9 milliseconds
    99.99% <= 10 milliseconds
    100.00% <= 10 milliseconds
    9990.01 requests per second
    
    ====== SPOP ======
      10000 requests completed in 1.02 seconds
      50 parallel clients
      3 bytes payload
      keep alive: 1
    
    0.01% <= 1 milliseconds
    1.57% <= 2 milliseconds
    33.63% <= 3 milliseconds
    69.56% <= 4 milliseconds
    87.01% <= 5 milliseconds
    95.36% <= 6 milliseconds
    98.88% <= 7 milliseconds
    99.88% <= 8 milliseconds
    100.00% <= 8 milliseconds
    9803.92 requests per second
    
    ====== LPUSH (needed to benchmark LRANGE) ======
      10000 requests completed in 1.06 seconds
      50 parallel clients
      3 bytes payload
      keep alive: 1
    
    0.01% <= 1 milliseconds
    1.04% <= 2 milliseconds
    25.87% <= 3 milliseconds
    61.58% <= 4 milliseconds
    81.69% <= 5 milliseconds
    91.84% <= 6 milliseconds
    96.92% <= 7 milliseconds
    98.94% <= 8 milliseconds
    99.63% <= 9 milliseconds
    99.83% <= 10 milliseconds
    99.97% <= 11 milliseconds
    100.00% <= 11 milliseconds
    9469.70 requests per second
    
    ====== LRANGE_100 (first 100 elements) ======
      10000 requests completed in 1.15 seconds
      50 parallel clients
      3 bytes payload
      keep alive: 1
    
    0.01% <= 1 milliseconds
    0.27% <= 2 milliseconds
    20.10% <= 3 milliseconds
    55.39% <= 4 milliseconds
    76.93% <= 5 milliseconds
    90.68% <= 6 milliseconds
    97.36% <= 7 milliseconds
    99.25% <= 8 milliseconds
    99.78% <= 9 milliseconds
    99.94% <= 10 milliseconds
    100.00% <= 10 milliseconds
    8673.03 requests per second
    
    ====== LRANGE_300 (first 300 elements) ======
      10000 requests completed in 1.48 seconds
      50 parallel clients
      3 bytes payload
      keep alive: 1
    
    0.01% <= 2 milliseconds
    7.32% <= 3 milliseconds
    43.87% <= 4 milliseconds
    68.40% <= 5 milliseconds
    85.17% <= 6 milliseconds
    94.29% <= 7 milliseconds
    97.38% <= 8 milliseconds
    98.82% <= 9 milliseconds
    99.56% <= 10 milliseconds
    99.82% <= 11 milliseconds
    99.94% <= 12 milliseconds
    99.98% <= 13 milliseconds
    100.00% <= 13 milliseconds
    6752.19 requests per second
    
    ====== LRANGE_500 (first 450 elements) ======
      10000 requests completed in 1.65 seconds
      50 parallel clients
      3 bytes payload
      keep alive: 1
    
    0.01% <= 1 milliseconds
    0.03% <= 2 milliseconds
    0.81% <= 3 milliseconds
    32.20% <= 4 milliseconds
    60.47% <= 5 milliseconds
    79.42% <= 6 milliseconds
    90.72% <= 7 milliseconds
    95.86% <= 8 milliseconds
    98.50% <= 9 milliseconds
    99.77% <= 10 milliseconds
    100.00% <= 10 milliseconds
    6060.61 requests per second
    
    ====== LRANGE_600 (first 600 elements) ======
      10000 requests completed in 1.88 seconds
      50 parallel clients
      3 bytes payload
      keep alive: 1
    
    0.01% <= 1 milliseconds
    0.03% <= 2 milliseconds
    0.17% <= 3 milliseconds
    16.70% <= 4 milliseconds
    47.74% <= 5 milliseconds
    68.38% <= 6 milliseconds
    84.82% <= 7 milliseconds
    93.00% <= 8 milliseconds
    97.03% <= 9 milliseconds
    98.74% <= 10 milliseconds
    99.48% <= 11 milliseconds
    99.61% <= 12 milliseconds
    99.65% <= 13 milliseconds
    99.70% <= 14 milliseconds
    99.71% <= 15 milliseconds
    99.75% <= 16 milliseconds
    99.89% <= 17 milliseconds
    99.99% <= 18 milliseconds
    100.00% <= 18 milliseconds
    5324.81 requests per second
    
    ====== MSET (10 keys) ======
      10000 requests completed in 1.04 seconds
      50 parallel clients
      3 bytes payload
      keep alive: 1
    
    0.01% <= 1 milliseconds
    0.93% <= 2 milliseconds
    25.54% <= 3 milliseconds
    61.25% <= 4 milliseconds
    81.86% <= 5 milliseconds
    92.88% <= 6 milliseconds
    97.90% <= 7 milliseconds
    99.27% <= 8 milliseconds
    99.73% <= 9 milliseconds
    100.00% <= 9 milliseconds
    9633.91 requests per second
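
    redis-benchmark is the standard tool for this. If you only need a rough sanity check from Java code, a simple timing loop with the Jedis client (an assumption of this sketch) looks like the following; note that it uses a single blocking connection, unlike the 50 parallel clients above, so its numbers are not comparable to redis-benchmark:

    import redis.clients.jedis.Jedis;

    public class RoughTimingExample {
        public static void main(String[] args) {
            int requests = 10000;
            try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
                long start = System.nanoTime();
                for (int i = 0; i < requests; i++) {
                    jedis.set("benchkey", "value");      // one blocking round trip per command
                }
                double seconds = (System.nanoTime() - start) / 1e9;
                System.out.printf("%d SETs in %.2f s (%.0f requests per second)%n",
                        requests, seconds, requests / seconds);
            }
        }
    }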
    
    • 5 Redis Client Connections

    Redis accepts client connections on a listening TCP port or a Unix socket. When a connection is established, Redis performs the following steps internally:
        First, the client socket is put into non-blocking mode, because Redis handles network events with non-blocking multiplexed I/O.
        Then the TCP_NODELAY option is set on the socket to disable the Nagle algorithm.
        Finally, a readable file event is created so that Redis is notified when the client sends data on this socket.
    
    # Maximum number of connections
    In Redis 2.4 the maximum number of client connections was hard-coded; since Redis 2.6 it is configurable.
    The default value of maxclients is 10000; you can also change it in redis.conf.
    
    127.0.0.1:6379> config get maxclients
    1) "maxclients"
    2) "10000"
    

    Example
    The following example sets the maximum number of connections to 100000 when starting the server:
    redis-server --maxclients 100000
    Client commands
    |No. |Command        |Description|
    |1   |CLIENT LIST    |Returns the list of clients connected to the Redis server|
    |2   |CLIENT SETNAME |Sets a name for the current connection|
    |3   |CLIENT GETNAME |Returns the name set with CLIENT SETNAME|
    |4   |CLIENT PAUSE   |Suspends client connections for the specified time in milliseconds|
    |5   |CLIENT KILL    |Closes a client connection|
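
    Several of these commands are also exposed by Jedis. A small sketch (the connection details and the connection name are assumptions of this example):

    import redis.clients.jedis.Jedis;

    public class ClientCommandsExample {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
                // CLIENT SETNAME / CLIENT GETNAME for the current connection.
                jedis.clientSetname("my-app");
                System.out.println(jedis.clientGetname());  // "my-app"

                // CLIENT LIST: one line of attributes per connected client.
                System.out.println(jedis.clientList());
            }
        }
    }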

    • 6 Redis Pipelining

    Redis is a TCP service based on a client-server model and a request/response protocol. This means that a request normally follows these steps:
        The client sends a query to the server and reads from the socket, usually in a blocking way, waiting for the server's response.
        The server processes the command and sends the result back to the client.
    
    
    # Redis pipelining
    With pipelining, the client can keep sending requests to the server without waiting for the replies, and finally read all the server's replies in one go.
    
    # Example
    To see Redis pipelining in action, just start a Redis instance and run the following command:
    $(echo -en "PING\r\n SET runoobkey redis\r\nGET runoobkey\r\nINCR visitor\r\nINCR visitor\r\nINCR visitor\r\n"; sleep 10) | nc localhost 6379
    
    +PONG
    +OK
    redis
    :1
    :2
    :3
    In this example we first check that the Redis service is available with the PING command, then set the key runoobkey to the value redis, read it back, and increment visitor three times.
    In the output you can see that all of these commands were submitted to the Redis service at once, and all the replies were read back at once.
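
    The same batch of commands can be pipelined from Java. A minimal sketch using the Jedis Pipeline API (the key names simply follow the example above):

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.Pipeline;
    import redis.clients.jedis.Response;

    public class PipelineExample {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
                Pipeline p = jedis.pipelined();

                // Commands are only queued locally; nothing has been read back yet.
                Response<String> pong  = p.ping();
                p.set("runoobkey", "redis");
                Response<String> value = p.get("runoobkey");
                Response<Long>   first = p.incr("visitor");
                p.incr("visitor");
                p.incr("visitor");

                // Flush all queued commands and read every reply in one go.
                p.sync();

                System.out.println(pong.get());   // PONG
                System.out.println(value.get());  // redis
                System.out.println(first.get());  // value of visitor after the first INCR
            }
        }
    }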
    The advantage of pipelining
    The most significant advantage of pipelining is the improved performance of the Redis service.
    Some benchmark data
    The following test uses the Ruby client for Redis, which supports pipelining, to measure how much pipelining improves throughput.
    require 'rubygems'
    require 'redis'

    def bench(descr)
      start = Time.now
      yield
      puts "#{descr} #{Time.now-start} seconds"
    end

    def without_pipelining
      r = Redis.new
      10000.times {
        r.ping
      }
    end

    def with_pipelining
      r = Redis.new
      r.pipelined {
        10000.times {
          r.ping
        }
      }
    end

    bench("without pipelining") {
      without_pipelining
    }
    bench("with pipelining") {
      with_pipelining
    }
    Running this simple script on a Mac OS X system inside a LAN shows that once pipelining is enabled, the round-trip overhead drops considerably:
    without pipelining 1.185238 seconds
    with pipelining 0.250783 seconds
    As you can see, with pipelining enabled the throughput improved by roughly 5x.
    
    
    • 7 Redis Partitioning

    Partitioning is the process of splitting your data across multiple Redis instances, so that every instance only contains a subset of your keys.
    
    # Advantages of partitioning
        It allows for much larger databases, using the sum of the memory of many computers.
        It allows scaling the computational power to multiple cores and multiple computers, and the network bandwidth to multiple computers and network adapters.
    
    # Disadvantages of partitioning
        Some Redis features don't work very well with partitioning:
            Operations involving multiple keys are usually not supported. For instance, you can't compute the intersection of two sets when they are stored on different Redis instances.
            Redis transactions involving multiple keys cannot be used.
            When partitioning is used, data handling is more complex: for example, you have to handle multiple RDB/AOF files, and backing up the data requires collecting the persistence files from multiple instances and hosts.
            Adding or removing capacity can also be complex. Redis Cluster mostly supports transparent rebalancing of data, with the ability to add and remove nodes at runtime, but other systems such as client-side partitioning and proxies don't support this feature. However, a technique called presharding helps in this regard.
    
    # Partition types
        There are two main types of partitioning in Redis. Suppose we have four Redis instances R0, R1, R2, R3, and many keys representing users such as user:1, user:2, and so on. There are different ways to select in which instance a given key is stored; in other words, there are different systems to map a given key to a given Redis server.
    
    # Range partitioning
        The simplest way to partition is range partitioning: mapping ranges of objects to specific Redis instances.
    For example, users with IDs from 0 to 10000 go to instance R0, users with IDs from 10001 to 20000 go to instance R1, and so on.
    This scheme works and is actually used in practice. The downside is that you need a table mapping ranges to instances. The table has to be managed, and you need such a table for every kind of object, so range partitioning is often undesirable for Redis.
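
    A sketch of such a mapping table in Java, using a sorted map from the lower bound of each ID range to the Redis instance that owns it (the instance addresses are made-up assumptions):

    import java.util.TreeMap;
    import redis.clients.jedis.Jedis;

    public class RangePartitionExample {
        // Lower bound of each ID range -> the instance responsible for it.
        private static final TreeMap<Long, Jedis> RANGES = new TreeMap<>();
        static {
            RANGES.put(0L,     new Jedis("10.0.0.1", 6379)); // R0: IDs 0..10000
            RANGES.put(10001L, new Jedis("10.0.0.2", 6379)); // R1: IDs 10001..20000
            RANGES.put(20001L, new Jedis("10.0.0.3", 6379)); // R2: IDs 20001..30000
            RANGES.put(30001L, new Jedis("10.0.0.4", 6379)); // R3: IDs 30001 and up
        }

        static Jedis instanceFor(long userId) {
            // floorEntry finds the range whose lower bound is <= userId.
            return RANGES.floorEntry(userId).getValue();
        }

        public static void main(String[] args) {
            instanceFor(12345).set("user:12345", "some value"); // stored on R1
        }
    }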
    
    # Hash partitioning
        An alternative to range partitioning is hash partitioning. This scheme works with any key, without requiring keys in the form object_name:<id>, and it is as simple as this:
        Take the key and use a hash function to turn it into a number, for example the crc32 hash function. Applying crc32 to the key foobar outputs something like the integer 93024922.

        Use a modulo operation with this number to turn it into a number between 0 and 3, so that it can be mapped to one of the four Redis instances. 93024922 % 4 = 2, so the key foobar is stored in instance R2. Note: the modulo operation returns the remainder of a division, and is usually implemented with the % operator in many programming languages.
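
    A minimal sketch of this kind of client-side hash partitioning in Java, using java.util.zip.CRC32 as the hash function (the instance addresses are made-up assumptions; production clients often use consistent hashing rather than a plain modulo, so that adding or removing instances moves fewer keys):

    import java.nio.charset.StandardCharsets;
    import java.util.zip.CRC32;
    import redis.clients.jedis.Jedis;

    public class HashPartitionExample {
        // R0..R3 from the example above; the addresses are placeholders.
        private static final Jedis[] INSTANCES = {
            new Jedis("10.0.0.1", 6379),
            new Jedis("10.0.0.2", 6379),
            new Jedis("10.0.0.3", 6379),
            new Jedis("10.0.0.4", 6379),
        };

        static Jedis instanceFor(String key) {
            CRC32 crc = new CRC32();
            crc.update(key.getBytes(StandardCharsets.UTF_8));
            // crc32(key) % 4 picks one of the four instances.
            int index = (int) (crc.getValue() % INSTANCES.length);
            return INSTANCES[index];
        }

        public static void main(String[] args) {
            instanceFor("foobar").set("foobar", "some value");
        }
    }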
    
