redis.conf configuration details (4.0)

Author: lazyguy | Published 2019-05-07 10:48

When starting ./redis-server, the first argument must be the path to the configuration file:

    ./redis-server /path/to/redis.conf

Unit conversion rules in the configuration file:

k is interpreted as 1000, while kb means exactly 1024; unit suffixes are case-insensitive.

    1k  => 1000 bytes
    1kb => 1024 bytes
    1m  => 1000000 bytes
    1mb => 1024*1024 bytes
    1g  => 1000000000 bytes
    1gb => 1024*1024*1024 bytes

Including multiple configuration files

Redis also supports including other configuration files. This is mainly used for a kind of "polymorphism": a base file carries the common settings, and per-server files included from it carry the overrides.
Note: the CONFIG REWRITE command rewrites redis.conf itself and never touches included files. Redis applies directives in order, and for a repeated directive the last occurrence wins. So if you want an included file to always take effect, put the include at the end of the file; otherwise, put it at the beginning.

    include /path/to/local.conf
    include /path/to/other.conf
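
A small sketch of the "last directive wins" rule (file names and values here are illustrative, not defaults):

    # redis.conf
    maxmemory 1gb
    include /path/to/local.conf   # placed last, so its settings win

    # /path/to/local.conf
    maxmemory 2gb                 # effective value: 2gb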

Extra modules

Modules are loaded at server startup. If a module cannot be loaded, the server aborts. Multiple loadmodule directives may be used.

    loadmodule /path/to/my_module.so
    loadmodule /path/to/other_module.so

Network

The bind directive

To be clear: bind configures the addresses of the network interfaces this Redis server listens on. If it is not set, or is set to bind 0.0.0.0, the server listens on all interfaces. The default shipped configuration is bind 127.0.0.1, meaning only the local machine can connect.

    bind 192.168.1.100 10.0.0.1
    bind 127.0.0.1 ::1

The protected-mode directive

Enabled by default. With protected mode on, if the server is not explicitly bound to a set of addresses via bind and no access password is configured, Redis only accepts connections from the loopback addresses (127.0.0.1 and ::1) and Unix domain sockets, refusing clients from other hosts.

    protected-mode yes
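
A hedged sketch of safely accepting remote clients (the bind address and password are placeholders):

    bind 0.0.0.0
    requirepass use-a-long-random-password-here
    protected-mode yes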

The port directive

Default 6379. If set to 0, Redis does not listen on any TCP socket.

    port 6379

The tcp-backlog directive

The size of the queue of established connections waiting to be accepted; the default is below.
Because the Linux kernel silently caps this value at /proc/sys/net/core/somaxconn, you need to raise both somaxconn and tcp_max_syn_backlog in the kernel for a larger backlog to take effect.
https://blog.csdn.net/chuixue24/article/details/80486866

    tcp-backlog 511
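
For example, to make the default tcp-backlog 511 effective on Linux (values are illustrative; persist them in /etc/sysctl.conf if needed):

    sysctl -w net.core.somaxconn=511
    sysctl -w net.ipv4.tcp_max_syn_backlog=511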

The unixsocket directive

Used for IPC (inter-process communication); not enabled by default.

    unixsocket /tmp/redis.sock
    unixsocketperm 700
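
With the socket enabled as above, a client can skip TCP entirely, e.g.:

    redis-cli -s /tmp/redis.sock ping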

The timeout directive

Close a client connection after it has been idle (i.e. has sent no commands) for this many seconds. The default 0 disables the feature.

    timeout 0

TCP keepalive

The interval, in seconds, of TCP's own keepalive probes used to detect dead peers.

    tcp-keepalive 300

General

daemonize — By default Redis does not run as a daemon on Linux. When enabled, Redis writes a pid file at /var/run/redis.pid, used as a single-instance lock. Example: daemonize no
supervised — Interaction with a supervision tree such as upstart or systemd; options are no, upstart, systemd, and auto. Example: supervised no
pidfile — The pid file written when Redis runs as a daemon, /var/run/redis.pid by default. If the file cannot be created, Redis still starts and runs normally. Example: pidfile /var/run/redis_6379.pid
loglevel — From lowest to highest: debug, verbose, notice, warning. notice is recommended in production. Example: loglevel notice
logfile — Where logs go. An empty string sends them to standard output, e.g. to the console when started there; if Redis runs daemonized with no log file set, logs go to /dev/null. Example: logfile ""
syslog-enabled — Also send logs to the system logger. Example: syslog-enabled no
syslog-ident — Specify the syslog identity. Example: syslog-ident redis
syslog-facility — Specify the syslog facility; must be USER or LOCAL0 through LOCAL7. Example: syslog-facility local0
databases — The number of logical databases (selected with SELECT; the default database is 0). See https://stackoverflow.com/questions/16221563/whats-the-point-of-multiple-redis-databases Example: databases 16
always-show-logo — Print the cool ASCII-art logo on startup. Example: always-show-logo yes
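
A minimal daemonized setup combining the directives above (paths are illustrative):

    daemonize yes
    pidfile /var/run/redis_6379.pid
    loglevel notice
    logfile /var/log/redis_6379.log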

Snapshotting

save <seconds> <changes> — Take an RDB snapshot after <seconds> seconds if at least <changes> write operations occurred. Comment out every save line to disable persistence entirely; it can also be turned off at runtime with save "". Examples:
    save 900 1
    save 300 10
    save 60 10000
stop-writes-on-bgsave-error — When RDB persistence is enabled and the latest background save failed, Redis refuses client writes as a hard way of getting your attention. If you have other effective monitoring in place and don't want Redis to alert you this way, turn it off. Example: stop-writes-on-bgsave-error yes
rdbcompression — Compress the dump file. This should almost always be yes, unless you want to save CPU in the saving process. Example: rdbcompression yes
rdbchecksum — Since RDB format version 5, a CRC64 checksum is written and verified on save/load, making corruption easier to detect at a cost of about 10% performance. Example: rdbchecksum yes
dbfilename — The RDB snapshot output file, dump.rdb by default. Example: dbfilename dump.rdb
dir — Redis's working directory, the current directory by default. Note this must be a directory, not a file name; the dbfilename file is created inside it. Example: dir ./
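
As noted above, snapshots can also be switched off without a restart, e.g.:

    redis-cli CONFIG SET save ""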

Replication (REPLICATION)

slaveof <masterip> <masterport> — Master/slave replication; this directive declares the server a slave of another server. Example: slaveof 192.168.1.212 6379
masterauth <master-password> — If the master is password-protected, set the password here.
slave-serve-stale-data — Whether a slave still answers clients while it is syncing with, or disconnected from, its master; default yes. Example: slave-serve-stale-data yes
slave-read-only — Whether the slave accepts only read requests; default yes. It seems best never to turn this off. Example: slave-read-only yes
repl-diskless-sync — Whether full resynchronization streams the RDB over the socket directly, without touching disk. Still experimental in 4.0. It is faster, and before each transfer the master waits a while so that more slaves can be served by a single sync. Example: repl-diskless-sync no
repl-diskless-sync-delay — The delay before a diskless transfer starts; default 5 seconds. Example: repl-diskless-sync-delay 5
repl-ping-slave-period — How often slaves ping the master; default 10 seconds. Example: repl-ping-slave-period 10
repl-timeout — The replication timeout, 60 seconds by default; it must be larger than repl-ping-slave-period. Example: repl-timeout 60
repl-disable-tcp-nodelay — When enabled, Redis uses less bandwidth for replication at the cost of up to 40 ms of extra delay; useful when the link between master and slave is constrained. Naturally off by default. Example: repl-disable-tcp-nodelay no
repl-backlog-size — The buffer in which the master caches commands for partial resynchronization; the larger it is, the less likely a disconnect forces a full resync. Example: repl-backlog-size 1mb
repl-backlog-ttl — After the master has had no connected slaves for this many seconds, the backlog is freed. A slave never frees its backlog, since it needs the data in it to negotiate with the master how far replication got when it reconnects. Example: repl-backlog-ttl 3600
slave-priority — Used by Sentinel to pick a new master; the lower, the higher the priority. Example: slave-priority 100 (Shouldn't this be something like a highest-zxid election instead?)
min-slaves-to-write — The master stops accepting writes when fewer than N slaves are connected. Disabled by default.
min-slaves-max-lag — The maximum lag, in seconds, tolerated for the slaves counted above.
slave-announce-ip — Explicitly report this IP to the master, for when NAT or port forwarding hides the real address. Example: slave-announce-ip 5.5.5.5
slave-announce-port — Explicitly report this port to the master, likewise for NAT setups. Example: slave-announce-port 1234

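A minimal slave-side sketch combining the directives above (the address and password are placeholders):

    slaveof 192.168.1.212 6379
    masterauth some-master-password   # only if the master sets requirepass
    slave-read-only yes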

Security

If the password is not strong, it is better not to set one at all: Redis is extremely fast, so a weak password can be brute-forced very quickly.

    requirepass xxxxx

Renaming commands

A dangerous command can be renamed, or disabled outright by renaming it to the empty string:

    rename-command CONFIG ""
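
Instead of disabling CONFIG, the stock redis.conf suggests renaming it to something hard to guess, for example:

    rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52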
    

Clients

The maximum number of simultaneous client connections is 10000 by default (or lower, if the system's open-file limit is smaller). Once the limit is reached, new connections receive the error "max number of clients reached".

    maxclients 10000
    

Memory management

maxmemory caps the memory Redis may use, working together with an eviction policy. When the limit is reached, keys are evicted according to maxmemory-policy; the default noeviction instead rejects writes with an error. maxmemory-samples sets how many keys the approximate LRU/LFU algorithms sample; 5 is a reasonable trade-off between accuracy and CPU.

    maxmemory-policy noeviction
    maxmemory-samples 5
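
A hedged eviction sketch (the 100mb cap is purely illustrative): cap memory and evict any key by approximate LRU.

    maxmemory 100mb
    maxmemory-policy allkeys-lru
    maxmemory-samples 5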
    

Lazy freeing (asynchronous deletion)

Redis normally deletes objects synchronously, blocking while a large object is reclaimed. These options move specific server-side deletions to a background thread instead: eviction under maxmemory, expired-key deletion, internal deletes such as the one RENAME performs on an existing destination key, and the flush a slave performs during full resynchronization.

    lazyfree-lazy-eviction no
    lazyfree-lazy-expire no
    lazyfree-lazy-server-del no
    slave-lazy-flush no
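
The same trade-off exists per command: DEL is synchronous, while UNLINK (new in 4.0) reclaims memory in a background thread.

    redis-cli DEL bigkey      # may block the server on a huge key
    redis-cli UNLINK bigkey   # returns quickly; memory freed lazily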

APPEND ONLY MODE (AOF persistence)

AOF is off by default. Note that AOF and RDB can be enabled at the same time without conflict; when both files exist, the AOF file is preferred at startup because it is guaranteed to hold the most complete data.

      appendonly no
      appendfilename "appendonly.aof"
    

The default fsync() frequency is everysec, once per second. See: http://antirez.com/post/redis-persistence-demystified.html

        appendfsync always
        appendfsync everysec
        appendfsync no
    

When a BGSAVE or BGREWRITEAOF is in progress, fsync() can block for a long time against heavy disk I/O. Setting this to yes skips fsync() in the main process while a rewrite is running; durability for that period is then comparable to appendfsync no (up to ~30 seconds of writes at risk on Linux).

        no-appendfsync-on-rewrite no
    

Automatic AOF rewrite: rewrite when the file has grown by the given percentage (default 100%) over its size after the previous rewrite, but only once it exceeds the minimum size.

        auto-aof-rewrite-percentage 100
        auto-aof-rewrite-min-size 64mb
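
A rewrite can also be triggered manually, regardless of these thresholds:

    redis-cli BGREWRITEAOF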
    

Whether to accept and load an AOF file whose tail was truncated (for example after a crash); default yes, in which case Redis loads as much data as it can and logs the event. Note this only covers truncation at the end; if the AOF is corrupted in the middle, the server still exits with an error.

        aof-load-truncated yes
    

New in 4.0: when rewriting the AOF, Redis can emit an RDB preamble, so the rewritten file starts with an RDB-format dump followed by plain AOF commands; this makes rewrites and restarts faster. Off by default in 4.0.

        aof-use-rdb-preamble no
    

LUA scripts

lua-time-limit is the maximum time, in milliseconds, a Lua script may run. When a script exceeds it, Redis logs the fact and starts answering other clients with a BUSY error; at that point only the SCRIPT KILL and SHUTDOWN NOSAVE commands are accepted. SCRIPT KILL stops a script that has not yet performed any write; once it has written, SHUTDOWN NOSAVE is the only way out. A value of 0 or less disables the limit.

        lua-time-limit 5000
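
For example, when a runaway script makes the server answer BUSY:

    redis-cli SCRIPT KILL        # works if the script has not written yet
    redis-cli SHUTDOWN NOSAVE    # the last resort once it has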
    

Redis cluster (REDIS CLUSTER)

An ordinary Redis server cannot be added to a cluster; only a server started as a cluster node from the beginning can join one.

cluster-enabled — Start the server as a cluster node. Example: cluster-enabled yes
cluster-config-file — A file the cluster node itself creates and updates; never edit it by hand. Example: cluster-config-file nodes-6379.conf
cluster-node-timeout — The time, in milliseconds, after which an unreachable node is considered failed. Most other internal time limits are multiples of this value. Example: cluster-node-timeout 15000
cluster-slave-validity-factor — Bounds how stale a slave's data may be for it to attempt promotion to master: the threshold is (node-timeout * slave-validity-factor) + repl-ping-slave-period. The larger the factor, the older the data a promoted slave may serve; at 0, any slave is always eligible. Example: cluster-slave-validity-factor 10
cluster-migration-barrier — A master lends a "spare" slave to an orphan master only if it keeps at least this many slaves for itself; below that, it refuses. Default 1, i.e. only a master that has two slaves can give one away. https://blog.csdn.net/u011535541/article/details/78625330 Example: cluster-migration-barrier 1
cluster-require-full-coverage — When some hash slots are uncovered (a master and all of its slaves are down), whether the cluster keeps serving queries. By default it does not. Example: cluster-require-full-coverage yes
cluster-slave-no-failover — If this server is a slave, prevent it from ever starting an automatic failover (a manual failover still works; useful, e.g., when slaves in a backup data center should never be promoted automatically). Naturally no by default. Example: cluster-slave-no-failover no
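
A minimal per-node sketch for a cluster member (the port is illustrative; repeat with distinct ports and files for each node):

    port 7000
    cluster-enabled yes
    cluster-config-file nodes-7000.conf
    cluster-node-timeout 15000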

Cluster support for Docker/NAT (CLUSTER DOCKER/NAT support)

In Docker and other NAT environments, the address a node auto-detects for itself is not the address its peers must actually use, so a node can announce a specific IP, client port, and cluster bus port to the rest of the cluster:

        cluster-announce-ip 10.1.1.5
        cluster-announce-port 6379
        cluster-announce-bus-port 6380
    

Slow log (SLOWLOG)

The time measured excludes I/O (accepting the client, sending the reply); it is only the time actually spent executing the command.

slowlog-log-slower-than — In microseconds, so by default any command slower than 10 milliseconds counts as slow and is recorded. Example: slowlog-log-slower-than 10000
slowlog-max-len — The length of the in-memory queue of slow commands. The number can be anything, but the queue consumes memory; SLOWLOG RESET empties it and reclaims that memory. Example: slowlog-max-len 128
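
Inspecting the slow log from redis-cli, for example:

    redis-cli CONFIG SET slowlog-log-slower-than 10000
    redis-cli SLOWLOG GET 10    # the 10 most recent entries
    redis-cli SLOWLOG RESET     # empty the queue and free its memory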

Latency monitor (LATENCY MONITOR)

The latency monitor samples operations at runtime and collects data about events where the server stalled for at least the configured threshold, in milliseconds; the data is queried with the LATENCY command family. 0 disables it, which is the default, since the monitor is not needed when there is no latency problem and it has a small performance cost. It can be enabled at runtime with CONFIG SET latency-monitor-threshold <milliseconds>.

    latency-monitor-threshold 0
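
Enabling and inspecting it at runtime, for example (the 100 ms threshold is illustrative):

    redis-cli CONFIG SET latency-monitor-threshold 100
    redis-cli LATENCY LATEST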

Keyspace event notifications

Keyspace notifications let clients subscribe, via Pub/Sub, to events affecting keys, such as expirations or deletions. The value is a string of flags selecting event classes (K for keyspace channels, E for keyevent channels, then per-type flags such as g, $, l, s, h, z, x, e, A). The empty string, the default, disables the feature entirely, since most users don't need it and it costs some CPU.

    notify-keyspace-events ""
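
A common example: getting notified when keys expire ("Ex" enables keyevent notifications for expired events).

    redis-cli CONFIG SET notify-keyspace-events Ex
    redis-cli PSUBSCRIBE '__keyevent@0__:expired'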

Advanced config (ADVANCED CONFIG)

Hashes are encoded using a memory-efficient data structure when they have a small number of entries and the biggest entry does not exceed a given threshold. These thresholds can be configured using the following directives:

    hash-max-ziplist-entries 512
    hash-max-ziplist-value 64
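
The encoding switch can be observed with OBJECT ENCODING, for instance:

    redis-cli HSET myhash field value
    redis-cli OBJECT ENCODING myhash    # "ziplist" while within the limits above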

Lists are also encoded in a special way to save a lot of space. The number of entries allowed per internal list node can be specified as a fixed maximum size or a maximum number of elements. For a fixed maximum size, use -5 through -1, meaning:

    -5: max size: 64 Kb  <-- not recommended for normal workloads
    -4: max size: 32 Kb  <-- not recommended
    -3: max size: 16 Kb  <-- probably not recommended
    -2: max size: 8 Kb   <-- good
    -1: max size: 4 Kb   <-- good

Positive numbers mean store up to exactly that number of elements per list node. The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size), but if your use case is unique, adjust the settings as necessary.

    list-max-ziplist-size -2

Lists may also be compressed. Compress depth is the number of quicklist ziplist nodes from each side of the list to exclude from compression. The head and tail of the list are always uncompressed for fast push/pop operations. Settings are:

    0: disable all list compression
    1: don't start compressing until after 1 node into the list, going from either the head or tail; so with [head]->node->node->...->node->[tail], [head] and [tail] are always uncompressed while inner nodes compress
    2: [head]->[next]->node->node->...->node->[prev]->[tail]; don't compress head, head->next, tail->prev, or tail, but compress all nodes between them
    3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail], etc.

    list-compress-depth 0

Sets have a special encoding in just one case: when a set is composed of just strings that happen to be integers in radix 10 in the range of 64-bit signed integers. The following setting limits the size of the set for this special memory-saving encoding to be used:

    set-max-intset-entries 512

Similarly to hashes and lists, sorted sets are also specially encoded in order to save a lot of space. This encoding is only used when the length and elements of a sorted set are below the following limits:

    zset-max-ziplist-entries 128
    zset-max-ziplist-value 64

HyperLogLog sparse representation bytes limit. The limit includes the 16-byte header. When a HyperLogLog using the sparse representation crosses this limit, it is converted into the dense representation. A value greater than 16000 is totally useless, since at that point the dense representation is more memory efficient. The suggested value is ~3000 in order to have the benefits of the space-efficient encoding without slowing down PFADD too much, which is O(N) with the sparse encoding. The value can be raised to ~10000 when CPU is not a concern but space is, and the data set is composed of many HyperLogLogs with cardinality in the 0 - 15000 range.

    hll-sparse-max-bytes 3000

Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in order to help rehashing the main Redis hash table (the one mapping top-level keys to values). The hash table implementation Redis uses (see dict.c) performs a lazy rehashing: the more operations you run against a table that is rehashing, the more rehashing "steps" are performed, so if the server is idle the rehashing never completes and some extra memory is kept in use by the hash table. The default is to use this millisecond 10 times every second in order to actively rehash the main dictionaries, freeing memory when possible.

If unsure: use "activerehashing no" if you have hard latency requirements and it is not acceptable in your environment that Redis can occasionally reply to queries with a 2 millisecond delay; use "activerehashing yes" if you don't have such hard requirements but want to free memory as soon as possible.

    activerehashing yes

The client output buffer limits can be used to force disconnection of clients that are not reading data from the server fast enough for some reason (a common reason is that a Pub/Sub client can't consume messages as fast as the publisher produces them). The limit can be set differently for three classes of clients:

    normal -> normal clients including MONITOR clients
    slave  -> slave clients
    pubsub -> clients subscribed to at least one pubsub channel or pattern

The syntax of every client-output-buffer-limit directive is:

    client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>

A client is disconnected immediately once the hard limit is reached, or if the soft limit is reached and stays reached for the specified number of seconds (continuously). For instance, with a hard limit of 32 megabytes and a soft limit of 16 megabytes / 10 seconds, the client is disconnected immediately if the output buffer reaches 32 megabytes, but also if it reaches 16 megabytes and continuously stays over that limit for 10 seconds.

By default normal clients are not limited, because they don't receive data unasked (in a push way) but only in reply to a request, so only asynchronous clients may build up a backlog of data faster than they can read it. Pubsub and slave clients do have a default limit, since subscribers and slaves receive data in a push fashion. Both the hard and the soft limit can be disabled by setting them to zero.

    client-output-buffer-limit normal 0 0 0
    client-output-buffer-limit slave 256mb 64mb 60
    client-output-buffer-limit pubsub 32mb 8mb 60

Client query buffers accumulate new commands. They are limited to a fixed amount by default, to avoid a protocol desynchronization (for instance due to a bug in the client) leading to unbounded memory usage in the query buffer. You can raise the limit here if you have very special needs, such as huge MULTI/EXEC requests or the like.

    client-query-buffer-limit 1gb

In the Redis protocol, bulk requests, that is, elements representing single strings, are normally limited to 512 mb. However you can change this limit here:

    proto-max-bulk-len 512mb

Redis calls an internal function to perform many background tasks, like closing connections of clients in timeout, purging expired keys that are never requested, and so forth. Not all tasks are performed with the same frequency, but Redis checks for tasks to perform according to the specified "hz" value. By default "hz" is set to 10. Raising the value will use more CPU when Redis is idle, but at the same time will make Redis more responsive when there are many keys expiring at the same time, and will let timeouts be handled with more precision. The range is between 1 and 500, however a value over 100 is usually not a good idea. Most users should use the default of 10 and raise this up to 100 only in environments where very low latency is required.

    hz 10

When a child rewrites the AOF file, if the following option is enabled the file will be fsync-ed every 32 MB of data generated. This is useful in order to commit the file to disk more incrementally and avoid big latency spikes.

    aof-rewrite-incremental-fsync yes

Redis LFU eviction (see the maxmemory setting) can be tuned. However it is a good idea to start with the default settings and only change them after investigating how to improve performance and how the keys' LFU values change over time, which can be inspected via the OBJECT FREQ command. There are two tunable parameters in the Redis LFU implementation: the counter logarithm factor and the counter decay time. It is important to understand what the two parameters mean before changing them.

The LFU counter is just 8 bits per key, so its maximum value is 255, and Redis uses a probabilistic increment with logarithmic behavior. Given the value of the old counter, when a key is accessed the counter is incremented this way:

    1. A random number R between 0 and 1 is extracted.
    2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
    3. The counter is incremented only if R < P.
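
A minimal Python sketch of the increment rule as described above (it follows the comment's formula, not the exact internal C implementation):

    import random

    def lfu_log_incr(counter: int, lfu_log_factor: int = 10) -> int:
        """Probabilistic LFU counter increment, per the three steps above."""
        if counter >= 255:                        # the 8-bit counter saturates at 255
            return counter
        r = random.random()                       # step 1: R in [0, 1)
        p = 1.0 / (counter * lfu_log_factor + 1)  # step 2: P shrinks as the counter grows
        return counter + 1 if r < p else counter  # step 3: increment only if R < P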

The default lfu-log-factor is 10. This is a table of how the frequency counter changes with a different number of accesses and different logarithmic factors:

    +--------+------------+------------+------------+------------+------------+
    | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |
    +--------+------------+------------+------------+------------+------------+
    | 0      | 104        | 255        | 255        | 255        | 255        |
    | 1      | 18         | 49         | 255        | 255        | 255        |
    | 10     | 10         | 18         | 142        | 255        | 255        |
    | 100    | 8          | 11         | 49         | 143        | 255        |
    +--------+------------+------------+------------+------------+------------+

NOTE: The above table was obtained by running the following commands:

    redis-benchmark -n 1000000 incr foo
    redis-cli object freq foo

NOTE 2: The counter's initial value is 5, in order to give new objects a chance to accumulate hits.

The counter decay time is the time, in minutes, that must elapse for a key's counter to be divided by two (or decremented, if its value is <= 10). The default lfu-decay-time is 1. A special value of 0 decays the counter every time it happens to be scanned.

    lfu-log-factor 10
    lfu-decay-time 1

Active defragmentation (ACTIVE DEFRAGMENTATION)

WARNING: THIS FEATURE IS EXPERIMENTAL. However it was stress tested even in production and manually tested by multiple engineers for some time.

What is active defragmentation? Active (online) defragmentation allows a Redis server to compact the spaces left between small allocations and deallocations of data in memory, thus allowing to reclaim memory.

Fragmentation is a natural process that happens with every allocator (but less so with Jemalloc, fortunately) and certain workloads. Normally a server restart is needed in order to lower the fragmentation, or at least to flush away all the data and create it again. However, thanks to this feature implemented by Oran Agra for Redis 4.0, this process can happen at runtime in a "hot" way, while the server is running.

Basically, when the fragmentation is over a certain level (see the configuration options below), Redis will start to create new copies of the values in contiguous memory regions by exploiting certain specific Jemalloc features (in order to understand if an allocation is causing fragmentation and to allocate it in a better place), and at the same time will release the old copies of the data. This process, repeated incrementally for all the keys, will cause the fragmentation to drop back to normal values.

Important things to understand:

1. This feature is disabled by default, and only works if you compiled Redis to use the copy of Jemalloc shipped with the source code of Redis. This is the default with Linux builds.
2. You never need to enable this feature if you don't have fragmentation issues.
3. Once you experience fragmentation, you can enable this feature when needed with the command "CONFIG SET activedefrag yes".

The configuration parameters fine-tune the behavior of the defragmentation process. If you are not sure about what they mean, it is a good idea to leave the defaults untouched.

    # Enable active defragmentation
    activedefrag yes
    # Minimum amount of fragmentation waste to start active defrag
    active-defrag-ignore-bytes 100mb
    # Minimum percentage of fragmentation to start active defrag
    active-defrag-threshold-lower 10
    # Maximum percentage of fragmentation at which we use maximum effort
    active-defrag-threshold-upper 100
    # Minimal effort for defrag in CPU percentage
    active-defrag-cycle-min 25
    # Maximal effort for defrag in CPU percentage
    active-defrag-cycle-max 75
