Viewing the Contents of Kafka Log Files

Author: 明训 | Published 2021-03-13 00:13

    Background

    Recovering messages from the log data files

    Kafka persists the messages that producers send to it in log data files. Each file is named after its segment's base offset, left-padded with zeros, and carries the suffix ".log". Every message in a partition is identified by an offset that expresses its position within that partition. This offset is not the message's physical storage location in the file but a logical value (Kafka records it as an 8-byte integer); it nevertheless uniquely determines the logical position of a message within the partition, and offsets within one partition increase monotonically (comparable to an auto-increment primary key in a database). In addition, dumping a log data file as text reveals the value of each field in the message body.
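
    As an illustration (a sketch, not output captured from the article's environment): the segment's base offset is left-padded with zeros to 20 digits to form the ".log" file name, and each segment normally has matching ".index" and ".timeindex" files alongside it.

    # Sketch: derive the log file name for a hypothetical base offset of 368769
    [root@ljhan2 kafka]# printf "%020d.log\n" 368769
    00000000000000368769.log
    # A partition directory such as a-0 typically holds one set of these files per segment:
    [root@ljhan2 kafka]# ls /work/uReplicator/cluster1/kafka_data/a-0/
    00000000000000000000.index  00000000000000000000.log  00000000000000000000.timeindex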
    

    Solution

    Command help

    [root@ljhan2 kafka]# bin/kafka-run-class.sh kafka.tools.DumpLogSegments
    Parse a log file and dump its contents to the console, useful for debugging a seemingly corrupt log segment.
    Option                               Description
    ------                               -----------
    --deep-iteration                     if set, uses deep instead of shallow
                                           iteration.
    --files <String: file1, file2, ...>  REQUIRED: The comma separated list of data
                                           and index log files to be dumped.
    --index-sanity-check                 if set, just checks the index sanity
                                           without printing its content. This is
                                           the same check that is executed on
                                           broker startup to determine if an index
                                           needs rebuilding or not.
    --key-decoder-class [String]         if set, used to deserialize the keys. This
                                           class should implement kafka.serializer.
                                           Decoder trait. Custom jar should be
                                           available in kafka/libs directory.
                                           (default: kafka.serializer.StringDecoder)
    --max-message-size <Integer: size>   Size of largest message. (default: 5242880)
    --offsets-decoder                    if set, log data will be parsed as offset
                                           data from the __consumer_offsets topic.
    --print-data-log                     if set, printing the messages content when
                                           dumping data logs. Automatically set if
                                           any decoder option is specified.
    --transaction-log-decoder            if set, log data will be parsed as
                                           transaction metadata from the
                                           __transaction_state topic.
    --value-decoder-class [String]       if set, used to deserialize the messages.
                                           This class should implement kafka.
                                           serializer.Decoder trait. Custom jar
                                           should be available in kafka/libs
                                           directory. (default: kafka.serializer.
                                           StringDecoder)
    --verify-index-only                  if set, just verify the index log without
                                           printing its content.
    [root@ljhan2 kafka]#
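
    The options above can also be used on their own; for example, --index-sanity-check runs the same index check the broker performs at startup, without printing entries. A sketch reusing the segment path from the next section (no output shown here, since it varies by Kafka version and index state):

    # Sketch: check index sanity only, without dumping its contents
    [root@ljhan2 kafka]# bin/kafka-run-class.sh kafka.tools.DumpLogSegments --index-sanity-check --files /work/uReplicator/cluster1/kafka_data/a-0/00000000000000000000.index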
    

    Running the command

    [root@ljhan2 kafka]# bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files /work/uReplicator/cluster1/kafka_data/a-0/00000000000000000000.log  --deep-iteration --print-data-log
    Dumping /work/uReplicator/cluster1/kafka_data/a-0/00000000000000000000.log
    Starting offset: 0
    offset: 0 position: 0 CreateTime: 1610187094280 isvalid: true keysize: 2 valuesize: 2 magic: 2 compresscodec: NONE producerId: -1 producerEpoch: -1 sequence: -1 isTransactional: false headerKeys: [] key: a1 payload: a1
    offset: 1 position: 72 CreateTime: 1610187109018 isvalid: true keysize: 2 valuesize: 2 magic: 2 compresscodec: NONE producerId: -1 producerEpoch: -1 sequence: -1 isTransactional: false headerKeys: [] key: a2 payload: a2
    [root@ljhan2 kafka]# bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files /work/uReplicator/cluster1/kafka_data/a-0/00000000000000000000.index  --deep-iteration --print-data-log
    Dumping /work/uReplicator/cluster1/kafka_data/a-0/00000000000000000000.index
    offset: 0 position: 0
    [root@ljhan2 kafka]#
    [root@ljhan2 kafka]# bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files /work/uReplicator/cluster1/kafka_data/a-0/00000000000000000000.timeindex    --deep-iteration --print-data-log
    Dumping /work/uReplicator/cluster1/kafka_data/a-0/00000000000000000000.timeindex
    timestamp: 1610189276640 offset: 23
    Found timestamp mismatch in :/work/uReplicator/cluster1/kafka_data/a-0/00000000000000000000.timeindex
      Index timestamp: 0, log timestamp: 1610187094280
    Found out of order timestamp in :/work/uReplicator/cluster1/kafka_data/a-0/00000000000000000000.timeindex
      Index timestamp: 0, Previously indexed timestamp: 1610189276640
    [root@ljhan2 kafka]#
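
    The "timestamp mismatch" and "out of order timestamp" warnings on the .timeindex dump are likely benign here: the time index file is preallocated and sparse, so the tool can read zero-filled slots (hence "Index timestamp: 0") that do not yet correspond to a real record.

    The help output above also lists an --offsets-decoder option for decoding the internal __consumer_offsets topic. A sketch of its use (the partition directory __consumer_offsets-0 is hypothetical for this host):

    # Sketch: decode consumer-group commit records from the internal offsets topic
    [root@ljhan2 kafka]# bin/kafka-run-class.sh kafka.tools.DumpLogSegments --offsets-decoder --files /work/uReplicator/cluster1/kafka_data/__consumer_offsets-0/00000000000000000000.log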
    

