Common Kafka Errors and Solutions

Author: yanshaowen | Published 2018-03-08 09:21

    1 Startup error: invalid advertised.listeners configuration

    java.lang.IllegalArgumentException: requirement failed: advertised.listeners cannot use the nonroutable meta-address 0.0.0.0. Use a routable IP address.
        at scala.Predef$.require(Predef.scala:277)
        at kafka.server.KafkaConfig.validateValues(KafkaConfig.scala:1203)
        at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1170)
        at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:881)
        at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:878)
        at kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:28)
        at kafka.Kafka$.main(Kafka.scala:82)
        at kafka.Kafka.main(Kafka.scala)
    
    

    1.1 Solution: modify server.properties

    advertised.listeners=PLAINTEXT://{ip}:9092  # {ip} may be an internal IP, a public IP, 127.0.0.1, or a domain name
    

    1.2 Explanation:

    server.properties contains two listener settings. listeners is the IP and port the Kafka service binds to and listens on; it may be an internal IP or 0.0.0.0 (but not a public IP), and defaults to the address returned by java.net.InetAddress.getCanonicalHostName(). advertised.listeners is the address producers and consumers connect to; Kafka registers it in ZooKeeper, so it must be a routable IP or domain name (anything except 0.0.0.0), and it defaults to the value of listeners.
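    Putting the two settings together, a minimal server.properties sketch might look like this (the internal IP is a placeholder):

```properties
# Socket the broker binds to; 0.0.0.0 is allowed here
listeners=PLAINTEXT://0.0.0.0:9092
# Address registered in ZooKeeper and handed to clients; must be routable
advertised.listeners=PLAINTEXT://192.168.1.10:9092
```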

    2 Startup error: Unrecognized VM option 'PrintGCDateStamps'

    [0.004s][warning][gc] -Xloggc is deprecated. Will use -Xlog:gc:/data/service/kafka_2.11-0.11.0.2/bin/../logs/kafkaServer-gc.log instead.
    Unrecognized VM option 'PrintGCDateStamps'
    Error: Could not create the Java Virtual Machine.
    Error: A fatal exception has occurred. Program will exit.
    

    2.1 Solution: switch to a JDK 1.8.x release, or upgrade to Kafka 1.0.x or later.

    2.2 Explanation:

    This only occurs when running JDK 9 with a Kafka version older than 1.0.x: JDK 9 removed the PrintGCDateStamps flag as part of the switch to unified GC logging (-Xlog:gc).
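    If upgrading is not immediately possible, another workaround (a sketch; the exact flags are an assumption based on the unified logging syntax JDK 9 introduced) is to override the GC logging options through the KAFKA_GC_LOG_OPTS environment variable that bin/kafka-run-class.sh reads, replacing the removed flags before starting the broker:

```shell
# Replace the pre-JDK-9 GC flags (PrintGCDateStamps etc.) with unified logging.
# The log path is illustrative; adjust it to your installation.
export KAFKA_GC_LOG_OPTS="-Xlog:gc*:file=/data/service/kafka/logs/kafkaServer-gc.log:time,tags:filecount=10,filesize=102400"
```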

    3 Producer fails to send messages, or consumer cannot consume (Kafka 1.0.1)

    # (Java) org.apache.kafka warning
    Connection to node 0 could not be established. Broker may not be available.
    
    
    # (Node.js) kafka-node error (thrown after producer.send)
    { TimeoutError: Request timed out after 30000ms
        at new TimeoutError (D:\project\node\kafka-test\src\node_modules\kafka-node\lib\errors\TimeoutError.js:6:9)
        at Timeout.setTimeout [as _onTimeout] (D:\project\node\kafka-test\src\node_modules\kafka-node\lib\kafkaClient.js:737:14)
        at ontimeout (timers.js:466:11)
        at tryOnTimeout (timers.js:304:5)
        at Timer.listOnTimeout (timers.js:264:5) message: 'Request timed out after 30000ms' }
    

    3.1 Solution: check the advertised.listeners configuration (with multiple brokers, use the node id in the Java warning to identify which broker's config to inspect), and verify that the address is reachable from the current network (with telnet or similar).
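    The reachability check in the last step can be scripted; this is a small sketch using only the Python standard library (it tests plain TCP connectivity, nothing Kafka-specific; the host and port in the comment are placeholders):

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unresolved hostname, ...
        return False

# Example: check the address configured in advertised.listeners
# is_reachable("192.168.1.10", 9092)
```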

    4 Errors caused by too small a partitions value (Kafka 1.0.1)

    # (Java) org.apache.kafka error (thrown by producer.send)
    Exception in thread "main" org.apache.kafka.common.KafkaException: Invalid partition given with record: 1 is not in the range [0...1).
        at org.apache.kafka.clients.producer.KafkaProducer.waitOnMetadata(KafkaProducer.java:908)
        at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:778)
        at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:768)
        at com.wenshao.dal.TestProducer.main(TestProducer.java:36)
    
    
    # (Node.js) kafka-node error (thrown after producer.send)
    { BrokerNotAvailableError: Could not find the leader
        at new BrokerNotAvailableError (D:\project\node\kafka-test\src\node_modules\kafka-node\lib\errors\BrokerNotAvailableError.js:11:9)
        at refreshMetadata.error (D:\project\node\kafka-test\src\node_modules\kafka-node\lib\kafkaClient.js:831:16)
        at D:\project\node\kafka-test\src\node_modules\kafka-node\lib\client.js:514:9
        at KafkaClient.wrappedFn (D:\project\node\kafka-test\src\node_modules\kafka-node\lib\kafkaClient.js:379:14)
        at KafkaClient.Client.handleReceivedData (D:\project\node\kafka-test\src\node_modules\kafka-node\lib\client.js:770:60)
        at Socket.<anonymous> (D:\project\node\kafka-test\src\node_modules\kafka-node\lib\kafkaClient.js:618:10)
        at Socket.emit (events.js:159:13)
        at addChunk (_stream_readable.js:265:12)
        at readableAddChunk (_stream_readable.js:252:11)
        at Socket.Readable.push (_stream_readable.js:209:10) message: 'Could not find the leader' }
    

    4.1 Solution: adjust num.partitions. This value is the default number of partitions created for a new topic, and it only applies to newly created topics, so pick a sensible value during project planning. Partitions can also be added dynamically from the command line:

    ./bin/kafka-topics.sh --zookeeper  localhost:2181 --alter --partitions 2 --topic  foo
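    The Java error above comes from the producer rejecting a record that names an explicit partition index outside the topic's range [0, numPartitions). The check is easy to illustrate (a standalone sketch mimicking the validation, not Kafka's actual code):

```python
def validate_partition(partition: int, num_partitions: int) -> None:
    """Raise if an explicit partition index is outside [0, num_partitions)."""
    if not 0 <= partition < num_partitions:
        raise ValueError(
            f"Invalid partition given with record: {partition} "
            f"is not in the range [0...{num_partitions})."
        )

validate_partition(0, 1)  # fine: the only partition of a 1-partition topic
# validate_partition(1, 1) would raise, matching the error above
```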
    


        Original link: https://www.haomeiwen.com/subject/mqrsfftx.html