Problem description:
OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions
After the Spark Streaming application had been running for some time, the following exception was thrown:
19/06/26 03:05:30 ERROR JobScheduler: Error running job streaming job 1561518330000 ms.0
org.apache.spark.SparkException: Job aborted due to stage failure: Task 13 in stage 14895.0 failed 4 times, most recent failure: Lost task 13.3 in stage 14895.0 (TID 98327, 172.19.32.62, executor 0): org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions: {mytopic-3.3.1-0=3274}
at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:883)
at
① If spark-streaming throws this error right at startup, it is caused by Kafka's retention expiration: the offsets the job tries to resume from point at log segments the broker has already deleted.
Workarounds:
https://blog.csdn.net/xueba207/article/details/51174818
https://m.imooc.com/article/details?article_id=269193
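In this case the fix is to give the consumer an explicit reset policy, so that an offset which has aged out of retention falls back to the earliest (or latest) available offset instead of throwing. A minimal Scala sketch of the consumer parameters, assuming the spark-streaming-kafka-0-10 direct API (the broker address and group id are placeholders, not from the original setup):

```scala
import org.apache.kafka.common.serialization.StringDeserializer

// Consumer parameters for KafkaUtils.createDirectStream.
// "none" reproduces the OffsetOutOfRangeException above when stored offsets
// have expired; "earliest" or "latest" lets the consumer reset instead.
val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "broker1:9092",        // placeholder
  "group.id"          -> "my-consumer-group",   // placeholder
  "key.deserializer"   -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "auto.offset.reset"  -> "earliest"
)
```

Note that resetting to "earliest" can reprocess old data, while "latest" can skip data; which one is acceptable depends on the job's delivery semantics.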
② If the problem appears only after spark-streaming has been running for a while, it can also have the following cause:
if a message body is too large and exceeds the default fetch.message.max.bytes=1m, Spark Streaming throws an OffsetOutOfRangeException and the job stops.
Solution: raise fetch.message.max.bytes in the Kafka consumer configuration.
# e.g. 50 MB: 1024*1024*50
fetch.message.max.bytes=52428800
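fetch.message.max.bytes is the old (Kafka 0.8-era) consumer name; if the job uses the new consumer API (Kafka 0.9+), the corresponding size limits have different names. A sketch of the equivalent overrides, using the 50 MB figure from the text:

```scala
// New-consumer (0.9+) equivalents of fetch.message.max.bytes, added to the
// same kafkaParams map passed to the direct stream:
val sizeParams = Map[String, Object](
  "max.partition.fetch.bytes" -> (52428800: java.lang.Integer), // max bytes per partition per fetch
  "fetch.max.bytes"           -> (52428800: java.lang.Integer)  // max bytes per fetch request (Kafka 0.11+)
)
```

The broker-side message.max.bytes (and the topic-level max.message.bytes) must also allow messages of that size, otherwise the producer side rejects them before the consumer ever sees them.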
Original article: https://blog.csdn.net/fct2001140269/article/details/93756686