
Solving Hive OOM

Author: Hunter严 | Published 2019-06-20 21:15

Analysis and Resolution of OOM When Running MapReduce Jobs in Hive

1. Introduction

Today, while running a long, very complex SQL statement that joins N tables spanning more than ten years of data, an OOM exception was thrown.

Since a single map task is usually configured to process only 64 MB or 128 MB of input, OOM in the map stage is rare; it normally happens in the reduce stage.

But a careful look at today's exception shows that it occurred neither in the map stage nor in the reduce stage. It did not happen during execution at all, but while the driver was submitting the job.

In Hive, serializing the MapredWork with XMLEncoder triggered the OutOfMemoryError:

XMLEncoder caused java.lang.OutOfMemoryError: GC overhead limit exceeded

2. Overview

To summarize first, OOM exceptions in Hive fall roughly into the following cases:

1. OOM in the map stage.

2. OOM in the reduce stage.

3. OOM while the driver submits the job.

Map-stage OOM:

1. The chance of OOM here is very small, unless the job's logic is abnormal or the code is written so inefficiently that it produces too much garbage.

Reduce-stage OOM:

1. Data skew

Data skew is one trigger.

Keys are distributed unevenly, so a single reducer processes more data than expected, driving the JVM into frequent GC.

2. Too many or too large value objects

Too many value objects pile up inside a single reducer, again driving the JVM into frequent GC.

Solutions (a combined sketch follows this list):

1. Increase the number of reducers, e.g. set mapred.reduce.tasks=300;

2. Set the child JVM options in hive-site.xml, or in the Hive shell: set mapred.child.java.opts=-Xmx512m;

   Alternatively, raise only the reducers' maximum heap to 2 GB and switch the garbage collector to the concurrent mark-sweep (CMS) collector, which noticeably shortens GC pauses at the cost of a little extra CPU:

   set mapred.reduce.child.java.opts=-Xmx2g -XX:+UseConcMarkSweepGC;

3. Use a map join instead of a common join: set hive.auto.convert.join=true;

4. Set hive.optimize.skewjoin=true to work around data skew.
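
Putting the settings above together, a minimal Hive-session sketch might look like this. The values are illustrative starting points rather than tuned recommendations, and the mapred.* names assume the old MRv1 property names used throughout this post:

-- reduce-stage OOM mitigations, run inside the Hive shell
set mapred.reduce.tasks=300;                                        -- spread the keys over more reducers
set mapred.reduce.child.java.opts=-Xmx2g -XX:+UseConcMarkSweepGC;   -- bigger reducer heap, CMS collector
set hive.auto.convert.join=true;                                    -- convert eligible common joins to map joins
set hive.optimize.skewjoin=true;                                    -- split skewed keys into a follow-up job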

Driver job-submission OOM:

The execution plan produced for the job contains too many entries, for example when the query scans a very large number of partitions, on the order of 4k-6k, and several of the tables being joined each have many partitions.

The root cause is that during serialization every partition, i.e. every HDFS file path, is wrapped in a Path object. With that many objects, and with the driver started with too small a heap, serializing the MapredWork inside the driver creates so many objects that GC runs constantly, which raises the following exception:

java.lang.OutOfMemoryError: GC overhead limit exceeded

at sun.nio.cs.UTF_8.newEncoder(UTF_8.java:53)

at java.beans.XMLEncoder.createString(XMLEncoder.java:572)

3. Diagnosing the Problem

How the problem was tracked down:

Searching the web for the exception turned up an issue in the Hive JIRA that closely matches my situation:

HIVE-2988

Issue description:

Use of XMLEncoder to serialize MapredWork causes OOM in hive cli

When running queries on tables with 6000 partitions, hive cli if configured with 128M runs into OOM. Heapdump showed 37MB occupied by one XMLEncoder object while the MapredWork was 500K which is highly inefficient. We should switch to using something more efficient like XStream.

A closely related explanation from the issue:

I ran with 128M to investigate the OOM. We have resorted to running with 1G as Xmx because we keep hitting OOM with bigger tables in hive. There were other things that contributed to the memory usage - mostly Path objects because of the higher number of partitions. But they are absolutely needed. XMLEncoder is something that created too much garbage in a very short span and caused GC. That would be something easy to change/fix without having to touch the core logic.

We should be looking at fixing the root cause of the problem instead of keeping on increasing the memory requirements. Ours is a highly multi-tenant system and there are lot of other programs (pig, etc) running too in the gateway. So running with a lower memory (256-512MB) will help.

Found two other reports of this issue:

http://mail-archives.apache.org/mod_mbox/hive-user/201106.mbox/%3CBANLkTik4THLNkxV87UygvqhoLri3UL9R3Q@mail.gmail.com%3E

https://issues.apache.org/jira/browse/HIVE-1316

This fix increased the max heap size of CLI client and disabled GC overhead limit.

So I confirmed for myself exactly where the problem was:

Because this complex SQL spawns 22 jobs and the OOM always hit at the 20th job, I deliberately emptied the input data and reran the query in the shell to debug it.

The 20th job turned out to be exactly the one scanning the registration table with 14 years of history, so I inferred that the OOM comes from too many partitions producing too many objects at serialization time.

What the debugging showed:

Printing out the query plan, it turns out to be huge, mainly because one registration table scans nearly 14 years of historical data: 14 × 365 = 5110 partitions, or roughly 3138 partitions once the incomplete early years are excluded.

Each partition becomes a Path object, so this one table's serialized query plan already consumes most of the driver's memory. Joining it against several more years of data from other tables takes still more memory for serialized objects, GC runs constantly without reclaiming enough, and eventually the OOM exception is thrown.

The plan for this single query prints out at 25 MB, which shows just how much data is being scanned.

explain extended  a_complex_hql_query > /app/hadoop/shengli/large_plan

du -sh large_plan

25M    large_plan
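
Independently of dumping the whole plan, the partition count a table would contribute can be checked directly from the shell. A minimal sketch, where the_registry_table stands in for the real (unnamed) registration table:

# count the partitions Hive would have to turn into Path objects (table name is hypothetical)
hive -e "show partitions the_registry_table" | wc -l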

The detailed log follows:

2014-11-21 13:13:28,334 INFO [Thread-2] AbstractBaseAction$StreamDrainer - Hadoop job information for Stage-23: number of mappers: 12; number of reducers: 1
2014-11-21 13:13:28,354 INFO [Thread-2] AbstractBaseAction$StreamDrainer - 2014-11-21 13:13:28,347 Stage-23 map = 50%, reduce = 0%, Cumulative CPU 42.04 sec
2014-11-21 13:13:48,370 INFO [Thread-2] AbstractBaseAction$StreamDrainer - 2014-11-21 13:13:48,363 Stage-23 map = 100%, reduce = 33%, Cumulative CPU 262.39 sec
2014-11-21 13:14:29,228 INFO [Thread-2] AbstractBaseAction$StreamDrainer - 2014-11-21 13:14:29,222 Stage-23 map = 100%, reduce = 69%, Cumulative CPU 262.39 sec
2014-11-21 13:14:49,248 INFO [Thread-2] AbstractBaseAction$StreamDrainer - 2014-11-21 13:14:49,239 Stage-23 map = 100%, reduce = 92%, Cumulative CPU 324.73 sec
2014-11-21 13:15:11,952 INFO [Thread-2] AbstractBaseAction$StreamDrainer - 2014-11-21 13:15:11,944 Stage-23 map = 100%, reduce = 100%, Cumulative CPU 360.1 sec
2014-11-21 13:15:36,740 INFO [Thread-2] AbstractBaseAction$StreamDrainer - 2014-11-21 13:15:36,734 Stage-23 map = 100%, reduce = 100%, Cumulative CPU 360.1 sec
2014-11-21 13:15:36,741 INFO [Thread-2] AbstractBaseAction$StreamDrainer - MapReduce Total cumulative CPU time: 6 minutes 0 seconds 100 msec
2014-11-21 13:15:36,749 INFO [Thread-2] AbstractBaseAction$StreamDrainer - Ended Job = job_201411141019_68657
2014-11-21 13:15:40,306 INFO [Thread-2] AbstractBaseAction$StreamDrainer - Launching Job 20 out of 21
2014-11-21 13:15:42,162 INFO [Thread-2] AbstractBaseAction$StreamDrainer - Number of reduce tasks not specified. Estimated from input data size: 252
2014-11-21 13:15:42,163 INFO [Thread-2] AbstractBaseAction$StreamDrainer - In order to change the average load for a reducer (in bytes):
2014-11-21 13:15:42,163 INFO [Thread-2] AbstractBaseAction$StreamDrainer -   set hive.exec.reducers.bytes.per.reducer=<number>
2014-11-21 13:15:42,163 INFO [Thread-2] AbstractBaseAction$StreamDrainer - In order to limit the maximum number of reducers:
2014-11-21 13:15:42,163 INFO [Thread-2] AbstractBaseAction$StreamDrainer -   set hive.exec.reducers.max=<number>
2014-11-21 13:15:42,163 INFO [Thread-2] AbstractBaseAction$StreamDrainer - In order to set a constant number of reducers:
2014-11-21 13:15:42,163 INFO [Thread-2] AbstractBaseAction$StreamDrainer -   set mapred.reduce.tasks=<number>

2014-11-21 13:16:40,377 INFO [Thread-2] AbstractBaseAction$StreamDrainer - java.lang.OutOfMemoryError: GC overhead limit exceeded
2014-11-21 13:16:40,378 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at sun.nio.cs.UTF_8.newEncoder(UTF_8.java:53)
2014-11-21 13:16:40,378 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at java.beans.XMLEncoder.createString(XMLEncoder.java:572)
2014-11-21 13:16:40,378 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at java.beans.XMLEncoder.outputValue(XMLEncoder.java:543)
2014-11-21 13:16:40,378 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at java.beans.XMLEncoder.outputStatement(XMLEncoder.java:682)
2014-11-21 13:16:40,378 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at java.beans.XMLEncoder.outputStatement(XMLEncoder.java:687)
2014-11-21 13:16:40,378 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at java.beans.XMLEncoder.outputValue(XMLEncoder.java:552)
2014-11-21 13:16:40,378 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at java.beans.XMLEncoder.outputStatement(XMLEncoder.java:682)
2014-11-21 13:16:40,378 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at java.beans.XMLEncoder.outputStatement(XMLEncoder.java:687)
2014-11-21 13:16:40,379 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at java.beans.XMLEncoder.outputValue(XMLEncoder.java:552)
2014-11-21 13:16:40,379 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at java.beans.XMLEncoder.outputStatement(XMLEncoder.java:682)
2014-11-21 13:16:40,379 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at java.beans.XMLEncoder.outputStatement(XMLEncoder.java:687)
2014-11-21 13:16:40,379 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at java.beans.XMLEncoder.outputValue(XMLEncoder.java:552)
2014-11-21 13:16:40,379 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at java.beans.XMLEncoder.outputStatement(XMLEncoder.java:682)
2014-11-21 13:16:40,379 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at java.beans.XMLEncoder.outputStatement(XMLEncoder.java:687)
2014-11-21 13:16:40,379 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at java.beans.XMLEncoder.outputValue(XMLEncoder.java:552)
2014-11-21 13:16:40,380 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at java.beans.XMLEncoder.outputStatement(XMLEncoder.java:682)
2014-11-21 13:16:40,380 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at java.beans.XMLEncoder.outputStatement(XMLEncoder.java:687)
2014-11-21 13:16:40,380 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at java.beans.XMLEncoder.outputValue(XMLEncoder.java:552)
2014-11-21 13:16:40,380 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at java.beans.XMLEncoder.flush(XMLEncoder.java:398)
2014-11-21 13:16:40,380 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at java.beans.XMLEncoder.close(XMLEncoder.java:429)
2014-11-21 13:16:40,380 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at org.apache.hadoop.hive.ql.exec.Utilities.serializeMapRedWork(Utilities.java:532)
2014-11-21 13:16:40,380 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at org.apache.hadoop.hive.ql.exec.Utilities.setMapRedWork(Utilities.java:376)
2014-11-21 13:16:40,381 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:418)
2014-11-21 13:16:40,381 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:138)
2014-11-21 13:16:40,381 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:144)
2014-11-21 13:16:40,381 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
2014-11-21 13:16:40,381 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1355)
2014-11-21 13:16:40,381 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1139)
2014-11-21 13:16:40,381 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at org.apache.hadoop.hive.ql.Driver.run(Driver.java:945)
2014-11-21 13:16:40,381 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
2014-11-21 13:16:40,381 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
2014-11-21 13:16:40,381 INFO [Thread-2] AbstractBaseAction$StreamDrainer -      at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
2014-11-21 13:16:40,381 INFO [Thread-2] AbstractBaseAction$StreamDrainer - FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.MapRedTask

2014-11-21 13:16:40,382 INFO [Thread-2] AbstractBaseAction$StreamDrainer - MapReduce Jobs Launched:
2014-11-21 13:16:40,382 INFO [Thread-2] AbstractBaseAction$StreamDrainer - Job 0: Map: 33 Reduce: 5 Cumulative CPU: 903.59 sec HDFS Read: 4225680985 HDFS Write: 123129169 SUCCESS
2014-11-21 13:16:40,382 INFO [Thread-2] AbstractBaseAction$StreamDrainer - Job 1: Map: 192 Reduce: 4 Cumulative CPU: 3215.94 sec HDFS Read: 3036813878 HDFS Write: 322647287 SUCCESS
2014-11-21 13:16:40,382 INFO [Thread-2] AbstractBaseAction$StreamDrainer - Job 2: Map: 52 Reduce: 4 Cumulative CPU: 1314.04 sec HDFS Read: 3278902794 HDFS Write: 435711878 SUCCESS
2014-11-21 13:16:40,382 INFO [Thread-2] AbstractBaseAction$StreamDrainer - Job 3: Map: 959 Reduce: 70 Cumulative CPU: 55831.24 sec HDFS Read: 69728403753 HDFS Write: 4573778468 SUCCESS
2014-11-21 13:16:40,382 INFO [Thread-2] AbstractBaseAction$StreamDrainer - Job 4: Map: 154 Reduce: 1 Cumulative CPU: 1809.45 sec HDFS Read: 483233734 HDFS Write: 25999761 SUCCESS
2014-11-21 13:16:40,382 INFO [Thread-2] AbstractBaseAction$StreamDrainer - Job 5: Map: 84 Reduce: 6 Cumulative CPU: 2466.01 sec HDFS Read: 5618486947 HDFS Write: 1649704865 SUCCESS
2014-11-21 13:16:40,382 INFO [Thread-2] AbstractBaseAction$StreamDrainer - Job 6: Map: 450 Reduce: 55 Cumulative CPU: 22239.14 sec HDFS Read: 54635746333 HDFS Write: 7282315124 SUCCESS
2014-11-21 13:16:40,382 INFO [Thread-2] AbstractBaseAction$StreamDrainer - Job 7: Map: 359 Reduce: 27 Cumulative CPU: 14699.06 sec HDFS Read: 26702083597 HDFS Write: 2364033090 SUCCESS
2014-11-21 13:16:40,382 INFO [Thread-2] AbstractBaseAction$StreamDrainer - Job 8: Map: 83 Reduce: 6 Cumulative CPU: 2410.03 sec HDFS Read: 5514015601 HDFS Write: 628742985 SUCCESS
2014-11-21 13:16:40,382 INFO [Thread-2] AbstractBaseAction$StreamDrainer - Job 9: Map: 84 Reduce: 6 Cumulative CPU: 2200.0 sec HDFS Read: 5618486947 HDFS Write: 853325331 SUCCESS
2014-11-21 13:16:40,383 INFO [Thread-2] AbstractBaseAction$StreamDrainer - Job 10: Map: 365 Reduce: 27 Cumulative CPU: 13274.58 sec HDFS Read: 27143622450 HDFS Write: 2991235104 SUCCESS
2014-11-21 13:16:40,383 INFO [Thread-2] AbstractBaseAction$StreamDrainer - Job 11: Map: 959 Reduce: 70 Cumulative CPU: 55145.65 sec HDFS Read: 69728403753 HDFS Write: 4335851625 SUCCESS
2014-11-21 13:16:40,383 INFO [Thread-2] AbstractBaseAction$StreamDrainer - Job 12: Map: 172 Reduce: 1 Cumulative CPU: 1561.64 sec HDFS Read: 945050606 HDFS Write: 40445679 SUCCESS
2014-11-21 13:16:40,383 INFO [Thread-2] AbstractBaseAction$StreamDrainer - Job 13: Map: 260 Reduce: 9 Cumulative CPU: 5965.67 sec HDFS Read: 8639664681 HDFS Write: 382770949 SUCCESS
2014-11-21 13:16:40,383 INFO [Thread-2] AbstractBaseAction$StreamDrainer - Job 14: Map: 212 Reduce: 15 Cumulative CPU: 7573.68 sec HDFS Read: 14849298755 HDFS Write: 1095184574 SUCCESS
2014-11-21 13:16:40,383 INFO [Thread-2] AbstractBaseAction$StreamDrainer - Job 15: Map: 16 Reduce: 2 Cumulative CPU: 981.37 sec HDFS Read: 1660614485 HDFS Write: 795259370 SUCCESS
2014-11-21 13:16:40,383 INFO [Thread-2] AbstractBaseAction$StreamDrainer - Job 16: Map: 27 Reduce: 3 Cumulative CPU: 1357.34 sec HDFS Read: 2991246795 HDFS Write: 2388600301 SUCCESS
2014-11-21 13:16:40,383 INFO [Thread-2] AbstractBaseAction$StreamDrainer - Job 17: Map: 70 Reduce: 5 Cumulative CPU: 2138.48 sec HDFS Read: 4573808778 HDFS Write: 2909571621 SUCCESS
2014-11-21 13:16:40,383 INFO [Thread-2] AbstractBaseAction$StreamDrainer - Job 18: Map: 12 Reduce: 1 Cumulative CPU: 360.1 sec HDFS Read: 853346317 HDFS Write: 705875733 SUCCESS

4. Fixing the Driver-Stage OOM

The cause has been found: the query scans tables with far too many partitions, three to six thousand of them, so when the plan is serialized the Path objects alone consume most of the driver's heap, and if the driver's heap is set too small it simply runs out of memory.

Approach:

1. Reduce the number of partitions: consolidate the historical data into a single merged table plus an incremental table, so very few partitions remain (a rough sketch appears at the end of this section).

2. Increase the heap size of the Hive CLI driver; the default is 256 MB, so raise it to 512 MB or more.

   Concretely, the JVM options used to launch the CLI can be found in bin/hive and bin/hive-config.

   Here we set:

export HADOOP_HEAPSIZE=512
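
If editing the launch scripts is not convenient, roughly the same effect can usually be obtained from the environment before starting the CLI. A minimal sketch, assuming a bash shell and that the surrounding Hadoop scripts honor HADOOP_CLIENT_OPTS; the query file name is hypothetical:

# raise the heap of the client-side (driver) JVM before launching hive
export HADOOP_HEAPSIZE=512                                  # MB, read by the hive/hadoop launch scripts
export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"    # extra client JVM options on most Hadoop versions
hive -f a_complex_hql_query.hql                             # hypothetical script containing the long SQL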

My fix was to do both:

I built the consolidated table, which is also more convenient to use, and increased the Hive CLI driver's heap size, which keeps the production job running stably.
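
The post does not show how the consolidation in point 1 was done; purely as an illustration, a HiveQL sketch with hypothetical table and column names (registry, registry_history_merged, dt) might look like this:

-- fold the old history into one table so later query plans carry far fewer Path objects
-- (a CTAS target is not partitioned, which is what we want here)
create table registry_history_merged as
select * from registry where dt < '2014-01-01';

-- recent data stays in the original partitioned table (or a small incremental table),
-- so a query only enumerates a handful of partitions at plan time

With the history merged, the serialized MapredWork only needs Path objects for the few recent partitions, which is what keeps the driver's heap usage down.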
