MapJoinMemoryExhaustionException Caused by Hive's Automatic Map Join Optimization

Author: LannisterWF | Published 2019-08-23 13:28

Problem Description

While running a Hive query script on Hive on Spark, Spark split the query into 3 stages. At stage 2, a large number of tasks failed; the failed tasks were rescheduled and failed again, and eventually the whole job failed.

Analyzing the Problem

Checking the task logs, the reported error was:

ERROR spark.SparkReduceRecordHandler: Fatal error: org.apache.hadoop.hive.ql.metadata.HiveException: Error while processing row (tag=0) {"key":{"_col0":"48945840 5791400728103103000049993","_col1":"01"},"value":{"_col0":0,"_col1":90,"_col2":0,"_col3":0,"_col4":0}}
org.apache.hadoop.hive.ql.metadata.HiveException: Error while processing row (tag=0) {"key":{"_col0":"48945840 5791400728103103000049993","_col1":"01"},"value":{"_col0":0,"_col1":90,"_col2":0,"_col3":0,"_col4":0}}

Scrolling further down the log:

Caused by: org.apache.hadoop.hive.ql.exec.mapjoin.MapJoinMemoryExhaustionException: 2019-08-20 04:24:26 Processing rows: 3600000 Hashtable size: 3599999 Memory usage: 9491710256 percentage: 0.921
    at org.apache.hadoop.hive.ql.exec.mapjoin.MapJoinMemoryExhaustionHandler.checkMemoryStatus(MapJoinMemoryExhaustionHandler.java:99)

Taken at face value, executing a map join exhausted the available memory: usage reached 92.1% of the limit, at which point the task could not proceed and failed.

Why was a map join triggered in the first place? From how map join works:

MapJoin is used when a very small table is joined with a large table. How small is "small" is decided by the parameter hive.mapjoin.smalltable.filesize, which denotes the total size of the small table and defaults to 25000000 bytes, i.e. 25 MB.
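For instance, the threshold can be inspected and changed per session. This is a minimal sketch; the value shown is simply the documented default:

SET hive.mapjoin.smalltable.filesize;            -- prints the current value
SET hive.mapjoin.smalltable.filesize=25000000;   -- 25 MB, the default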

The map join optimization works as follows: before the MapReduce task starts, Hive creates a local task that loads the small table into memory as a hashtable, serializes it to disk, and compresses the in-memory hashtable into a tar file. That file is then uploaded to the Hadoop Distributed Cache and distributed to every mapper; each mapper deserializes the file, loads it back into memory locally, and performs the join there.
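To make this concrete, here is the shape of a query Hive would consider auto-converting; the table and column names are hypothetical, purely for illustration. Running EXPLAIN on it should show a Map Join Operator in the plan when the conversion kicks in:

EXPLAIN
SELECT f.order_id, d.region_name
FROM fact_orders f                 -- large table, streamed through the mappers
LEFT OUTER JOIN dim_region d       -- small table, built into the hashtable
ON f.region_id = d.region_id;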

My HQL did indeed contain several left outer join clauses. Based on this, a reasonable guess is that memory was exhausted while the "small" table was being loaded.

Confirming the Cause

Going back through the logs, the following appears just before the error:

19/08/20 16:24:05 INFO exec.HashTableSinkOperator: 2019-08-20 04:24:05 Processing rows: 2600000 Hashtable size: 2599999 Memory usage: 7472152840 percentage: 0.725
19/08/20 16:24:05 INFO exec.HashTableSinkOperator: 2019-08-20 04:24:05 Processing rows: 2800000 Hashtable size: 2799999 Memory usage: 7564427528 percentage: 0.734
19/08/20 16:24:07 INFO exec.HashTableSinkOperator: 2019-08-20 04:24:07 Processing rows: 2700000 Hashtable size: 2699999 Memory usage: 8992588048 percentage: 0.872
19/08/20 16:24:07 INFO exec.HashTableSinkOperator: 2019-08-20 04:24:07 Processing rows: 2900000 Hashtable size: 2899999 Memory usage: 7488930560 percentage: 0.726
19/08/20 16:24:07 INFO exec.HashTableSinkOperator: 2019-08-20 04:24:07 Processing rows: 2900000 Hashtable size: 2899999 Memory usage: 7572816640 percentage: 0.735
19/08/20 16:24:10 INFO exec.HashTableSinkOperator: 2019-08-20 04:24:10 Processing rows: 2800000 Hashtable size: 2799999 Memory usage: 7694450952 percentage: 0.746
19/08/20 16:24:10 INFO exec.HashTableSinkOperator: 2019-08-20 04:24:10 Processing rows: 3000000 Hashtable size: 2999999 Memory usage: 8029995272 percentage: 0.779
19/08/20 16:24:10 INFO exec.HashTableSinkOperator: 2019-08-20 04:24:10 Processing rows: 3000000 Hashtable size: 2999999 Memory usage: 8143241480 percentage: 0.79
19/08/20 16:24:11 INFO spark.SparkRecordHandler: processing 6000000 rows: used memory = 8357150984
19/08/20 16:24:11 INFO spark.SparkRecordHandler: processing 6000000 rows: used memory = 8503951624
19/08/20 16:24:12 INFO exec.HashTableSinkOperator: 2019-08-20 04:24:12 Processing rows: 2900000 Hashtable size: 2899999 Memory usage: 8294236424 percentage: 0.805
19/08/20 16:24:12 INFO exec.HashTableSinkOperator: 2019-08-20 04:24:12 Processing rows: 3100000 Hashtable size: 3099999 Memory usage: 8671723784 percentage: 0.841
19/08/20 16:24:13 INFO exec.HashTableSinkOperator: 2019-08-20 04:24:13 Processing rows: 3100000 Hashtable size: 3099999 Memory usage: 7904166168 percentage: 0.767
19/08/20 16:24:14 INFO exec.HashTableSinkOperator: 2019-08-20 04:24:14 Processing rows: 3000000 Hashtable size: 2999999 Memory usage: 8243904824 percentage: 0.80
19/08/20 16:24:15 INFO spark.SparkRecordHandler: processing 6000000 rows: used memory = 8269070632
19/08/20 16:24:16 INFO exec.HashTableSinkOperator: 2019-08-20 04:24:16 Processing rows: 3100000 Hashtable size: 3099999 Memory usage: 8147435816 percentage: 0.79
19/08/20 16:24:16 INFO exec.HashTableSinkOperator: 2019-08-20 04:24:16 Processing rows: 3200000 Hashtable size: 3199999 Memory usage: 8256487720 percentage: 0.801
19/08/20 16:24:16 INFO exec.HashTableSinkOperator: 2019-08-20 04:24:16 Processing rows: 3200000 Hashtable size: 3199999 Memory usage: 8508145960 percentage: 0.825
19/08/20 16:24:18 INFO exec.HashTableSinkOperator: 2019-08-20 04:24:18 Processing rows: 3300000 Hashtable size: 3299999 Memory usage: 8524084384 percentage: 0.827
19/08/20 16:24:18 INFO exec.HashTableSinkOperator: 2019-08-20 04:24:18 Processing rows: 3300000 Hashtable size: 3299999 Memory usage: 8779936928 percentage: 0.852
19/08/20 16:24:19 INFO exec.HashTableSinkOperator: 2019-08-20 04:24:19 Processing rows: 3200000 Hashtable size: 3199999 Memory usage: 8642363704 percentage: 0.838
19/08/20 16:24:21 INFO exec.HashTableSinkOperator: 2019-08-20 04:24:21 Processing rows: 3400000 Hashtable size: 3399999 Memory usage: 8919187768 percentage: 0.865
19/08/20 16:24:22 INFO exec.HashTableSinkOperator: 2019-08-20 04:24:22 Processing rows: 3400000 Hashtable size: 3399999 Memory usage: 9120514360 percentage: 0.885
19/08/20 16:24:23 INFO exec.HashTableSinkOperator: 2019-08-20 04:24:23 Processing rows: 3300000 Hashtable size: 3299999 Memory usage: 9053405496 percentage: 0.878
19/08/20 16:24:24 INFO exec.HashTableSinkOperator: 2019-08-20 04:24:24 Processing rows: 3500000 Hashtable size: 3499999 Memory usage: 9223274816 percentage: 0.895
19/08/20 16:24:25 INFO exec.HashTableSinkOperator: 2019-08-20 04:24:25 Processing rows: 3500000 Hashtable size: 3499999 Memory usage: 9179234616 percentage: 0.89
19/08/20 16:24:25 INFO spark.SparkRecordHandler: processing 7000000 rows: used memory = 9405727032
19/08/20 16:24:25 INFO exec.HashTableSinkOperator: 2019-08-20 04:24:25 Processing rows: 3400000 Hashtable size: 3399999 Memory usage: 9189720384 percentage: 0.891
19/08/20 16:24:26 INFO spark.SparkRecordHandler: processing 7000000 rows: used memory = 9202303280
19/08/20 16:24:26 INFO exec.HashTableSinkOperator: 2019-08-20 04:24:26 Processing rows: 3600000 Hashtable size: 3599999 Memory usage: 9491710256 percentage: 0.921

Once memory usage reached 0.921, the error messages began (the 0.9 default ceiling is discussed below), so we can confirm that the out-of-memory failure was caused by the map join optimization.

Solution

One option is to raise hive.mapjoin.localtask.max.memory.usage, the memory-usage ceiling, from its default of 0.9 up to 0.999. But this clearly treats neither the symptom nor the root cause: the table simply does not fit in memory.
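For completeness, the setting looks like this. A sketch only; for a table this size it merely postpones the failure:

SET hive.mapjoin.localtask.max.memory.usage=0.999;   -- default is 0.9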

The better fixes are the following (a consolidated sketch follows this list):

• set hive.auto.convert.join=false — disable automatic map join conversion; this parameter has defaulted to true since Hive 0.11.0.

• Lower hive.mapjoin.smalltable.filesize, which defaults to 25000000 (as of version 2.0.0).

• set hive.ignore.mapjoin.hint=false — stop ignoring explicit MAPJOIN hints, so map joins only happen where you ask for them.
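A consolidated sketch of these knobs, to be placed at the top of the script; the filesize value is illustrative, not a recommendation:

SET hive.auto.convert.join=false;               -- turn off automatic map join conversion
SET hive.mapjoin.smalltable.filesize=10000000;  -- or tighten the small-table threshold (bytes)
SET hive.ignore.mapjoin.hint=false;             -- honor explicit /*+ MAPJOIN(...) */ hints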

The most decisive option is the first: with automatic map join conversion disabled outright, the problem above cannot occur. To disable it when launching from a script, write:

beeline --hiveconf hive.auto.convert.join=false
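If the query lives in a file, the same flag combines with beeline's -f option; the file name here is hypothetical:

beeline --hiveconf hive.auto.convert.join=false -f my_query.hql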

With map join disabled, the job ran to completion, though the original 3 stages became 5, presumably because the converted joins fell back to regular shuffle joins.

A Few Thoughts

Thinking this over: Hive first judges one of the joined tables to be the small table, and only then loads it into memory. On my cluster hive.mapjoin.smalltable.filesize is set to 100 MB, so a table that passes the small-table check ought to fit in memory comfortably; memory is, after all, much larger than that. So why the overflow? Did Hive misjudge the table's size? According to some practitioners, Hive may end up loading the large table rather than the small one into memory. Is that a Hive bug? Pointers from anyone who knows would be much appreciated.
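One plausible factor, offered as an assumption rather than a confirmed answer: the small-table check compares the table's on-disk size, which is often compressed, while the in-memory hashtable carries per-row Java object overhead and can be many times larger. Before suspecting a bug, it is also worth checking what size Hive actually records for each side of the join (dim_region is a hypothetical table name):

DESCRIBE FORMATTED dim_region;                  -- look for totalSize / rawDataSize under Table Parameters
ANALYZE TABLE dim_region COMPUTE STATISTICS;    -- refresh the stats if they look stale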

hear me roar!
