1. Problem description
I ran into a problem today: a large table joined against a zipper (slowly-changing) table to look up the matching record.
The large table t_big has about 20 million rows.
The small table t_lalian is a zipper table with about 5,000 rows.
The join condition between the two tables is
t_big.tx_date >= t_lalian.start_date and t_big.tx_date < t_lalian.end_date
Because this is a non-equi join, the condition cannot go in the ON clause and has to be written in the WHERE clause. The join therefore degenerates into a Cartesian product, which produces far too much intermediate data; and since a Cartesian product is global, Hive runs it with a single reducer. In the job log, the reduce progress stays stuck at 99%:
2023-05-18 12:06:01,292 Stage-1 map = 100%, reduce = 87%, Cumulative CPU 2055.94 sec
2023-05-18 12:06:07,496 Stage-1 map = 100%, reduce = 88%, Cumulative CPU 2063.82 sec
2023-05-18 12:06:12,665 Stage-1 map = 100%, reduce = 89%, Cumulative CPU 2071.49 sec
2023-05-18 12:06:25,072 Stage-1 map = 100%, reduce = 90%, Cumulative CPU 2086.6 sec
2023-05-18 12:06:31,275 Stage-1 map = 100%, reduce = 91%, Cumulative CPU 2094.27 sec
2023-05-18 12:06:37,487 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 2101.46 sec
2023-05-18 12:06:48,869 Stage-1 map = 100%, reduce = 93%, Cumulative CPU 2116.57 sec
2023-05-18 12:07:01,276 Stage-1 map = 100%, reduce = 94%, Cumulative CPU 2130.99 sec
2023-05-18 12:07:07,514 Stage-1 map = 100%, reduce = 95%, Cumulative CPU 2138.27 sec
2023-05-18 12:07:13,721 Stage-1 map = 100%, reduce = 96%, Cumulative CPU 2146.04 sec
2023-05-18 12:07:19,928 Stage-1 map = 100%, reduce = 98%, Cumulative CPU 2153.28 sec
2023-05-18 12:07:25,095 Stage-1 map = 100%, reduce = 99%, Cumulative CPU 2161.03 sec
2023-05-18 12:08:26,075 Stage-1 map = 100%, reduce = 99%, Cumulative CPU 2234.01 sec
2023-05-18 12:09:27,065 Stage-1 map = 100%, reduce = 99%, Cumulative CPU 2317.47 sec
2023-05-18 12:10:28,031 Stage-1 map = 100%, reduce = 99%, Cumulative CPU 2391.56 sec
2023-05-18 12:11:28,965 Stage-1 map = 100%, reduce = 99%, Cumulative CPU 2462.62 sec
2023-05-18 12:12:29,865 Stage-1 map = 100%, reduce = 99%, Cumulative CPU 2523.88 sec
2023-05-18 12:13:30,747 Stage-1 map = 100%, reduce = 99%, Cumulative CPU 2597.22 sec
2023-05-18 12:14:31,654 Stage-1 map = 100%, reduce = 99%, Cumulative CPU 2668.52 sec
2023-05-18 12:15:32,505 Stage-1 map = 100%, reduce = 99%, Cumulative CPU 2773.16 sec
......
......
2023-05-18 12:34:47,688 Stage-1 map = 100%, reduce = 99%, Cumulative CPU 4100.79 sec
2023-05-18 12:35:48,450 Stage-1 map = 100%, reduce = 99%, Cumulative CPU 4170.27 sec
2023-05-18 12:36:49,253 Stage-1 map = 100%, reduce = 99%, Cumulative CPU 4237.83 sec
2023-05-18 12:37:50,034 Stage-1 map = 100%, reduce = 99%, Cumulative CPU 4303.86 sec
2023-05-18 12:38:50,803 Stage-1 map = 100%, reduce = 99%, Cumulative CPU 4371.48 sec
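The degenerate join above can be simulated in plain Python: with no equality key, every row of the big table has to be paired with every row of the zipper table before the range predicate filters the pairs. A minimal sketch with hypothetical sample data (dates written as yyyymmdd integers for brevity):

```python
# Hypothetical sample data mimicking t_big and t_lalian.
t_big = [{"id": i, "tx_date": d} for i, d in enumerate([20230101, 20230215, 20230401])]
t_lalian = [  # zipper table: each price is valid over [start_date, end_date)
    {"price": 10, "start_date": 20230101, "end_date": 20230301},
    {"price": 12, "start_date": 20230301, "end_date": 99991231},
]

# With no equality key, the join is a full cross product filtered afterwards --
# exactly what a single Hive reducer ends up computing.
result = [
    (b["id"], z["price"])
    for b in t_big       # every big-table row ...
    for z in t_lalian    # ... is paired with every zipper row (the Cartesian product)
    if z["start_date"] <= b["tx_date"] < z["end_date"]
]
print(result)  # each tx_date falls into exactly one validity interval
```

With 20 million big-table rows and 5,000 zipper rows, that intermediate product is 100 billion pairs, all funneled through one reducer.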
2. Solution
When Hive runs in strict mode (hive.mapred.mode = strict), Cartesian products are not allowed in HQL at all, which shows how weakly Hive supports them: with no join key, Hive can only use a single reducer to compute the Cartesian product.
So we need to hand the two tables an artificial join key, so the work is no longer confined to one reducer:
Add a random-number column num_key to t_big, with values in the range 1-20.
Replicate the t_lalian data 20 times via a join, tagging each copy with a number rn.
The two tables can then be equi-joined on num_key.
While we are at it, set the number of reducers explicitly, in case Hive's automatic estimate gets it wrong.
set hive.auto.convert.join=false;
set mapred.reduce.tasks = 10;
select tmp1.*, tmp2.price
from
(
  -- salt the big table: uniform integer in 1..20
  -- (ceiling(rand()*19) would only cover 1..19 and leave copy 20 unmatched)
  select *, floor(rand() * 20) + 1 as num_key
  from t_big
) tmp1
join
(
  -- replicate the zipper table 20 times; the helper table t100
  -- supplies the copy numbers rn = 1..20
  select t1.*, t2.rn
  from t_lalian t1
  cross join ( select id as rn from t100 order by id limit 20 ) t2
) tmp2
on tmp1.num_key = tmp2.rn
where tmp1.tx_date >= tmp2.start_date
  and tmp1.tx_date < tmp2.end_date;
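The replicate-and-salt trick is correctness-preserving: every big-table row carries exactly one salt value, and every zipper row exists once per salt value, so the equi-join on num_key = rn followed by the range filter returns the same rows as the direct range join. A sketch checking this in plain Python, with the same hypothetical data as above:

```python
import random

random.seed(0)
N_BUCKETS = 20  # number of small-table copies / range of num_key

t_big = [{"id": i, "tx_date": d} for i, d in enumerate([20230101, 20230215, 20230401])]
t_lalian = [
    {"price": 10, "start_date": 20230101, "end_date": 20230301},
    {"price": 12, "start_date": 20230301, "end_date": 99991231},
]

# Step 1: salt the big table with a uniform key in 1..20 (floor(rand()*20)+1 in HiveQL).
tmp1 = [dict(b, num_key=random.randint(1, N_BUCKETS)) for b in t_big]
# Step 2: replicate the small table 20 times, tagging each copy with rn = 1..20.
tmp2 = [dict(z, rn=rn) for z in t_lalian for rn in range(1, N_BUCKETS + 1)]

# Step 3: equi-join on num_key = rn, then apply the range predicate (the WHERE clause).
salted = [
    (b["id"], z["price"])
    for b in tmp1
    for z in tmp2
    if b["num_key"] == z["rn"]
    and z["start_date"] <= b["tx_date"] < z["end_date"]
]

# Reference: the direct range join over the unsalted tables.
direct = [
    (b["id"], z["price"])
    for b in t_big
    for z in t_lalian
    if z["start_date"] <= b["tx_date"] < z["end_date"]
]
assert sorted(salted) == sorted(direct)  # same result, but now with a join key
```

In Hive, the 20 distinct values of num_key spread the join across multiple reducers instead of one, at the cost of the small table growing 20-fold (5k rows becomes 100k, which is negligible).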
Conclusion:
After the optimization, the execution time dropped from about 18 minutes to about 4 minutes.