
Notes on Troubleshooting a Spark SQL Problem

Author: JX907 | Published 2020-12-04 09:53

    Problem: While debugging a Spark SQL left-join query, the results came back wrong. After a day of digging, it turned out that writing the filter as a subquery produced the correct results. Comparing the execution plans showed that in the first form, the optimizer pushed the "where t.ip is null and t.dn is null" condition down into the left-side subquery as well (see the isnull(dn#1) && isnull(ip#2) conjuncts in the left branch's Filter below), so the left subquery could not return any rows.

    Conclusion: When the filter is written in the outer WHERE clause, Spark rewrites the left join as an inner join, and the conditions on the join-key columns are also pushed down into both the left and right sides during predicate pushdown, so the left side no longer yields the expected rows. When the same filter is written inside the subquery, both the results and the plan are correct. At first sight this looks like a bug in Spark's plan optimization, but it is more plausibly standard SQL semantics at work: the predicate t.dt = '20201001' in the WHERE clause rejects the null-extended rows a left join would produce, so Catalyst's EliminateOuterJoin rule can legally convert the LEFT JOIN into an INNER JOIN; constraint inference then propagates the IS NULL predicates across the equi-join keys into both branches, yielding the unsatisfiable isnull(...) && isnotnull(...) filter visible in the first plan below.
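
    To see the behavior end to end, here is a minimal, self-contained sketch; the inline views a and b and their sample rows are hypothetical stand-ins for the original tables. Running it in a spark-sql shell should show a Join Inner in the first plan and a Join LeftOuter in the second:

    -- Hypothetical toy data (not the original tables).
    CREATE OR REPLACE TEMPORARY VIEW a AS
    SELECT * FROM VALUES ('1.1.1.1', 'x.com'), ('2.2.2.2', 'y.com') AS a(ip, dn);

    CREATE OR REPLACE TEMPORARY VIEW b AS
    SELECT * FROM VALUES ('1.1.1.1', 'x.com', '20201001') AS b(ip, dn, dt);

    -- Variant 1: right-table filter in the outer WHERE. b.dt = '20201001'
    -- rejects null-extended rows, so the optimizer can rewrite the left join
    -- as an inner join; expect an empty result.
    EXPLAIN EXTENDED
    SELECT a.ip, a.dn
    FROM a LEFT JOIN b ON a.ip = b.ip AND a.dn = b.dn
    WHERE b.ip IS NULL AND b.dn IS NULL AND b.dt = '20201001';

    -- Variant 2: right-table filter inside a subquery. The join stays
    -- LeftOuter and the row ('2.2.2.2', 'y.com') survives, as intended.
    EXPLAIN EXTENDED
    SELECT a.ip, a.dn
    FROM a LEFT JOIN (SELECT ip, dn FROM b WHERE dt = '20201001') t
      ON a.ip = t.ip AND a.dn = t.dn
    WHERE t.ip IS NULL AND t.dn IS NULL;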

    Recorded details
    1. Filter condition in the WHERE clause
    select
    oneday.dn, oneday.ip, '20201202', '20201202'
    from
    (
    select
    ip,dn
    from dwd_dns.t_ip_dn_his_rel2
    where dt = '20201202'
    group by ip,dn
    ) oneday left join dwd_dns.t_ip_dn_first t on t.ip = oneday.ip and t.dn = oneday.dn
    where t.ip is null and t.dn is null and t.dt = '20201001'

    Execution plan (note that the join has become Join Inner, and the left branch's Filter now contains the contradictory isnull(ip#2) && isnotnull(ip#2)):

    == Optimized Logical Plan ==
    InsertIntoHiveTable dwd_dns.t_ip_dn_first, org.apache.hadoop.hive.ql.io.orc.OrcSerde, Map(dt -> None), true, false, [dn, ip, first_time, dt]
    +- Project [dn#1, ip#2, 20201202 AS first_time#28, 20201202 AS dt#29]
       +- Join Inner, ((ip#8 = ip#2) && (dn#7 = dn#1))
          :- Aggregate [ip#2, dn#1], [ip#2, dn#1]
          :  +- Project [dn#1, ip#2]
          :     +- Filter (((((isnotnull(dt#6) && (dt#6 = 20201202)) && isnull(dn#1)) && isnull(ip#2)) && isnotnull(ip#2)) && isnotnull(dn#1))
          :        +- Relation[uid#0,dn#1,ip#2,cname#3,dnsip#4,probe_time#5,dt#6] orc
          +- Project [dn#7, ip#8]
             +- Filter (((((isnotnull(dt#10) && isnull(ip#8)) && isnull(dn#7)) && (dt#10 = 20201001)) && isnotnull(ip#8)) && isnotnull(dn#7))
                +- Relation[dn#7,ip#8,first_time#9,dt#10] orc

    2. Filter condition in a subquery
    select
    /*+ REPARTITION(10) */
    oneday.dn, oneday.ip, '20201202', '20201202'
    from
    (
    select
    ip,dn
    from dwd_dns.t_ip_dn_his_rel2
    where dt = '20201202'
    group by ip,dn
    ) oneday left join
    (
    select dn, ip
    from
    dwd_dns.t_ip_dn_first
    where dt = '20201001'
    ) t on t.ip = oneday.ip and t.dn = oneday.dn
    where t.ip is null and t.dn is null

    Execution plan (the join stays LeftOuter, and the isnull filter is now applied above the join, after it runs):

    == Optimized Logical Plan ==
    InsertIntoHiveTable dwd_dns.t_ip_dn_first, org.apache.hadoop.hive.ql.io.orc.OrcSerde, Map(dt -> None), true, false, [dn, ip, first_time, dt]
    +- Project [dn#1, ip#2, 20201202 AS first_time#28, 20201202 AS dt#29]
       +- Repartition 10, true
          +- Project [dn#1, ip#2]
             +- Filter (isnull(ip#8) && isnull(dn#7))
                +- Join LeftOuter, ((ip#8 = ip#2) && (dn#7 = dn#1))
                   :- Aggregate [ip#2, dn#1], [ip#2, dn#1]
                   :  +- Project [dn#1, ip#2]
                   :     +- Filter (isnotnull(dt#6) && (dt#6 = 20201202))
                   :        +- Relation[uid#0,dn#1,ip#2,cname#3,dnsip#4,probe_time#5,dt#6] orc
                   +- Project [dn#7, ip#8]
                      +- Filter (((isnotnull(dt#10) && (dt#10 = 20201001)) && isnotnull(ip#8)) && isnotnull(dn#7))
                         +- Relation[dn#7,ip#8,first_time#9,dt#10] orc
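
    As a side note, the intent here, rows of oneday that do not exist in t, can also be expressed directly with Spark SQL's LEFT ANTI JOIN, which avoids the IS NULL probe and the pitfall above entirely. A sketch against the original tables:

    select /*+ REPARTITION(10) */
    oneday.dn, oneday.ip, '20201202', '20201202'
    from
    (
    select ip, dn
    from dwd_dns.t_ip_dn_his_rel2
    where dt = '20201202'
    group by ip, dn
    ) oneday left anti join
    (
    select dn, ip
    from dwd_dns.t_ip_dn_first
    where dt = '20201001'
    ) t on t.ip = oneday.ip and t.dn = oneday.dn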
