Spark occasionally throws a NullPointerException when querying ORC-format data

Author: 李斯不怨 | Published 2020-04-03 15:48

    When Spark queries ORC-format data, it occasionally fails with this error:

    Caused by: java.lang.NullPointerException
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$BISplitStrategy.getSplits(OrcInputFormat.java:560)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1010)
        ... 47 more

    Tracing into the code:

    org.apache.hadoop.hive.ql.io.orc.OrcInputFormat

    static enum SplitStrategyKind {
      HYBRID,
      BI,
      ETL
    }

    ...

    Context(Configuration conf) {
      this.conf = conf;
      minSize = conf.getLong(MIN_SPLIT_SIZE, DEFAULT_MIN_SPLIT_SIZE);
      maxSize = conf.getLong(MAX_SPLIT_SIZE, DEFAULT_MAX_SPLIT_SIZE);
      String ss = conf.get(ConfVars.HIVE_ORC_SPLIT_STRATEGY.varname);
      if (ss == null || ss.equals(SplitStrategyKind.HYBRID.name())) {
        splitStrategyKind = SplitStrategyKind.HYBRID;
      } else {
        LOG.info("Enforcing " + ss + " ORC split strategy");
        splitStrategyKind = SplitStrategyKind.valueOf(ss);
      }

    ...

    switch (context.splitStrategyKind) {
      case BI:
        // BI strategy requested through config
        splitStrategy = new BISplitStrategy(context, fs, dir, children, isOriginal,
            deltas, covered);
        break;
      case ETL:
        // ETL strategy requested through config
        splitStrategy = new ETLSplitStrategy(context, fs, dir, children, isOriginal,
            deltas, covered);
        break;
      default:
        // HYBRID strategy
        if (avgFileSize > context.maxSize) {
          splitStrategy = new ETLSplitStrategy(context, fs, dir, children, isOriginal, deltas,
              covered);
        } else {
          splitStrategy = new BISplitStrategy(context, fs, dir, children, isOriginal, deltas,
              covered);
        }
        break;
    }
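The selection logic above can be condensed into a small runnable sketch. This is hypothetical and self-contained; only the control flow mirrors the Hive source, and all names besides `SplitStrategyKind` and the strategy class names are illustrative:

```java
// Hypothetical, self-contained sketch of OrcInputFormat's strategy selection.
// Real Hive wires this through Context/FileSystem machinery omitted here.
public class OrcSplitStrategyDemo {

    enum SplitStrategyKind { HYBRID, BI, ETL }

    // Mirrors the Context constructor: null or "HYBRID" means HYBRID;
    // anything else goes through valueOf() (an unknown value would throw
    // IllegalArgumentException).
    static SplitStrategyKind fromConf(String ss) {
        if (ss == null || ss.equals(SplitStrategyKind.HYBRID.name())) {
            return SplitStrategyKind.HYBRID;
        }
        return SplitStrategyKind.valueOf(ss);
    }

    // Mirrors the switch: HYBRID picks ETL for large files, BI otherwise.
    static String choose(SplitStrategyKind kind, long avgFileSize, long maxSize) {
        switch (kind) {
            case BI:
                return "BISplitStrategy";
            case ETL:
                return "ETLSplitStrategy";
            default: // HYBRID
                return avgFileSize > maxSize ? "ETLSplitStrategy" : "BISplitStrategy";
        }
    }

    public static void main(String[] args) {
        // Unset config plus small files: HYBRID silently degrades to BI.
        System.out.println(choose(fromConf(null), 1 << 20, 256L << 20));  // BISplitStrategy
        // Forcing ETL bypasses BISplitStrategy entirely.
        System.out.println(choose(fromConf("ETL"), 1 << 20, 256L << 20)); // ETLSplitStrategy
    }
}
```

Note that the default path never logs anything, so a job can end up inside `BISplitStrategy` without any hint in the logs that the strategy was chosen.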

    org.apache.hadoop.hive.conf.HiveConf.ConfVars

    HIVE_ORC_SPLIT_STRATEGY("hive.exec.orc.split.strategy", "HYBRID", new StringSet("HYBRID", "BI", "ETL"),
        "This is not a user level config. BI strategy is used when the requirement is to spend less time in split generation" +
        " as opposed to query execution (split generation does not read or cache file footers)." +
        " ETL strategy is used when spending little more time in split generation is acceptable" +
        " (split generation reads and caches file footers). HYBRID chooses between the above strategies" +
        " based on heuristics."),

    The HYBRID mode reads the footers for all files if there are fewer files than expected mapper count, switching over to generating 1 split per file if the average file sizes are smaller than the default HDFS blocksize. ETL strategy always reads the ORC footers before generating splits, while the BI strategy generates per-file splits fast without reading any data from HDFS.
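The size-based part of that description can be paraphrased as a tiny decision function. This is a hypothetical sketch; the real heuristic in `OrcInputFormat` takes more inputs (delta files, expected mapper count) than shown here:

```java
// Hypothetical sketch of the HYBRID size heuristic: when the average file
// size falls at or below the threshold (roughly the HDFS block size), HYBRID
// degenerates to BI, i.e. one fast split per file without reading footers.
public class HybridHeuristicDemo {
    static String hybridStrategy(long totalBytes, int fileCount, long blockSize) {
        long avgFileSize = fileCount == 0 ? 0 : totalBytes / fileCount;
        return avgFileSize > blockSize ? "ETL" : "BI";
    }

    public static void main(String[] args) {
        long blockSize = 128L << 20; // assume 128 MiB HDFS blocks
        // Many small files: HYBRID picks BI.
        System.out.println(hybridStrategy(200L << 20, 1000, blockSize)); // BI
        // Few large files: HYBRID picks ETL.
        System.out.println(hybridStrategy(10L << 30, 4, blockSize));     // ETL
    }
}
```

In other words, tables made of many small ORC files are exactly the ones HYBRID routes into `BISplitStrategy`.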

    So `hive.exec.orc.split.strategy` defaults to HYBRID. Under HYBRID, when the condition `avgFileSize > context.maxSize` is not satisfied, the else branch runs:

    } else {
      splitStrategy = new BISplitStrategy(context, fs, dir, children, isOriginal, deltas,
          covered);
    }

    and `BISplitStrategy` is exactly the class the stack trace blames. I have not yet dug into why that class throws the NullPointerException, but the problem can be avoided by changing the setting:

    set hive.exec.orc.split.strategy=ETL
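For a whole Spark application, the same setting can be applied when the session is built; this is a hedged configuration sketch (it assumes a Spark 2.x `SparkSession` with Hive support, and relies on Spark forwarding `spark.hadoop.*` properties into the Hadoop `Configuration` that `OrcInputFormat` reads):

```java
import org.apache.spark.sql.SparkSession;

// Hypothetical: force the ETL split strategy application-wide.
SparkSession spark = SparkSession.builder()
    .appName("orc-etl-split-strategy")
    .config("spark.hadoop.hive.exec.orc.split.strategy", "ETL")
    .enableHiveSupport()
    .getOrCreate();

// Or per session via SQL, equivalent to the SET statement above:
spark.sql("SET hive.exec.orc.split.strategy=ETL");
```

The trade-off, per the config description above, is somewhat slower split generation (footers are read) in exchange for avoiding `BISplitStrategy`.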

    That works around the problem for now; to be continued.
