Fast Preview of Complex Computations in Spark SQL with a Custom Optimizer Rule

Author: frb502 | Published 2018-12-31 21:18

Scenario

Sometimes when building complex models with Spark SQL we need a quick preview of the data. If the model joins several tables and each of them is large, the preview gets very slow.

Solution

For an ordinary preview we might write SQL like this:

    select a, b, c, d, e from table1 left join table2 on f1 = f2 left join table3 on f2 = f3 limit 1000
    

A query written this way burns a lot of Spark resources, and it is still not fast enough for a preview. A preview does not have to produce the exact final result; a rough cut of the data is enough to guide creating and adjusting the model.

The prerequisite for a fast preview is reading less source data, so our goal is simply to cut down how much of each table actually gets read.

Anyone familiar with Spark SQL internals knows that the LogicalPlan node representing a data source is LogicalRelation. Let's look at its definition.

    case class LogicalRelation(
        relation: BaseRelation,
        output: Seq[AttributeReference],
        catalogTable: Option[CatalogTable])
      extends LeafNode with MultiInstanceRelation {

      // Logical Relations are distinct if they have different output for the sake of transformations.
      override def equals(other: Any): Boolean = other match {
        case l @ LogicalRelation(otherRelation, _, _) => relation == otherRelation && output == l.output
        case _ => false
      }

      // ... (remaining members omitted)
    }

It has a relation field of type BaseRelation. BaseRelation is an abstract class, so the concrete behavior lives in its implementations (such as HadoopFsRelation and JDBCRelation).


Our underlying data is stored on Hadoop, so the implementation to focus on is HadoopFsRelation.

    case class HadoopFsRelation(
        location: FileIndex,
        partitionSchema: StructType,
        dataSchema: StructType,
        bucketSpec: Option[BucketSpec],
        fileFormat: FileFormat,
        options: Map[String, String])(val sparkSession: SparkSession)
    

Note the location field, of type FileIndex; it determines where the relation's data files are located.



The method to watch is listFiles: Spark calls it to obtain the file listing for a scan, so all we need to do is tamper with it there.
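
For reference, the FileIndex contract looks roughly like this in the Spark 2.x source, trimmed here to the members our proxy below overrides:

    trait FileIndex {
      def rootPaths: Seq[Path]
      def listFiles(
          partitionFilters: Seq[Expression],
          dataFilters: Seq[Expression]): Seq[PartitionDirectory]
      def inputFiles: Array[String]
      def refresh(): Unit
      def sizeInBytes: Long
      def partitionSchema: StructType
    }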

Source Code

    // Imports needed for the snippets below (Spark 2.x package layout)
    import scala.collection.mutable.ArrayBuffer
    import scala.util.Random

    import org.apache.hadoop.fs.{FileStatus, Path}
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.catalyst.expressions.Expression
    import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
    import org.apache.spark.sql.catalyst.rules.Rule
    import org.apache.spark.sql.execution.datasources.{FileIndex, HadoopFsRelation, LogicalRelation, PartitionDirectory}
    import org.apache.spark.sql.types.StructType

    // A new optimizer rule extending Rule[LogicalPlan]
    case class SampleExecution(sparkSession: SparkSession) extends Rule[LogicalPlan] {

      override def apply(plan: LogicalPlan): LogicalPlan = plan transform {
        // Only rewrite HadoopFsRelation nodes, and only when sampling is switched on
        case l @ LogicalRelation(r: HadoopFsRelation, output, catalogTable) if sampleExecution =>
          if (!r.location.isInstanceOf[SampleFileIndex]) {
            // Swap in our sampling FileIndex, keeping everything else unchanged
            val relation = HadoopFsRelation(new SampleFileIndex(r.location), r.partitionSchema,
              r.dataSchema, r.bucketSpec, r.fileFormat, r.options)(r.sparkSession)
            LogicalRelation(relation, output, catalogTable)
          } else {
            l
          }
      }

      // Sampling is toggled per query through a local property on the SparkContext
      private def sampleExecution: Boolean = {
        val sampleExecution = sparkSession.sparkContext.getLocalProperty("spark.bdp.sample.execution")
        if (sampleExecution != null) {
          return sampleExecution.toBoolean
        }
        false
      }
    }

    // A FileIndex proxy that wraps and delegates to the original FileIndex
    class SampleFileIndex(fileIndex: FileIndex) extends FileIndex {

      override def rootPaths: Seq[Path] = fileIndex.rootPaths

      override def partitionSchema: StructType = fileIndex.partitionSchema

      override def sizeInBytes: Long = fileIndex.sizeInBytes

      // This is the key: hand back only a hand-picked subset of the files to scan
      override def listFiles(partitionFilters: Seq[Expression],
                             dataFilters: Seq[Expression]): Seq[PartitionDirectory] = {
        sampleFiles(fileIndex.listFiles(partitionFilters, dataFilters))
      }

      override def refresh(): Unit = fileIndex.refresh()

      override def inputFiles: Array[String] = fileIndex.inputFiles

      private def sampleFiles(partitionDirList: Seq[PartitionDirectory]): Seq[PartitionDirectory] = {
        val candidates = new ArrayBuffer[PartitionDirectory]()
        val sampleFiles = new ArrayBuffer[PartitionDirectory]()

        // Randomly pick up to sampleFileCount non-empty partitions as candidates
        for (i <- Random.shuffle(Seq.range(0, partitionDirList.length))) {
          if (candidates.size < MobiusConf.sampleFileCount && partitionDirList(i).files.nonEmpty) {
            candidates.append(partitionDirList(i))
          }
        }
        // All partitions were empty: nothing to sample, return the input as-is
        if (candidates.isEmpty) {
          return partitionDirList
        }
        // Files per partition, rounded up so the total reaches sampleFileCount
        var fileCountPerPartition = MobiusConf.sampleFileCount / candidates.size
        if (fileCountPerPartition * candidates.size < MobiusConf.sampleFileCount) {
          fileCountPerPartition += 1
        }

        for (c <- candidates) {
          val files = new ArrayBuffer[FileStatus]()
          // Prefer files between sampleFileMinLen and sampleFileMaxLen; these two
          // settings keep the selected files neither too large nor too small
          for (i <- Random.shuffle(Seq.range(0, c.files.length))) {
            val file = c.files(i)
            if (files.size < fileCountPerPartition
              && file.getLen > MobiusConf.sampleFileMinLen
              && file.getLen < MobiusConf.sampleFileMaxLen) {
              files.append(file)
            }
          }
          // If we are still short and the partition has spare files, top up randomly
          if (files.size < fileCountPerPartition && c.files.size > files.size) {
            for (i <- Random.shuffle(Seq.range(0, c.files.length))) {
              val file = c.files(i)
              if (files.size < fileCountPerPartition
                && file.getLen > 0 && !files.contains(file)) {
                files.append(file)
              }
            }
          }
          sampleFiles.append(PartitionDirectory(c.values, files))
        }
        sampleFiles
      }
    }
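
MobiusConf above is the author's own configuration holder and its definition isn't shown in the post. A minimal stand-in so the snippet compiles, with purely illustrative values, might look like:

    // Hypothetical stand-in for the author's MobiusConf; these values are
    // illustrative guesses, not the post's actual configuration.
    object MobiusConf {
      val sampleFileCount: Int = 10                      // total files to sample
      val sampleFileMinLen: Long = 1L * 1024 * 1024      // prefer files larger than ~1 MB...
      val sampleFileMaxLen: Long = 128L * 1024 * 1024    // ...and smaller than ~128 MB
    }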
    

At this point, adding our optimizer rule to Spark gives us the fast-preview logic, and it is blazing fast!
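
The post doesn't show how the rule is registered. One plausible way to wire it in, sketched here using Spark 2.x's experimental optimizer hook (the property name matches the check in SampleExecution above):

    // Sketch only: register the rule, then toggle sampling for a single
    // preview query via the local property SampleExecution reads.
    val spark = SparkSession.builder().appName("fast-preview").getOrCreate()
    spark.experimental.extraOptimizations ++= Seq(SampleExecution(spark))

    spark.sparkContext.setLocalProperty("spark.bdp.sample.execution", "true")
    spark.sql(
      "select a, b, c from table1 left join table2 on f1 = f2 limit 1000").show()
    spark.sparkContext.setLocalProperty("spark.bdp.sample.execution", "false")

Because the toggle is a thread-local property rather than a session-wide config, the same SparkSession can serve sampled preview queries and exact production queries side by side.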
