193. Spark 2.0 Dataset Development in Detail: typed operations

Author: ZFH__ZJ | Published 2019-02-11 23:02

The difference from join is that the schema of the resulting Dataset is different: join flattens the columns of both sides into a single untyped DataFrame, while joinWith keeps each side as its original typed object and returns a Dataset of pairs (here, Dataset[(Employee, Department)]).

Code

    import org.apache.spark.sql.SparkSession

    object TypedOperation {

      // Case classes describing the JSON records; they serve as the typed schemas.
      case class Employee(name: String, age: Long, depId: Long, gender: String, salary: Long)

      case class Department(id: Long, name: String)

      def main(args: Array[String]): Unit = {
        val sparkSession = SparkSession
          .builder()
          .appName("BasicOperation")
          .master("local")
          .getOrCreate()

        // Needed for the $"..." column syntax and the as[T] encoders.
        import sparkSession.implicits._
        import org.apache.spark.sql.functions._

        val employeePath = this.getClass.getClassLoader.getResource("employee.json").getPath
        val departmentPath = this.getClass.getClassLoader.getResource("department.json").getPath

        val employeeDF = sparkSession.read.json(employeePath)
        val departmentDF = sparkSession.read.json(departmentPath)

        // Convert the untyped DataFrames into typed Datasets.
        val employeeDS = employeeDF.as[Employee]
        val departmentDS = departmentDF.as[Department]

        // joinWith returns a Dataset[(Employee, Department)]: each row is a pair
        // of the original typed objects rather than a flattened row.
        employeeDS.joinWith(departmentDS, $"depId" === $"id").show()
      }
    }
    
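To make the schema difference concrete, here is a minimal sketch, not part of the original listing, that assumes the employeeDS and departmentDS defined above and simply prints the schemas produced by join and joinWith for comparison.

    // A minimal sketch, assuming the employeeDS and departmentDS defined above.

    // join: the columns of both sides are flattened into one untyped schema,
    // so printSchema() lists every column from both Datasets in a single flat
    // list (and "name" appears twice, once per side).
    employeeDS.join(departmentDS, $"depId" === $"id").printSchema()

    // joinWith: each row is a pair of the original typed objects; the result is
    // a Dataset[(Employee, Department)] whose schema has just two struct
    // columns, _1 (the Employee) and _2 (the Department).
    employeeDS.joinWith(departmentDS, $"depId" === $"id").printSchema()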
