A Summary of the Spark aggregate Operator

Author: joKerAndy | Published 2019-09-27 11:01
  • Official documentation

/**
   * Aggregate the elements of each partition, and then the results for all the partitions, using
   * given combine functions and a neutral "zero value". This function can return a different result
   * type, U, than the type of this RDD, T. Thus, we need one operation for merging a T into an U
   * and one operation for merging two U's, as in scala.TraversableOnce. Both of these functions are
   * allowed to modify and return their first argument instead of creating a new U to avoid memory
   * allocation.
   *
   * @param zeroValue the initial value for the accumulated result of each partition for the
   *                  `seqOp` operator, and also the initial value for the combine results from
   *                  different partitions for the `combOp` operator - this will typically be the
   *                  neutral element (e.g. `Nil` for list concatenation or `0` for summation)
   * @param seqOp an operator used to accumulate results within a partition
   * @param combOp an associative operator used to combine results from different partitions
   */
  def aggregate[U: ClassTag](zeroValue: U)(seqOp: (U, T) => U, combOp: (U, U) => U): U = withScope {...}
  • In plain terms

aggregate first merges the elements within each partition of the RDD, yielding one result per partition, and then merges all the per-partition results into the final value. The first parameter of aggregate is the zero value; per the official docs, it participates both in the merge inside each partition (seqOp) and in the final merge across partitions (combOp).
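Because the result type U may differ from the element type T, aggregate can do things a plain reduce cannot. As a minimal sketch (assuming a spark-shell session, where sc is the SparkContext), here is the classic sum-and-count-in-one-pass pattern, with an (Int, Int) pair as the zero value:

val nums = sc.parallelize(List(1, 2, 3, 4, 5), 2)
val (sum, count) = nums.aggregate((0, 0))(
  (acc, n) => (acc._1 + n, acc._2 + 1), // seqOp: fold one Int element into the per-partition (sum, count) accumulator
  (a, b) => (a._1 + b._1, a._2 + b._2)  // combOp: merge two per-partition accumulators
)
val avg = sum.toDouble / count // 3.0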

  • Examples

Talk is cheap. Show me the code.

  • Example 1
val rdd1 = sc.parallelize(List("12", "23", "345", "4567"), 2) // explicitly request 2 partitions
val result = rdd1.aggregate("")((x, y) => math.max(x.length, y.length).toString, (x, y) => x + y) // "24" (or "42", see step 5)

Analysis:
1. Assume the two partitions are p0 and p1, with "12" and "23" in p0 and "345" and "4567" in p1 (you can verify this with mapPartitionsWithIndex, as in the sketch below; Spark has its own logic for distributing elements evenly across partitions).
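A minimal sketch of that check (assuming the rdd1 defined above):

rdd1.mapPartitionsWithIndex { (idx, iter) =>
  iter.map(e => s"partition $idx: $e") // tag each element with its partition index
}.collect().foreach(println)
// expected output (assignment may vary across Spark versions):
// partition 0: 12
// partition 0: 23
// partition 1: 345
// partition 1: 4567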
2. Execution in p0
The zero value takes part in the computation:

<pseudocode>
math.max("".length, "12".length).toString()   // result is "2"
math.max("2".length, "23".length).toString()  // result is "2"; p0's final result is "2"

3. Execution in p1
The zero value takes part in the computation:

<pseudocode>
math.max("".length, "345".length).toString()   // result is "3"
math.max("3".length, "4567".length).toString() // result is "4"; p1's final result is "4"

4. Combine the results of p0 and p1
The zero value takes part in the computation:

<pseudocode>
""+"2"="2"
"2"+"4"="24"

5. Note: the final result is not guaranteed to be unique.
The results of p0 and p1 may arrive at the driver in either order, so the result above could also be "42".
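To observe this yourself, run the aggregation repeatedly and collect the distinct outcomes; a minimal sketch (assuming the rdd1 above; whether both orderings actually appear depends on task scheduling):

val results = (1 to 20).map { _ =>
  rdd1.aggregate("")((x, y) => math.max(x.length, y.length).toString, (x, y) => x + y)
}.distinct
// results may contain "24", "42", or both, depending on which
// partition's task result reaches the driver first on each run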

  • Example 2
val rdd1 = sc.parallelize(List("12", "23", "345", "4567"), 2) // explicitly request 2 partitions
val result = rdd1.aggregate("12")((x, y) => math.min(x.length, y.length).toString, (x, y) => x + y) // "1211"

Analysis:
1. As before, assume the two partitions are p0 and p1, with "12" and "23" in p0 and "345" and "4567" in p1 (verifiable with mapPartitionsWithIndex, as shown under Example 1).
2. Execution in p0
The zero value takes part in the computation:

<pseudocode>
math.min("12".length, "12".length).toString()  // result is "2"
math.min("2".length, "23".length).toString()   // result is "1"; p0's final result is "1"

3. Execution in p1
The zero value takes part in the computation:

<pseudocode>
math.min("12".length, "345".length).toString()  // result is "2"
math.min("2".length, "4567".length).toString()  // result is "1"; p1's final result is "1"

4. Combine the results of p0 and p1
The zero value takes part in the computation:

<pseudocode>
"12"+"1"="121"
"121"+"1"="1211"
