Spark Source Code Analysis (1)


Author: mainroot | Published 2018-11-14 15:13

    I. Startup

    1. Analyzing spark-submit

    On Linux this is just a shell script, and its contents are very simple:

    if [ -z "${SPARK_HOME}" ]; then
      export SPARK_HOME="$(cd "`dirname "$0"`"/..; pwd)"
    fi
    

    It finds the directory containing the spark-submit command, goes up one level, and assigns that path to SPARK_HOME.

    It then disables randomized string hashing in Python 3.3+; PySpark needs string hash values to be consistent across worker processes (for example in hash-based partitioning), so the seed is pinned:
    export PYTHONHASHSEED=0
    

    The key line is the following:

    exec "${SPARK_HOME}"/bin/spark-class org.apache.spark.deploy.SparkSubmit "$@" 
    

    In other words, it execs the spark-class command with org.apache.spark.deploy.SparkSubmit as the first argument, followed by all of the original command-line arguments.

    2. Analyzing spark-class

    The key code is as follows:

    # Load the Spark environment variables
    . "${SPARK_HOME}"/bin/load-spark-env.sh
    
    # Locate the java command
    if [ -n "${JAVA_HOME}" ]; then
      RUNNER="${JAVA_HOME}/bin/java"
    else
      if [ `command -v java` ]; then
        RUNNER="java"
      else
        echo "JAVA_HOME is not set" >&2
        exit 1
      fi
    fi
    
    # Locate the assembly jar; it is already on the launch classpath, so jobs do not need to bundle it at submit time
    SPARK_ASSEMBLY_JAR=
    if [ -f "${SPARK_HOME}/RELEASE" ]; then
      ASSEMBLY_DIR="${SPARK_HOME}/lib"
    else
     ASSEMBLY_DIR="${SPARK_HOME}/assembly/target/scala-$SPARK_SCALA_VERSION"
    fi
    GREP_OPTIONS=
    num_jars="$(ls -1 "$ASSEMBLY_DIR" | grep "^spark-assembly.*hadoop.*\.jar$" | wc -l)"
    if [ "$num_jars" -eq "0" -a -z "$SPARK_ASSEMBLY_JAR" -a "$SPARK_PREPEND_CLASSES" != "1" ]; then
      echo "Failed to find Spark assembly in $ASSEMBLY_DIR." 1>&2
      echo "You need to build Spark before running this program." 1>&2
      exit 1
    fi
    if [ -d "$ASSEMBLY_DIR" ]; then
      ASSEMBLY_JARS="$(ls -1 "$ASSEMBLY_DIR" | grep "^spark-assembly.*hadoop.*\.jar$" || true)"
      if [ "$num_jars" -gt "1" ]; then
        echo "Found multiple Spark assembly jars in $ASSEMBLY_DIR:" 1>&2
        echo "$ASSEMBLY_JARS" 1>&2
        echo "Please remove all but one jar." 1>&2
        exit 1
      fi
    fi
    SPARK_ASSEMBLY_JAR="${ASSEMBLY_DIR}/${ASSEMBLY_JARS}"
    # Use the assembly jar as the launch classpath
    LAUNCH_CLASSPATH="$SPARK_ASSEMBLY_JAR"
    
    # Prepend the launcher build output when SPARK_PREPEND_CLASSES is set
    if [ -n "$SPARK_PREPEND_CLASSES" ]; then
     LAUNCH_CLASSPATH="${SPARK_HOME}/launcher/target/scala-$SPARK_SCALA_VERSION/classes:$LAUNCH_CLASSPATH"
    fi
    export _SPARK_ASSEMBLY="$SPARK_ASSEMBLY_JAR"
    

    The CLASSPATH is now ready; next the script builds a java -cp command line to start the JVM:

    CMD=()
    while IFS= read -d '' -r ARG; do
      CMD+=("$ARG")
    # Run org.apache.spark.launcher.Main, which builds the launch command for the Spark application
    done < <("$RUNNER" -cp "$LAUNCH_CLASSPATH" org.apache.spark.launcher.Main "$@")
    exec "${CMD[@]}"              
    

    As you can see, the org.apache.spark.launcher.Main class is used to build the command that starts org.apache.spark.deploy.SparkSubmit, which in turn launches the user's application. Main writes the assembled command to stdout with NUL ('\0') separators; the while loop above reads it back into the CMD array, and exec then replaces the shell process with that command.
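
    Below is a minimal, self-contained sketch (not Spark code; the token list and the names NulProtocolDemo, wire and parsed are made up for illustration) of why NUL works as the separator: it can never occur inside a shell argument, so the exact argument list survives the round trip.

    object NulProtocolDemo {
      def main(args: Array[String]): Unit = {
        // What launcher.Main conceptually writes to stdout: every token of the
        // final command followed by a NUL byte.
        val tokens = Seq("/usr/bin/java", "-cp", "/opt/spark/lib/*",
          "org.apache.spark.deploy.SparkSubmit", "--master", "yarn")
        val wire = tokens.map(_ + "\u0000").mkString

        // What the shell-side `while IFS= read -d ''` loop effectively does:
        // split on NUL to recover the exact argument list before exec-ing it.
        val parsed = wire.split("\u0000").toSeq
        assert(parsed == tokens)
        parsed.foreach(println)
      }
    }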

    3. Analyzing org.apache.spark.launcher.Main

    The main code is as follows:

    public static void main(String[] argsArray) throws Exception {
      checkArgument(argsArray.length > 0, "Not enough arguments: missing class name.");

      List<String> args = new ArrayList<String>(Arrays.asList(argsArray));
      String className = args.remove(0);

      boolean printLaunchCommand = !isEmpty(System.getenv("SPARK_PRINT_LAUNCH_COMMAND"));
      AbstractCommandBuilder builder;
      if (className.equals("org.apache.spark.deploy.SparkSubmit")) {
        try {
          builder = new SparkSubmitCommandBuilder(args);
        } catch (IllegalArgumentException e) {
          // The arguments could not be parsed; rebuild the command so that
          // SparkSubmit prints a usage error for the detected main class.
          printLaunchCommand = false;
          System.err.println("Error: " + e.getMessage());
          System.err.println();

          MainClassOptionParser parser = new MainClassOptionParser();
          try {
            parser.parse(args);
          } catch (Exception ignored) {
            // Ignore parsing exceptions.
          }

          List<String> help = new ArrayList<String>();
          if (parser.className != null) {
            help.add(parser.CLASS);
            help.add(parser.className);
          }
          help.add(parser.USAGE_ERROR);
          builder = new SparkSubmitCommandBuilder(help);
        }
      } else {
        builder = new SparkClassCommandBuilder(className, args);
      }

      Map<String, String> env = new HashMap<String, String>();
      List<String> cmd = builder.buildCommand(env);
      if (printLaunchCommand) {
        System.err.println("Spark Command: " + join(" ", cmd));
        System.err.println("========================================");
      }

      // Print the command NUL-separated so the spark-class wrapper can read
      // it back into an array and exec it.
      List<String> bashCmd = prepareBashCommand(cmd, env);
      for (String c : bashCmd) {
        System.out.print(c);
        System.out.print('\0');
      }
    }
    

    We can set the environment variable

    export SPARK_PRINT_LAUNCH_COMMAND=1
    

    and then run spark-submit to see how the command is assembled; a launch command like the one below is printed to the terminal:

    Spark Command:
    /opt/alanx/jdk/bin/java -cp \
        /opt/alanx/spark/spark/conf/:\
        /opt/alanx/spark/spark/lib/spark-assembly-hadoop.jar:\
        /opt/alanx/hadoop/hadoop/etc/hadoop/:\
        /opt/alanx/hadoop/hadoop/etc/hadoop/:\
        /opt/alanx/kafka/kafka/libs/*.jar:\
        /opt/alanx/hadoop/hadoop/etc/hadoop/:\
        /opt/alanx/hadoop/hadoop/share/hadoop/common/lib/*:\
        /opt/alanx/hadoop/hadoop/share/hadoop/common/*:\
        /opt/alanx/hadoop/hadoop/share/hadoop/hdfs/:\
        /opt/alanx/hadoop/hadoop/share/hadoop/hdfs/lib/*:\
        /opt/alanx/hadoop/hadoop/share/hadoop/hdfs/*:\
        /opt/alanx/hadoop/hadoop/share/hadoop/yarn/lib/*:\
        /opt/alanx/hadoop/hadoop/share/hadoop/yarn/*:\
        /opt/alanx/hadoop/hadoop/share/hadoop/mapreduce/lib/*:\
        /opt/alanx/hadoop/hadoop/share/hadoop/mapreduce/*:\
        /opt/alanx/hadoop/hadoop/contrib/capacity-scheduler/*.jar \
        -Xms1g -Xmx1g -XX:MaxPermSize=256m org.apache.spark.deploy.SparkSubmit \
            --name '$(application name)' \
            --class $(main class) \
            --master yarn \
            --deploy-mode cluster \
            --driver-memory 4g \
            --executor-memory 4g \
            --executor-cores 4 \
            --num-executors 8 \
            --queue thequeue \
            $(application jar)
    

    As you can see, all the dependency jars dictated by the configuration are placed on the command's -cp classpath, and org.apache.spark.deploy.SparkSubmit is then started to launch the user's application.

    4. Analyzing org.apache.spark.deploy.SparkSubmit

    The main function is as follows:

    def main(args: Array[String]): Unit = {
        val appArgs = new SparkSubmitArguments(args)
        appArgs.action match {
          case SparkSubmitAction.SUBMIT => submit(appArgs)
          case SparkSubmitAction.KILL => kill(appArgs)
          case SparkSubmitAction.REQUEST_STATUS => requestStatus(appArgs)
        }
      }
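
    As a side note on where KILL and REQUEST_STATUS come from: spark-submit exposes --kill and --status flags for applications already submitted to a cluster manager, and anything else is a normal submit. The snippet below is only a rough sketch of that dispatch, not the real SparkSubmitArguments code; Action and resolveAction are made-up names.

    object ActionSketch {
      object Action extends Enumeration {
        val SUBMIT, KILL, REQUEST_STATUS = Value
      }

      // --kill <submissionId> and --status <submissionId> target an already
      // submitted application; everything else defaults to a normal submit.
      def resolveAction(args: Seq[String]): Action.Value =
        if (args.contains("--kill")) Action.KILL
        else if (args.contains("--status")) Action.REQUEST_STATUS
        else Action.SUBMIT

      def main(args: Array[String]): Unit =
        println(resolveAction(Seq("--status", "driver-20181114151300-0001"))) // prints REQUEST_STATUS
    }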
    

    Beyond that dispatch there is not much to see, so let's go straight to the submit function:

    private def submit(args: SparkSubmitArguments): Unit = {
        val (childArgs, childClasspath, sysProps, childMainClass) = prepareSubmitEnvironment(args)
        def doRunMain(): Unit = {
            runMain(childArgs, childClasspath, sysProps, childMainClass, args.verbose)
        }
        doRunMain()
      }
    

    This function does little more than hand the prepared arguments to runMain, so let's follow it there:

    private def runMain(
          childArgs: Seq[String],
          childClasspath: Seq[String],
          sysProps: Map[String, String],
          childMainClass: String,
          verbose: Boolean): Unit = {
        val loader =
          if (sysProps.getOrElse("spark.driver.userClassPathFirst", "false").toBoolean) {
            new ChildFirstURLClassLoader(new Array[URL](0),
              Thread.currentThread.getContextClassLoader)
          } else {
            new MutableURLClassLoader(new Array[URL](0),
              Thread.currentThread.getContextClassLoader)
          }
        Thread.currentThread.setContextClassLoader(loader)
        for (jar <- childClasspath) {
          addJarToClasspath(jar, loader)
        }
        for ((key, value) <- sysProps) {
          System.setProperty(key, value)
        }
        var mainClass: Class[_] = null
        mainClass = Utils.classForName(childMainClass)
        val mainMethod = mainClass.getMethod("main", new Array[String](0).getClass)
        mainMethod.invoke(null, childArgs.toArray)
      }
    

    This part uses reflection: it looks up the main method of the submitted class and then calls it through Method.invoke, with a null receiver because main is a static method. A minimal example of the same pattern is shown below.
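
    The following sketch is self-contained and uses invented names (DemoApp, ReflectiveInvokeDemo, the --input argument); Spark resolves the real user class via Utils.classForName, but the reflective call is the same.

    // Stand-in for the user application class that spark-submit would start.
    object DemoApp {
      def main(args: Array[String]): Unit =
        println(s"DemoApp started with: ${args.mkString(" ")}")
    }

    object ReflectiveInvokeDemo {
      def main(args: Array[String]): Unit = {
        // Resolve the class by name, fetch its static main(String[]) method,
        // and invoke it; the receiver is null because main is static.
        val mainClass  = Class.forName("DemoApp")
        val mainMethod = mainClass.getMethod("main", classOf[Array[String]])
        val userArgs   = Array("--input", "/tmp/data")
        // main takes one String[] parameter, so the whole array is passed as
        // a single reflective argument.
        mainMethod.invoke(null, Array[AnyRef](userArgs): _*)
      }
    }

    In client mode this is exactly why the driver runs inside the JVM that spark-submit started: the user's main is just a method call on the current thread, using the class loader set up above. In cluster mode, childMainClass is typically a cluster-specific client class that asks the cluster manager to start the driver elsewhere.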
