Compiling Spark 1.6.0 (Hadoop version 2.6.0-cdh5.7.0)

Author: 移动的红烧肉 | Published 2017-12-20 15:29

    I wanted to play with Spark locally. The rest of my Hadoop stack runs on CDH 5.7.0, so I planned to keep using Cloudera's builds all the way down, but the officially compiled Spark package threw all kinds of errors at runtime. So here are my build notes, shared in the spirit of giving back. Let's go >>>

    Into the rabbit hole (straight to the build steps):

    1. Environment

    CentOS6.8-64位
    JDK7
    Spark1.6.0
    Scala2.10.5
    Hadoop2.6.0-CDH5.7.0
    Maven 3.3.9 (anything 3.3.3+ works)
    

    2. Setting up the build environment

    (1) Installing the JDK (in the spirit of thoroughness, every command is written out)

    • tar -zxvf jdk1.7.0_65.tar.gz -C /opt/cluster/
    • vim /etc/profile (add the environment variables)
    # Java Home
    export JAVA_HOME=/opt/cluster/jdk1.7.0_65
    export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
    export PATH=$PATH:$JAVA_HOME/bin
    
    • source /etc/profile
    • Verify the install: java -version or echo $JAVA_HOME
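
    Before moving on, it can help to check programmatically that the JDK on PATH really is a 7.x release. This is a small sketch; the parsing assumes the usual `java version "1.7.0_65"` output format:

```shell
# Parse the release out of `java -version` (it prints to stderr, hence 2>&1).
ver=$(java -version 2>&1 | awk -F'"' '/version/ {print $2}')
case "$ver" in
  1.7.*) echo "OK: building with JDK $ver" ;;
  *)     echo "warning: expected a 1.7.x JDK, got: ${ver:-nothing}" ;;
esac
```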

    (2) Installing Maven

    • tar -zxvf apache-maven-3.3.9.tar.gz -C /opt/cluster/
    • vim /etc/profile (add the environment variables)
    # Maven Home
    export MAVEN_HOME=/opt/cluster/apache-maven-3.3.9
    export PATH=$PATH:$MAVEN_HOME/bin
    
    • source /etc/profile
    • Verify the install: mvn -version or echo $MAVEN_HOME

    (3) Installing Scala

    • tar -zxvf scala-2.10.5.tgz -C /opt/cluster/
    • vim /etc/profile (add the environment variables)
    # Scala Home
    export SCALA_HOME=/opt/cluster/scala-2.10.5
    export PATH=$PATH:$SCALA_HOME/bin
    
    • source /etc/profile
    • Verify the install: scala

    (4) Unpacking and configuring the Spark source

    • tar -zxvf spark-1.6.0.tgz -C /opt/cluster/
    • To speed the build up, edit make-distribution.sh:
    Hard-code the versions (so the script doesn't have to resolve them via Maven)
    VERSION=1.6.0
    SCALA_VERSION=2.10
    SPARK_HADOOP_VERSION=2.6.0-cdh5.7.0
    SPARK_HIVE=1
    
    Comment out the version-detection code around line 130:
    #VERSION=$("$MVN" help:evaluate -Dexpression=project.version $@ 2>/dev/null | grep -v "INFO" | tail -n 1)
    #SCALA_VERSION=$("$MVN" help:evaluate -Dexpression=scala.binary.version $@ 2>/dev/null\
    #    | grep -v "INFO"\
    #    | tail -n 1)
    #SPARK_HADOOP_VERSION=$("$MVN" help:evaluate -Dexpression=hadoop.version $@ 2>/dev/null\
    #    | grep -v "INFO"\
    #    | tail -n 1)
    #SPARK_HIVE=$("$MVN" help:evaluate -Dexpression=project.activeProfiles -pl sql/hive $@ 2>/dev/null\
    #    | grep -v "INFO"\
    #    | fgrep --count "<id>hive</id>";\
    #    # Reset exit status to 0, otherwise the script stops here if the last grep finds nothing\
    #   # because we use "set -o pipefail"
    #    echo -n)
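
    If you'd rather script this edit than do it by hand, the same change can be made with sed. The demo below applies the idea to a miniature stand-in file so it is safe to run anywhere; the real target is make-distribution.sh in the Spark source root, and the exact patterns there may differ from this excerpt, so check your copy first:

```shell
# Miniature stand-in for the version-detection block in make-distribution.sh.
cat > /tmp/make-distribution-excerpt.sh <<'EOF'
VERSION=$("$MVN" help:evaluate -Dexpression=project.version)
SCALA_VERSION=$("$MVN" help:evaluate -Dexpression=scala.binary.version)
EOF

# Comment out the mvn-based detection lines (GNU sed; BRE alternation)...
sed -i 's/^\(VERSION\|SCALA_VERSION\)=\$(/#&/' /tmp/make-distribution-excerpt.sh

# ...and pin the values instead.
printf 'VERSION=1.6.0\nSCALA_VERSION=2.10\n' >> /tmp/make-distribution-excerpt.sh
cat /tmp/make-distribution-excerpt.sh
```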
    

    3. Starting the build

    • I recommend building with JDK 7; I won't cover JDK 8 here.
    step 1: cd into the Spark source directory
    cd /opt/cluster/spark-1.6.0
    step 2: set MAVEN_OPTS so Maven has enough memory
    export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
    step 3: kick off the build
    ./make-distribution.sh --tgz -Phadoop-2.6 -Dhadoop.version=2.6.0-cdh5.7.0 -Pyarn -Phive  -Phive-thriftserver
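
    One common failure mode at this step is Maven being unable to resolve the 2.6.0-cdh5.7.0 Hadoop artifacts, since CDH builds are not published to Maven Central. If you hit "artifact not found" errors, check that the Cloudera repository is declared in the `<repositories>` section of the Spark pom.xml (Spark's pom may already include it; the snippet below is what such an entry looks like):

```xml
<repository>
  <id>cloudera</id>
  <name>Cloudera Repository</name>
  <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
</repository>
```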
    

    4. The build process

    • The build runs module by module in the order below; watching the log output is a good way to follow what's happening.
    [INFO] ------------------------------------------------------------------------
    [INFO] Reactor Summary:
    [INFO] 
    [INFO] Spark Project Parent POM ........................... SUCCESS [  5.850 s]
    [INFO] Spark Project Test Tags ............................ SUCCESS [  4.403 s]
    [INFO] Spark Project Launcher ............................. SUCCESS [ 15.255 s]
    [INFO] Spark Project Networking ........................... SUCCESS [ 11.419 s]
    [INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [  7.578 s]
    [INFO] Spark Project Unsafe ............................... SUCCESS [ 20.146 s]
    [INFO] Spark Project Core ................................. SUCCESS [05:10 min]
    [INFO] Spark Project Bagel ................................ SUCCESS [ 17.821 s]
    [INFO] Spark Project GraphX ............................... SUCCESS [ 45.020 s]
    [INFO] Spark Project Streaming ............................ SUCCESS [01:12 min]
    [INFO] Spark Project Catalyst ............................. SUCCESS [01:39 min]
    [INFO] Spark Project SQL .................................. SUCCESS [02:23 min]
    [INFO] Spark Project ML Library ........................... SUCCESS [02:24 min]
    [INFO] Spark Project Tools ................................ SUCCESS [ 19.271 s]
    [INFO] Spark Project Hive ................................. SUCCESS [01:53 min]
    [INFO] Spark Project Docker Integration Tests ............. SUCCESS [  8.271 s]
    [INFO] Spark Project REPL ................................. SUCCESS [ 46.352 s]
    [INFO] Spark Project YARN Shuffle Service ................. SUCCESS [ 18.256 s]
    [INFO] Spark Project YARN ................................. SUCCESS [01:37 min]
    [INFO] Spark Project Hive Thrift Server ................... SUCCESS [02:47 min]
    [INFO] Spark Project Assembly ............................. SUCCESS [02:55 min]
    [INFO] Spark Project External Twitter ..................... SUCCESS [ 56.260 s]
    [INFO] Spark Project External Flume Sink .................. SUCCESS [02:39 min]
    [INFO] Spark Project External Flume ....................... SUCCESS [ 27.604 s]
    [INFO] Spark Project External Flume Assembly .............. SUCCESS [ 16.969 s]
    [INFO] Spark Project External MQTT ........................ SUCCESS [03:33 min]
    [INFO] Spark Project External MQTT Assembly ............... SUCCESS [ 20.258 s]
    [INFO] Spark Project External ZeroMQ ...................... SUCCESS [01:17 min]
    [INFO] Spark Project External Kafka ....................... SUCCESS [01:24 min]
    [INFO] Spark Project Examples ............................. SUCCESS [07:13 min]
    [INFO] Spark Project External Kafka Assembly .............. SUCCESS [ 11.575 s]
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESS
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 44:07 min
    [INFO] Final Memory: 101M/1243M
    [INFO] ------------------------------------------------------------------------
    

    5. Follow-up

    • On success, a tarball is produced in the source root: spark-1.6.0-bin-2.6.0-cdh5.7.0.tgz

    • Extract it wherever you like

     tar -zxvf spark-1.6.0-bin-2.6.0-cdh5.7.0.tgz -C /opt/cluster/
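
    To match how the other components were set up, you can also add Spark to /etc/profile (the path assumes the extract location above):

```shell
# Spark Home
export SPARK_HOME=/opt/cluster/spark-1.6.0-bin-2.6.0-cdh5.7.0
export PATH=$PATH:$SPARK_HOME/bin
```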
    
    • Run a word count to try it out
    step 1: from the extracted directory, start the shell
    bin/spark-shell
    step 2: put a file on HDFS with a few space-separated words in it, then run this Scala in the shell
    sc.textFile("hdfs://hadoop-master:8020/user/master/mapreduce/wordcount/input/hello.txt").flatMap(_.split(" ")).map((_,1)).reduceByKey(_ + _)
    step 3: inspect the result (the shell auto-names the value above res0)
    res0.collect
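
    To know roughly what to expect back from the collect, the same counting logic can be reproduced in plain shell against a local copy of the input (the file contents here are made up for illustration):

```shell
# A made-up sample input; the Spark job reads the copy you put on HDFS.
printf 'hello spark hello world\n' > /tmp/hello.txt

# Same word count in shell: split on spaces, count each distinct word.
tr ' ' '\n' < /tmp/hello.txt | sort | uniq -c | awk '{print "(" $2 "," $1 ")"}'
```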
    
