Spark Installation and Deployment

Author: yonggang_sun | Published 2016-05-18 17:13


Spark Overview

1. Spark platform architecture

   (Figure: the Spark unified stack)

2. The Spark official website

Spark installation, configuration, and deployment

1. Download and configure the JDK, Scala, sbt, and Maven.

2. Download and configure Spark.

3. Edit ~/.bash_profile:

```
export JAVA_HOME=$HOME/jdk1.7.0_79
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export SCALA_HOME=$HOME/scala/scala-2.10.5
export SPARK_HOME=$HOME/spark-1.4.0-bin-hadoop2.6
export HADOOP_HOME=$HOME/hadoop-2.6.0
export HADOOP_CONF_DIR=$HOME/hadoop-2.6.0/etc/hadoop
export MAVEN_HOME=$HOME/apache-maven-3.2.5
export SBT_HOME=$HOME/sbt
export PATH=$PATH:$SCALA_HOME/bin:$SPARK_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$MAVEN_HOME/bin:$SBT_HOME/bin
```

Then run `source ~/.bash_profile` to apply the changes.
      
4. Prepare three machines running Ubuntu 14.04 LTS. Configure their IP addresses and a shared folder (you may run into a permission problem there; add the current user to the vboxsf group). Passwordless login from the Mac host to all three machines is set up as sketched below. The hosts are named gg01, ggg02, and ggg03.
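
A minimal sketch of the passwordless login setup, assuming the usual ssh-keygen/ssh-copy-id workflow is available on the Mac host and using the user name sunyonggang that appears later in the article:

```
# Generate a key pair on the Mac host if one does not exist yet.
ssh-keygen -t rsa -b 2048

# Copy the public key to each VM (ssh-copy-id appends it to ~/.ssh/authorized_keys).
for host in gg01 ggg02 ggg03; do
  ssh-copy-id sunyonggang@"$host"
done

# Should print "gg01" without asking for a password.
ssh sunyonggang@gg01 hostname
```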

5. Prepare the software: the JDK, sbt, Scala, Maven, and Spark.

6. Configure the paths:

```
export JAVA_HOME=$HOME/jdk1.7.0_79
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export SCALA_HOME=$HOME/scala-2.10.5
export SPARK_HOME=$HOME/spark-1.4.0-bin-hadoop2.6
export MAVEN_HOME=$HOME/apache-maven-3.2.5
export SBT_HOME=$HOME/sbt
export PATH=$PATH:$SCALA_HOME/bin:$SPARK_HOME/bin:$MAVEN_HOME/bin:$SBT_HOME/bin
```
Then run `source ~/.bashrc` and check that each tool is ready (`java -version` and the like; a few checks are sketched below).
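
A quick round of sanity checks, matching the versions listed above:

```
java -version       # should report 1.7.0_79
scala -version      # should report 2.10.5
mvn -version        # should report 3.2.5
sbt sbtVersion      # first run may take a while as sbt bootstraps
echo $SPARK_HOME    # should point at ~/spark-1.4.0-bin-hadoop2.6
```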

7. Modify the Spark environment: add conf/spark-env.sh with the following entries:
```
export SCALA_HOME=/home/sunyonggang/scala-2.10.5
export SPARK_MASTER_IP=gg01
export SPARK_WORKER_MEMORY=1G
export JAVA_HOME=/home/sunyonggang/jdk1.7.0_79
```
Then add the host gg01 to the newly created conf/slaves file, after which the cluster can be started directly (sketched below).
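
A minimal sketch of these two conf files and the startup command, assuming the standard standalone-mode layout under $SPARK_HOME:

```
cd ~/spark-1.4.0-bin-hadoop2.6
cp conf/spark-env.sh.template conf/spark-env.sh   # then append the exports shown above
cp conf/slaves.template conf/slaves
echo "gg01" >> conf/slaves                        # one worker host per line
sbin/start-all.sh                                 # starts the Master plus the Workers listed in conf/slaves
```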

8. Copy the gg01 configuration to ggg02 and ggg03, then start the cluster:
```
sunyonggang@gg01:~/spark-1.4.0-bin-hadoop2.6$ jps
5498 Jps
5374 Worker
5174 Master
sunyonggang@ggg02:~/spark-1.4.0-bin-hadoop2.6/conf$ jps
4055 Jps
3979 Worker
```
    
Only one Worker shows up. That is because all three machines in my cluster had the same IP address on eth0, so I switched the VMs to bridged networking and assigned each one its own IP.
    
9. Use a fixed IP on Ubuntu. After switching the network connection to bridged mode, gg01's IP address became 192.168.199.255 (which clashes with the broadcast address) and had to be changed. The other two machines worked as-is, but for consistency I recommend fixing their IP addresses too; see any guide on configuring a static IP on Ubuntu (I had saved one to Evernote; a sketch also follows below).
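
For reference, a minimal static-IP sketch for Ubuntu 14.04 via /etc/network/interfaces. The address, gateway, and DNS values are assumptions based on the 192.168.199.x network that appears later; adapt them to your own LAN:

```
# Replace the interfaces file with a static configuration for eth0 (back it up first).
sudo tee /etc/network/interfaces > /dev/null <<'EOF'
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.199.150
    netmask 255.255.255.0
    gateway 192.168.199.1
    dns-nameservers 8.8.8.8
EOF

# Bring the interface down and up again to apply the new settings.
sudo ifdown eth0 && sudo ifup eth0
```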

10. During startup there was still only one Worker; check the error log:

```
16/04/06 19:14:19 INFO Worker: Retrying connection to master (attempt # 1)
16/04/06 19:14:19 INFO Worker: Connecting to master akka.tcp://sparkMaster@gg01:7077/user/Master...
16/04/06 19:14:19 WARN Remoting: Tried to associate with unreachable remote address [akka.tcp://sparkMaster@gg01:7077]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: Connection refused: gg01/192.168.199.150:7077
```
First rule out any SSH problem, then fix the local machine's entry in /etc/hosts. For gg01, change

> 127.0.0.1 gg01

to

> 192.168.199.150 gg01

Restart the cluster.
    
(Web UI screenshot: the master on gg01 shows status ALIVE)

11. Configure ZooKeeper and start it on every node (a configuration sketch follows the jps output):

```
sunyonggang@gg01:~/zookeeper-3.4.6$ jps
7808 QuorumPeerMain
7833 Jps
```
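
The ZooKeeper configuration itself is not shown in the article; a minimal sketch of a three-node ensemble, assuming default ports and a dataDir under the install directory:

```
# conf/zoo.cfg, identical on gg01, ggg02, and ggg03.
mkdir -p ~/zookeeper-3.4.6/data
cat > ~/zookeeper-3.4.6/conf/zoo.cfg <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/sunyonggang/zookeeper-3.4.6/data
clientPort=2181
server.1=gg01:2888:3888
server.2=ggg02:2888:3888
server.3=ggg03:2888:3888
EOF

# Each node needs a unique id in dataDir/myid: 1 on gg01, 2 on ggg02, 3 on ggg03.
echo 1 > ~/zookeeper-3.4.6/data/myid

# Start the ZooKeeper server on every node, then verify with jps as above.
~/zookeeper-3.4.6/bin/zkServer.sh start
```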
    

12. Modify the Spark configuration, replacing the fixed master address with ZooKeeper-based election:

```
export SCALA_HOME=/home/sunyonggang/scala-2.10.5
# export SPARK_MASTER_IP=gg01
export SPARK_WORKER_MEMORY=1G
export JAVA_HOME=/home/sunyonggang/jdk1.7.0_79
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=gg01:2181,ggg02:2181,ggg03:2181 -Dspark.deploy.zookeeper.dir=/spark"
```

After configuring ZooKeeper, be sure to start it; if you start Spark while ZooKeeper is not running, it fails with errors. A quick check is sketched below.
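
One way to confirm ZooKeeper is up before starting Spark (zkServer.sh is part of the ZooKeeper distribution):

```
# Run on every node; one node should report "leader" and the others "follower".
~/zookeeper-3.4.6/bin/zkServer.sh status
```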

13. Start the cluster from gg01 and start a standby Master on ggg03 (commands sketched below).
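
The exact commands are not shown in the article; presumably the standard standalone scripts are used:

```
# On gg01: start the Master plus the Workers listed in conf/slaves.
~/spark-1.4.0-bin-hadoop2.6/sbin/start-all.sh

# On ggg03: start a second Master, which registers with ZooKeeper as STANDBY.
~/spark-1.4.0-bin-hadoop2.6/sbin/start-master.sh
```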

(Web UI screenshots: the gg01 Master is ALIVE, the ggg03 Master is STANDBY)

Then kill the Master process on gg01.

(Web UI screenshot: the standby Master on ggg03 takes over and becomes ALIVE)

At this point the Spark HA setup is complete.

14. Submit a job with spark-submit. Here I use the Pi computation from the examples:

```
sunyonggang@ggg03:~/spark-1.4.0-bin-hadoop2.6/examples$ cat simple.sbt
name := "Example"

version := "1.0"

scalaVersion := "2.10.5"

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.4.0"

sunyonggang@ggg03:~/spark-1.4.0-bin-hadoop2.6/examples$ sbt package
[info] Set current project to Example (in build file:/home/sunyonggang/spark-1.4.0-bin-hadoop2.6/examples/)
[info] Updating {file:/home/sunyonggang/spark-1.4.0-bin-hadoop2.6/examples/}examples...
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[info] Compiling 1 Scala source to /home/sunyonggang/spark-1.4.0-bin-hadoop2.6/examples/target/scala-2.10/classes...
[info] Packaging /home/sunyonggang/spark-1.4.0-bin-hadoop2.6/examples/target/scala-2.10/example_2.10-1.0.jar ...
[info] Done packaging.
[success] Total time: 53 s, completed Apr 7, 2016 12:19:06 PM
```

15. Run the jar:

```
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://gg01:7077 /home/sunyonggang/spark-1.4.0-bin-hadoop2.6/examples/target/scala-2.10/example_2.10-1.0.jar
```

The output:

```
16/04/07 12:27:25 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.199.150:48388 with 267.3 MB RAM, BlockManagerId(2, 192.168.199.150, 48388)
16/04/07 12:27:26 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.199.146:35760 (size: 1202.0 B, free: 267.3 MB)
16/04/07 12:27:26 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.199.150:48388 (size: 1202.0 B, free: 267.3 MB)
16/04/07 12:27:26 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1176 ms on 192.168.199.146 (1/2)
16/04/07 12:27:26 INFO DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:35) finished in 1.846 s
16/04/07 12:27:26 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 1094 ms on 192.168.199.150 (2/2)
16/04/07 12:27:26 INFO DAGScheduler: Job 0 finished: reduce at SparkPi.scala:35, took 2.715414 s
16/04/07 12:27:26 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
Pi is roughly 3.14122
16/04/07 12:27:26 INFO SparkUI: Stopped Spark web UI at http://192.168.199.145:4040
```
    
