Spark Cluster Deployment


Author: 鹅鹅鹅_ | Published 2019-01-02 09:58

I. Introduction


Spark is a MapReduce-like computation framework developed by UC Berkeley's AMPLab. The MapReduce framework is well suited to batch jobs, but two limitations of its design lead to high latency and heavy startup overhead: first, pull-based heartbeat job scheduling; second, all shuffle intermediate results are written to disk.
Spark, by contrast, was built for iterative and interactive computation. First, it adopted Akka, an actor-model library, as its communication framework (note that later releases dropped Akka; Spark 2.x uses a Netty-based RPC layer internally). Second, it uses RDDs as a distributed in-memory abstraction: data flowing between operations does not need to be dumped to disk, but is held as RDD partitions spread across the memory of the cluster's nodes, which greatly speeds up data movement. RDDs also maintain lineage information, so if an RDD is lost it can be rebuilt automatically from its parent RDDs, ensuring fault tolerance.
Moreover, a rich set of applications has been built on top of Spark, such as Shark, Spark Streaming, and MLbase. In our production environment we already use Shark as a complement to Hive: it shares Hive's metastore and SerDes, and its usage is almost identical to Hive's. When the input data size is not too large, the same statement can indeed run much faster than in Hive.

II. Installation and Deployment


  1. Download, install, and configure Scala

    [root@master ~]# wget https://downloads.lightbend.com/scala/2.12.2/scala-2.12.2.tgz
    [root@master spark]# tar xvf scala-2.12.2.tgz -C /usr/local/program/scala/
    # Add the SCALA_HOME environment variable to /etc/profile and make it take effect:
    vim /etc/profile
    export SCALA_HOME=/usr/local/program/scala/scala-2.12.2
    export PATH=$PATH:$SCALA_HOME/bin
    [root@master spark]# . /etc/profile
    
  2. Download, install, and configure Spark

    # My existing Hadoop is a 2.7.x release, so I use the package pre-built for Hadoop 2.7
    [root@master spark]# wget https://d3kbcqa49mib13.cloudfront.net/spark-2.1.1-bin-hadoop2.7.tgz
    [root@master spark]# tar xvf spark-2.1.1-bin-hadoop2.7.tgz -C /usr/local/program/spark/
    # Add the SPARK_HOME environment variable to /etc/profile and make it take effect:
    export SPARK_HOME=/usr/local/program/spark/spark-2.1.1-bin-hadoop2.7
    export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
    [root@master spark]# . /etc/profile
    # Configure Spark on master by editing the spark-env.sh configuration file
    # Enter Spark's conf directory
    [root@master spark]# cd /usr/local/program/spark/spark-2.1.1-bin-hadoop2.7/conf/
    [root@master conf]# cp spark-env.sh.template spark-env.sh
    [root@master conf]# cat spark-env.sh
    export SCALA_HOME=/usr/local/program/scala/scala-2.12.2
    export HADOOP_HOME=/home/hadoop/hadoop-2.7.3
    export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64
    export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
    export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
    export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
    #export SPARK_JAR=/usr/local/program/spark/
    export SPARK_MASTER_IP=master
    # Edit the conf/slaves file, adding each compute node's hostname, one per line
    # master could also be listed here, to use master as a compute node as well
    [root@master conf]# cat slaves
    slave01
    slave02
    slave03
    slave04
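The spark-env.sh and slaves edits above can also be scripted. The sketch below writes both files into /tmp/spark-conf-demo, a stand-in for the real $SPARK_HOME/conf directory; the paths and hostnames are the ones used in this cluster:

```shell
# Stand-in for $SPARK_HOME/conf so this can be dry-run anywhere
CONF=/tmp/spark-conf-demo
mkdir -p "$CONF"

# Write the environment file shown above (quoted heredoc: no expansion here)
cat > "$CONF/spark-env.sh" <<'EOF'
export SCALA_HOME=/usr/local/program/scala/scala-2.12.2
export HADOOP_HOME=/home/hadoop/hadoop-2.7.3
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_MASTER_IP=master
EOF

# One compute-node hostname per line
printf '%s\n' slave01 slave02 slave03 slave04 > "$CONF/slaves"
```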
    
  3. Configure passwordless SSH login
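    A sketch of this step (the commands are printed rather than executed against real hosts): generate a passphrase-less key pair on master once, then push the public key to every worker so that start-slaves.sh can log in without a password.

```shell
# Print the passwordless-SSH setup commands for the four workers above
{
  echo 'ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa'
  for h in slave01 slave02 slave03 slave04; do
    echo "ssh-copy-id root@$h"
  done
} > /tmp/ssh-setup-demo.txt
cat /tmp/ssh-setup-demo.txt
```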

  4. Copy everything to the cluster nodes

    [root@master conf]# scp /etc/profile slave01:/etc/
    [root@master conf]# scp -r /usr/local/program/spark/spark-2.1.1-bin-hadoop2.7 slave02:/usr/local/program/spark/
    [root@master conf]# scp -r /usr/local/program/scala/scala-2.12.2/ slave02:/usr/local/program/scala/
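Note that the scp lines above each reach only one worker; in practice every node needs all three pieces (the profile, the Spark tree, and the Scala tree), and the target directories must already exist on each worker. This loop prints the full set of distribution commands for review:

```shell
# Print one profile + Spark + Scala copy command set per worker
for h in slave01 slave02 slave03 slave04; do
  echo "scp /etc/profile $h:/etc/"
  echo "scp -r /usr/local/program/spark/spark-2.1.1-bin-hadoop2.7 $h:/usr/local/program/spark/"
  echo "scp -r /usr/local/program/scala/scala-2.12.2 $h:/usr/local/program/scala/"
done > /tmp/scp-demo.txt
cat /tmp/scp-demo.txt
```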
    
  5. Start the master and the slaves

    [root@master conf]# start-master.sh 
    [root@master conf]# start-slaves.sh 
    
  6. Access the Spark web UI

    http://master:8080
    

III. Running a Simple Example

  1. Running on a single machine

    # Estimate pi
    [root@master spark-2.1.1-bin-hadoop2.7]# ./bin/run-example SparkPi 10
    Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
    17/06/05 19:19:00 INFO SparkContext: Running Spark version 2.1.1
    17/06/05 19:19:00 WARN SparkContext: Support for Java 7 is deprecated as of Spark 2.0.0
    17/06/05 19:19:00 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    17/06/05 19:19:00 INFO SecurityManager: Changing view acls to: root
    17/06/05 19:19:00 INFO SecurityManager: Changing modify acls to: root
    17/06/05 19:19:00 INFO SecurityManager: Changing view acls groups to: 
    17/06/05 19:19:00 INFO SecurityManager: Changing modify acls groups to: 
    17/06/05 19:19:00 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
    ...
    17/06/05 19:19:02 INFO DAGScheduler: Job 0 finished: reduce at SparkPi.scala:38, took 0.761265 s
    Pi is roughly 3.143967143967144
    
    
  2. Basic usage of spark-shell

    [root@master spark-2.1.1-bin-hadoop2.7]# spark-shell
    scala> val s=sc.textFile("hdfs://master:9000/user/hadoop/test/Temperature.txt")
    s: org.apache.spark.rdd.RDD[String] = hdfs://master:9000/user/hadoop/test/Temperature.txt MapPartitionsRDD[3] at textFile at <console>:24
    
    scala> s.count
    res1: Long = 11
    [hadoop@slave02 ~]$ hdfs dfs -cat test/Temperature.txt
    2015,1,24
    2015,3,56
    2015,1,3
    2015,2,-43
    2015,4,5
    2015,3,46
    2014,2,64
    2015,1,4
    2015,1,21
    2015,2,35
    2015,2,0
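As a quick sanity check outside Spark, the `s.count` result above (11) should match a plain line count on the same data. Here the file's contents are recreated locally to illustrate:

```shell
# Recreate Temperature.txt locally and count its lines
cat > /tmp/Temperature.txt <<'EOF'
2015,1,24
2015,3,56
2015,1,3
2015,2,-43
2015,4,5
2015,3,46
2014,2,64
2015,1,4
2015,1,21
2015,2,35
2015,2,0
EOF
wc -l < /tmp/Temperature.txt    # prints 11, matching s.count
```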
    
    
  3. Submitting a job to the cluster

    [hadoop@master ~]$ spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --driver-memory 4g --executor-memory 2g --executor-cores 1  /usr/local/program/spark/spark-2.1.1-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.1.1.jar 100
    17/06/06 16:03:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    17/06/06 16:03:22 INFO client.RMProxy: Connecting to ResourceManager at master/10.10.18.229:8032
    17/06/06 16:03:22 INFO yarn.Client: Requesting a new application from cluster with 4 NodeManagers
    17/06/06 16:03:22 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
    ...
    17/06/06 16:04:33 INFO yarn.Client: Application report for application_1494595290830_0061 (state: RUNNING)
    17/06/06 16:04:34 INFO yarn.Client: Application report for application_1494595290830_0061 (state: RUNNING)
    17/06/06 16:04:35 INFO yarn.Client: Application report for application_1494595290830_0061 (state: RUNNING)
    17/06/06 16:04:36 INFO yarn.Client: Application report for application_1494595290830_0061 (state: FINISHED)
    17/06/06 16:04:36 INFO yarn.Client: 
         client token: N/A
         diagnostics: N/A
         ApplicationMaster host: 10.10.19.232
         ApplicationMaster RPC port: 0
         queue: default
         start time: 1496736259866
         final status: SUCCEEDED
         tracking URL: http://master:8088/proxy/application_1494595290830_0061/
         user: hadoop
    17/06/06 16:04:36 INFO util.ShutdownHookManager: Shutdown hook called
    17/06/06 16:04:36 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-6039cb14-8084-404e-b970-633dff4dd086
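A hypothetical helper around the spark-submit invocation above can make the memory and core settings explicit per-workload knobs; here it only prints the command it would run, so it can be reviewed before submitting to the cluster:

```shell
# Wrapper that prints the spark-submit command for the SparkPi example;
# arguments: driver memory, executor memory, executor cores, partitions
JAR=/usr/local/program/spark/spark-2.1.1-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.1.1.jar
submit_pi() {
  echo spark-submit --class org.apache.spark.examples.SparkPi \
    --master yarn --deploy-mode cluster \
    --driver-memory "$1" --executor-memory "$2" --executor-cores "$3" \
    "$JAR" "$4"
}
submit_pi 4g 2g 1 100 > /tmp/spark-submit-demo.txt
cat /tmp/spark-submit-demo.txt
```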
    
    


Original link: https://www.haomeiwen.com/subject/wxialqtx.html