-
wget https://www-eu.apache.org/dist/spark/spark-2.3.2/spark-2.3.2-bin-hadoop2.7.tgz
Spark has three deployment modes:
1) Local mode
2) Standalone mode: Spark's own cluster manager with its own monitoring web UI; used for real cluster runs
3) YARN mode: resource management and monitoring all go through Hadoop's own web UI
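The mode is chosen by the --master URL passed to spark-shell or spark-submit. As a quick comparison (the host name master and port 7077 match the cluster set up below):
# Local mode: run everything in one JVM using 2 threads
./bin/spark-shell --master local[2]
# Standalone mode: connect to Spark's own master daemon (default port 7077)
./bin/spark-shell --master spark://master:7077
# YARN mode: let Hadoop YARN allocate the executors (HADOOP_CONF_DIR must point at the Hadoop configuration)
./bin/spark-shell --master yarn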
-
Installation
tar zxvf spark-2.3.2-bin-hadoop2.7.tgz                                   # master
scp -p spark-2.3.2-bin-hadoop2.7.tgz slave:/usr/local/src/
ssh slave "cd /usr/local/src && tar zxvf spark-2.3.2-bin-hadoop2.7.tgz"  # slave
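Optionally, SPARK_HOME can be exported on both nodes so the bin/ and sbin/ scripts work from any directory; this is a convenience only (the commands below use relative paths) and assumes a bash shell and the same /usr/local/src/ install location as on the slave:
# append to ~/.bashrc on master and slave, then re-source it
export SPARK_HOME=/usr/local/src/spark-2.3.2-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin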
Local mode
-
Start
./sbin/start-master.sh                       # master
./sbin/start-slave.sh spark://master:7077    # slave
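Before opening the web UI it is worth checking that both daemons are actually up:
jps    # should list "Master" on the master node and "Worker" on the slave
# startup errors are written to the logs/ directory under the Spark install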
-
Test
Web UI port: 8080
Enter the Spark shell:
./spark-shell --master spark://master:7077
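Once the scala> prompt appears, a one-liner is enough to confirm that the worker really executes tasks (a minimal sanity check typed into the shell):
sc.parallelize(1 to 1000).reduce(_ + _)    // should return res0: Int = 500500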
./spark-submit --class org.apache.spark.examples.SparkPi --master spark://master:7077 ../examples/jars/spark-examples_2.11-2.3.2.jar 100
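SparkPi estimates π by Monte Carlo sampling, and the trailing argument (100) is the number of partitions, i.e. parallel tasks. Roughly the same computation can be pasted into the spark-shell started above; this is only a sketch of the idea, not the bundled example's exact source:
val n = 100 * 100000                     // total number of random points
val hits = sc.parallelize(1 to n, 100).map { _ =>
  val x = math.random * 2 - 1            // random point in the 2x2 square around the origin
  val y = math.random * 2 - 1
  if (x * x + y * y <= 1) 1 else 0       // count points that fall inside the unit circle
}.reduce(_ + _)
println(s"Pi is roughly ${4.0 * hits / n}")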
Standalone mode
-
Configuration
cd spark-2.3.2-bin-hadoop2.7/conf
cp spark-env.sh.template spark-env.sh
vim spark-env.sh
# add to spark-env.sh:
export SCALA_HOME=/usr/local/src/scala-2.12.7
export JAVA_HOME=/usr/local/src/jdk1.8.0_181
SPARK_MASTER_IP=master
export HADOOP_HOME=/usr/local/src/scala-
export HADOOP_CONF_DIR=/usr/local/src/scala-
cp spark-defaults.conf.template spark-defaults.conf
vim spark-defaults.conf
# add to spark-defaults.conf:
spark.master spark://master:7077
# push the configuration to the slave
scp -p spark-env.sh slave1:/usr/local/src/spark-2.3.2-bin-hadoop2.7/conf/
scp -p spark-defaults.conf slave1:/usr/local/src/spark-2.3.2-bin-hadoop2.7/conf/
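spark-defaults.conf accepts any Spark property in the same key/value form; a few commonly tuned ones are shown below with their default values, purely as illustration (none of them are required for this setup):
spark.executor.memory   1g
spark.driver.memory     1g
spark.eventLog.enabled  false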
-
Start
./sbin/start-master.sh                       # master
./sbin/start-slave.sh spark://master:7077    # slave
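Alternatively, if the worker host names are listed in conf/slaves (one per line; that file is not touched in the steps above), the whole standalone cluster can be started and stopped from the master in one step:
./sbin/start-all.sh    # starts the Master locally and a Worker on every host in conf/slaves via ssh
./sbin/stop-all.sh     # shuts them all down again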
YARN mode
-
Submitting jobs
# spark local
./bin/run-example --master local[2] SparkPi 10
# spark standalone
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://master:7077 examples/jars/spark-examples_2.11-2.3.2.jar 100
# spark on yarn
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster examples/jars/spark-examples_2.11-2.3.2.jar 10
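In cluster deploy mode the driver runs inside YARN, so the "Pi is roughly ..." line does not appear in the submitting terminal. It can either be fetched from the aggregated logs or the job can be run in client deploy mode instead; the application id below is a placeholder for the one printed by spark-submit:
# client mode: the driver stays in the local terminal, executors run on YARN
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode client examples/jars/spark-examples_2.11-2.3.2.jar 10
# fetch the driver output of a cluster-mode run (requires YARN log aggregation to be enabled)
yarn logs -applicationId application_xxxxxxxxxx_xxxx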