Pick the latest Spark release, which at the time of writing is 2.3.0. Build the Spark source with a locally installed Maven; the -T flag sets the number of parallel build threads (5 in the command below):
mvn -T 5 -DskipTests clean package
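If the compilation fails with OutOfMemoryError or code-cache warnings, the official "Building Spark" guide for the 2.x line recommends giving Maven more memory before running the command above, for example:

export MAVEN_OPTS="-Xmx2g -XX:ReservedCodeCacheSize=512m"

(If you build with the bundled build/mvn script instead of your own Maven, it applies an equivalent setting automatically when MAVEN_OPTS is unset.)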
After some waiting, the Spark source build finishes, as shown in the Maven build output below:
[screenshot: Maven build output]
Next, verify the result by launching the Spark shell from the repository root:
./bin/spark-shell
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
*** *** WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Spark context Web UI available at http://***
Spark context available as 'sc' (master = local[*], app id = local-1545728268512).
Spark session available as 'spark'.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.3.0-SNAPSHOT
/_/
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_191)
Type in expressions to have them evaluated.
Type :help for more information.
scala> sys.env
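As a quick sanity check of the freshly built shell, the lines below (a minimal sketch) can be pasted at the scala> prompt; sc and spark are the objects created automatically at startup, as the log above shows:

sc.setLogLevel("ERROR")                  // reduce log noise, as suggested in the startup log
sc.parallelize(1 to 100).sum()           // RDD smoke test: distributed sum of 1..100, expected 5050.0
spark.range(5).count()                   // DataFrame/SparkSession smoke test, expected 5

If both expressions return the expected values, the locally compiled Spark works end to end.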