Anyone who has used Spark knows that the official Spark Thrift Server (STS) does not support cluster deploy mode. Driven by a work requirement, I investigated and tested running STS in cluster mode; the results are as follows:
- Modify the source to support cluster submission. Two files are involved:

  core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala

  sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/HiveThriftServer2.scala
![](https://img.haomeiwen.com/i6432486/94958b6ca7f3b042.png)
![](https://img.haomeiwen.com/i6432486/4efee7e14c395999.png)
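For context on what the SparkSubmit.scala change targets: in Spark 2.4.x, `SparkSubmit` explicitly rejects cluster deploy mode for the Thrift server. The guard looks roughly like the following (paraphrased from the Spark 2.4 source; treat exact wording and placement as approximate), and the patch essentially removes or relaxes it so the submission is allowed through:

```scala
// Inside SparkSubmit's argument validation (Spark 2.4.x, paraphrased).
// Removing this case lets --deploy-mode cluster pass validation for STS.
case (CLUSTER, _) if isThriftServer(args.mainClass) =>
  error("Cluster deploy mode is not applicable to Spark Thrift server.")

// isThriftServer simply matches on the main class name:
private[deploy] def isThriftServer(mainClass: String): Boolean = {
  mainClass == "org.apache.spark.sql.hive.thriftserver.HiveThriftServer2"
}
```

Note that removing the guard only unblocks submission; the HiveThriftServer2.scala change shown in the screenshots is still needed for the server to start correctly inside the YARN application master.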
- Replace the corresponding jar files in the Spark distribution with the rebuilt ones:

  spark-hive-thriftserver_2.11-2.4.5.jar

  spark-core_2.11-2.4.5.jar
- Start the Thrift server (note: in cluster mode the rebuilt thrift-server jar is passed as the application jar, and `--hiveconf` options follow it as application arguments):

```shell
bin/spark-submit --master yarn --deploy-mode cluster \
  --driver-memory 3g --executor-memory 2g \
  --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 \
  jars/spark-hive-thriftserver_2.11-2.4.5.jar \
  --hiveconf hive.server2.thrift.port=10000
```
- Find the address of the driver, then connect with beeline to test.
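Since the driver now runs inside a YARN container rather than on the submitting host, its address has to be looked up from YARN before connecting. A minimal sketch of that last step (the application id and driver host are placeholders for your own cluster's values):

```shell
# List running YARN applications to find the STS application's id
# and the host running its application master (which holds the driver).
yarn application -list

# Connect with beeline to the driver host on the configured thrift port.
bin/beeline -u "jdbc:hive2://<driver-host>:10000"
```

The YARN ResourceManager web UI is an equally workable way to find the driver host; any method that reveals where the application master landed will do.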