The model save path given in the official example is an HDFS path, but when I ran the example myself, the script blocked on the save and eventually failed with a timeout error.
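In my case the hang came from the TensorFlow workers failing to reach HDFS (they load libhdfs.so at runtime, which is exactly what Option 2 below fixes). If you hit the same timeout, the executor logs usually show where it stalls; a minimal check, assuming YARN and substituting the application id that spark-submit prints (the id below is a placeholder):
# placeholder application id; use the one printed by spark-submit
yarn logs -applicationId application_1234567890123_0001 | grep -iE 'hdfs|libhdfs|timeout'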
There are two ways to fix this.
Option 1: set the model save path to a local path
/usr/local/app/spark-2.3.1/bin/spark-submit \
--master yarn \
--deploy-mode cluster \
--num-executors 4 \
--executor-memory 1G \
--py-files mnist_dist.py \
mnist_spark.py \
--images /mnist/csv/train/images \
--labels /mnist/csv/train/labels \
--format csv \
--mode train \
--model file:///home/model
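Note that with --deploy-mode cluster, file:///home/model refers to the local filesystem of the cluster node that ends up running the chief worker, not the machine you submit from. A quick sanity check, run on that node after training (assuming the path above):
# expects checkpoint files if training succeeded
ls -l /home/model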
Some other parameters can also be configured; they are commented out here, uncomment as needed (notes on each follow the block):
#export QUEUE=default
#--py-files tfspark.zip,mnist_dist.py \
#--conf spark.yarn.maxAppAttempts=1 \
#--conf spark.dynamicAllocation.enabled=false \
#--queue ${QUEUE} \
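For reference: spark.dynamicAllocation.enabled=false keeps the executor count fixed, which TensorFlowOnSpark needs because each executor hosts one TF node; spark.yarn.maxAppAttempts=1 stops YARN from retrying a failed training run; --queue submits to a specific YARN queue. If you use the tfspark.zip variant of --py-files and do not already have the archive, a sketch of building it, assuming a standard TensorFlowOnSpark checkout with its Python package directory named tensorflowonspark:
# hypothetical build of tfspark.zip from a TensorFlowOnSpark checkout
cd TensorFlowOnSpark
zip -r tfspark.zip tensorflowonspark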
Option 2: configure the environment in the submit script (recommended)
To do this, add the directories containing libjvm.so and libhdfs.so to LD_LIBRARY_PATH, and point the executor CLASSPATH at Hadoop's full jar dependency list via hadoop classpath --glob (the spark.executorEnv.CLASSPATH line below is the key setting).
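Before exporting, it is worth confirming where the two libraries actually live, since install layouts vary; a quick check assuming the Hadoop and JDK locations used throughout this article:
find /usr/local/app/hadoop-3.1.0/lib/native -name 'libhdfs.so*'
find /usr/local/jdk1.8.0_161 -name 'libjvm.so'
With the locations confirmed, the full submit script: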
export LD_LIBRARY_PATH=/usr/local/app/hadoop-3.1.0/lib/native/examples${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} # key setting: directory containing libhdfs.so
export LD_LIBRARY_PATH=/usr/local/jdk1.8.0_161/jre/lib/amd64/server${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} # key setting: directory containing libjvm.so
export QUEUE=default
/usr/local/app/spark-2.3.1/bin/spark-submit \
--master yarn \
--deploy-mode cluster \
--num-executors 2 \
--executor-memory 2G \
--py-files mnist_dist.py \
--conf spark.dynamicAllocation.enabled=false \
--conf spark.yarn.maxAppAttempts=1 \
--conf spark.executorEnv.LD_LIBRARY_PATH=$LD_LIBRARY_PATH \
--conf spark.executorEnv.CLASSPATH="$(/usr/local/app/hadoop-3.1.0/bin/hadoop classpath --glob)" \
mnist_spark.py \
--images /mnist/csv/train/images \
--labels /mnist/csv/train/labels \
--format csv \
--mode train \
--model /mnist/model
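Once the job finishes, the checkpoint files should be visible under the HDFS path passed to --model:
# sanity check that the model landed on HDFS
/usr/local/app/hadoop-3.1.0/bin/hdfs dfs -ls /mnist/model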