(15) Spark job submission to YARN fails: diagnostics

Author: 白面葫芦娃92 | Published 2018-09-19 09:57

Submitting the SparkPi example job to YARN fails with:
diagnostics: Application application_1537283877986_0003 failed 2 times due to AM Container for appattempt_1537283877986_0003_000002 exited with exitCode: 1

[hadoop@hadoop001 bin]$ ./spark-submit --class org.apache.spark.examples.SparkPi --master yarn /home/hadoop/app/spark-2.3.1-bin-2.6.0-cdh5.7.0/examples/jars/spark-examples_2.11-2.3.1.jar 3
18/09/18 23:58:14 INFO spark.SparkContext: Running Spark version 2.3.1
18/09/18 23:58:14 INFO spark.SparkContext: Submitted application: Spark Pi
18/09/18 23:58:14 INFO spark.SecurityManager: Changing view acls to: hadoop
18/09/18 23:58:14 INFO spark.SecurityManager: Changing modify acls to: hadoop
18/09/18 23:58:14 INFO spark.SecurityManager: Changing view acls groups to: 
18/09/18 23:58:14 INFO spark.SecurityManager: Changing modify acls groups to: 
18/09/18 23:58:14 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hadoop); groups with view permissions: Set(); users  with modify permissions: Set(hadoop); groups with modify permissions: Set()
18/09/18 23:58:16 INFO util.Utils: Successfully started service 'sparkDriver' on port 41639.
18/09/18 23:58:16 INFO spark.SparkEnv: Registering MapOutputTracker
18/09/18 23:58:16 INFO spark.SparkEnv: Registering BlockManagerMaster
18/09/18 23:58:16 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
18/09/18 23:58:16 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
18/09/18 23:58:16 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-6db4c90a-f8d0-4e5e-a67d-2bf70d16aae8
18/09/18 23:58:16 INFO memory.MemoryStore: MemoryStore started with capacity 366.3 MB
18/09/18 23:58:17 INFO spark.SparkEnv: Registering OutputCommitCoordinator
18/09/18 23:58:17 INFO util.log: Logging initialized @10487ms
18/09/18 23:58:18 INFO server.Server: jetty-9.3.z-SNAPSHOT
18/09/18 23:58:18 INFO server.Server: Started @11246ms
18/09/18 23:58:18 INFO server.AbstractConnector: Started ServerConnector@4cc61eb1{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
18/09/18 23:58:18 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
18/09/18 23:58:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3704122f{/jobs,null,AVAILABLE,@Spark}
18/09/18 23:58:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@74d7184a{/jobs/json,null,AVAILABLE,@Spark}
18/09/18 23:58:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@51b01960{/jobs/job,null,AVAILABLE,@Spark}
18/09/18 23:58:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@27dc79f7{/jobs/job/json,null,AVAILABLE,@Spark}
18/09/18 23:58:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6b85300e{/stages,null,AVAILABLE,@Spark}
18/09/18 23:58:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3aaf4f07{/stages/json,null,AVAILABLE,@Spark}
18/09/18 23:58:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5cbf9e9f{/stages/stage,null,AVAILABLE,@Spark}
18/09/18 23:58:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1a38ba58{/stages/stage/json,null,AVAILABLE,@Spark}
18/09/18 23:58:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3ad394e6{/stages/pool,null,AVAILABLE,@Spark}
18/09/18 23:58:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6058e535{/stages/pool/json,null,AVAILABLE,@Spark}
18/09/18 23:58:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@42deb43a{/storage,null,AVAILABLE,@Spark}
18/09/18 23:58:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1deb2c43{/storage/json,null,AVAILABLE,@Spark}
18/09/18 23:58:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3bb9efbc{/storage/rdd,null,AVAILABLE,@Spark}
18/09/18 23:58:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1cefc4b3{/storage/rdd/json,null,AVAILABLE,@Spark}
18/09/18 23:58:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2b27cc70{/environment,null,AVAILABLE,@Spark}
18/09/18 23:58:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6f6a7463{/environment/json,null,AVAILABLE,@Spark}
18/09/18 23:58:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1bdaa23d{/executors,null,AVAILABLE,@Spark}
18/09/18 23:58:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@79f227a9{/executors/json,null,AVAILABLE,@Spark}
18/09/18 23:58:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6ca320ab{/executors/threadDump,null,AVAILABLE,@Spark}
18/09/18 23:58:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@50d68830{/executors/threadDump/json,null,AVAILABLE,@Spark}
18/09/18 23:58:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1e53135d{/static,null,AVAILABLE,@Spark}
18/09/18 23:58:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@27a0a5a2{/,null,AVAILABLE,@Spark}
18/09/18 23:58:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7692cd34{/api,null,AVAILABLE,@Spark}
18/09/18 23:58:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@70f43b45{/jobs/job/kill,null,AVAILABLE,@Spark}
18/09/18 23:58:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@26d10f2e{/stages/stage/kill,null,AVAILABLE,@Spark}
18/09/18 23:58:19 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://hadoop001:4040
18/09/18 23:58:19 INFO spark.SparkContext: Added JAR file:/home/hadoop/app/spark-2.3.1-bin-2.6.0-cdh5.7.0/examples/jars/spark-examples_2.11-2.3.1.jar at spark://hadoop001:41639/jars/spark-examples_2.11-2.3.1.jar with timestamp 1537286299079
18/09/18 23:58:22 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18/09/18 23:58:23 INFO yarn.Client: Requesting a new application from cluster with 1 NodeManagers
18/09/18 23:58:23 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
18/09/18 23:58:23 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
18/09/18 23:58:23 INFO yarn.Client: Setting up container launch context for our AM
18/09/18 23:58:23 INFO yarn.Client: Setting up the launch environment for our AM container
18/09/18 23:58:23 INFO yarn.Client: Preparing resources for our AM container
18/09/18 23:58:28 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
18/09/18 23:58:31 INFO yarn.Client: Uploading resource file:/tmp/spark-982e8e1d-bcee-42c3-93a0-22ae4fc86e8b/__spark_libs__5486999757908911575.zip -> hdfs://192.168.137.141:9000/user/hadoop/.sparkStaging/application_1537283877986_0003/__spark_libs__5486999757908911575.zip
18/09/18 23:58:39 INFO yarn.Client: Uploading resource file:/tmp/spark-982e8e1d-bcee-42c3-93a0-22ae4fc86e8b/__spark_conf__2871846930574195005.zip -> hdfs://192.168.137.141:9000/user/hadoop/.sparkStaging/application_1537283877986_0003/__spark_conf__.zip
18/09/18 23:58:39 INFO spark.SecurityManager: Changing view acls to: hadoop
18/09/18 23:58:39 INFO spark.SecurityManager: Changing modify acls to: hadoop
18/09/18 23:58:39 INFO spark.SecurityManager: Changing view acls groups to: 
18/09/18 23:58:39 INFO spark.SecurityManager: Changing modify acls groups to: 
18/09/18 23:58:39 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hadoop); groups with view permissions: Set(); users  with modify permissions: Set(hadoop); groups with modify permissions: Set()
18/09/18 23:58:39 INFO yarn.Client: Submitting application application_1537283877986_0003 to ResourceManager
18/09/18 23:58:39 INFO impl.YarnClientImpl: Submitted application application_1537283877986_0003
18/09/18 23:58:39 INFO cluster.SchedulerExtensionServices: Starting Yarn extension services with app application_1537283877986_0003 and attemptId None
18/09/18 23:58:40 INFO yarn.Client: Application report for application_1537283877986_0003 (state: ACCEPTED)
18/09/18 23:58:40 INFO yarn.Client: 
         client token: N/A
         diagnostics: N/A
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: root.hadoop
         start time: 1537286319648
         final status: UNDEFINED
         tracking URL: http://hadoop001:8088/proxy/application_1537283877986_0003/
         user: hadoop
18/09/18 23:58:41 INFO yarn.Client: Application report for application_1537283877986_0003 (state: ACCEPTED)
18/09/18 23:58:42 INFO yarn.Client: Application report for application_1537283877986_0003 (state: ACCEPTED)
18/09/18 23:58:43 INFO yarn.Client: Application report for application_1537283877986_0003 (state: ACCEPTED)
18/09/18 23:58:44 INFO yarn.Client: Application report for application_1537283877986_0003 (state: ACCEPTED)
18/09/18 23:58:45 INFO yarn.Client: Application report for application_1537283877986_0003 (state: ACCEPTED)
18/09/18 23:58:46 INFO yarn.Client: Application report for application_1537283877986_0003 (state: ACCEPTED)
18/09/18 23:58:47 INFO yarn.Client: Application report for application_1537283877986_0003 (state: ACCEPTED)
18/09/18 23:58:49 INFO yarn.Client: Application report for application_1537283877986_0003 (state: ACCEPTED)
18/09/18 23:58:50 INFO yarn.Client: Application report for application_1537283877986_0003 (state: ACCEPTED)
18/09/18 23:58:51 INFO yarn.Client: Application report for application_1537283877986_0003 (state: ACCEPTED)
18/09/18 23:58:52 INFO yarn.Client: Application report for application_1537283877986_0003 (state: ACCEPTED)
18/09/18 23:58:53 INFO yarn.Client: Application report for application_1537283877986_0003 (state: ACCEPTED)
18/09/18 23:58:54 INFO yarn.Client: Application report for application_1537283877986_0003 (state: ACCEPTED)
18/09/18 23:58:55 INFO yarn.Client: Application report for application_1537283877986_0003 (state: ACCEPTED)
18/09/18 23:58:56 INFO yarn.Client: Application report for application_1537283877986_0003 (state: ACCEPTED)
18/09/18 23:58:57 INFO yarn.Client: Application report for application_1537283877986_0003 (state: ACCEPTED)
18/09/18 23:58:58 INFO yarn.Client: Application report for application_1537283877986_0003 (state: ACCEPTED)
18/09/18 23:58:59 INFO yarn.Client: Application report for application_1537283877986_0003 (state: ACCEPTED)
18/09/18 23:59:00 INFO yarn.Client: Application report for application_1537283877986_0003 (state: ACCEPTED)
18/09/18 23:59:01 INFO yarn.Client: Application report for application_1537283877986_0003 (state: ACCEPTED)
18/09/18 23:59:02 INFO yarn.Client: Application report for application_1537283877986_0003 (state: ACCEPTED)
18/09/18 23:59:03 INFO yarn.Client: Application report for application_1537283877986_0003 (state: ACCEPTED)
18/09/18 23:59:04 INFO yarn.Client: Application report for application_1537283877986_0003 (state: ACCEPTED)
18/09/18 23:59:05 INFO yarn.Client: Application report for application_1537283877986_0003 (state: FAILED)
18/09/18 23:59:05 INFO yarn.Client: 
         client token: N/A
         diagnostics: Application application_1537283877986_0003 failed 2 times due to AM Container for appattempt_1537283877986_0003_000002 exited with  exitCode: 1
For more detailed output, check application tracking page:http://hadoop001:8088/proxy/application_1537283877986_0003/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1537283877986_0003_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:561)
        at org.apache.hadoop.util.Shell.run(Shell.java:478)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:738)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: root.hadoop
         start time: 1537286319648
         final status: FAILED
         tracking URL: http://hadoop001:8088/cluster/app/application_1537283877986_0003
         user: hadoop
18/09/18 23:59:05 INFO yarn.Client: Deleted staging directory hdfs://192.168.137.141:9000/user/hadoop/.sparkStaging/application_1537283877986_0003
18/09/18 23:59:05 ERROR spark.SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:89)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2493)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:933)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:924)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:924)
        at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:31)
        at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
18/09/18 23:59:05 INFO server.AbstractConnector: Stopped Spark@4cc61eb1{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
18/09/18 23:59:05 INFO ui.SparkUI: Stopped Spark web UI at http://hadoop001:4040
18/09/18 23:59:05 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
18/09/18 23:59:05 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors
18/09/18 23:59:05 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
18/09/18 23:59:05 INFO cluster.SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
 services=List(),
 started=false)
18/09/18 23:59:05 INFO cluster.YarnClientSchedulerBackend: Stopped
18/09/18 23:59:05 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/09/18 23:59:05 INFO memory.MemoryStore: MemoryStore cleared
18/09/18 23:59:05 INFO storage.BlockManager: BlockManager stopped
18/09/18 23:59:05 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
18/09/18 23:59:05 WARN metrics.MetricsSystem: Stopping a MetricsSystem that is not running
18/09/18 23:59:05 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/09/18 23:59:05 INFO spark.SparkContext: Successfully stopped SparkContext
Exception in thread "main" org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:89)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2493)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:933)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:924)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:924)
        at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:31)
        at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
18/09/18 23:59:05 INFO util.ShutdownHookManager: Shutdown hook called
18/09/18 23:59:05 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-982e8e1d-bcee-42c3-93a0-22ae4fc86e8b
18/09/18 23:59:05 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-bb8d1891-3b86-44e7-97ee-b1308b7e434a
This error is reported all over the web, but the underlying cause differs from case to case, so you have to check the container logs on YARN. Here the logs contain the key line:

Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/spark/network/util/ByteUnit : Unsupported major.minor version 52.0
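The line above comes from the aggregated container logs. As a sketch, they can be pulled with the `yarn logs` CLI, using the application ID from the diagnostics (this assumes log aggregation is enabled via `yarn.log-aggregation-enable=true`; otherwise read them on the NodeManager under `yarn.nodemanager.log-dirs`):

```shell
# Fetch the aggregated container logs for the failed application;
# the ID is the one YARN printed in the diagnostics section.
yarn logs -applicationId application_1537283877986_0003 | less
```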
This is a Java version mismatch: Spark 2.3.1 is built for Java 1.8, while hadoop-2.6.0-cdh5.7.0 was running on Java 1.7, so the NodeManager's JVM cannot load Spark's classes. The fix is to switch Hadoop's JDK to 1.8.
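"Unsupported major.minor version 52.0" decodes as follows: a .class file's major version equals the Java release plus 44, so 52 requires at least Java 8. As an illustrative check (jar path and class name taken from this install), the version bytes can be read directly from the jar:

```shell
# Bytes 6-7 of a .class file hold the class-file major version
# (Java release = major - 44, so 52 means Java 8).
unzip -p spark-examples_2.11-2.3.1.jar \
    org/apache/spark/examples/SparkPi.class \
  | od -An -j6 -N2 -t d1
# the second number printed is the major version: 52 = Java 8
```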

[hadoop@hadoop000 hadoop]$ cd $HADOOP_HOME/etc/hadoop
[hadoop@hadoop000 hadoop]$ vi hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_45
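After the edit, it may help to confirm the file actually points at the 1.8 JDK, then bounce all daemons so they pick up the new JAVA_HOME; a sketch using the standard sbin scripts (adjust if your layout differs):

```shell
# Confirm hadoop-env.sh now points at the 1.8 JDK...
grep '^export JAVA_HOME' "$HADOOP_HOME/etc/hadoop/hadoop-env.sh"
# ...then restart HDFS and YARN; running daemons keep the old JDK
# until restarted.
"$HADOOP_HOME"/sbin/stop-yarn.sh
"$HADOOP_HOME"/sbin/stop-dfs.sh
"$HADOOP_HOME"/sbin/start-dfs.sh
"$HADOOP_HOME"/sbin/start-yarn.sh
```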

After restarting Hadoop, running spark-submit again succeeds (be sure to restart Hadoop — the already-running daemons keep the old JDK until restarted):

[hadoop@hadoop000 bin]$ ./spark-submit --class org.apache.spark.examples.SparkPi --master yarn /home/hadoop/app/spark-2.3.1-bin-2.6.0-cdh5.7.0/examples/jars/spark-examples_2.11-2.3.1.jar 3
18/09/19 17:30:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/09/19 17:30:29 INFO spark.SparkContext: Running Spark version 2.3.1
18/09/19 17:30:29 INFO spark.SparkContext: Submitted application: Spark Pi
18/09/19 17:30:29 INFO spark.SecurityManager: Changing view acls to: hadoop
18/09/19 17:30:29 INFO spark.SecurityManager: Changing modify acls to: hadoop
18/09/19 17:30:29 INFO spark.SecurityManager: Changing view acls groups to: 
18/09/19 17:30:29 INFO spark.SecurityManager: Changing modify acls groups to: 
18/09/19 17:30:29 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hadoop); groups with view permissions: Set(); users  with modify permissions: Set(hadoop); groups with modify permissions: Set()
18/09/19 17:30:29 INFO util.Utils: Successfully started service 'sparkDriver' on port 49228.
18/09/19 17:30:30 INFO spark.SparkEnv: Registering MapOutputTracker
18/09/19 17:30:30 INFO spark.SparkEnv: Registering BlockManagerMaster
18/09/19 17:30:30 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
18/09/19 17:30:30 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
18/09/19 17:30:30 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-1d6abd9b-d2b8-4109-8218-e74ecac1b5b5
18/09/19 17:30:30 INFO memory.MemoryStore: MemoryStore started with capacity 413.9 MB
18/09/19 17:30:30 INFO spark.SparkEnv: Registering OutputCommitCoordinator
18/09/19 17:30:30 INFO util.log: Logging initialized @3911ms
18/09/19 17:30:30 INFO server.Server: jetty-9.3.z-SNAPSHOT
18/09/19 17:30:30 INFO server.Server: Started @4064ms
18/09/19 17:30:30 INFO server.AbstractConnector: Started ServerConnector@50ca15a3{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
18/09/19 17:30:30 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
18/09/19 17:30:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5167268{/jobs,null,AVAILABLE,@Spark}
18/09/19 17:30:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1bc53649{/jobs/json,null,AVAILABLE,@Spark}
18/09/19 17:30:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@88d6f9b{/jobs/job,null,AVAILABLE,@Spark}
18/09/19 17:30:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@475b7792{/jobs/job/json,null,AVAILABLE,@Spark}
18/09/19 17:30:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@751e664e{/stages,null,AVAILABLE,@Spark}
18/09/19 17:30:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@160c3ec1{/stages/json,null,AVAILABLE,@Spark}
18/09/19 17:30:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@182b435b{/stages/stage,null,AVAILABLE,@Spark}
18/09/19 17:30:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7577b641{/stages/stage/json,null,AVAILABLE,@Spark}
18/09/19 17:30:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3704122f{/stages/pool,null,AVAILABLE,@Spark}
18/09/19 17:30:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3153ddfc{/stages/pool/json,null,AVAILABLE,@Spark}
18/09/19 17:30:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@60afd40d{/storage,null,AVAILABLE,@Spark}
18/09/19 17:30:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@28a2a3e7{/storage/json,null,AVAILABLE,@Spark}
18/09/19 17:30:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3f2049b6{/storage/rdd,null,AVAILABLE,@Spark}
18/09/19 17:30:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@10b3df93{/storage/rdd/json,null,AVAILABLE,@Spark}
18/09/19 17:30:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@ea27e34{/environment,null,AVAILABLE,@Spark}
18/09/19 17:30:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@33a2499c{/environment/json,null,AVAILABLE,@Spark}
18/09/19 17:30:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@e72dba7{/executors,null,AVAILABLE,@Spark}
18/09/19 17:30:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@33c2bd{/executors/json,null,AVAILABLE,@Spark}
18/09/19 17:30:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1dfd5f51{/executors/threadDump,null,AVAILABLE,@Spark}
18/09/19 17:30:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3c321bdb{/executors/threadDump/json,null,AVAILABLE,@Spark}
18/09/19 17:30:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@24855019{/static,null,AVAILABLE,@Spark}
18/09/19 17:30:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5a2f016d{/,null,AVAILABLE,@Spark}
18/09/19 17:30:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1a38ba58{/api,null,AVAILABLE,@Spark}
18/09/19 17:30:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1deb2c43{/jobs/job/kill,null,AVAILABLE,@Spark}
18/09/19 17:30:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3bb9efbc{/stages/stage/kill,null,AVAILABLE,@Spark}
18/09/19 17:30:30 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://hadoop000:4040
18/09/19 17:30:31 INFO spark.SparkContext: Added JAR file:/home/hadoop/app/spark-2.3.1-bin-2.6.0-cdh5.7.0/examples/jars/spark-examples_2.11-2.3.1.jar at spark://hadoop000:49228/jars/spark-examples_2.11-2.3.1.jar with timestamp 1537349431038
18/09/19 17:30:32 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18/09/19 17:30:32 INFO yarn.Client: Requesting a new application from cluster with 1 NodeManagers
18/09/19 17:30:32 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
18/09/19 17:30:32 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
18/09/19 17:30:32 INFO yarn.Client: Setting up container launch context for our AM
18/09/19 17:30:32 INFO yarn.Client: Setting up the launch environment for our AM container
18/09/19 17:30:32 INFO yarn.Client: Preparing resources for our AM container
18/09/19 17:30:35 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
18/09/19 17:30:44 INFO yarn.Client: Uploading resource file:/tmp/spark-8152492d-487e-4d35-962a-42344edea033/__spark_libs__2104928720237052389.zip -> hdfs://192.168.137.251:9000/user/hadoop/.sparkStaging/application_1537349385350_0001/__spark_libs__2104928720237052389.zip
18/09/19 17:30:54 INFO yarn.Client: Uploading resource file:/tmp/spark-8152492d-487e-4d35-962a-42344edea033/__spark_conf__1822648312505136721.zip -> hdfs://192.168.137.251:9000/user/hadoop/.sparkStaging/application_1537349385350_0001/__spark_conf__.zip
18/09/19 17:30:54 INFO spark.SecurityManager: Changing view acls to: hadoop
18/09/19 17:30:54 INFO spark.SecurityManager: Changing modify acls to: hadoop
18/09/19 17:30:54 INFO spark.SecurityManager: Changing view acls groups to: 
18/09/19 17:30:54 INFO spark.SecurityManager: Changing modify acls groups to: 
18/09/19 17:30:54 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hadoop); groups with view permissions: Set(); users  with modify permissions: Set(hadoop); groups with modify permissions: Set()
18/09/19 17:30:54 INFO yarn.Client: Submitting application application_1537349385350_0001 to ResourceManager
18/09/19 17:30:56 INFO impl.YarnClientImpl: Submitted application application_1537349385350_0001
18/09/19 17:30:56 INFO cluster.SchedulerExtensionServices: Starting Yarn extension services with app application_1537349385350_0001 and attemptId None
18/09/19 17:30:57 INFO yarn.Client: Application report for application_1537349385350_0001 (state: ACCEPTED)
18/09/19 17:30:57 INFO yarn.Client: 
         client token: N/A
         diagnostics: N/A
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: root.hadoop
         start time: 1537349455760
         final status: UNDEFINED
         tracking URL: http://hadoop000:8088/proxy/application_1537349385350_0001/
         user: hadoop
18/09/19 17:30:58 INFO yarn.Client: Application report for application_1537349385350_0001 (state: ACCEPTED)
18/09/19 17:30:59 INFO yarn.Client: Application report for application_1537349385350_0001 (state: ACCEPTED)
18/09/19 17:31:00 INFO yarn.Client: Application report for application_1537349385350_0001 (state: ACCEPTED)
18/09/19 17:31:01 INFO yarn.Client: Application report for application_1537349385350_0001 (state: ACCEPTED)
18/09/19 17:31:02 INFO yarn.Client: Application report for application_1537349385350_0001 (state: ACCEPTED)
18/09/19 17:31:03 INFO yarn.Client: Application report for application_1537349385350_0001 (state: ACCEPTED)
18/09/19 17:31:04 INFO yarn.Client: Application report for application_1537349385350_0001 (state: ACCEPTED)
18/09/19 17:31:05 INFO yarn.Client: Application report for application_1537349385350_0001 (state: ACCEPTED)
18/09/19 17:31:06 INFO yarn.Client: Application report for application_1537349385350_0001 (state: ACCEPTED)
18/09/19 17:31:07 INFO yarn.Client: Application report for application_1537349385350_0001 (state: ACCEPTED)
18/09/19 17:31:08 INFO yarn.Client: Application report for application_1537349385350_0001 (state: ACCEPTED)
18/09/19 17:31:09 INFO yarn.Client: Application report for application_1537349385350_0001 (state: ACCEPTED)
18/09/19 17:31:10 INFO yarn.Client: Application report for application_1537349385350_0001 (state: ACCEPTED)
18/09/19 17:31:11 INFO yarn.Client: Application report for application_1537349385350_0001 (state: ACCEPTED)
18/09/19 17:31:12 INFO yarn.Client: Application report for application_1537349385350_0001 (state: ACCEPTED)
18/09/19 17:31:13 INFO yarn.Client: Application report for application_1537349385350_0001 (state: ACCEPTED)
18/09/19 17:31:14 INFO yarn.Client: Application report for application_1537349385350_0001 (state: ACCEPTED)
18/09/19 17:31:15 INFO yarn.Client: Application report for application_1537349385350_0001 (state: ACCEPTED)
18/09/19 17:31:16 INFO yarn.Client: Application report for application_1537349385350_0001 (state: ACCEPTED)
18/09/19 17:31:17 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> hadoop000, PROXY_URI_BASES -> http://hadoop000:8088/proxy/application_1537349385350_0001), /proxy/application_1537349385350_0001
18/09/19 17:31:17 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
18/09/19 17:31:17 INFO yarn.Client: Application report for application_1537349385350_0001 (state: ACCEPTED)
18/09/19 17:31:18 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark-client://YarnAM)
18/09/19 17:31:18 INFO yarn.Client: Application report for application_1537349385350_0001 (state: RUNNING)
18/09/19 17:31:18 INFO yarn.Client: 
         client token: N/A
         diagnostics: N/A
         ApplicationMaster host: 192.168.137.251
         ApplicationMaster RPC port: 0
         queue: root.hadoop
         start time: 1537349455760
         final status: UNDEFINED
         tracking URL: http://hadoop000:8088/proxy/application_1537349385350_0001/
         user: hadoop
18/09/19 17:31:18 INFO cluster.YarnClientSchedulerBackend: Application application_1537349385350_0001 has started running.
18/09/19 17:31:18 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 35486.
18/09/19 17:31:18 INFO netty.NettyBlockTransferService: Server created on hadoop000:35486
18/09/19 17:31:18 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
18/09/19 17:31:19 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, hadoop000, 35486, None)
18/09/19 17:31:19 INFO storage.BlockManagerMasterEndpoint: Registering block manager hadoop000:35486 with 413.9 MB RAM, BlockManagerId(driver, hadoop000, 35486, None)
18/09/19 17:31:19 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, hadoop000, 35486, None)
18/09/19 17:31:19 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, hadoop000, 35486, None)
18/09/19 17:31:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@20cdb152{/metrics/json,null,AVAILABLE,@Spark}
18/09/19 17:31:19 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
18/09/19 17:31:22 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:38
18/09/19 17:31:22 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:38) with 3 output partitions
18/09/19 17:31:22 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:38)
18/09/19 17:31:22 INFO scheduler.DAGScheduler: Parents of final stage: List()
18/09/19 17:31:22 INFO scheduler.DAGScheduler: Missing parents: List()
18/09/19 17:31:23 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34), which has no missing parents
18/09/19 17:31:24 INFO memory.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1832.0 B, free 413.9 MB)
18/09/19 17:31:25 INFO memory.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1181.0 B, free 413.9 MB)
18/09/19 17:31:25 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on hadoop000:35486 (size: 1181.0 B, free: 413.9 MB)
18/09/19 17:31:25 INFO spark.SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1039
18/09/19 17:31:25 INFO scheduler.DAGScheduler: Submitting 3 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34) (first 15 tasks are for partitions Vector(0, 1, 2))
18/09/19 17:31:25 INFO cluster.YarnScheduler: Adding task set 0.0 with 3 tasks
18/09/19 17:31:32 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (192.168.137.251:48399) with ID 1
18/09/19 17:31:32 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, hadoop000, executor 1, partition 0, PROCESS_LOCAL, 7864 bytes)
18/09/19 17:31:32 INFO storage.BlockManagerMasterEndpoint: Registering block manager hadoop000:42154 with 413.9 MB RAM, BlockManagerId(1, hadoop000, 42154, None)
18/09/19 17:31:32 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (192.168.137.251:48401) with ID 2
18/09/19 17:31:32 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, hadoop000, executor 2, partition 1, PROCESS_LOCAL, 7864 bytes)
18/09/19 17:31:33 INFO storage.BlockManagerMasterEndpoint: Registering block manager hadoop000:46492 with 413.9 MB RAM, BlockManagerId(2, hadoop000, 46492, None)
18/09/19 17:31:35 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on hadoop000:46492 (size: 1181.0 B, free: 413.9 MB)
18/09/19 17:31:35 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on hadoop000:42154 (size: 1181.0 B, free: 413.9 MB)
18/09/19 17:31:36 INFO scheduler.TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, hadoop000, executor 2, partition 2, PROCESS_LOCAL, 7864 bytes)
18/09/19 17:31:36 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 4671 ms on hadoop000 (executor 1) (1/3)
18/09/19 17:31:36 INFO scheduler.TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 4013 ms on hadoop000 (executor 2) (2/3)
18/09/19 17:31:37 INFO scheduler.TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 180 ms on hadoop000 (executor 2) (3/3)
18/09/19 17:31:37 INFO cluster.YarnScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool 
18/09/19 17:31:37 INFO scheduler.DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:38) finished in 13.759 s
18/09/19 17:31:37 INFO scheduler.DAGScheduler: Job 0 finished: reduce at SparkPi.scala:38, took 14.770947 s
Pi is roughly 3.1405571351904507
18/09/19 17:31:37 INFO server.AbstractConnector: Stopped Spark@50ca15a3{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
18/09/19 17:31:37 INFO ui.SparkUI: Stopped Spark web UI at http://hadoop000:4040
18/09/19 17:31:37 INFO cluster.YarnClientSchedulerBackend: Interrupting monitor thread
18/09/19 17:31:37 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors
18/09/19 17:31:37 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
18/09/19 17:31:37 INFO cluster.SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
 services=List(),
 started=false)
18/09/19 17:31:37 INFO cluster.YarnClientSchedulerBackend: Stopped
18/09/19 17:31:37 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/09/19 17:31:37 INFO memory.MemoryStore: MemoryStore cleared
18/09/19 17:31:37 INFO storage.BlockManager: BlockManager stopped
18/09/19 17:31:37 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
18/09/19 17:31:37 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/09/19 17:31:37 INFO spark.SparkContext: Successfully stopped SparkContext
18/09/19 17:31:37 INFO util.ShutdownHookManager: Shutdown hook called
18/09/19 17:31:37 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-94b19219-9d16-4d3d-bbd8-ade23b440d40
18/09/19 17:31:37 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-8152492d-487e-4d35-962a-42344edea033
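The log above shows the same SparkPi job now completing successfully on YARN ("Pi is roughly 3.1405..."). For reference, the submission command is the one from the failed attempt earlier in this post; the Spark home path and jar version below match that environment and should be adjusted for yours:

```shell
# Submit SparkPi to YARN. With --master yarn, client deploy mode is the default,
# so the driver (and its log, as shown above) runs locally.
# The trailing "3" is the slices argument, which becomes the job's
# 3 output partitions / 3 tasks seen in the DAGScheduler log lines.
./spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  /home/hadoop/app/spark-2.3.1-bin-2.6.0-cdh5.7.0/examples/jars/spark-examples_2.11-2.3.1.jar 3
```

While the job runs, its state can also be checked from the tracking URL printed in the application report, or with `yarn application -status <applicationId>` on the cluster.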