Hadoop-gremlin bulk import fails with java.util.NoSuchElementException

Author: zlcook | Published 2017-11-29 15:43

    Scenario: in the Gremlin console, bulk-importing a generated JSON (GraphSON) file with Hadoop-Gremlin:

    // open the HadoopGraph that reads the GraphSON input
    graph = GraphFactory.open('data/zl/hadoop-load-company-modern.properties')
    // one-time bulk loader writing into the target JanusGraph instance
    blvp = BulkLoaderVertexProgram.build().bulkLoader(OneTimeBulkLoader).writeGraph('data/zl/company-hbase-es.properties').create(graph)
    // execute the vertex program on Spark and block until it completes
    graph.compute(SparkGraphComputer).program(blvp).submit().get()
    
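    The two properties files are not shown in the article. For anyone reproducing the setup, below is a minimal sketch of what a Hadoop-Gremlin GraphSON input configuration such as hadoop-load-company-modern.properties typically contains, using the standard TinkerPop Hadoop-Gremlin keys; the paths and Spark settings are placeholders, not the author's actual values ('data/zl/company-hbase-es.properties' would be the usual JanusGraph HBase/Elasticsearch target configuration):

    # Hypothetical sketch of data/zl/hadoop-load-company-modern.properties
    # (standard TinkerPop Hadoop-Gremlin keys; values are placeholders)
    gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
    # read one GraphSON vertex record per line from the input location
    gremlin.hadoop.graphReader=org.apache.tinkerpop.gremlin.hadoop.structure.io.graphson.GraphSONInputFormat
    gremlin.hadoop.graphWriter=org.apache.tinkerpop.gremlin.hadoop.structure.io.graphson.GraphSONOutputFormat
    gremlin.hadoop.inputLocation=data/zl/company-modern.json
    gremlin.hadoop.outputLocation=output
    gremlin.hadoop.jarsInDistributedCache=true
    # run Spark locally for the bulk load
    spark.master=local[4]
    spark.serializer=org.apache.spark.serializer.KryoSerializer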

    Running it produced the following error:

    Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 5.0 failed 1 times, most recent failure: Lost task 0.0 in stage 5.0 (TID 3, localhost): java.util.NoSuchElementException
            at org.apache.tinkerpop.gremlin.process.traversal.util.DefaultTraversal.next(DefaultTraversal.java:204)
            at org.apache.tinkerpop.gremlin.process.computer.bulkloading.BulkLoader.getVertexById(BulkLoader.java:116)
            at org.apache.tinkerpop.gremlin.process.computer.bulkloading.BulkLoaderVertexProgram.lambda$executeInternal$4(BulkLoaderVertexProgram.java:251)
            at java.util.Iterator.forEachRemaining(Unknown Source)
            at org.apache.tinkerpop.gremlin.process.computer.bulkloading.BulkLoaderVertexProgram.executeInternal(BulkLoaderVertexProgram.java:249)
            at org.apache.tinkerpop.gremlin.process.computer.bulkloading.BulkLoaderVertexProgram.execute(BulkLoaderVertexProgram.java:197)
            at org.apache.tinkerpop.gremlin.spark.process.computer.SparkExecutor.lambda$null$5(SparkExecutor.java:118)
            at org.apache.tinkerpop.gremlin.util.iterator.IteratorUtils$3.next(IteratorUtils.java:247)
            at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:43)
            at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
            at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
            at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:189)
            at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:64)
            at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
            at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
            at org.apache.spark.scheduler.Task.run(Task.scala:89)
            at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)
    
    Driver stacktrace:
            at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
            at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
            at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
            at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
            at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
            at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
            at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
            at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
            at scala.Option.foreach(Option.scala:257)
            at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
            at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
            at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
            at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
            at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
            at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
            at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
            at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
            at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
            at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
            at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:920)
            at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:918)
            at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
            at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
            at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
            at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:918)
            at org.apache.spark.api.java.JavaRDDLike$class.foreachPartition(JavaRDDLike.scala:225)
            at org.apache.spark.api.java.AbstractJavaRDDLike.foreachPartition(JavaRDDLike.scala:46)
            at org.apache.tinkerpop.gremlin.spark.process.computer.SparkExecutor.executeVertexProgramIteration(SparkExecutor.java:179)
            at org.apache.tinkerpop.gremlin.spark.process.computer.SparkGraphComputer.lambda$submitWithExecutor$0(SparkGraphComputer.java:279)
            at java.util.concurrent.FutureTask.run(Unknown Source)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)
    Caused by: java.util.NoSuchElementException
            at org.apache.tinkerpop.gremlin.process.traversal.util.DefaultTraversal.next(DefaultTraversal.java:204)
            at org.apache.tinkerpop.gremlin.process.computer.bulkloading.BulkLoader.getVertexById(BulkLoader.java:116)
            at org.apache.tinkerpop.gremlin.process.computer.bulkloading.BulkLoaderVertexProgram.lambda$executeInternal$4(BulkLoaderVertexProgram.java:251)
            at java.util.Iterator.forEachRemaining(Unknown Source)
            at org.apache.tinkerpop.gremlin.process.computer.bulkloading.BulkLoaderVertexProgram.executeInternal(BulkLoaderVertexProgram.java:249)
            at org.apache.tinkerpop.gremlin.process.computer.bulkloading.BulkLoaderVertexProgram.execute(BulkLoaderVertexProgram.java:197)
            at org.apache.tinkerpop.gremlin.spark.process.computer.SparkExecutor.lambda$null$5(SparkExecutor.java:118)
            at org.apache.tinkerpop.gremlin.util.iterator.IteratorUtils$3.next(IteratorUtils.java:247)
            at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:43)
            at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
            at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
            at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:189)
            at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:64)
            at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
            at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
            at org.apache.spark.scheduler.Task.run(Task.scala:89)
            at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
            ... 3 more
    
    
    • The failure is just a java.util.NoSuchElementException, which tells us very little. I wanted something more specific, for example which record in the JSON file triggered it.

    • Fix: make the code print more detailed information.

    • The stack trace shows the NoSuchElementException is thrown when org.apache.tinkerpop.gremlin.process.traversal.util.DefaultTraversal.next is called.

    • A quick look shows that org.apache.tinkerpop.gremlin.process.traversal.util.DefaultTraversal lives in gremlin-core-x.x.x.jar.

    • So patch the gremlin-core module to log more detail at the right spot (a sketch of the idea follows this list).

    • gremlin-core is a module of the TinkerPop project, so git clone https://github.com/apache/tinkerpop.git, modify the gremlin-core module, and have it print the extra information.

    • Finally, replace the jar shipped in janusgraph-0.2.0-hadoop2\lib with the rebuilt gremlin-core-x.x.x.jar.

    • The run now fails with the following, which includes the vertex id and the edge involved:

    16:13:27 ERROR org.apache.tinkerpop.gremlin.process.computer.bulkloading.BulkLoaderVertexProgram  - sourceVertex=v[eefbad45-a079-4883-b936-42817618f094] edge=e[e4d13af5-ff29-4646-a06d-9ee20cfe8f8e][eefbad45-a079-4883-b936-42817618f094-class_staff_2_staff->82e1e894-3abc-41e5-ba16-7bba53a7df67]
    16:13:27 ERROR org.apache.spark.executor.Executor  - Managed memory leak detected; size = 5309058 bytes, TID = 3
    16:13:27 ERROR org.apache.spark.executor.Executor  - Exception in task 0.0 in stage 5.0 (TID 3)
    java.util.NoSuchElementException
            at org.apache.tinkerpop.gremlin.process.traversal.util.DefaultTraversal.next(DefaultTraversal.java:204)
            at org.apache.tinkerpop.gremlin.process.computer.bulkloading.BulkLoader.getVertexById(BulkLoader.java:116)
            at org.apache.tinkerpop.gremlin.process.computer.bulkloading.BulkLoaderVertexProgram.lambda$executeInternal$4(BulkLoaderVertexProgram.java:255)
            at java.util.Iterator.forEachRemaining(Unknown Source)
    
    • Inspecting the JSON file showed that the record for vertex 82e1e894-3abc-41e5-ba16-7bba53a7df67 had been written on the same line as the previous record, whereas the import file must contain exactly one vertex's JSON per line.
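    Concretely, the extra logging amounts to catching the NoSuchElementException at the point where the target vertex of an edge is resolved, and reporting the source vertex and the edge before rethrowing. The helper below is only a self-contained sketch of that idea (the real change was made inline in gremlin-core's BulkLoaderVertexProgram); the class and method names here are made up, and it assumes slf4j, which gremlin-core already uses:

    import java.util.NoSuchElementException;
    import java.util.function.Supplier;

    import org.apache.tinkerpop.gremlin.structure.Edge;
    import org.apache.tinkerpop.gremlin.structure.Vertex;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    // Hypothetical helper showing the patch idea: resolve a vertex and, on
    // failure, log which source vertex and edge could not be matched.
    final class VertexResolveLogging {
        private static final Logger LOGGER = LoggerFactory.getLogger(VertexResolveLogging.class);

        static Vertex resolveOrLog(final Vertex sourceVertex, final Edge edge,
                                   final Supplier<Vertex> lookup) {
            try {
                // e.g. () -> bulkLoader.getVertexById(edge.inVertex().id(), graph, g)
                return lookup.get();
            } catch (final NoSuchElementException e) {
                // this produces the "sourceVertex=... edge=..." line shown above
                LOGGER.error("sourceVertex={} edge={}", sourceVertex, edge);
                throw e;
            }
        }
    }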

    Conclusion

    • The error was caused by a malformed JSON file: each vertex's JSON record must occupy one line by itself, not sit side by side with another record.
    • The root cause turned out to be trivial, but finding it was tedious. In hindsight, given the error output and the knowledge that the JSON must be one record per line, rubber-duck debugging would probably have surfaced it; unfortunately there was no rubber duck on the desk!!! A pre-flight check like the one sketched below would also have caught it immediately.
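    Since the rule is "one JSON record per line", a cheap pre-flight check avoids the whole detour: try to parse every line of the input file as exactly one JSON document and flag lines with trailing content, i.e. two records glued together. A minimal sketch, assuming Jackson 2.9+ on the classpath (the file name passed in is hypothetical):

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    import com.fasterxml.jackson.databind.DeserializationFeature;
    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public final class GraphSONLineCheck {
        public static void main(final String[] args) throws Exception {
            // FAIL_ON_TRAILING_TOKENS makes Jackson reject a line that holds
            // more than one JSON document, e.g. two vertex records side by side
            final ObjectMapper mapper = new ObjectMapper()
                    .enable(DeserializationFeature.FAIL_ON_TRAILING_TOKENS);
            final List<String> lines = Files.readAllLines(Paths.get(args[0]));
            for (int i = 0; i < lines.size(); i++) {
                final String line = lines.get(i).trim();
                if (line.isEmpty()) continue;
                try {
                    mapper.readValue(line, JsonNode.class);
                } catch (final Exception e) {
                    System.err.println("line " + (i + 1) + " is not a single JSON record: "
                            + e.getMessage());
                }
            }
        }
    }

    Running it as java GraphSONLineCheck data/zl/company-modern.json would point straight at the line holding two vertex records.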
