janusgraph gremlin-hadoop spark

Author: 清歌笑染红尘 | Published 2017-11-21 10:37

    Installation and configuration based on Apache Hadoop

    Install the related big-data components:

    • hadoop 2.6.5
    • spark 1.6.1
    • hbase 1.0.0
    • zookeeper 3.4.10
    • janusgraph 0.2.0

    Configuring environment variables

    Configure the following environment variables on every machine:

    export JAVA_HOME=/usr/local/lib/jdk1.8.0_60
    export HBASE_CONF_DIR=/opt/hbase-1.0.0/conf
    export HADOOP_CONF_DIR=/opt/hadoop-2.6.5/etc/hadoop
    export HADOOP_HOME=/opt/hadoop-2.6.5
    # SPARK_CONF_DIR must be defined before it is referenced in CLASSPATH
    export SPARK_CONF_DIR=/opt/spark-1.6.1-bin-hadoop2.6/conf
    export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
    export CLASSPATH=$HADOOP_CONF_DIR:$SPARK_CONF_DIR:$HBASE_CONF_DIR
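
    These variables must be visible to the JVM that launches the Gremlin Console. A quick check from the console itself (assuming the variables were exported in the shell that started it):

    System.getenv('HADOOP_CONF_DIR')   // expect: /opt/hadoop-2.6.5/etc/hadoop
    System.getenv('HBASE_CONF_DIR')    // expect: /opt/hbase-1.0.0/conf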
    
    

    Adding the required jars to $JANUSGRAPH_HOME/lib

    • Add Spark's spark-assembly-1.6.1-hadoop2.6.0.jar. It already bundles the corresponding Hadoop jars, so the Hadoop jars do not need to be added separately.
    • Add the HBase client jars. They must match the HBase release that is actually running, otherwise you will hit java.net.ConnectException: Connection refused. If that happens, delete the mismatched jars and restart the HBase services.

    NOTE:

    • Before adding these jars, remove the corresponding jars that ship with JanusGraph.
    • hbase-client-1.0.0.jar depends on Guava 16, so delete the bundled guava-18.jar and replace it with version 16. Otherwise you will get:
      org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.IllegalAccessError: tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.hbase.zookeeper.MetaTableLocator

    Finally, distribute $JANUSGRAPH_HOME/lib to every machine in the cluster.

    Configure $JANUSGRAPH_HOME/conf/hadoop-graph/hadoop-load.properties

    #
    # Hadoop Graph Configuration
    #
    gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
    gremlin.hadoop.graphInputFormat=org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoInputFormat
    gremlin.hadoop.graphOutputFormat=org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
    gremlin.hadoop.inputLocation=./data/grateful-dead.kryo
    gremlin.hadoop.outputLocation=output
    gremlin.hadoop.jarsInDistributedCache=true
    
    #
    # GiraphGraphComputer Configuration
    #
    giraph.minWorkers=2
    giraph.maxWorkers=2
    giraph.useOutOfCoreGraph=true
    giraph.useOutOfCoreMessages=true
    mapred.map.child.java.opts=-Xmx1024m
    mapred.reduce.child.java.opts=-Xmx1024m
    giraph.numInputThreads=4
    giraph.numComputeThreads=4
    giraph.maxMessagesInMemory=100000
    
    #
    # SparkGraphComputer Configuration
    #
    spark.master=yarn-client
    spark.executor.memory=512m
    spark.executor.instances=2
    spark.executor.cores=4
    spark.serializer=org.apache.spark.serializer.KryoSerializer
    spark.ui.port=14040
    spark.app.name=janusgraph-data-load
    spark.app.id=janusgraph-data-load
    #The following two settings apply only to the Spark jars; they are used to speed up loading of the Spark-related jars
    #spark.yarn.jar=hdfs://wangmaoshuai.novalocal:8020/user/root/share/lib/spark/spark-assembly-1.6.1-hadoop2.6.0.jar
    #spark.yarn.archive=hdfs://wangmaoshuai.novalocal:8020/user/root/share/lib/spark/janusgraph-0.2.0.zip
    spark.yarn.am.extraJavaOptions=-Djava.library.path=/opt/hadoop-2.6.5/lib/native
    #Set this to the location of the janusgraph-lib directory that was distributed to the cluster
    spark.executor.extraClassPath=/opt/janusgraph-lib/*:/opt/hadoop-2.6.5/etc/hadoop:/opt/hbase-1.0.0/conf:/opt/spark-1.6.1-bin-hadoop2.6/conf
    
    spark.executor.extraJavaOptions=-Djava.library.path=/opt/hadoop-2.6.5/lib/native
    
    #cache config
    gremlin.spark.persistContext=true
    gremlin.spark.graphStorageLevel=MEMORY_AND_DISK
    #Spark history
    spark.history.provider=org.apache.spark.deploy.yarn.history.YarnHistoryProvider
    spark.history.ui.port=18080
    spark.history.kerberos.keytab=none
    spark.history.kerberos.principal=none
    spark.yarn.services=org.apache.spark.deploy.yarn.history.YarnHistoryService
    spark.yarn.historyServer.address=http://wangmaoshuai.novalocal:18080
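
    Before launching a full load, it is worth confirming that the file parses and that key settings actually reach the graph. A minimal sketch in the Gremlin Console, using the property names from the file above:

    graph = GraphFactory.open('conf/hadoop-graph/hadoop-load.properties')
    graph.configuration().getString('spark.master')    // expect: yarn-client
    graph.close()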
    

    Configure $JANUSGRAPH_HOME/conf/hadoop-graph/read-hbase.properties

    #
    # Hadoop Graph Configuration
    #
    gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
    gremlin.hadoop.graphInputFormat=org.janusgraph.hadoop.formats.hbase.HBaseInputFormat
    gremlin.hadoop.graphOutputFormat=org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoOutputFormat
    
    gremlin.hadoop.jarsInDistributedCache=true
    gremlin.hadoop.inputLocation=none
    gremlin.hadoop.outputLocation=output
    
    #
    # JanusGraph HBase InputFormat configuration
    #
    janusgraphmr.ioformat.conf.storage.backend=hbase
    janusgraphmr.ioformat.conf.storage.hostname=10.110.13.210
    #zookeeper.znode.parent=/hbase-unsecure
    janusgraphmr.ioformat.conf.storage.hbase.table=SparkYarnImportTest
    
    #
    # SparkGraphComputer Configuration
    #
    spark.master=yarn-client
    spark.serializer=org.apache.spark.serializer.KryoSerializer
    
    spark.executor.extraClassPath=/opt/janusgraph-lib/*:/opt/hadoop-2.6.5/etc/hadoop:/opt/hbase-1.0.0/conf:/opt/spark-1.6.1-bin-hadoop2.6/conf
    spark.yarn.am.extraJavaOptions=-Djava.library.path=/opt/hadoop-2.6.5/lib/native
    spark.executor.extraJavaOptions=-Djava.library.path=/opt/hadoop-2.6.5/lib/native
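
    With this file in place, the HBase table is exposed as a HadoopGraph, and any OLAP traversal runs through Spark, not just the vertex count shown in the test below. A small sketch, assuming the SparkYarnImportTest table has already been loaded:

    graph = GraphFactory.open('conf/hadoop-graph/read-hbase.properties')
    g = graph.traversal().withComputer(SparkGraphComputer)
    g.E().count()                      // edge count, runs as a Spark job
    g.V().groupCount().by(T.label)     // vertex counts per label
    graph.close()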
    

    Testing

    bin/gremlin.sh
    
             \,,,/
             (o o)
    -----oOOo-(3)-oOOo-----
    plugin activated: janusgraph.imports
    gremlin> :plugin use tinkerpop.hadoop
    ==>tinkerpop.hadoop activated
    gremlin> :plugin use tinkerpop.spark
    ==>tinkerpop.spark activated
    gremlin> :load data/grateful-dead-janusgraph-schema.groovy
    ==>true
    ==>true
    gremlin> graph = JanusGraphFactory.open('conf/janusgraph-hbase.properties')
    ==>standardjanusgraph[hbase:[kg-server-96.kg.com, kg-agent-95.kg.com, kg-agent-97.kg.com]]
    gremlin> defineGratefulDeadSchema(graph)
    ==>null
    gremlin> graph.close()
    ==>null
    gremlin> if (!hdfs.exists('data/grateful-dead.kryo')) hdfs.copyFromLocal('data/grateful-dead.kryo','data/grateful-dead.kryo')
    ==>null
    gremlin> graph = GraphFactory.open('conf/hadoop-graph/hadoop-load.properties')
    ==>hadoopgraph[gryoinputformat->nulloutputformat]
    gremlin> blvp = BulkLoaderVertexProgram.build().writeGraph('conf/janusgraph-hbase.properties').create(graph)
    ==>BulkLoaderVertexProgram[bulkLoader=IncrementalBulkLoader,vertexIdProperty=bulkLoader.vertex.id,userSuppliedIds=false,keepOriginalIds=true,batchSize=0]
    gremlin> graph.compute(SparkGraphComputer).program(blvp).submit().get()
    ...
    ==>result[hadoopgraph[gryoinputformat->nulloutputformat],memory[size:0]]
    gremlin> graph.close()
    ==>null
    gremlin> graph = GraphFactory.open('conf/hadoop-graph/read-hbase.properties')
    ==>hadoopgraph[hbaseinputformat->gryooutputformat]
    gremlin> g = graph.traversal().withComputer(SparkGraphComputer)
    ==>graphtraversalsource[hadoopgraph[hbaseinputformat->gryooutputformat], sparkgraphcomputer]
    gremlin> g.V().count()
    ...
    ==>808
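
    The count of 808 matches the vertex count of the grateful-dead sample graph, confirming that the bulk load reached HBase. Other vertex programs are submitted the same way as BulkLoaderVertexProgram above; for example, a sketch running TinkerPop's built-in PageRankVertexProgram over the same graph:

    graph = GraphFactory.open('conf/hadoop-graph/read-hbase.properties')
    // submit() runs the program as a Spark job; get() blocks until it finishes
    result = graph.compute(SparkGraphComputer).program(PageRankVertexProgram.build().create(graph)).submit().get()
    result.memory().getRuntime()       // elapsed time of the Spark job in milliseconds
    graph.close()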
    
