JanusGraph Spark on YARN


Author: 皇甫LG | Published 2020-03-24 19:06

1. Environment

1.1 Installation based on CDH 5.11.1

  • hadoop 2.6.0

  • spark 2.2.0

  • hbase 1.2.0

  • zookeeper 3.4.5

  • janusgraph 0.3.1

  • Scala 2.11.8

  • Java 1.8.0

1.2 Service layout

ZooKeeper hosts     HBase hosts         JanusGraph host
Hbasetest1.com      Hbasetest1.com      Hbasetest3.com
Hbasetest2.com      Hbasetest2.com
Hbasetest3.com      Hbasetest3.com

2. System Configuration

# Environment variables:
export JAVA_HOME=/usr/java/jdk1.8/

export SPARK_HOME=/opt/cloudera/parcels/SPARK2/lib/spark2/

export JANUSGRAPH_HOME=/home/we-op/janusgraph

export JANUSGRAPH_CONF_DIR=$JANUSGRAPH_HOME/conf/

export HBASE_CONF_DIR=/etc/hbase/conf/

export SPARK_CONF_DIR=$SPARK_HOME/conf/

export HADOOP_CONF_DIR=/etc/hadoop/conf/

export HADOOP_HOME=/opt/cloudera/parcels/CDH/lib/hadoop/

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native

export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_COMMON_LIB_NATIVE_DIR"

export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JANUSGRAPH_HOME/bin:$SPARK_HOME/bin

export CLASSPATH=$HADOOP_CONF_DIR:$SPARK_CONF_DIR:$HBASE_CONF_DIR:$JANUSGRAPH_CONF_DIR
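With this many interdependent variables, a missing one tends to surface late (for example as a ClassNotFoundException inside a Spark executor). A small sanity-check sketch; `check_env` is a hypothetical helper, and `DEMO_HOME` stands in for the real variables exported above:

```shell
# check_env: list any of the named environment variables that are unset.
# Run it against JAVA_HOME, SPARK_HOME, JANUSGRAPH_HOME, etc. after
# sourcing your profile.
check_env() {
  local v status=0
  for v in "$@"; do
    if [ -z "${!v}" ]; then      # bash indirect expansion
      echo "MISSING: $v"
      status=1
    fi
  done
  return $status
}

export DEMO_HOME=/tmp                 # stands in for the real variables
check_env NO_SUCH_VAR_123 || true     # prints "MISSING: NO_SUCH_VAR_123"
check_env DEMO_HOME && echo "env OK"
```

In a real session you would call `check_env JAVA_HOME SPARK_HOME JANUSGRAPH_HOME HADOOP_CONF_DIR HBASE_CONF_DIR` before starting the Gremlin console.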

3. Building JanusGraph from Source

GitHub: https://github.com/JanusGraph/janusgraph/tree/v0.3.1

3.1 Modify the janusgraph-core source

Edit janusgraph-core/src/main/java/org/janusgraph/graphdb/database/idassigner/StandardIDPool.java, changing the Stopwatch import to:

import com.google.common.base.inter.Stopwatch;

3.2 Copy the directory


cp -R janusgraph-hbase-parent/janusgraph-hbase-core/src/main/java/com/ janusgraph-core/src/main/java/

cd janusgraph-core/src/main/java/com/google/common/base

mkdir inter

mv Stopwatch.java inter/

# Edit Stopwatch.java and change its package declaration:

package com.google.common.base.inter;

import com.google.common.base.Ticker;
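The relocation above moves Guava's Stopwatch into a new `inter` subpackage so JanusGraph's copy does not clash with the older Guava bundled in CDH. The manual edits can be scripted; the following is a sketch in which `relocate_stopwatch` is a hypothetical helper, demonstrated on a throwaway fixture rather than the real source tree:

```shell
# relocate_stopwatch SRCROOT: move Stopwatch.java into an "inter"
# subpackage and rewrite its package declaration, mirroring the
# manual steps above. SRCROOT is a java source root such as
# janusgraph-core/src/main/java.
relocate_stopwatch() {
  local base="$1/com/google/common/base"
  mkdir -p "$base/inter"
  mv "$base/Stopwatch.java" "$base/inter/"
  sed -i 's/^package com\.google\.common\.base;/package com.google.common.base.inter;/' \
    "$base/inter/Stopwatch.java"
}

# Demo on a throwaway fixture (the real tree is the one copied in 3.2).
mkdir -p /tmp/relocate-demo/com/google/common/base
printf 'package com.google.common.base;\n' \
  > /tmp/relocate-demo/com/google/common/base/Stopwatch.java
relocate_stopwatch /tmp/relocate-demo
head -1 /tmp/relocate-demo/com/google/common/base/inter/Stopwatch.java
```

Remember to also add the `import com.google.common.base.Ticker;` line by hand, as shown above, since the relocated class can no longer see Ticker from its own package.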

3.3 Modify janusgraph-0.3.1/pom.xml


# Change the component dependency versions to the matching CDH versions

<hadoop2.version>2.6.0-cdh5.11.1</hadoop2.version>

<hbase1.version>1.2.0-cdh5.11.1</hbase1.version>

<hbase.server.version>1.2.0-cdh5.11.1</hbase.server.version>

<zookeeper.version>3.4.5-cdh5.11.1</zookeeper.version>


# Add the Cloudera repository:

<repository>

<id>cloudera</id>

<url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>

</repository>


# Add Maven and Java version checks; if you build with -Denforcer.skip=true, this block is unnecessary

<execution>

  <id>enforce-dependency-convergence</id>

    <goals>

        <goal>enforce</goal>

    </goals>

<configuration>

    <rules>

        <requireMavenVersion>

            <version>3.6.2</version>

        </requireMavenVersion>

        <requireJavaVersion>

            <version>1.8.0</version>

        </requireJavaVersion>

    </rules>

</configuration>

</execution>

# Also modify janusgraph-cql/pom.xml:

<directory>${basedir}/src/test/resources</directory>

3.4 Build command

mvn clean install -Pjanusgraph-release -Dgpg.skip=true -DskipTests=true -Denforcer.skip=true --fail-at-end

After a successful build, the distribution zip janusgraph-0.3.1-hadoop2.zip can be found under janusgraph-dist/janusgraph-dist-hadoop-2/target/.

4. Replace Dependency JARs

As the hdfs user, unzip janusgraph-0.3.1-hadoop2.zip into the target directory.

4.1 Add new dependency JARs

$CDH_HOME/jars/hadoop-yarn-server-web-proxy-2.6.0-cdh5.11.1.jar

$CDH_HOME/jars/zkclient-0.10.jar

$SPARK_HOME/jars/spark-core_2.11-2.2.0.cloudera2.jar

$SPARK_HOME/jars/spark-network-common_2.11-2.2.0.cloudera2.jar

$SPARK_HOME/jars/spark-yarn_2.11-2.2.0.cloudera2.jar

Copy the JARs listed above into $JANUSGRAPH_HOME/lib/.

4.2 Remove old dependency JARs

$JANUSGRAPH_HOME/lib/spark-network-common_2.11-2.2.0.jar

$JANUSGRAPH_HOME/lib/spark-core_2.11-2.2.0.jar
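Steps 4.1 and 4.2 can be done in one pass. A sketch in which `swap_jars` is a hypothetical helper, demonstrated on a throwaway directory; the real arguments would be $JANUSGRAPH_HOME/lib plus the five CDH/Spark jars listed in 4.1:

```shell
# swap_jars LIBDIR NEWJAR...: remove the stock Spark 2.2.0 jars shipped
# with JanusGraph, then copy in the CDH-built replacements.
swap_jars() {
  local lib="$1"; shift
  rm -f "$lib"/spark-core_2.11-2.2.0.jar \
        "$lib"/spark-network-common_2.11-2.2.0.jar
  cp "$@" "$lib"/
}

# Demo on a throwaway directory; real paths come from $CDH_HOME/jars
# and $SPARK_HOME/jars as listed above.
mkdir -p /tmp/janus-lib
touch /tmp/janus-lib/spark-core_2.11-2.2.0.jar
touch /tmp/new-core.jar
swap_jars /tmp/janus-lib /tmp/new-core.jar
ls /tmp/janus-lib
```

Mixing the stock Spark 2.2.0 jars with the Cloudera builds is the usual cause of NoSuchMethodError at executor startup, which is why the old jars are deleted rather than merely shadowed.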

5. Edit Property Files

# Edit conf/janusgraph-hbase.properties

storage.backend=hbase
storage.hostname=Hbasetest1.com,Hbasetest2.com,Hbasetest3.com


# Edit conf/hadoop-graph/hadoop-load.properties

janusgraphmr.ioformat.conf.storage.backend=hbase
janusgraphmr.ioformat.conf.storage.hostname=Hbasetest1.com,Hbasetest2.com,Hbasetest3.com
spark.master=yarn
spark.executor.memory=1g
spark.executor.instances=2
spark.executor.cores=4
spark.serializer=org.apache.spark.serializer.KryoSerializer
spark.yarn.jars=/opt/cloudera/parcels/SPARK2/lib/spark2/jars/*

spark.yarn.am.extraJavaOptions=-Djava.library.path=/opt/cloudera/parcels/CDH/lib/hadoop/lib/native
spark.executor.extraJavaOptions=-Djava.library.path=/opt/cloudera/parcels/CDH/lib/hadoop/lib/native
spark.executor.extraClassPath=/home/we-op/janusgraph/lib/*:/etc/hadoop/conf:/etc/hbase/conf:/etc/spark2/conf
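A typo in any of these keys fails silently at runtime (the setting is simply ignored), so a quick sanity check on the file is worthwhile. A sketch; `require_keys` is a hypothetical helper, demonstrated on a throwaway file rather than the real conf/hadoop-graph/hadoop-load.properties:

```shell
# require_keys FILE KEY...: print any KEY that has no "KEY=value" line
# in FILE. Useful for catching misspelled property names before a run.
require_keys() {
  local f="$1" k; shift
  for k in "$@"; do
    grep -q "^$k=" "$f" || echo "MISSING: $k"
  done
}

# Demo on a throwaway file; point it at hadoop-load.properties in practice.
printf 'spark.master=yarn\nstorage.backend=hbase\n' > /tmp/demo.properties
require_keys /tmp/demo.properties spark.master spark.serializer
```

Against the real file you would list the keys used above, e.g. `janusgraphmr.ioformat.conf.storage.backend`, `spark.master`, `spark.serializer`, and `spark.yarn.jars`.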


# Edit conf/hadoop-graph/read-hbase.properties

janusgraphmr.ioformat.conf.storage.backend=hbase
janusgraphmr.ioformat.conf.storage.hostname=Hbasetest1.com,Hbasetest2.com,Hbasetest3.com
janusgraphmr.ioformat.conf.storage.hbase.table=janusgraph
spark.master=yarn
spark.serializer=org.apache.spark.serializer.KryoSerializer
spark.yarn.am.extraJavaOptions=-Djava.library.path=/opt/cloudera/parcels/CDH/lib/hadoop/lib/native
spark.executor.extraJavaOptions=-Djava.library.path=/opt/cloudera/parcels/CDH/lib/hadoop/lib/native
spark.executor.extraClassPath=/home/we-op/janusgraph/lib/*:/etc/hadoop/conf:/etc/hbase/conf:/etc/spark2/conf

6. Testing

6.1 Load test data:


gremlin> :load data/grateful-dead-janusgraph-schema.groovy // run this only once

==>true

==>true

gremlin> graph=GraphFactory.open("conf/gremlin-server/offline_hbase_all_in_one_20190614.properties")

==>standardjanusgraph[hbase:[Hbasetest1.com, Hbasetest2.com, Hbasetest3.com]]

gremlin> g=graph.traversal()

==>graphtraversalsource[standardjanusgraph[hbase:[Hbasetest1.com, Hbasetest2.com, Hbasetest3.com]], standard]

gremlin> defineGratefulDeadSchema(graph)

==>null

gremlin> g.addV("song").property('name','123')

==>v[4272]

gremlin> g.tx().commit()

==>null

gremlin> g.V().count()

16:02:16 WARN org.janusgraph.graphdb.transaction.StandardJanusGraphTx - Query requires iterating over all vertices [()]. For better performance, use indexes

==>1

gremlin> graph.close()

==>null

6.2 Run the read/write test:

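A typical OLAP read over conf/hadoop-graph/read-hbase.properties follows the standard TinkerPop SparkGraphComputer pattern; a sketch (the count depends on your data, and job progress appears in the YARN ResourceManager UI):

```groovy
gremlin> graph = GraphFactory.open('conf/hadoop-graph/read-hbase.properties')
gremlin> g = graph.traversal().withComputer(SparkGraphComputer)
gremlin> g.V().count()   // submitted as a Spark job on YARN
```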
