Integrating Phoenix with Spark to Complete the Big-Data Compute Stack


Author: 曹振华 | Published 2016-07-24 11:28

    This article walks through integrating Phoenix with Spark, with Phoenix serving as the storage layer and Spark as the compute layer. The combination plays to the strengths of both: Phoenix's fast queries and Spark's fast computation.
    Phoenix tables are exposed to Spark as RDDs or DataFrames, and the results of the computation are written back into Phoenix.
    This also broadens the range of scenarios where each system can be used.
    Let's set up the integration.
    Versions used:
    Phoenix 4.4.0
    HBase 0.98
    Spark spark-1.5.2-bin-hadoop2.6
    First, configure SPARK_CLASSPATH
    For Spark to work with Phoenix it needs to find the Phoenix classes on its classpath, so add the client jars to SPARK_CLASSPATH:
    export SPARK_CLASSPATH=$SPARK_CLASSPATH:/home/hadoop/phoenix/phoenix-spark-4.4.0-HBase-0.98-tests.jar
    export SPARK_CLASSPATH=$SPARK_CLASSPATH:/home/hadoop/phoenix/phoenix-4.4.0-HBase-0.98-client.jar
    export SPARK_CLASSPATH=$SPARK_CLASSPATH:/home/hadoop/phoenix/phoenix-server-client-4.4.0-HBase-0.98.jar
    With that in place, you can work with Phoenix from spark-shell (simple enough).
    Now let's put the two together in a short experiment:
    1> Create a couple of tables in Phoenix
    [hadoop@10.10.113.45 ~/phoenix/bin]$>./sqlline.py 10.10.113.45:2181
    0: jdbc:phoenix:10.10.113.45:2181> CREATE TABLE EMAIL_ENRON(
    . . . . . . . . . . . . . . . . .> MAIL_FROM BIGINT NOT NULL,
    . . . . . . . . . . . . . . . . .> MAIL_TO BIGINT NOT NULL
    . . . . . . . . . . . . . . . . .> CONSTRAINT pk PRIMARY KEY(MAIL_FROM, MAIL_TO));
    0: jdbc:phoenix:10.10.113.45:2181> CREATE TABLE EMAIL_ENRON_PAGERANK(
    . . . . . . . . . . . . . . . . .> ID BIGINT NOT NULL,
    . . . . . . . . . . . . . . . . .> RANK DOUBLE
    . . . . . . . . . . . . . . . . .> CONSTRAINT pk PRIMARY KEY(ID));
    No rows affected (0.52 seconds)
    Check that they were created:
    0: jdbc:phoenix:10.10.113.45:2181> !tables
    +------------+--------------+-----------------------+---------------+
    | TABLE_CAT  | TABLE_SCHEM  | TABLE_NAME            | TABLE_TYPE    |
    +------------+--------------+-----------------------+---------------+
    |            | SYSTEM       | CATALOG               | SYSTEM TABLE  |
    |            | SYSTEM       | FUNCTION              | SYSTEM TABLE  |
    |            | SYSTEM       | SEQUENCE              | SYSTEM TABLE  |
    |            | SYSTEM       | STATS                 | SYSTEM TABLE  |
    |            |              | EMAIL_ENRON           | TABLE         |
    |            |              | EMAIL_ENRON_PAGERANK  | TABLE         |
    +------------+--------------+-----------------------+---------------+
    0: jdbc:phoenix:10.10.113.45:2181>
    2> Next, load the data into Phoenix (about 400,000 rows in total).
    [hadoop@10.10.113.45 ~/phoenix/bin]$>./psql.py -t EMAIL_ENRON 10.10.113.45:2181 /home/hadoop/sfs/enron.csv
    SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
    SLF4J: Defaulting to no-operation (NOP) logger implementation
    SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
    15/12/03 10:06:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    csv columns from database.
    CSV Upsert complete. 367662 rows upserted
    Time: 21.783 sec(s)
    Data source: https://snap.stanford.edu/data/email-Enron.html
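    Instead of psql.py, the same edge list could also be parsed in Spark and written through the phoenix-spark integration. The following is only a sketch, run from spark-shell (where sc is available); it assumes the raw SNAP file is a tab-separated edge list with '#' comment lines sitting at a hypothetical local path:

    import org.apache.phoenix.spark._

    // Hypothetical path to the downloaded SNAP file; adjust to your environment.
    val raw = sc.textFile("/home/hadoop/sfs/Email-Enron.txt")

    // Drop '#' comment lines and turn each "from <tab> to" pair into a tuple of Longs.
    val edges = raw.filter(line => !line.startsWith("#")).map { line =>
      val parts = line.split("\\s+")
      (parts(0).toLong, parts(1).toLong)
    }

    // Write the tuples into the EMAIL_ENRON table created above.
    edges.saveToPhoenix("EMAIL_ENRON", Seq("MAIL_FROM", "MAIL_TO"), zkUrl = Some("10.10.113.45"))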
    Then run a quick count:
    0: jdbc:phoenix:10.10.113.45:2181> select count(*) from EMAIL_ENRON;
    +-----------+
    | COUNT(1)  |
    +-----------+
    | 367662    |
    +-----------+
    1 row selected (0.289 seconds)
    Counting 370,000-odd rows takes well under a second!
    Next, move into the interactive spark-shell and work through a PageRank example.
    [hadoop@10.10.113.45 ~/spark/bin]$>./spark-shell
    scala> import org.apache.spark.graphx._
    import org.apache.spark.graphx._
    scala> import org.apache.phoenix.spark._
    import org.apache.phoenix.spark._
    scala> val rdd = sc.phoenixTableAsRDD("EMAIL_ENRON", Seq("MAIL_FROM", "MAIL_TO"), zkUrl=Some("10.10.113.45"))
    rdd: org.apache.spark.rdd.RDD[Map[String,AnyRef]] = MapPartitionsRDD[2] at map at SparkContextFunctions.scala:39

    scala> val rawEdges = rdd.map{ e => (e("MAIL_FROM").asInstanceOf[VertexId], e("MAIL_TO").asInstanceOf[VertexId]) }
    rawEdges: org.apache.spark.rdd.RDD[(org.apache.spark.graphx.VertexId, org.apache.spark.graphx.VertexId)] = MapPartitionsRDD[3] at map at <console>:29

    scala> val graph = Graph.fromEdgeTuples(rawEdges, 1.0)
    graph: org.apache.spark.graphx.Graph[Double,Int] = org.apache.spark.graphx.impl.GraphImpl@621bb3c3

    scala> val pr = graph.pageRank(0.001)
    pr: org.apache.spark.graphx.Graph[Double,Double] = org.apache.spark.graphx.impl.GraphImpl@55e444b1
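    Before writing anything back, it can be handy to peek at the highest-ranked vertices directly in Spark; a purely illustrative check in the same shell session:

    // pr.vertices is an RDD of (vertexId, rank) pairs; sort by rank descending and print the top five.
    pr.vertices.sortBy(_._2, ascending = false).take(5).foreach(println)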
    scala> pr.vertices.saveToPhoenix("EMAIL_ENRON_PAGERANK", Seq("ID", "RANK"), zkUrl = Some("10.10.113.45"))
    (This step is memory-hungry; if it fails with an OOM, increase the Spark executor and driver memory, for example with the --executor-memory and --driver-memory options when launching spark-shell.)

    Now let's go back to Phoenix and check the results.
    0: jdbc:phoenix:10.10.113.45:2181> select count(*) from EMAIL_ENRON_PAGERANK;
    +-----------+
    | COUNT(1)  |
    +-----------+
    | 29000     |
    +-----------+
    1 row selected (0.113 seconds)
    0: jdbc:phoenix:10.10.113.45:2181> SELECT * FROM EMAIL_ENRON_PAGERANK ORDER BY RANK DESC LIMIT 5;
    +------+---------------------+
    | ID   | RANK                |
    +------+---------------------+
    | 273  | 117.18141799210386  |
    | 140  | 108.63091596789913  |
    | 458  | 107.2728800448782   |
    | 588  | 106.11840798585399  |
    | 566  | 105.13932886531066  |
    +------+---------------------+
    5 rows selected (0.568 seconds)
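    The results can also be pulled back into Spark as a DataFrame rather than an RDD. A minimal sketch, assuming the phoenixTableAsDataFrame helper that the phoenix-spark module documents for this version (spark-shell already provides sqlContext):

    import org.apache.phoenix.spark._

    // Load the PageRank results table as a DataFrame and show the highest-ranked IDs.
    val prDf = sqlContext.phoenixTableAsDataFrame(
      "EMAIL_ENRON_PAGERANK", Seq("ID", "RANK"), zkUrl = Some("10.10.113.45"))
    prDf.orderBy(prDf("RANK").desc).show(5)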

    Original author: 头条号 / 数据库那些事
    Link: http://toutiao.com/i6223959691949507074/
    Source: 头条号 (Toutiao's creator platform)
