Spark Streaming + Kafka + HBase Project

Author: 北邮郭大宝 | Published 2018-11-20 10:29

    While learning Spark Streaming, you may be missing a hands-on project to practice on. This small project with a realistic background ties together the Spark Streaming, HBase, and Kafka skills you have learned.

    1. Project Introduction

    1.1 Project Flow

    Spark Streaming reads a stream of JSON-formatted records from an upstream Kafka topic, cleans and filters the data within each batch, enriches it with supplementary data read from HBase, and writes the merged JSON strings to a downstream Kafka topic.

    1.2 Project Details

    • The upstream Kafka topic is kafka_streaming_topic, carrying a stream of JSON records such as {"id":"001","name":"郭大宝","subject":"语文","score":"60"}
    • Spark Streaming reads from Kafka, cleans the data, and keeps only records with score >= 60
    • Using id as the rowkey, student information is batch-fetched from HBase, e.g. {"id":"001","name":"郭大宝","sex":"男","age":"26"}
    • The two JSON objects are merged and written to the downstream topic hello_topic, as shown below
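
    For the sample record above, the output written to hello_topic would look like this (assuming the Output bean is simply the union of the six fields):
      {"id":"001","name":"郭大宝","subject":"语文","score":"60","sex":"男","age":"26"}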

    2. Environment Setup

    2.1 Component Installation

    First, install the necessary big-data components. The versions used here are:
    - Spark 2.1.2
    - Kafka 0.10.0.1
    - HBase 1.2.0
    - ZooKeeper 3.4.5

    2.2 Creating the HBase Table

    • In HBase, create the table student with one column family named cf (note the HBase shell command is create, not create table)
      create 'student','cf'
      
    • Insert two rows of data
      put 'student','001','cf:info','{"id":"001","name":"郭大宝","sex":"男","age":"26"}'
      put 'student','002','cf:info','{"id":"002","name":"郭星宇","sex":"男","age":"26"}'
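
    You can confirm that both rows landed with a scan from the HBase shell:
      scan 'student'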
      

    2.3 Creating the Kafka Topics

    • Create the two Kafka topics, kafka_streaming_topic and hello_topic
      kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic kafka_streaming_topic
      kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic hello_topic
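
    To double-check, list the topics (the ZooKeeper address matches the single-node setup above):
      kafka-topics.sh --list --zookeeper localhost:2181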
      

    3. Code

    3.1 Project Structure

    [Figure: project directory structure (streamingDemo_mulu.jpg)]

    A quick explanation:

    • Output, Score, and Student are the three Java Beans
    • MsgHandler implements the operations on the stream: JSON format check, required-field check, the score >= 60 filter, JSON-to-Bean conversion, Bean merging, and so on
    • ConfigManager reads the configuration parameters
    • conf.properties holds the configuration (a sketch follows this list)
    • StreamingDemo is the program's main entry point
    • HBaseUtils is the HBase utility class
    • StreamingDemoTest is the test class
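
    conf.properties itself is not reproduced in this post, but every key it needs can be read off the main function below. A minimal sketch, with placeholder values for a single-node setup:

    # Spark (note: the appName key is spelled "steaming" where StreamingDemo reads it)
    steaming.appName=StreamingDemo
    streaming.interval=5

    # Kafka
    bootstrap.servers=localhost:9092
    metadata.broker.list=localhost:9092
    group.id=streaming_demo_group
    auto.offset.reset=latest
    input.topics=kafka_streaming_topic
    output.topics=hello_topic

    # HBase
    hbase.tableName=student
    hbase.table.cf=cf
    hbase.table.column=info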

    3.2 Main Function

    Initialize Spark, read the configuration, and create the Kafka stream with KafkaUtils.createDirectStream.

    The stream processing then performs the following steps (a sketch of the MsgHandler helper appears right after this list; HBaseUtils is sketched after the main function):
    - Clean and filter the data, returning an RDD of (id, ScoreBean) pairs
    - Collect the ids into a list, batch-query HBase, and build an (id, studentJsonStr) map called resMap for O(1) lookups
    - For each record, look up its student JSON in resMap and merge the two beans into a new one
    - Serialize the merged Java Bean to a JSON string and write it to Kafka
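
    The MsgHandler used below is not reproduced in this post (the full version is in the GitHub repo linked at the end). A minimal sketch of what it might look like, assuming fastjson for parsing and getter/setter-style beans:

    package com.bupt.spark.Handler

    import com.alibaba.fastjson.JSON
    import com.bupt.spark.Bean.{Output, Score, Student}
    import com.bupt.spark.Utils.ConfigManager

    // Sketch only; it must be Serializable because StreamingDemo captures it in RDD closures.
    class MsgHandler extends Serializable {

      // true if msg is valid JSON, has all required fields, and score >= 60
      // (the real version presumably takes the required-field list from configManager)
      def cleanAndPickUpMsg(msg: String, configManager: ConfigManager): Boolean = {
        try {
          val obj = JSON.parseObject(msg)
          val required = Seq("id", "name", "subject", "score")
          required.forall(obj.containsKey) && obj.getString("score").toInt >= 60
        } catch {
          case _: Exception => false // not valid JSON, or score is not numeric
        }
      }

      // JSON string -> Score bean; null on failure (StreamingDemo checks for null)
      def getScoreBean(msg: String): Score =
        try { JSON.parseObject(msg, classOf[Score]) } catch { case _: Exception => null }

      // JSON string -> Student bean
      def getStudentBean(msg: String): Student =
        try { JSON.parseObject(msg, classOf[Student]) } catch { case _: Exception => null }

      // merge a Score bean and a Student bean into one Output bean
      def getOutputBean(score: Score, student: Student): Output = {
        if (score == null || student == null) return null
        val out = new Output()
        out.setId(score.getId)
        out.setName(score.getName)
        out.setSubject(score.getSubject)
        out.setScore(score.getScore)
        out.setSex(student.getSex)
        out.setAge(student.getAge)
        out
      }
    }

    The main function itself: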

    package com.bupt.spark.APP
    
    import java.util.Properties
    import com.alibaba.fastjson.serializer.SerializerFeature
    import com.alibaba.fastjson.{JSON, JSONObject, TypeReference}
    import com.bupt.Hbase.HBaseUtils
    import com.bupt.spark.Bean.{Output, Score}
    import com.bupt.spark.Handler.MsgHandler
    import com.bupt.spark.Utils.ConfigManager
    import org.apache.hadoop.hbase.util.Bytes
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
    import org.apache.spark.SparkConf
    import org.apache.kafka.common.serialization.{StringDeserializer, StringSerializer}
    import org.apache.spark.rdd.RDD
    import org.slf4j.LoggerFactory
    import org.apache.spark.streaming.kafka010._
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    
    /**
      * Created by guoxingyu on 2018/11/18.
      */
    object StreamingDemo {
    
      val LOG = LoggerFactory.getLogger(StreamingDemo.getClass)
    
      def main(args: Array[String]): Unit = {
        if (args.length != 1) {
          println("Usage: <properties>")
          LOG.error("properties file not exists")
          System.exit(1)
        }
    
        // init spark
        val configManager = new ConfigManager(args(0))
        val sparkConf = new SparkConf().setAppName(configManager.getProperty("steaming.appName")).setMaster("local[*]")
        val ssc = new StreamingContext(sparkConf,Seconds(configManager.getProperty("streaming.interval").toInt))
    
        // kafkaConsumerParams
        val kafkaConsumerParams = Map[String, Object](
          "bootstrap.servers" -> configManager.getProperty("bootstrap.servers"),
          "key.deserializer" -> classOf[StringDeserializer],
          "value.deserializer" -> classOf[StringDeserializer],
          "group.id" ->  configManager.getProperty("group.id"),
          "auto.offset.reset" -> configManager.getProperty("auto.offset.reset"),
          "enable.auto.commit" -> (false: java.lang.Boolean)
        )
    
        // kafkaProducerParams
        val props = new Properties()
        props.setProperty("metadata.broker.list",configManager.getProperty("metadata.broker.list"))
        props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,configManager.getProperty("bootstrap.servers"))
        props.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,classOf[StringSerializer].getName)
        props.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,classOf[StringSerializer].getName)
    
        val inputTopics = Array(configManager.getProperty("input.topics"))
        val outputTopics = configManager.getProperty("output.topics")
    
        // create stream
        val stream = KafkaUtils.createDirectStream[String, String](
          ssc,
          PreferConsistent,
          Subscribe[String, String](inputTopics, kafkaConsumerParams)
        )
    
        // stream process
        stream.foreachRDD(rdd => {
          val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
          if (!rdd.isEmpty()) {
            // clean and pick up msg
            val msgHandler = new MsgHandler() // lower-case val so it doesn't shadow the class name
            val cleanStreamRDD: RDD[(String, Score)] = rdd.mapPartitions(iter => {
              iter.map(line => {
                if (msgHandler.cleanAndPickUpMsg(line.value(), configManager)) {
                  val scoreInfo = msgHandler.getScoreBean(line.value()) // json to java bean
                  if (scoreInfo != null) {
                    (scoreInfo.getId,scoreInfo) // return (id,score bean)
                  } else {
                    null
                  }
                } else {
                  null
                }
              })
            }).filter(_ != null)
    
            // query from hbase, merge json, write into kafka
            cleanStreamRDD.foreachPartition(iter => {
              val lst = iter.toList
          if (lst.nonEmpty) {
                val rowkeys = lst.map(_._1).toSet.toList // get rowkey list
    
            if (rowkeys.nonEmpty) {
                  val res = HBaseUtils.multipleGet(configManager.getProperty("hbase.tableName"),rowkeys).filter(f=> { // get jsonStr from hbase
                    !f.isEmpty
                  })
                  val resMap = res.map(f=> {
                    (Bytes.toString(f.getRow),Bytes.toString(f.getValue(Bytes.toBytes(configManager.getProperty("hbase.table.cf"))
                      ,Bytes.toBytes(configManager.getProperty("hbase.table.column")))))
                  }).toMap // get result map
    
              // create one producer per partition instead of one per record
              val producer = new KafkaProducer[String,String](props)
              lst.foreach(line => {
                if (resMap.contains(line._1)) { // Map.get returns an Option, never null, so test membership instead
                  val studentJsonStr = resMap(line._1)
                  val studentInfo = msgHandler.getStudentBean(studentJsonStr)  // get student bean
                  val outputInfo: Output = msgHandler.getOutputBean(line._2,studentInfo) // merge the two beans

                  if (outputInfo != null) {
                    val outputJsonStr: String = JSON.toJSONString(outputInfo, SerializerFeature.WriteNullStringAsEmpty)
                    println(outputJsonStr)
                    producer.send(new ProducerRecord(outputTopics,"key",outputJsonStr))  // write into kafka
                  }
                }
              })
              producer.close()
                }
              }
            })
          }
          stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
        })
        ssc.start()
        ssc.awaitTermination()
      }
    }
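
    HBaseUtils.multipleGet is also left out of the post; a minimal sketch against the HBase 1.2 client API, assuming hbase-site.xml (with the ZooKeeper quorum) is on the classpath:

    package com.bupt.Hbase

    import scala.collection.JavaConverters._
    import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
    import org.apache.hadoop.hbase.client.{Connection, ConnectionFactory, Get, Result}
    import org.apache.hadoop.hbase.util.Bytes

    object HBaseUtils {

      // Connection is heavyweight and thread-safe: create it lazily, once per executor JVM
      lazy val connection: Connection = ConnectionFactory.createConnection(HBaseConfiguration.create())

      // batch-get a list of rowkeys; returns one Result per rowkey
      // (empty Results are filtered out by the caller in StreamingDemo)
      def multipleGet(tableName: String, rowkeys: List[String]): List[Result] = {
        val table = connection.getTable(TableName.valueOf(tableName))
        try {
          val gets = rowkeys.map(rk => new Get(Bytes.toBytes(rk))).asJava
          table.get(gets).toList
        } finally {
          table.close()
        }
      }
    }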
    

    4. Verifying the Result

    • Open a Kafka producer shell and write data to kafka_streaming_topic
      [Figure: producer shell writing sample records to kafka_streaming_topic (streamingDemo_inputTopic.jpg)]
    • Open a Kafka consumer shell and consume hello_topic
      [Figure: consumer shell showing the merged records on hello_topic (streamingDemo_outputTopic.jpg)]
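
    The two shells can be started like this (addresses match the single-node setup above; on Kafka 0.10 the console consumer can still read via ZooKeeper):
      kafka-console-producer.sh --broker-list localhost:9092 --topic kafka_streaming_topic
      kafka-console-consumer.sh --zookeeper localhost:2181 --topic hello_topic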

    5. Summary

    I hope this small project helps you master the basic Spark Streaming operations: reading from and writing to Kafka, querying HBase, and working with DStreams. The full code is at https://github.com/tygxy/StreamingDemo
