
(8) Spark Streaming: Pulling Data from Flume with a Custom spark-sink

Author: 白面葫芦娃92 | Published 2018-11-23 23:06
  • Flume pushes data into the sink, and the data stays buffered.
  • Spark Streaming uses a reliable Flume receiver and transactions to pull data from the sink. Transactions succeed only after data is received and replicated by Spark Streaming.
  • This ensures stronger reliability and fault-tolerance guarantees than the previous approach.
    This approach offers stronger reliability.
    Start the Flume agent first, then the Spark application.
1. Develop the code in IDEA
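
To compile against the Flume integration, the project needs the spark-streaming-flume dependency; a minimal sbt line, with the coordinates matching the spark-submit command in step 4 (adapt for Maven if needed):

libraryDependencies += "org.apache.spark" % "spark-streaming-flume_2.11" % "2.3.1"
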
import org.apache.spark.SparkConf
import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingFlumeApp02 {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setMaster("local[2]").setAppName("StreamingFlumeApp02")
    val ssc = new StreamingContext(sparkConf, Seconds(10))

    // Pull events from the Flume SparkSink; the receiver commits the Flume
    // transaction only after the batch has been received and replicated.
    val lines = FlumeUtils.createPollingStream(ssc, "192.168.137.141", 41414)

    // SparkFlumeEvent ==> String: decode each event body, then count
    // comma-separated words within each 10-second batch.
    lines.map(x => new String(x.event.getBody.array()).trim)
      .flatMap(_.split(","))
      .map((_, 1))
      .reduceByKey(_ + _)
      .print()

    ssc.start()
    ssc.awaitTermination()
  }
}
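
FlumeUtils can also poll several sink agents at once with an explicit storage level, which is useful when more than one Flume agent runs a SparkSink. A minimal sketch, where hadoop002 is a hypothetical second host:

import java.net.InetSocketAddress
import org.apache.spark.storage.StorageLevel

// Poll two SparkSink agents in one stream (hadoop002 is a placeholder host).
val addresses = Seq(
  new InetSocketAddress("hadoop001", 41414),
  new InetSocketAddress("hadoop002", 41414))
val multiLines = FlumeUtils.createPollingStream(
  ssc, addresses, StorageLevel.MEMORY_AND_DISK_SER_2)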

2. Flume agent configuration file

spark-sink-agent.sources = netcat-source
spark-sink-agent.sinks = spark-sink
spark-sink-agent.channels = netcat-memory-channel

spark-sink-agent.sources.netcat-source.type = netcat
spark-sink-agent.sources.netcat-source.bind = hadoop001
spark-sink-agent.sources.netcat-source.port = 44444

spark-sink-agent.channels.netcat-memory-channel.type = memory

spark-sink-agent.sinks.spark-sink.type = org.apache.spark.streaming.flume.sink.SparkSink
spark-sink-agent.sinks.spark-sink.hostname = hadoop001
spark-sink-agent.sinks.spark-sink.port = 41414

spark-sink-agent.sources.netcat-source.channels = netcat-memory-channel
spark-sink-agent.sinks.spark-sink.channel = netcat-memory-channel
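
Note: the SparkSink class is not bundled with Flume. Per the Spark Streaming + Flume integration guide, the spark-streaming-flume-sink_2.11 JAR (together with matching scala-library and commons-lang3 JARs) must be placed on Flume's classpath, e.g. in $FLUME_HOME/lib, before the agent can instantiate org.apache.spark.streaming.flume.sink.SparkSink.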

3. Start Flume

[hadoop@hadoop001 bin]$ pwd
/home/hadoop/app/apache-flume-1.6.0-cdh5.7.0-bin/bin
[hadoop@hadoop001 bin]$ ./flume-ng agent \
> --name spark-sink-agent \
> --conf $FLUME_HOME/conf \
> --conf-file /home/hadoop/script/flume/streaming_pull_flume.conf \
> -Dflume.root.logger=INFO,console \
> -Dflume.monitoring.type=http \
> -Dflume.monitoring.port=34343
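
With HTTP monitoring enabled as above, the agent's channel and sink counters are served as JSON at http://hadoop001:34343/metrics while it runs.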

4. Package and submit to Spark

[hadoop@hadoop001 ~]$ cd $SPARK_HOME
[hadoop@hadoop001 spark-2.3.1-bin-2.6.0-cdh5.7.0]$ cd bin
[hadoop@hadoop001 bin]$ ./spark-submit --master local[2] \
> --packages org.apache.spark:spark-streaming-flume_2.11:2.3.1 \
> --class com.ruozedata.SparkStreaming.StreamingFlumeApp02 \
> /home/hadoop/lib/spark-train-1.0.jar
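
The --packages flag resolves spark-streaming-flume_2.11 from a Maven repository at submit time and adds it to the driver and executor classpaths, so the integration code does not need to be bundled into spark-train-1.0.jar.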

5. Send test data via telnet

[hadoop@hadoop001 lib]$ telnet hadoop001 44444
Trying 192.168.137.141...
Connected to hadoop001.
Escape character is '^]'.
spark,spark,spark
OK
huluwa,huluwa
OK

6. Result: the counts match the comma-separated words sent over telnet (spark three times, huluwa twice).

-------------------------------------------
Time: 1538148560000 ms
-------------------------------------------
(huluwa,2)
(spark,3)
