Spark Streaming can be integrated with Flume in two ways: push-based and pull-based. This document follows the official Spark documentation and is based on Spark 2.2.0 and Flume 1.6.0.
- Push-based
In this approach, the Flume agent pushes data to Spark Streaming, which receives it through a receiver. Because Flume pushes to the receiver, the Spark Streaming application must be started first, so the receiver is already listening when the Flume agent comes up. Note that the hostname and port configured on the avro sink must be the host and port where the Spark Streaming receiver actually runs (i.e., the arguments passed to the application below).
- Write the Flume agent configuration, flume_push_streaming.conf. The agent uses a netcat source, an avro sink, and a memory channel:
simple-agent.sources = netcat-source
simple-agent.sinks = avro-sink
simple-agent.channels = memory-channel
simple-agent.sources.netcat-source.type = netcat
simple-agent.sources.netcat-source.bind = hadoop000
simple-agent.sources.netcat-source.port = 44444
simple-agent.sinks.avro-sink.type = avro
simple-agent.sinks.avro-sink.hostname = 192.168.199.203
simple-agent.sinks.avro-sink.port = 41414
simple-agent.channels.memory-channel.type = memory
simple-agent.sources.netcat-source.channels = memory-channel
simple-agent.sinks.avro-sink.channel = memory-channel
- Write the Spark Streaming application
Add the following dependency for the Spark Streaming / Flume integration:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-flume_2.11</artifactId>
    <version>${spark.version}</version>
</dependency>
The core of the application looks like this:
import org.apache.spark.SparkConf
import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

if (args.length != 2) {
  System.err.println("Usage: FlumePushWordCount <hostname> <port>")
  System.exit(1)
}
val Array(hostname, port) = args

// appName and master are supplied via spark-submit, so they stay commented out here
val sparkConf = new SparkConf() //.setAppName("FlumePushWordCount").setMaster("local[2]")
val ssc = new StreamingContext(sparkConf, Seconds(5))

// The receiver listens on hostname:port for events pushed by Flume's avro sink
val flumeStream = FlumeUtils.createStream(ssc, hostname, port.toInt)
// Decode each event body and run a word count over 5-second batches
flumeStream.map(x => new String(x.event.getBody.array()).trim())
  .flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()

ssc.start()
ssc.awaitTermination()
- Running in production
The Spark Streaming / Flume integration jar is not packaged into the application jar, so when launching with spark-submit you need to add it via --packages org.apache.spark:spark-streaming-flume_2.11:2.2.0. The first run downloads the jar, which can be a bit slow; subsequent runs reuse the cached copy. The full command is:
spark-submit \
--class com.yxzc.FlumePushWordCount \
--master local[2] \
--packages org.apache.spark:spark-streaming-flume_2.11:2.2.0 \
/home/hadoop/lib/sparktrain-1.0.jar \
hadoop000 41414
If the production environment has no internet access, or the network is very slow, you can download the jar from the Maven repository in advance and pass it to spark-submit with --jars; I personally recommend this approach, as sketched below.
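A minimal sketch of the --jars variant. The jar path is an assumption; also note that spark-streaming-flume_2.11 has transitive dependencies (such as the Flume SDK), so this sketch uses the self-contained spark-streaming-flume-assembly_2.11 artifact to avoid listing each of them:
spark-submit \
--class com.yxzc.FlumePushWordCount \
--master local[2] \
--jars /home/hadoop/lib/spark-streaming-flume-assembly_2.11-2.2.0.jar \
/home/hadoop/lib/sparktrain-1.0.jar \
hadoop000 41414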
- Start Flume
The Flume agent was already written in the first step, so it only needs to be started:
flume-ng agent \
--name simple-agent \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/flume_push_streaming.conf \
-Dflume.root.logger=INFO,console
- Send data with telnet (telnet hadoop000 44444) and check the output of the Spark Streaming application, for example as shown below.
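For example (the input line is hypothetical):
telnet hadoop000 44444
hello spark hello flume
The Spark Streaming console should then print per-batch counts such as (hello,2), (spark,1), (flume,1).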
- Pull-based
In this approach, Flume pushes data into a custom sink, where it stays buffered; the Spark Streaming application then pulls the data from that sink. A transaction is only considered successful once the data has been received and stored with replication by Spark Streaming, which makes this approach more reliable and more fault-tolerant than the push-based one. Here Flume should be started first, and the Spark Streaming application afterwards.
- Write the Flume agent configuration, flume_pull_streaming.conf. The agent uses a netcat source, a SparkSink, and a memory channel:
simple-agent.sources = netcat-source
simple-agent.sinks = spark-sink
simple-agent.channels = memory-channel
simple-agent.sources.netcat-source.type = netcat
simple-agent.sources.netcat-source.bind = hadoop000
simple-agent.sources.netcat-source.port = 44444
simple-agent.sinks.spark-sink.type = org.apache.spark.streaming.flume.sink.SparkSink
simple-agent.sinks.spark-sink.hostname = hadoop000
simple-agent.sinks.spark-sink.port = 41414
simple-agent.channels.memory-channel.type = memory
simple-agent.sources.netcat-source.channels = memory-channel
simple-agent.sinks.spark-sink.channel = memory-channel
- Write the Spark Streaming application
In addition to the spark-streaming-flume_2.11 dependency introduced above, the pull-based approach requires the following artifacts. Because the custom SparkSink runs inside the Flume agent rather than inside Spark, these jars must also be available on the Flume agent's classpath; see the copy commands after this list:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-flume-sink_2.11</artifactId>
    <version>${spark.version}</version>
</dependency>
<dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-library</artifactId>
    <version>${scala.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.5</version>
</dependency>
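A sketch of putting these jars on the Flume classpath; the exact file names and versions depend on your build (scala-library 2.11.8 and the 2.2.0 sink jar are assumptions here):
cp spark-streaming-flume-sink_2.11-2.2.0.jar $FLUME_HOME/lib/
cp scala-library-2.11.8.jar $FLUME_HOME/lib/
cp commons-lang3-3.5.jar $FLUME_HOME/lib/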
The core of the application looks like this:
import org.apache.spark.SparkConf
import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

if (args.length != 2) {
  System.err.println("Usage: FlumePullWordCount <hostname> <port>")
  System.exit(1)
}
val Array(hostname, port) = args

// appName and master are supplied via spark-submit, so they stay commented out here
val sparkConf = new SparkConf() //.setAppName("FlumePullWordCount").setMaster("local[2]")
val ssc = new StreamingContext(sparkConf, Seconds(5))

// Pull events from the SparkSink running inside the Flume agent at hostname:port
val flumeStream = FlumeUtils.createPollingStream(ssc, hostname, port.toInt)
// Decode each event body and run a word count over 5-second batches
flumeStream.map(x => new String(x.event.getBody.array()).trim())
  .flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()

ssc.start()
ssc.awaitTermination()
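createPollingStream also has an overload that pulls from several SparkSinks at once, which is useful when multiple Flume agents each run their own sink. A sketch, where the second host name is an assumption:
import java.net.InetSocketAddress
import org.apache.spark.storage.StorageLevel

// Pull from two Flume agents, each running its own SparkSink on port 41414
val addresses = Seq(
  new InetSocketAddress("hadoop000", 41414),
  new InetSocketAddress("hadoop001", 41414)  // hypothetical second agent
)
val multiStream = FlumeUtils.createPollingStream(
  ssc, addresses, StorageLevel.MEMORY_AND_DISK_SER_2)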
- Start Flume
The Flume agent was already written in the first step, so it only needs to be started:
flume-ng agent \
--name simple-agent \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/flume_pull_streaming.conf \
-Dflume.root.logger=INFO,console
- Running in production
This works the same way as the push-based approach above; note that the application still uses the spark-streaming-flume_2.11 package (the sink jar is only needed on the Flume side). The full command is:
spark-submit \
--class com.yxzc.FlumePullWordCount \
--master local[2] \
--packages org.apache.spark:spark-streaming-flume_2.11:2.2.0 \
/home/hadoop/lib/sparktrain-1.0.jar \
hadoop000 41414
- Send data with telnet (telnet hadoop000 44444) and check the output of the Spark Streaming application
Spark Streaming is a stream-processing framework and Flume is a log-collection framework, and connecting them directly is actually problematic, especially when the volume of logs varies greatly between time periods: load balancing and throughput then place real demands and limits on the machines. For this reason, Kafka is usually placed between them as a buffering queue: data from Flume is first buffered in Kafka, and Spark Streaming consumes it from Kafka. A configuration sketch of the Flume side follows.
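A minimal sketch of that architecture on the Flume side, reusing the agent above but replacing the sink with Flume 1.6's Kafka sink; the broker address and topic name are assumptions:
simple-agent.sinks = kafka-sink
simple-agent.sinks.kafka-sink.type = org.apache.flume.sink.kafka.KafkaSink
simple-agent.sinks.kafka-sink.brokerList = hadoop000:9092
simple-agent.sinks.kafka-sink.topic = streaming-topic
simple-agent.sinks.kafka-sink.channel = memory-channel
The Spark Streaming side would then read from Kafka via the spark-streaming-kafka integration instead of FlumeUtils.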