This section explains how to use Flink in local mode to implement a batch word-frequency count.
1 System, software, and prerequisites
- CentOS 7 64-bit workstation. The author's machine IP is 192.168.100.141; readers should adjust this for their own environment.
- IDEA 2018.1
2 Steps
- 1 Create a Maven project in IDEA
- 2 Edit the dependencies in the project's pom.xml. Note that the Spark, Scala, and Storm dependencies below are not required for this Flink example; only the four Flink artifacts are strictly needed.
<dependencies>
<dependency>
<!-- Spark dependency (not needed for this example) -->
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.11</artifactId>
<version>2.2.0</version>
</dependency>
<!-- Scala dependency (not needed for this example) -->
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>2.11.8</version>
</dependency>
<!-- Storm dependency (not needed for this example) -->
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-core</artifactId>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>log4j-over-slf4j</artifactId>
</exclusion>
</exclusions>
<version>1.2.1</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.flink/flink-core -->
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-core</artifactId>
<version>1.5.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-java</artifactId>
<version>1.5.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-java_2.11</artifactId>
<version>1.5.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-clients_2.11</artifactId>
<version>1.5.1</version>
</dependency>
</dependencies>
- 3 Add FlinkBatchDemo.java under src/main/java
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.operators.DataSource;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;
public class FlinkBatchDemo {
public static void main(String[] args) throws Exception {
//make sure the text file info.txt exists on drive D, with words separated by spaces
String filePath = "D:\\info.txt";
//set up the batch execution environment
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
//read the input file as a DataSet of text lines
DataSource<String> text = env.readTextFile(filePath);
text.flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
@Override
public void flatMap(String value, Collector<Tuple2<String, Integer>> collector) throws Exception {
String[] tokens = value.toLowerCase().split(" ");
for (String token : tokens) {
if (token.length() > 0) {
collector.collect(new Tuple2<>(token, 1));
}
}
}
}).groupBy(0).sum(1).print();
}
}
- 4 Test
Run the main method above to see the word-count results. In this example, a text file simulates the batch data source.
This completes the batch word-frequency count with Flink.
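To make the expected result concrete, the counting semantics of the Flink pipeline (lower-case each line, split on spaces, drop empty tokens, sum a count of 1 per word) can be sketched in plain Java without Flink. The class and method names here are illustrative only, not part of the tutorial's code:

```java
import java.util.HashMap;
import java.util.Map;

public class WordCountSketch {
    // Mirrors the flatMap + groupBy(0) + sum(1) pipeline for a single line:
    // lower-case, split on spaces, skip empty tokens, count occurrences.
    public static Map<String, Integer> count(String line) {
        Map<String, Integer> counts = new HashMap<>();
        for (String token : line.toLowerCase().split(" ")) {
            if (token.length() > 0) {
                counts.merge(token, 1, Integer::sum); // same effect as sum(1) per group
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        // "Hello" and "hello" are counted together because of toLowerCase()
        System.out.println(count("Hello world hello"));
    }
}
```

So for an info.txt containing the line "Hello world hello", the Flink job's print() would emit tuples with hello counted twice and world once, e.g. (hello,2) and (world,1).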