Original article: https://ci.apache.org/projects/flink/flink-docs-release-1.6/quickstart/setup_quickstart.html
Get a Flink example program up and running in just a few simple steps.
Setup: Download and Start Flink
Flink runs on Linux, Mac OS X, and Windows. The only requirement for running Flink is a working Java 8.x installation. Windows users should check the Flink on Windows guide, which describes how to set up and run Flink on Windows.
You can check your Java installation by issuing the following command:
java -version
If you have Java 8, the output will look something like this:
java version "1.8.0_111"
Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
Download and Unpack
- Download a binary from the downloads page. You can pick any Hadoop/Scala combination you like. If you plan to just use the local file system, any Hadoop version will work fine.
- Go to the download directory.
- Unpack the downloaded archive.
$ cd ~/Downloads # Go to download directory
$ tar xzf flink-*.tgz # Unpack the downloaded archive
$ cd flink-1.6.1
Start a Local Flink Cluster
$ ./bin/start-cluster.sh # Start Flink
Check the Dispatcher's web frontend at http://localhost:8081 and make sure everything is up and running. The web frontend should report a single available TaskManager instance.
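If you would rather check this from code than from the browser, the same information is exposed over the Dispatcher's REST interface on the same port. The following is only a minimal sketch and is not part of the original guide; it assumes a cluster-overview endpoint at http://localhost:8081/overview (the default in a local setup) and simply prints the JSON it returns:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal sketch (assumption): query the local Dispatcher's REST API and print the
// cluster overview, which, among other things, reports the registered TaskManagers.
public class ClusterOverviewCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8081/overview"); // assumed default host, port, and path
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // raw JSON response from the Dispatcher
            }
        } finally {
            conn.disconnect();
        }
    }
}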
You can also verify that the system is running by checking the log files in the logs directory:
$ tail log/flink-*-standalonesession-*.log
INFO ... - Rest endpoint listening at localhost:8081
INFO ... - http://localhost:8081 was granted leadership ...
INFO ... - Web frontend listening at http://localhost:8081.
INFO ... - Starting RPC endpoint for StandaloneResourceManager at akka://flink/user/resourcemanager .
INFO ... - Starting RPC endpoint for StandaloneDispatcher at akka://flink/user/dispatcher .
INFO ... - ResourceManager akka.tcp://flink@localhost:6123/user/resourcemanager was granted leadership ...
INFO ... - Starting the SlotManager.
INFO ... - Dispatcher akka.tcp://flink@localhost:6123/user/dispatcher was granted leadership ...
INFO ... - Recovering all persisted jobs.
INFO ... - Registering TaskManager ... under ... at the SlotManager.
Read the Code
You can find the complete source code for this SocketWindowWordCount example on GitHub, in both Scala and Java.
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

public class SocketWindowWordCount {

    public static void main(String[] args) throws Exception {

        // the port to connect to
        final int port;
        try {
            final ParameterTool params = ParameterTool.fromArgs(args);
            port = params.getInt("port");
        } catch (Exception e) {
            System.err.println("No port specified. Please run 'SocketWindowWordCount --port <port>'");
            return;
        }

        // get the execution environment
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // get input data by connecting to the socket
        DataStream<String> text = env.socketTextStream("localhost", port, "\n");

        // parse the data, group it, window it, and aggregate the counts
        DataStream<WordWithCount> windowCounts = text
            .flatMap(new FlatMapFunction<String, WordWithCount>() {
                @Override
                public void flatMap(String value, Collector<WordWithCount> out) {
                    for (String word : value.split("\\s")) {
                        out.collect(new WordWithCount(word, 1L));
                    }
                }
            })
            .keyBy("word")
            .timeWindow(Time.seconds(5), Time.seconds(1))
            .reduce(new ReduceFunction<WordWithCount>() {
                @Override
                public WordWithCount reduce(WordWithCount a, WordWithCount b) {
                    return new WordWithCount(a.word, a.count + b.count);
                }
            });

        // print the results with a single thread, rather than in parallel
        windowCounts.print().setParallelism(1);

        env.execute("Socket Window WordCount");
    }

    // Data type for words with count
    public static class WordWithCount {

        public String word;
        public long count;

        public WordWithCount() {}

        public WordWithCount(String word, long count) {
            this.word = word;
            this.count = count;
        }

        @Override
        public String toString() {
            return word + " : " + count;
        }
    }
}
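For comparison, the same pipeline can also be written with Java 8 lambdas. The snippet below is only a sketch and is not part of the original example; because a lambda erases the Collector's generic type, it adds an explicit output-type hint via returns(...) (this needs import org.apache.flink.api.common.typeinfo.TypeInformation in addition to the imports above):
// Sketch: lambda-based variant of the flatMap/reduce chain from the example above.
// The returns(...) call supplies the output type that is lost to type erasure when a
// lambda is used instead of an anonymous FlatMapFunction.
DataStream<WordWithCount> windowCounts = text
        .flatMap((String value, Collector<WordWithCount> out) -> {
            for (String word : value.split("\\s")) {
                out.collect(new WordWithCount(word, 1L));
            }
        })
        .returns(TypeInformation.of(WordWithCount.class))
        .keyBy("word")
        .timeWindow(Time.seconds(5), Time.seconds(1))
        .reduce((a, b) -> new WordWithCount(a.word, a.count + b.count));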
Run the Example
Now, we are going to run this Flink application. It will read text from a socket and print, for each distinct word, the number of occurrences seen during the previous 5 seconds; since the example uses a sliding window, a fresh result is emitted every second.
- First of all, we use netcat to start a local server via
$ nc -l 9000
- Submit the Flink program:
$ ./bin/flink run examples/streaming/SocketWindowWordCount.jar --port 9000
Starting execution of program
The program connects to the socket and waits for input. You can check the web interface to verify that the job is running as expected.
Words are counted in 5-second time windows that slide every second (processing time, sliding windows) and are printed to stdout. Monitor the TaskManager's output file and write some text in nc (input is sent to Flink line by line after pressing Enter):
$ nc -l 9000
lorem ipsum
ipsum ipsum ipsum
bye
The .out file will print the counts at the end of each time window:
$ tail -f log/flink-*-taskexecutor-*.out
lorem : 1
bye : 1
ipsum : 4
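A note on the output frequency: the example uses timeWindow(Time.seconds(5), Time.seconds(1)), a sliding window, so a count for each word is re-emitted every second while input keeps arriving. If you want exactly one line per word every 5 seconds instead, a tumbling window can be used; a minimal sketch of the changed call (an alternative, not what the packaged example jar does):
// Alternative (sketch): tumbling 5-second processing-time windows,
// which emit each word's count exactly once per window.
.keyBy("word")
.timeWindow(Time.seconds(5))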
You can stop Flink with the following command:
$ ./bin/stop-cluster.sh