
Apache Flink Source Code Analysis (1): Compiling the Flink Source

Author: 懂码哥 | Published 2021-01-20 10:41

    1. Pull the Flink source from GitHub (this article uses the 1.11 release)

    git clone https://github.com/apache/flink.git
    cd flink
    git checkout -b release-1.11 origin/release-1.11
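
    Before kicking off a long build, it is worth sanity-checking the toolchain. Flink 1.11 targets Java 8, and the Flink build docs recommend Maven 3.2.5 because Maven 3.3+ does not shade certain dependencies correctly. A quick check:

    # Verify the JDK and Maven versions before starting the build
    java -version
    mvn -version
    # Confirm we are on the expected release branch (should print release-1.11)
    git rev-parse --abbrev-ref HEAD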
    

    2. Configure the Maven and Node mirrors (important)

    Configure the Huawei Cloud mirror in settings.xml (the <mirror> element belongs inside the <mirrors> section of ~/.m2/settings.xml):

    <mirror>
        <id>huaweicloud</id>
        <mirrorOf>*</mirrorOf>
        <url>https://mirrors.huaweicloud.com/repository/maven/</url>
    </mirror>
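
    After editing settings.xml, a quick way to confirm Maven actually picks up the mirror is the standard maven-help-plugin (purely a convenience check):

    # Dump the effective settings and look for the huaweicloud mirror entry
    mvn help:effective-settings | grep -A 2 huaweicloud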
    

    Huawei Cloud mirror for Node

    Configuring npm to use the mirror

    • 1. One-off use
      npm --registry https://repo.huaweicloud.com/repository/npm/ install express
      
    • 2. Persistent configuration
      npm config set registry https://repo.huaweicloud.com/repository/npm/
      
      • After configuring, verify that it took effect:
        npm config get registry

      • npm info express

    3. Build the web-dashboard in the flink-runtime-web module

    flink-1.11.0/flink-runtime-web/pom.xml

    • Comment out the com.github.eirslett frontend-maven-plugin:
    <!--            <plugin>-->
    <!--                <groupId>com.github.eirslett</groupId>-->
    <!--                <artifactId>frontend-maven-plugin</artifactId>-->
    <!--                <version>1.6</version>-->
    <!--                <executions>-->
    <!--                    <execution>-->
    <!--                        <id>install node and npm</id>-->
    <!--                        <goals>-->
    <!--                            <goal>install-node-and-npm</goal>-->
    <!--                        </goals>-->
    <!--                        <configuration>-->
    <!--                            <nodeVersion>v10.9.0</nodeVersion>-->
    <!--                        </configuration>-->
    <!--                    </execution>-->
    <!--                    <execution>-->
    <!--                        <id>npm install</id>-->
    <!--                        <goals>-->
    <!--                            <goal>npm</goal>-->
    <!--                        </goals>-->
    <!--                        <configuration>-->
    <!--                            <arguments>ci &#45;&#45;cache-max=0 &#45;&#45;no-save</arguments>-->
    <!--                            <environmentVariables>-->
    <!--                                <HUSKY_SKIP_INSTALL>true</HUSKY_SKIP_INSTALL>-->
    <!--                            </environmentVariables>-->
    <!--                        </configuration>-->
    <!--                    </execution>-->
    <!--                    <execution>-->
    <!--                        <id>npm run build</id>-->
    <!--                        <goals>-->
    <!--                            <goal>npm</goal>-->
    <!--                        </goals>-->
    <!--                        <configuration>-->
    <!--                            <arguments>run build</arguments>-->
    <!--                        </configuration>-->
    <!--                    </execution>-->
    <!--                </executions>-->
    <!--                <configuration>-->
    <!--                    <workingDirectory>web-dashboard</workingDirectory>-->
    <!--                </configuration>-->
    <!--            </plugin>-->
    

    Change into the flink-runtime-web/web-dashboard directory

    • Building through the Maven plugin can fail, so I build the web project ahead of time (see the toolchain check after these commands):
    npm install
    
    npm run build
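
    If either npm command fails, first check that the local Node toolchain roughly matches what the commented-out plugin pinned (Node v10.9.0). The "web" output directory below is an assumption based on the dashboard's Angular configuration; verify against angular.json if your checkout differs:

    # The Maven plugin pinned Node v10.9.0, so a Node 10.x install is the
    # safest choice when building the dashboard by hand
    node -v
    npm -v
    # After "npm run build" succeeds, the compiled dashboard assets should
    # exist (assumed output directory; check angular.json)
    ls web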
    

    4. Build commands

    These commands build against the Hadoop version declared in the POM. In practice we usually need a specific Hadoop version, so the defaults alone are rarely enough (see the -Dhadoop.version option below).

    # Remove any existing build output, build the Flink binaries, and
    # install them into the local Maven repository (~/.m2/repository by default)
    mvn clean install -DskipTests

    # An alternative build command; compared with the one above, the main
    # difference is that it skips tests, QA plugins, and JavaDocs,
    # which makes the build faster
    mvn clean install -DskipTests -Dfast
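
    As an optional extra speed-up, Maven's standard parallel-build flag can be combined with either command on multi-core machines. Treat this as a judgment call rather than an official recommendation, since not every plugin in a reactor this large is guaranteed to be thread-safe:

    # Build with one thread per CPU core; drop -T if a module fails
    # in a way that does not reproduce in a serial build
    mvn clean install -DskipTests -Dfast -T 1C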
    

    In some cases we may not want to install the built Flink binaries into the local Maven repository; in that case, use the following commands instead:

    # -Dmaven.test.skip=true skips compiling tests as well as running them
    mvn clean package -Dmaven.test.skip=true
    # Remove any existing build output and build the Flink binaries
    mvn clean package -DskipTests
    # An alternative build command; compared with the one above, the main
    # difference is that it skips tests, QA plugins, and JavaDocs,
    # which makes the build faster
    mvn clean package -DskipTests -Dfast
    

    If you need to build against a specific Hadoop version, set it with "-Dhadoop.version"; for example:

    mvn clean install -DskipTests -Dhadoop.version=2.6.0
    # or
    mvn clean package -DskipTests -Dhadoop.version=2.6.1
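
    To double-check which Hadoop version the build will actually resolve, the maven-help-plugin can evaluate the property. This is a convenience sketch; -q together with -DforceStdout needs a reasonably recent maven-help-plugin:

    # Print the resolved hadoop.version for the current command line
    mvn help:evaluate -Dexpression=hadoop.version -q -DforceStdout -Dhadoop.version=2.6.0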
    

    Skipping test code, JavaDocs, and the checkstyle checks during the Maven build saves a good deal of time:

    -Dmaven.javadoc.skip=true -Dcheckstyle.skip=true
    
    The official command I finally settled on is (just be patient):
    mvn clean install -DskipTests
    

    After a successful build, the complete Flink binary distribution is produced under flink-dist/target/ in the source tree:
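
    A quick way to confirm the distribution is usable is to list it and start a local cluster from it. The exact directory name depends on the version being built (1.11-SNAPSHOT here), so treat these paths as a sketch:

    # The assembled distribution (directory name varies with the version)
    ls flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT/
    # Smoke test: start a local standalone cluster, check the web UI at
    # http://localhost:8081, then shut it down again
    cd flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
    ./bin/start-cluster.sh
    ./bin/stop-cluster.sh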

    5. The final successful build

    [INFO] ------------------------------------------------------------------------
    [INFO] Reactor Summary for flink 1.11-SNAPSHOT:
    [INFO] 
    [INFO] force-shading ...................................... SUCCESS [  2.608 s]
    [INFO] flink .............................................. SUCCESS [ 18.285 s]
    [INFO] flink-annotations .................................. SUCCESS [  3.696 s]
    [INFO] flink-test-utils-parent ............................ SUCCESS [  0.567 s]
    [INFO] flink-test-utils-junit ............................. SUCCESS [  2.263 s]
    [INFO] flink-metrics ...................................... SUCCESS [  0.196 s]
    [INFO] flink-metrics-core ................................. SUCCESS [  2.706 s]
    [INFO] flink-core ......................................... SUCCESS [ 37.144 s]
    [INFO] flink-java ......................................... SUCCESS [  6.778 s]
    [INFO] flink-queryable-state .............................. SUCCESS [  0.106 s]
    [INFO] flink-queryable-state-client-java .................. SUCCESS [  1.149 s]
    [INFO] flink-filesystems .................................. SUCCESS [  0.144 s]
    [INFO] flink-hadoop-fs .................................... SUCCESS [  2.008 s]
    [INFO] flink-runtime ...................................... SUCCESS [06:00 min]
    [INFO] flink-scala ........................................ SUCCESS [ 39.403 s]
    [INFO] flink-mapr-fs ...................................... SUCCESS [  0.697 s]
    [INFO] flink-filesystems :: flink-fs-hadoop-shaded ........ SUCCESS [  3.671 s]
    [INFO] flink-s3-fs-base ................................... SUCCESS [  1.377 s]
    [INFO] flink-s3-fs-hadoop ................................. SUCCESS [  4.785 s]
    [INFO] flink-s3-fs-presto ................................. SUCCESS [  7.042 s]
    [INFO] flink-swift-fs-hadoop .............................. SUCCESS [ 18.866 s]
    [INFO] flink-oss-fs-hadoop ................................ SUCCESS [  4.545 s]
    [INFO] flink-azure-fs-hadoop .............................. SUCCESS [  7.136 s]
    [INFO] flink-optimizer .................................... SUCCESS [  4.514 s]
    [INFO] flink-streaming-java ............................... SUCCESS [ 25.042 s]
    [INFO] flink-clients ...................................... SUCCESS [  2.882 s]
    [INFO] flink-test-utils ................................... SUCCESS [  1.383 s]
    [INFO] flink-runtime-web .................................. SUCCESS [  4.296 s]
    [INFO] flink-examples ..................................... SUCCESS [  0.197 s]
    [INFO] flink-examples-batch ............................... SUCCESS [ 14.272 s]
    [INFO] flink-connectors ................................... SUCCESS [  0.266 s]
    [INFO] flink-hadoop-compatibility ......................... SUCCESS [  9.601 s]
    [INFO] flink-state-backends ............................... SUCCESS [  0.136 s]
    [INFO] flink-statebackend-rocksdb ......................... SUCCESS [  2.279 s]
    [INFO] flink-tests ........................................ SUCCESS [ 39.879 s]
    [INFO] flink-streaming-scala .............................. SUCCESS [ 35.098 s]
    [INFO] flink-hcatalog ..................................... SUCCESS [  4.735 s]
    [INFO] flink-table ........................................ SUCCESS [  0.163 s]
    [INFO] flink-table-common ................................. SUCCESS [  6.135 s]
    [INFO] flink-table-api-java ............................... SUCCESS [  3.459 s]
    [INFO] flink-table-api-java-bridge ........................ SUCCESS [  1.391 s]
    [INFO] flink-table-api-scala .............................. SUCCESS [  9.354 s]
    [INFO] flink-table-api-scala-bridge ....................... SUCCESS [  7.310 s]
    [INFO] flink-sql-parser ................................... SUCCESS [ 42.203 s]
    [INFO] flink-libraries .................................... SUCCESS [  0.101 s]
    [INFO] flink-cep .......................................... SUCCESS [  3.480 s]
    [INFO] flink-table-planner ................................ SUCCESS [01:52 min]
    [INFO] flink-sql-parser-hive .............................. SUCCESS [  2.390 s]
    [INFO] flink-table-runtime-blink .......................... SUCCESS [  7.284 s]
    [INFO] flink-table-planner-blink .......................... SUCCESS [02:28 min]
    [INFO] flink-metrics-jmx .................................. SUCCESS [  0.441 s]
    [INFO] flink-formats ...................................... SUCCESS [  0.083 s]
    [INFO] flink-json ......................................... SUCCESS [  1.064 s]
    [INFO] flink-connector-kafka-base ......................... SUCCESS [  2.721 s]
    [INFO] flink-avro ......................................... SUCCESS [  7.381 s]
    [INFO] flink-csv .......................................... SUCCESS [  1.063 s]
    [INFO] flink-connector-kafka-0.10 ......................... SUCCESS [  1.605 s]
    [INFO] flink-connector-kafka-0.11 ......................... SUCCESS [  1.236 s]
    [INFO] flink-connector-elasticsearch-base ................. SUCCESS [  1.893 s]
    [INFO] flink-connector-elasticsearch5 ..................... SUCCESS [ 10.351 s]
    [INFO] flink-connector-elasticsearch6 ..................... SUCCESS [  2.125 s]
    [INFO] flink-connector-elasticsearch7 ..................... SUCCESS [  1.537 s]
    [INFO] flink-connector-hbase .............................. SUCCESS [  3.130 s]
    [INFO] flink-hadoop-bulk .................................. SUCCESS [  0.556 s]
    [INFO] flink-orc .......................................... SUCCESS [  1.425 s]
    [INFO] flink-orc-nohive ................................... SUCCESS [  0.758 s]
    [INFO] flink-parquet ...................................... SUCCESS [  1.649 s]
    [INFO] flink-connector-hive ............................... SUCCESS [  5.703 s]
    [INFO] flink-connector-jdbc ............................... SUCCESS [  1.966 s]
    [INFO] flink-connector-rabbitmq ........................... SUCCESS [  0.565 s]
    [INFO] flink-connector-twitter ............................ SUCCESS [  1.494 s]
    [INFO] flink-connector-nifi ............................... SUCCESS [  0.589 s]
    [INFO] flink-connector-cassandra .......................... SUCCESS [  2.988 s]
    [INFO] flink-connector-filesystem ......................... SUCCESS [  0.911 s]
    [INFO] flink-connector-kafka .............................. SUCCESS [  1.507 s]
    [INFO] flink-connector-gcp-pubsub ......................... SUCCESS [  1.894 s]
    [INFO] flink-connector-kinesis ............................ SUCCESS [  7.093 s]
    [INFO] flink-sql-connector-elasticsearch7 ................. SUCCESS [  5.767 s]
    [INFO] flink-connector-base ............................... SUCCESS [  0.754 s]
    [INFO] flink-sql-connector-elasticsearch6 ................. SUCCESS [  4.672 s]
    [INFO] flink-sql-connector-kafka-0.10 ..................... SUCCESS [  0.418 s]
    [INFO] flink-sql-connector-kafka-0.11 ..................... SUCCESS [  0.499 s]
    [INFO] flink-sql-connector-kafka .......................... SUCCESS [  0.907 s]
    [INFO] flink-sql-connector-hive-1.2.2 ..................... SUCCESS [  3.881 s]
    [INFO] flink-sql-connector-hive-2.2.0 ..................... SUCCESS [  4.541 s]
    [INFO] flink-sql-connector-hive-2.3.6 ..................... SUCCESS [  4.423 s]
    [INFO] flink-sql-connector-hive-3.1.2 ..................... SUCCESS [  6.630 s]
    [INFO] flink-avro-confluent-registry ...................... SUCCESS [  0.423 s]
    [INFO] flink-sequence-file ................................ SUCCESS [  0.442 s]
    [INFO] flink-compress ..................................... SUCCESS [  0.499 s]
    [INFO] flink-sql-orc ...................................... SUCCESS [  0.351 s]
    [INFO] flink-sql-parquet .................................. SUCCESS [  0.600 s]
    [INFO] flink-examples-streaming ........................... SUCCESS [ 11.003 s]
    [INFO] flink-examples-table ............................... SUCCESS [  7.050 s]
    [INFO] flink-examples-build-helper ........................ SUCCESS [  0.152 s]
    [INFO] flink-examples-streaming-twitter ................... SUCCESS [  0.801 s]
    [INFO] flink-examples-streaming-state-machine ............. SUCCESS [  0.701 s]
    [INFO] flink-examples-streaming-gcp-pubsub ................ SUCCESS [  3.879 s]
    [INFO] flink-container .................................... SUCCESS [  0.369 s]
    [INFO] flink-queryable-state-runtime ...................... SUCCESS [  1.202 s]
    [INFO] flink-mesos ........................................ SUCCESS [ 22.941 s]
    [INFO] flink-kubernetes ................................... SUCCESS [  5.005 s]
    [INFO] flink-yarn ......................................... SUCCESS [  1.869 s]
    [INFO] flink-gelly ........................................ SUCCESS [  4.610 s]
    [INFO] flink-gelly-scala .................................. SUCCESS [ 14.469 s]
    [INFO] flink-gelly-examples ............................... SUCCESS [ 10.733 s]
    [INFO] flink-external-resources ........................... SUCCESS [  0.104 s]
    [INFO] flink-external-resource-gpu ........................ SUCCESS [  0.368 s]
    [INFO] flink-metrics-dropwizard ........................... SUCCESS [  0.387 s]
    [INFO] flink-metrics-graphite ............................. SUCCESS [  0.256 s]
    [INFO] flink-metrics-influxdb ............................. SUCCESS [  0.804 s]
    [INFO] flink-metrics-prometheus ........................... SUCCESS [  0.490 s]
    [INFO] flink-metrics-statsd ............................... SUCCESS [  0.325 s]
    [INFO] flink-metrics-datadog .............................. SUCCESS [  0.375 s]
    [INFO] flink-metrics-slf4j ................................ SUCCESS [  0.299 s]
    [INFO] flink-cep-scala .................................... SUCCESS [ 10.022 s]
    [INFO] flink-table-uber ................................... SUCCESS [  5.428 s]
    [INFO] flink-table-uber-blink ............................. SUCCESS [  6.203 s]
    [INFO] flink-python ....................................... SUCCESS [ 13.268 s]
    [INFO] flink-sql-client ................................... SUCCESS [  2.642 s]
    [INFO] flink-state-processor-api .......................... SUCCESS [  1.207 s]
    [INFO] flink-ml-parent .................................... SUCCESS [  0.137 s]
    [INFO] flink-ml-api ....................................... SUCCESS [  0.500 s]
    [INFO] flink-ml-lib ....................................... SUCCESS [  1.158 s]
    [INFO] flink-ml-uber ...................................... SUCCESS [  0.192 s]
    [INFO] flink-scala-shell .................................. SUCCESS [  9.500 s]
    [INFO] flink-dist ......................................... SUCCESS [ 32.561 s]
    [INFO] flink-yarn-tests ................................... SUCCESS [  4.344 s]
    [INFO] flink-end-to-end-tests ............................. SUCCESS [ 12.308 s]
    [INFO] flink-cli-test ..................................... SUCCESS [  0.429 s]
    [INFO] flink-parent-child-classloading-test-program ....... SUCCESS [  0.359 s]
    [INFO] flink-parent-child-classloading-test-lib-package ... SUCCESS [  0.314 s]
    [INFO] flink-dataset-allround-test ........................ SUCCESS [  0.298 s]
    [INFO] flink-dataset-fine-grained-recovery-test ........... SUCCESS [  0.313 s]
    [INFO] flink-datastream-allround-test ..................... SUCCESS [  1.400 s]
    [INFO] flink-batch-sql-test ............................... SUCCESS [  0.288 s]
    [INFO] flink-stream-sql-test .............................. SUCCESS [  0.298 s]
    [INFO] flink-bucketing-sink-test .......................... SUCCESS [  0.552 s]
    [INFO] flink-distributed-cache-via-blob ................... SUCCESS [  0.262 s]
    [INFO] flink-high-parallelism-iterations-test ............. SUCCESS [  6.488 s]
    [INFO] flink-stream-stateful-job-upgrade-test ............. SUCCESS [  0.901 s]
    [INFO] flink-queryable-state-test ......................... SUCCESS [  1.354 s]
    [INFO] flink-local-recovery-and-allocation-test ........... SUCCESS [  0.300 s]
    [INFO] flink-elasticsearch5-test .......................... SUCCESS [  4.594 s]
    [INFO] flink-elasticsearch6-test .......................... SUCCESS [  2.752 s]
    [INFO] flink-quickstart ................................... SUCCESS [  0.855 s]
    [INFO] flink-quickstart-java .............................. SUCCESS [  4.550 s]
    [INFO] flink-quickstart-scala ............................. SUCCESS [  0.195 s]
    [INFO] flink-quickstart-test .............................. SUCCESS [  0.296 s]
    [INFO] flink-confluent-schema-registry .................... SUCCESS [  1.061 s]
    [INFO] flink-stream-state-ttl-test ........................ SUCCESS [  3.233 s]
    [INFO] flink-sql-client-test .............................. SUCCESS [  0.504 s]
    [INFO] flink-streaming-file-sink-test ..................... SUCCESS [  0.219 s]
    [INFO] flink-state-evolution-test ......................... SUCCESS [  0.822 s]
    [INFO] flink-rocksdb-state-memory-control-test ............ SUCCESS [  0.752 s]
    [INFO] flink-end-to-end-tests-common ...................... SUCCESS [  1.059 s]
    [INFO] flink-metrics-availability-test .................... SUCCESS [  0.351 s]
    [INFO] flink-metrics-reporter-prometheus-test ............. SUCCESS [  0.450 s]
    [INFO] flink-heavy-deployment-stress-test ................. SUCCESS [  6.453 s]
    [INFO] flink-connector-gcp-pubsub-emulator-tests .......... SUCCESS [  1.147 s]
    [INFO] flink-streaming-kafka-test-base .................... SUCCESS [  0.399 s]
    [INFO] flink-streaming-kafka-test ......................... SUCCESS [  5.791 s]
    [INFO] flink-streaming-kafka011-test ...................... SUCCESS [  5.234 s]
    [INFO] flink-streaming-kafka010-test ...................... SUCCESS [  5.001 s]
    [INFO] flink-plugins-test ................................. SUCCESS [  0.082 s]
    [INFO] dummy-fs ........................................... SUCCESS [  0.197 s]
    [INFO] another-dummy-fs ................................... SUCCESS [  0.200 s]
    [INFO] flink-tpch-test .................................... SUCCESS [  0.664 s]
    [INFO] flink-streaming-kinesis-test ....................... SUCCESS [  9.636 s]
    [INFO] flink-elasticsearch7-test .......................... SUCCESS [  2.948 s]
    [INFO] flink-end-to-end-tests-common-kafka ................ SUCCESS [  1.200 s]
    [INFO] flink-tpcds-test ................................... SUCCESS [  1.249 s]
    [INFO] flink-netty-shuffle-memory-control-test ............ SUCCESS [  0.215 s]
    [INFO] flink-python-test .................................. SUCCESS [  4.799 s]
    [INFO] flink-statebackend-heap-spillable .................. SUCCESS [  0.795 s]
    [INFO] flink-contrib ...................................... SUCCESS [  0.082 s]
    [INFO] flink-connector-wikiedits .......................... SUCCESS [  0.364 s]
    [INFO] flink-fs-tests ..................................... SUCCESS [  0.635 s]
    [INFO] flink-docs ......................................... SUCCESS [  1.153 s]
    [INFO] flink-walkthroughs ................................. SUCCESS [  0.095 s]
    [INFO] flink-walkthrough-common ........................... SUCCESS [  1.153 s]
    [INFO] flink-walkthrough-datastream-java .................. SUCCESS [  0.148 s]
    [INFO] flink-walkthrough-datastream-scala ................. SUCCESS [  0.173 s]
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESS
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time:  23:09 min
    [INFO] Finished at: 2021-01-20T10:22:41+08:00
    [INFO] ------------------------------------------------------------------------
    

    6. Resolving build errors

    • kafka-schema-registry-client-4.1.0.jar cannot be found

      http://packages.confluent.io/maven/io/confluent/kafka-schema-registry-client/4.1.0/kafka-schema-registry-client-4.1.0.jar
      

      Download the jar from the address above, then run the following command from the directory containing the file to install it (a verification sketch follows at the end of this list):

      mvn install:install-file -DgroupId=io.confluent -DartifactId=kafka-schema-registry-client -Dversion=4.1.0 -Dpackaging=jar -Dfile=kafka-schema-registry-client-4.1.0.jar
      
    • flink-runtime-web fails to build: open the flink-runtime-web POM and add Node and npm mirrors

      • Error message

        [ERROR] Failed to execute goal com.github.eirslett:frontend-maven-plugin:1.6:install-node-and-npm (install node and npm) on project flink-runtime-web_2.11: Could not download Node.js: Got error code 404 from the server. -> [Help 1]
        
      • Solution: pre-build the web-dashboard project as described in section [3. Build the web-dashboard in the flink-runtime-web module] above
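
      As referenced above, after installing the Confluent jar with install:install-file, you can verify that it landed at the standard local-repository coordinates:

        # The artifact should now exist at the standard ~/.m2 layout path
        ls ~/.m2/repository/io/confluent/kafka-schema-registry-client/4.1.0/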

    7. If you enjoyed this, please like and follow! I will keep publishing source-code walkthroughs. Feel free to leave comments and questions. Thanks, everyone!
