Flink in Practice: ZSTD Support for the FileSystem Parquet Format


Author: 〇白衣卿相〇 | Published 2020-12-07 00:06

    Preface

    ZSTD is one of the most popular compression algorithms today, combining a high compression ratio with fast compression speed, and it has been adopted by many frameworks.
    Hadoop 3.1.0 supports the ZSTD codec, which means Flink can use it when writing to HDFS. Read on for the details.

    Upgrading the Hadoop Version

    Flink 1.11 supports Hadoop 3.x, so if the hadoop shaded jar your Flink distribution depends on is too old, it needs to be upgraded.
    Starting with Flink 1.11, the project no longer publishes hadoop shaded jars; the recommended alternative is to put Hadoop on the classpath (see the official Flink documentation).
    You can, however, still build a Hadoop 3.1 shaded jar yourself. The steps:

    1. git clone https://github.com/apache/flink-shaded.git
    2. Check out the release-10.0 branch (Flink 1.11 no longer ships a hadoop shaded module, so we build it ourselves)
    3. In flink-shaded-hadoop-2-parent's pom.xml, set the version to 11.0 and hadoop.version to 3.1.0
    4. In flink-shaded-hadoop-2's pom.xml, set the parent version to 11.0 and the module version to ${hadoop.version}-11.0
    5. In flink-shaded-hadoop-2-uber's pom.xml, set the parent version to 11.0, the module version to ${hadoop.version}-11.0, and the version of the flink-shaded-hadoop-2 dependency to ${hadoop.version}-11.0
    6. From the repository root, run
      mvn clean install -DskipTests -Dcheckstyle.skip=true -Dhadoop.version=3.1.0
    7. When the build finishes, copy flink-shaded-hadoop-2-uber-3.1.0-11.0.jar into Flink's lib directory
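    The numbered steps above can be consolidated into a short shell session. This is only a sketch: the pom.xml edits are done by hand as described, so they appear here as a comment, and FLINK_HOME and the jar's output path are assumptions about a typical setup.

    ```shell
    # Build a hadoop 3.1.0 shaded jar from the release-10.0 branch of flink-shaded.
    git clone https://github.com/apache/flink-shaded.git
    cd flink-shaded
    git checkout release-10.0

    # Edit the three pom.xml files by hand as described in steps 3-5 above.

    # Build; tests and checkstyle are skipped to speed things up.
    mvn clean install -DskipTests -Dcheckstyle.skip=true -Dhadoop.version=3.1.0

    # FLINK_HOME is an assumption - point it at your Flink installation.
    # The uber jar typically lands under flink-shaded-hadoop-2-uber/target/.
    cp flink-shaded-hadoop-2-uber/target/flink-shaded-hadoop-2-uber-3.1.0-11.0.jar "$FLINK_HOME/lib/"
    ```
    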

    Hadoop Cluster Support

    The Hadoop version on the cluster itself must also support ZSTD: either upgrade the cluster, or backport the ZSTD feature into your lower version.
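    To check whether the Hadoop on a given machine was built with ZSTD support, Hadoop's checknative tool lists the native codecs it can load; the zstd line should read true (this also requires libzstd to be installed on the machine):

    ```shell
    # Lists native library support; look for a line like "zstd : true /lib64/libzstd.so.1".
    hadoop checknative -a
    ```
    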

    SQL

    With the dependencies in place, you can write the Flink job. The DDL:

    CREATE TABLE flink_filesystem_parquet_zstd (
        a BIGINT,
        b STRING,
        c STRING,
        d STRING,
        e STRING
    ) PARTITIONED BY (e) WITH (
      'connector'='filesystem',
      'path' = 'hdfs://xxx',
      'format'='parquet',
      'hadoop-user'='hadoop',
      'parquet.compression'='ZSTD'
    );
    

    Problems Encountered

    1. NoClassDefFoundError: org.apache.hadoop.io.compress.ZStandardCodec
      Cause: the flink shaded hadoop jar on the classpath was version 2.7.2, while this class ships with hadoop-common 3.1.0, so the hadoop jar must be upgraded to 3.1.0.
    2. After upgrading the shaded jar, the job fails at runtime:
    2020-10-21 15:08:14
    java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsZstd()Z
      at org.apache.hadoop.util.NativeCodeLoader.buildSupportsZstd(Native Method)
      at org.apache.hadoop.io.compress.ZStandardCodec.checkNativeCodeLoaded(ZStandardCodec.java:64)
      at org.apache.hadoop.io.compress.ZStandardCodec.getCompressorType(ZStandardCodec.java:153)
      at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150)
      at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:168)
      at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.<init>(CodecFactory.java:144)
      at org.apache.parquet.hadoop.CodecFactory.createCompressor(CodecFactory.java:206)
      at org.apache.parquet.hadoop.CodecFactory.getCompressor(CodecFactory.java:189)
      at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:285)
      at org.apache.parquet.hadoop.ParquetWriter$Builder.build(ParquetWriter.java:530)
      at org.apache.flink.formats.parquet.row.ParquetPb2AvroBuilder$FlinkParquetPb2AvroBuilder.createWriter(ParquetPb2AvroBuilder.java:212)
      at org.apache.flink.formats.parquet.ParquetWriterFactory.create(ParquetWriterFactory.java:57)
      at org.apache.flink.table.filesystem.FileSystemTableSink$ProjectionBulkFactory.create(FileSystemTableSink.java:521)
      at org.apache.flink.streaming.api.functions.sink.filesystem.BulkBucketWriter.openNew(BulkBucketWriter.java:69)
      at org.apache.flink.streaming.api.functions.sink.filesystem.OutputStreamBasedPartFileWriter$OutputStreamBasedBucketWriter.openNewInProgressFile(OutputStreamBasedPartFileWriter.java:83)
      at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.rollPartFile(Bucket.java:209)
      at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.write(Bucket.java:200)
      at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.onElement(Buckets.java:282)
      at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSinkHelper.onElement(StreamingFileSinkHelper.java:104)
      at org.apache.flink.table.filesystem.stream.StreamingFileWriter.processElement(StreamingFileWriter.java:127)
      at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:161)
      at org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:178)
      at org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:153)
      at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:67)
      at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:345)
      at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxStep(MailboxProcessor.java:191)
      at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:181)
      at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:558)
      at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:530)
      at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:721)
      at org.apache.flink.runtime.taskmanager.Task.run(Task.java:546)
      at java.lang.Thread.run(Thread.java:748)

    Cause: the production environment runs Hadoop 2.6, which does not yet support ZSTD compression. The job was then moved to a test environment, also based on 2.6 but with the Hadoop 3.1 ZSTD feature merged in, where it could run.
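    A quick way to tell the two failure modes apart is to check whether the shaded jar in Flink's lib directory even contains the codec class. This is a sketch: the lib path is an assumption, and it requires `unzip` to be installed.

    ```shell
    # FLINK_LIB is a hypothetical path - point it at your Flink installation's lib directory.
    FLINK_LIB="${FLINK_LIB:-/opt/flink/lib}"
    if unzip -l "$FLINK_LIB"/flink-shaded-hadoop-2-uber-*.jar 2>/dev/null \
        | grep -q 'org/apache/hadoop/io/compress/ZStandardCodec.class'; then
      # The class is on the classpath; a remaining failure would be the
      # native-library one (UnsatisfiedLinkError), i.e. problem 2 above.
      echo "ZStandardCodec present in shaded jar"
    else
      # The class is missing: the jar was built against hadoop 2.x,
      # so expect problem 1 (NoClassDefFoundError).
      echo "ZStandardCodec missing from shaded jar"
    fi
    ```
    
    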

        Original link: https://www.haomeiwen.com/subject/dutewktx.html