Hadoop Support for LZO Compression

Author: 勇于自信 | Published 2020-08-18 13:54
    1. Prepare the environment

    maven
    gcc-c++
    lzo-devel
    zlib-devel
    autoconf
    automake
    libtool

    Install Maven:
    1) Download
    wget https://mirrors.tuna.tsinghua.edu.cn/apache/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.tar.gz
    2) Extract
    tar -zxvf apache-maven-3.6.3-bin.tar.gz
    3) Configure the environment variables
    vim /root/.bash_profile

    MAVEN_HOME=/usr/local/src/apache-maven-3.6.3
    export MAVEN_HOME
    PATH=$MAVEN_HOME/bin:$PATH
    export PATH
    

    4) Edit settings.xml to add the Aliyun mirror
    vim conf/settings.xml

     <mirrors>
        <mirror>
            <id>alimaven</id>
            <name>aliyun maven</name>
            <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
            <mirrorOf>central</mirrorOf>
        </mirror>
     </mirrors>
    

    Install the remaining dependencies:
    yum -y install gcc-c++ lzo-devel zlib-devel autoconf automake libtool

    2. Download, build, and install LZO

    wget http://www.oberhumer.com/opensource/lzo/download/lzo-2.10.tar.gz
    tar -zxvf lzo-2.10.tar.gz
    cd lzo-2.10
    ./configure --prefix=/usr/local/hadoop/lzo
    make
    make install
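    A quick way to confirm the install landed where the later steps expect it; paths follow the --prefix used above:

```shell
# After make install, the lzo headers and liblzo2 libraries should sit
# under the configure --prefix; both paths are reused in step 3.
LZO_PREFIX=/usr/local/hadoop/lzo
ls "$LZO_PREFIX/include/lzo" "$LZO_PREFIX/lib"
```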

    3. Build the hadoop-lzo source

    1) Download the source: wget https://github.com/twitter/hadoop-lzo/archive/master.zip
    2) Extract it: unzip master.zip
    3) Enter /usr/local/src/hadoop-lzo-master and edit the pom configuration
    <hadoop.current.version>2.7.3</hadoop.current.version>  <!-- set to your installed Hadoop version; this guide uses 2.7.3 -->
    4) Export two temporary build variables (both must point under the --prefix chosen in step 2)
    export C_INCLUDE_PATH=/usr/local/hadoop/lzo/include
    export LIBRARY_PATH=/usr/local/hadoop/lzo/lib
    5) Build
    From the hadoop-lzo-master directory, run
    mvn package -Dmaven.test.skip=true

    6) In the target directory, hadoop-lzo-0.4.21-SNAPSHOT.jar is the successfully built hadoop-lzo component
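    Steps 4) and 5) are where this build most often fails: if C_INCLUDE_PATH or LIBRARY_PATH does not match the LZO --prefix from step 2, the native compile cannot find lzo. Deriving both variables from a single prefix avoids the mismatch; a sketch using the paths from this guide:

```shell
# Derive both build variables from the one LZO install prefix so they
# cannot drift apart, then run the Maven build.
LZO_PREFIX=/usr/local/hadoop/lzo
export C_INCLUDE_PATH=$LZO_PREFIX/include
export LIBRARY_PATH=$LZO_PREFIX/lib
cd /usr/local/src/hadoop-lzo-master
mvn package -Dmaven.test.skip=true
```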


    4. Configure Hadoop for LZO

    1) Copy the built jar into Hadoop's common directory and distribute it to the slave nodes
    cp hadoop-lzo-0.4.21-SNAPSHOT.jar /usr/local/src/hadoop-2.7.3/share/hadoop/common/
    scp hadoop-lzo-0.4.21-SNAPSHOT.jar slave1:/usr/local/src/hadoop-2.7.3/share/hadoop/common/
    scp hadoop-lzo-0.4.21-SNAPSHOT.jar slave2:/usr/local/src/hadoop-2.7.3/share/hadoop/common/
    scp hadoop-lzo-0.4.21-SNAPSHOT.jar slave3:/usr/local/src/hadoop-2.7.3/share/hadoop/common/
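    The three scp commands can be collapsed into one loop, which also makes it harder to miss a node; hostnames are the ones used throughout this guide:

```shell
# Push the jar to every slave's matching common directory in one loop.
JAR=/usr/local/src/hadoop-2.7.3/share/hadoop/common/hadoop-lzo-0.4.21-SNAPSHOT.jar
for host in slave1 slave2 slave3; do
  scp "$JAR" "$host:${JAR%/*}/"   # ${JAR%/*} strips the filename, leaving the directory
done
```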
    2) In Hadoop's core-site.xml, add the following inside <configuration>:

    <property>
        <name>io.compression.codecs</name>
        <value>
            org.apache.hadoop.io.compress.GzipCodec,
            org.apache.hadoop.io.compress.DefaultCodec,
            org.apache.hadoop.io.compress.BZip2Codec,
            org.apache.hadoop.io.compress.SnappyCodec,
            com.hadoop.compression.lzo.LzoCodec,
            com.hadoop.compression.lzo.LzopCodec
        </value>
    </property> 
    
    <property>
        <name>io.compression.codec.lzo.class</name>
        <value>com.hadoop.compression.lzo.LzoCodec</value>
    </property> 
    

    After adding these properties, distribute the file:
    scp core-site.xml slave1:$PWD
    scp core-site.xml slave2:$PWD
    scp core-site.xml slave3:$PWD
    3) Start Hadoop
    start-dfs.sh
    start-yarn.sh

    5. Create LZO indexes

    1) Upload a file
    hadoop fs -put bigtable.lzo /input
    2) Build an index for the uploaded LZO file
    An LZO-compressed file is splittable only through its index, so we must
    create the index manually. Without one, the whole LZO file is a single split.
    hadoop jar /usr/local/src/hadoop-2.7.3/share/hadoop/common/hadoop-lzo-0.4.21-SNAPSHOT.jar com.hadoop.compression.lzo.DistributedLzoIndexer /input/bigtable.lzo
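    DistributedLzoIndexer runs the indexing as a MapReduce job, which suits large or numerous files; hadoop-lzo also ships a single-process com.hadoop.compression.lzo.LzoIndexer for small one-off files. Either way, success shows up as an .index file next to the .lzo:

```shell
# Single-process alternative to DistributedLzoIndexer (same jar).
HADOOP_LZO_JAR=/usr/local/src/hadoop-2.7.3/share/hadoop/common/hadoop-lzo-0.4.21-SNAPSHOT.jar
hadoop jar "$HADOOP_LZO_JAR" com.hadoop.compression.lzo.LzoIndexer /input/bigtable.lzo
# Verify: the index file is written alongside the input.
hadoop fs -ls /input   # expect /input/bigtable.lzo.index next to /input/bigtable.lzo
```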


    3) Run the wordcount example
    hadoop jar /usr/local/src/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /input /output1
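    Two practical notes on this step: MapReduce refuses to start if the output directory already exists, and hadoop fs -text (unlike -cat) transparently decompresses output when reading it. A short sketch using the paths above:

```shell
# Before a re-run: wordcount fails fast if /output1 already exists.
hadoop fs -rm -r -f /output1
# After the job finishes: inspect the first few word counts.
hadoop fs -text /output1/part-r-00000 | head -n 10
```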


    Source: https://www.haomeiwen.com/subject/ffphjktx.html