Hadoop Cluster Setup - 03 Compiling and Installing Hadoop

Author: 叫我懒猫 | Published 2019-07-22 15:45

    Hadoop Cluster Setup - 02 Installing and Configuring ZooKeeper: https://www.jianshu.com/p/7ebfdfc90832
    Hadoop Cluster Setup - 01 Preliminary Preparation: https://www.jianshu.com/p/109c9f5bd0ea
    The original version of this article is on my OSChina blog: https://my.oschina.net/finchxu/blog/3077378

    Hadoop is compiled and installed directly on a single machine; we will use nn1.

    Perform every step below as the root user.

    1. The Hadoop-related resources are available here: https://www.lanzous.com/b849710/ password: 9vui

    [hadoop@nn1 zk_op]$ su - root
    [root@nn1 ~]# mkdir /tmp/hadoop_c
    [root@nn1 ~]# cd /tmp/hadoop_c/
    Use Xshell's rz command to upload the source tarball into the directory above.
    [root@nn1 hadoop_c]# tar -xzf /tmp/hadoop_c/hadoop-2.7.3-src.tar.gz -C /usr/local/
    
    

    Use yum to install the assorted packages and tools the build needs:

    yum -y install svn ncurses-devel gcc* lzo-devel zlib-devel autoconf automake libtool cmake openssl-devel bzip2
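Before starting the long build, it can save time to confirm the toolchain actually landed on PATH. A minimal sketch (the tool names are assumed to come from the packages installed above; adjust to your distro):

```shell
# Check that the build toolchain installed above is actually on PATH.
missing=""
for tool in gcc g++ make cmake autoconf automake libtool; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -n "$missing" ]; then
    echo "missing:$missing"
else
    echo "all build tools present"
fi
```

If anything is reported missing, re-run the yum line above before continuing.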
    

    2. Build and install protobuf, Google's data interchange and storage format, which Hadoop requires.

    [root@nn1 ~]# tar -zxf protobuf-2.5.0.tar.gz -C /usr/local/
    [root@nn1 ~]# cd /usr/local/protobuf-2.5.0
    Build and install:
    [root@nn1 protobuf-2.5.0]# ./configure
    [root@nn1 protobuf-2.5.0]# make && make install
    [root@nn1 protobuf-2.5.0]# protoc --version
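If `protoc --version` fails with a missing `libprotobuf` shared-library error, the dynamic linker usually does not know about /usr/local/lib yet. A hedged sketch of the common fix, assuming the system reads /etc/ld.so.conf.d/ (run as root):

```shell
# If protoc cannot find libprotobuf.so, register /usr/local/lib with the
# dynamic linker and rebuild the linker cache, then retry.
if ! protoc --version >/dev/null 2>&1; then
    echo "/usr/local/lib" > /etc/ld.so.conf.d/protobuf.conf
    ldconfig
fi
protoc --version
```

With protobuf 2.5.0 installed, the version line should read `libprotoc 2.5.0`.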
    

    3. Extract and install Ant

    [root@nn1 hadoop_c]# tar -xf apache-ant-1.9.9-bin.tar.bz2 -C /usr/local/
    
    

    4. Extract and install FindBugs

    [root@nn1 hadoop_c]# tar -zxf findbugs-3.0.1.tar.gz -C /usr/local/
    

    5. Extract and install Maven

    Hadoop itself will be compiled with Maven later on.

    [root@nn1 hadoop_c]# tar -zxf apache-maven-3.3.9-bin.tar.gz -C /usr/local/
    

    6. Build and install snappy, Google's compression library

    [root@nn1 hadoop_c]# tar -xzf snappy-1.1.3.tar.gz -C /usr/local/
    [root@nn1 hadoop_c]# cd /usr/local/snappy-1.1.3/
    [root@nn1 snappy-1.1.3]# ./configure
    [root@nn1 snappy-1.1.3]# make && make install
    

    7. Make sure JDK 8 is already installed.

    8. Add environment variables for the tools the Hadoop build needs

    [root@nn1 snappy-1.1.3]# vim /etc/profile
    #set Hadoop_compile
    export MAVEN_HOME=/usr/local/apache-maven-3.3.9
    export FINDBUGS_HOME=/usr/local/findbugs-3.0.1
    export PROTOBUF_HOME=/usr/local/protobuf-2.5.0
    export ANT_HOME=/usr/local/apache-ant-1.9.9
    export PATH=$PATH:$MAVEN_HOME/bin:$FINDBUGS_HOME/bin:$ANT_HOME/bin
    export MAVEN_OPTS="-Xmx2g -XX:MaxMetaspaceSize=512M -XX:ReservedCodeCacheSize=512m"
    
    Make the environment variables take effect:
    [root@nn1 snappy-1.1.3]# source /etc/profile
    Check the Maven version:
    [root@nn1 snappy-1.1.3]# mvn -v
    Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-11T00:41:47+08:00)
    Maven home: /usr/local/apache-maven-3.3.9
    Java version: 1.8.0_144, vendor: Oracle Corporation
    Java home: /usr/java/jdk1.8.0_144/jre
    Default locale: zh_CN, platform encoding: UTF-8
    OS name: "linux", version: "3.10.0-957.21.3.el7.x86_64", arch: "amd64", family: "unix"
    [root@nn1 snappy-1.1.3]#
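After sourcing /etc/profile it is worth confirming that each `*_HOME` variable points at a real directory; a typo here only surfaces deep into the Maven build. A minimal sketch using the variable names set above:

```shell
# Verify each build-tool home variable is set and points at a directory.
for v in MAVEN_HOME FINDBUGS_HOME PROTOBUF_HOME ANT_HOME; do
    dir=$(eval echo "\$$v")
    if [ -z "$dir" ]; then
        echo "$v is unset"
    elif [ ! -d "$dir" ]; then
        echo "$v=$dir is not a directory"
    else
        echo "$v ok"
    fi
done
```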
    

    9. Edit Maven's settings.xml: change the mirrors and set the local repository location

    [root@nn1 hadoop_c]# cd /usr/local/apache-maven-3.3.9/conf/
    [root@nn1 conf]# cp settings.xml settings.xml.bak
    [root@nn1 conf]# rm -rf settings.xml
    [root@nn1 conf]# cp /tmp/hadoop_c/settings.xml settings.xml
    
    You can use the ready-made configuration file from the cloud drive linked above.
    The main changes are the local repository:
    <!-- local repository -->
      <localRepository>/data/maven/repositories</localRepository>
    and the remote mirrors:
    <mirror>
          <id>huaweicloud</id>
          <mirrorOf>central</mirrorOf>
          <url>https://repo.huaweicloud.com/repository/maven/</url>
        </mirror>
        <mirror>
            <id>nexus-aliyun</id>
              <mirrorOf>central</mirrorOf>
              <name>Nexus aliyun</name>
              <url>http://maven.aliyun.com/nexus/content/groups/public</url>
          </mirror>
        <mirror>        
          <id>maven</id>        
          <name>MavenMirror</name>        
          <url>http://repo1.maven.org/maven2/</url>        
          <mirrorOf>central</mirrorOf>        
       </mirror>
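Before kicking off the build, a quick sanity check that the replaced settings.xml actually carries the local-repository path and at least one mirror of central (paths as used above):

```shell
# Sanity-check the new settings.xml: it should define the local repository
# and at least one <mirror> whose mirrorOf is central.
SETTINGS=/usr/local/apache-maven-3.3.9/conf/settings.xml
grep -q "<localRepository>/data/maven/repositories</localRepository>" "$SETTINGS" \
    && echo "local repository configured"
# Print how many central mirrors are defined (should be 1 or more).
grep -c "<mirrorOf>central</mirrorOf>" "$SETTINGS"
```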
    

    Download the offline Maven repository from the cloud drive and extract it so the jars sit under /data/maven/repositories/.

    Now compile the Hadoop source with Maven:

    [root@nn1 conf]# cd /usr/local/hadoop-2.7.3-src/
    [root@nn1 hadoop-2.7.3-src]# nohup mvn clean package -Pdist,native -DskipTests -Dtar -Dbundle.snappy -Dsnappy.lib=/usr/local/lib > /tmp/hadoop_log 2>&1 &
    
    

    The nohup command keeps the task running in the background; its output is redirected to a log file, which we can watch to monitor the build:

    [root@nn1 hadoop-2.7.3-src]# tail -f /tmp/hadoop_log
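Rather than watching `tail -f` to the end, you can poll the log for Maven's final verdict. A small helper sketch (the log path is the one used in the nohup command above):

```shell
# Report the state of the Maven build from its log file: Maven prints
# "BUILD SUCCESS" or "BUILD FAILURE" in its summary when it finishes.
check_build() {
    if grep -q "BUILD SUCCESS" "$1"; then
        echo "build succeeded"
    elif grep -q "BUILD FAILURE" "$1"; then
        echo "build failed"
    else
        echo "still building"
    fi
}
check_build /tmp/hadoop_log
```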
    

    10. When the build finishes, a tar.gz package is generated under /usr/local/hadoop-2.7.3-src/hadoop-dist/target/. Copy it to our home directory, then push it to the other four machines.

    [root@nn1 ~]# exit
    logout
    [hadoop@nn1 ~]$ cp /usr/local/hadoop-2.7.3-src/hadoop-dist/target/hadoop-2.7.3.tar.gz ~/
    [hadoop@nn1 hadoop_base_op]$ ./scp_all.sh ../hadoop-2.7.3.tar.gz /tmp/
    
    

    11. Extract the tarball into /usr/local/ on all five machines

    [hadoop@nn1 hadoop_base_op]$ ./ssh_root.sh tar -zxf /tmp/hadoop-2.7.3.tar.gz -C /usr/local/
    Adjust file ownership and permissions:
    [hadoop@nn1 hadoop_base_op]$ ./ssh_all.sh chmod -R 770 /usr/local/hadoop-2.7.3
    [hadoop@nn1 hadoop_base_op]$ ./ssh_root.sh ln -s /usr/local/hadoop-2.7.3 /usr/local/hadoop
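After the commands above it is worth spot-checking one node: the symlink should resolve to the versioned install directory. A minimal sketch using the paths from above:

```shell
# Confirm /usr/local/hadoop resolves to the versioned install directory.
target=$(readlink /usr/local/hadoop)
if [ "$target" = "/usr/local/hadoop-2.7.3" ]; then
    echo "symlink ok"
else
    echo "symlink wrong: $target"
fi
```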
    

    At this point Hadoop is installed. Check that the installation is correct:

    [hadoop@s1 ~]$ source /etc/profile
    [hadoop@s1 ~]$ hadoop checknative
    19/07/22 15:37:25 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
    19/07/22 15:37:25 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
    Native library checking:
    hadoop:  true /usr/local/hadoop-2.7.3/lib/native/libhadoop.so.1.0.0
    zlib:    true /usr/lib64/libz.so.1
    snappy:  true /usr/local/hadoop-2.7.3/lib/native/libsnappy.so.1
    lz4:     true revision:99
    bzip2:   true /usr/lib64/libbz2.so.1
    openssl: true /usr/lib64/libcrypto.so
    
    

    If `hadoop` is reported as an unknown command, check that the environment variables are configured correctly, then re-source them.

