
Author: 泡泡_bbb9 | Published: 2019-05-15 14:51

Main workflow

  • 1. Disable the firewall

  • 2. Set up passwordless SSH between the machines

  • 3. Deploy ZooKeeper

  • 4. Deploy Hadoop

Detailed steps
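1. Disable the firewall

The post lists this step but gives no commands; a minimal sketch, assuming CentOS 7 with firewalld (adjust for your distribution):

```shell
# Stop the firewall now and keep it disabled across reboots
systemctl stop firewalld
systemctl disable firewalld

# Confirm it is inactive
systemctl is-active firewalld
```

Run this on all three machines before the SSH and cluster setup below.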

2. Passwordless SSH

ssh-keygen

Press Enter at every prompt (three in total).

After generating the local key pair, copy the public key to this machine and to the other machines:

ssh-copy-id host

Xshell can send keyboard input to all sessions at once, so with 3 machines the copy command only needs to be run three times in total.

Test with: ssh host

3. ZooKeeper deployment

On the first node, extract the ZooKeeper archive to /opt/module/:

tar -zvxf zkXXXX.gz -C /opt/module

cd /opt/module/zk/conf

cp zoo_sample.cfg zoo.cfg

Edit zoo.cfg as follows:

# example sakes.

dataDir=/opt/module/zookeeper-3.4.10/zkData/

dataLogDir=/opt/module/zookeeper-3.4.10/logs/

# the port at which the clients will connect

clientPort=2181

server.1=98-4.guoding:2888:3888

server.2=98-5.guoding:2888:3888

server.3=98-6.guoding:2888:3888

Copy the directory to the other machines:

scp -r zk root@host:/opt/module/zk
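One step the post skips: in replicated mode, ZooKeeper will not start without a myid file on each node, whose content matches that node's server.N line in zoo.cfg. A sketch using the dataDir configured above:

```shell
# Run on each node, with that node's own id (server.1 -> 1, server.2 -> 2, server.3 -> 3)
ZK_DATA=/opt/module/zookeeper-3.4.10/zkData
mkdir -p "$ZK_DATA"
echo 1 > "$ZK_DATA/myid"    # use 2 on 98-5.guoding and 3 on 98-6.guoding
cat "$ZK_DATA/myid"
```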

Start ZooKeeper on all 3 machines (afterwards, bin/zkServer.sh status should show one leader and two followers):

bin/zkServer.sh start

4. Hadoop deployment

Extract the Hadoop archive to /opt/module/hadoop/.

Edit the following files under etc/hadoop/:

core-site.xml

hdfs-site.xml

yarn-site.xml

mapred-site.xml

hadoop-env.sh

slaves

File contents:

core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
            <value>hdfs://nncluster</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/module/hadoop-2.8.4/data</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>98-4.guoding:2181,98-5.guoding:2181,98-6.guoding:2181</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>nncluster</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.nncluster</name>
        <value>nn1,nn2</value>
    </property>

    <property>
        <name>dfs.namenode.rpc-address.nncluster.nn1</name>
        <value>98-4.guoding:9000</value>
    </property>

    <property>
        <name>dfs.namenode.rpc-address.nncluster.nn2</name>
        <value>98-5.guoding:9000</value>
    </property>

    <property>
        <name>dfs.namenode.http-address.nncluster.nn1</name>
        <value>98-4.guoding:50070</value>
    </property>

    <property>
        <name>dfs.namenode.http-address.nncluster.nn2</name>
        <value>98-5.guoding:50070</value>
    </property>

    <property>
        <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://98-4.guoding:8485;98-5.guoding:8485;98-6.guoding:8485/nncluster</value>
    </property>

    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>

    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>

    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/module/hadoop-2.8.4/data/jn</value>
    </property>

    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>

    <property>
        <name>dfs.client.failover.proxy.provider.nncluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
</configuration>

yarn-site.xml

<configuration>

<!-- Site specific YARN configuration properties -->
<property> 
    <name>yarn.nodemanager.aux-services</name> 
    <value>mapreduce_shuffle</value> 
</property>

<property> 
    <name>yarn.resourcemanager.ha.enabled</name> 
    <value>true</value> 
</property>

<property> 
    <name>yarn.resourcemanager.cluster-id</name> 
    <value>cluster-yarn1</value> 
</property>

<property> 
    <name>yarn.resourcemanager.ha.rm-ids</name> 
    <value>rm1,rm2</value> 
</property>

<property> 
    <name>yarn.resourcemanager.hostname.rm1</name> 
    <value>98-4.guoding</value> 
</property>

<property> 
    <name>yarn.resourcemanager.hostname.rm2</name> 
    <value>98-5.guoding</value> 
</property>

<property> 
    <name>yarn.resourcemanager.zk-address</name>
    <value>98-4.guoding:2181,98-5.guoding:2181,98-6.guoding:2181</value> 
</property>

<property> 
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value> 
</property>

<property> 
    <name>yarn.resourcemanager.store.class</name>    
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
</configuration>

mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>98-4.guoding:10020</value>
    </property>

    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>98-4.guoding:19888</value>
    </property>

</configuration>

slaves

98-4.guoding
98-5.guoding
98-6.guoding

In hadoop-env.sh, set JAVA_HOME to the actual JDK directory; the default variable reference (export JAVA_HOME=${JAVA_HOME}) does not resolve correctly when daemons are started over SSH.
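For example (the JDK path is hypothetical; the copy step, which the post leaves implicit, mirrors the scp used for ZooKeeper):

```shell
# In etc/hadoop/hadoop-env.sh, point JAVA_HOME at a concrete directory
# (hypothetical JDK location on these machines):
export JAVA_HOME=/opt/module/jdk1.8.0_144

# Then copy the configured Hadoop directory to the other two nodes:
for host in 98-5.guoding 98-6.guoding; do
    scp -r /opt/module/hadoop-2.8.4 root@"$host":/opt/module/
done
```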

Starting Hadoop

Run on all three machines:

./hadoop-daemon.sh start journalnode

Format nn1 and start its NameNode:

hdfs namenode -format
./hadoop-daemon.sh start namenode

On nn2, sync the metadata from nn1, then start its NameNode:

hdfs namenode -bootstrapStandby
./hadoop-daemon.sh start namenode

Start the DataNodes; run this on the master node and it starts them on all slave nodes:

./hadoop-daemons.sh start datanode

Switch nn1 to Active. (The --forcemanual flag risks split-brain; check which machine the ZooKeeper leader is on before forcing the switch.)

./hdfs haadmin -transitionToActive --forcemanual nn1
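Note that hdfs-site.xml above sets dfs.ha.automatic-failover.enabled=true; with automatic failover on, the usual flow is to initialize the HA state in ZooKeeper and run ZKFC daemons rather than forcing a manual transition. A sketch using standard Hadoop 2.x commands:

```shell
# One-time: create the HA znode in ZooKeeper (run on one NameNode host)
hdfs zkfc -formatZK

# Start a ZKFC daemon on each NameNode host (nn1 and nn2);
# ZKFC then elects and transitions the active NameNode automatically
./hadoop-daemon.sh start zkfc
```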

Start the YARN cluster on nn2:

./start-yarn.sh
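In Hadoop 2.x, start-yarn.sh starts a ResourceManager only on the node where it is run; with yarn.resourcemanager.ha.enabled=true the second ResourceManager typically has to be started by hand. A sketch, using the rm1/rm2 mapping from yarn-site.xml above:

```shell
# On the other ResourceManager host (98-4.guoding, i.e. rm1 here):
./yarn-daemon.sh start resourcemanager

# Check which ResourceManager is active and which is standby:
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
```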




Title: 2019-05-15

Link: https://www.haomeiwen.com/subject/zvppaqtx.html