Deploy the Hadoop configuration to every machine
1. Distribute the Hadoop configuration to every machine and extract it
Upload the configuration archive to ~hadoop/up:
cd ~/up
rz    # receive the archive from the local machine via lrzsz
Location of the Hadoop configuration files:
ll /usr/local/hadoop/etc/hadoop/
Back up the old Hadoop configuration on every machine:
./ssh_all.sh mv /usr/local/hadoop/etc/hadoop /usr/local/hadoop/etc/hadoop_back
Copy the custom configuration archive to /tmp on every machine:
./scp_all.sh ../up/hadoop.tar.gz /tmp/
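ssh_all.sh and scp_all.sh are custom batch helpers assumed by this walkthrough; their contents are not shown here. A minimal sketch of what ssh_all.sh might look like, assuming a one-hostname-per-line list in ~/hosts_all (the list file name and location are assumptions):

#!/bin/bash
# ssh_all.sh - run the given command on every host listed in ~/hosts_all
for host in $(cat ~/hosts_all); do
    echo "==== $host ===="
    ssh "$host" "$@"
done

scp_all.sh would loop the same way, copying with scp "$1" "$host:$2" instead of running a remote command.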
Extract the custom configuration archive into /usr/local/hadoop/etc/ on every machine.
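The original omits the command for this step; a plausible form, assuming hadoop.tar.gz unpacks into a hadoop/ directory so the files land at /usr/local/hadoop/etc/hadoop:

./ssh_all.sh tar -zxf /tmp/hadoop.tar.gz -C /usr/local/hadoop/etc/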
Check on every machine that the configuration was extracted correctly:
./ssh_all.sh head /usr/local/hadoop/etc/hadoop/hadoop-env.sh
2. Change the permissions under /usr/local/hadoop/etc/hadoop to 770 on every machine
Change the permissions of the configuration directory on each machine:
./ssh_all.sh chmod -R 770 /usr/local/hadoop/etc/hadoop
3. Initialize the distributed file system HDFS
Steps:
1) Start zookeeper
2) Start the journalnodes
3) Start the zookeeper client and initialize the HA state in zookeeper
4) Format the namenode on nn1
5) Start the namenode on nn1
6) Sync the namenode metadata to nn2
7) Start the namenode on nn2
8) Start ZKFC
9) Start the datanodes
Steps 1) and 2) are walked through in 3.1 and 3.2 below; a sketch of the commands for the remaining steps follows this list.
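Only steps 1) and 2) are detailed in this section. For steps 3) through 9), a sketch using the stock Hadoop 2.x commands; nn1 and nn2 are the namenode hosts from the step list, and everything else is standard tooling rather than anything confirmed by the original.

On nn1, initialize the HA state in zookeeper, format the namenode, and start it (steps 3-5):
hdfs zkfc -formatZK
hdfs namenode -format
/usr/local/hadoop/sbin/hadoop-daemon.sh start namenode

On nn2, sync the metadata from nn1 and start the standby namenode (steps 6-7):
hdfs namenode -bootstrapStandby
/usr/local/hadoop/sbin/hadoop-daemon.sh start namenode

On nn1 and nn2, start the ZKFC failover controllers (step 8):
/usr/local/hadoop/sbin/hadoop-daemon.sh start zkfc

Start the datanodes (step 9); hadoop-daemons.sh starts the daemon on every host listed in etc/hadoop/slaves:
/usr/local/hadoop/sbin/hadoop-daemons.sh start datanode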
3.1 Start zookeeper
![](https://img.haomeiwen.com/i20508603/5017adf373eb13c2.png)
Start zookeeper:
./ssh_all_zookeeper.sh /usr/local/zookeeper/bin/zkServer.sh start
Check the running state of zookeeper:
./ssh_all_zookeeper.sh /usr/local/zookeeper/bin/zkServer.sh status
![](https://img.haomeiwen.com/i20508603/2a793030efd3a038.png)
3.2 Start the journalnodes
![](https://img.haomeiwen.com/i20508603/43c5a0dc1a07616e.png)
On each machine that should run a journalnode, execute hadoop-daemon.sh start journalnode (a batch form is sketched below).
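With the batch helper from earlier, and assuming every host in its list should run a journalnode (if only a subset does, a dedicated host list like the zookeeper one would be needed), the batch form would be:

./ssh_all.sh /usr/local/hadoop/sbin/hadoop-daemon.sh start journalnode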
To inspect the journalnode logs, look in the Hadoop service log directory:
cd /usr/local/hadoop/logs/
vim hadoop-hadoop-journalnode-nn1.hadoop.log
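To confirm the JournalNode process is up everywhere, jps can be run through the batch helper (assuming jps is on the remote non-interactive PATH):

./ssh_all.sh jps
Each journalnode host should list a JournalNode process.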