
Fully Distributed Hadoop Installation

Author: tonyemail_st | Published 2017-09-29 21:34

Four nodes

[root@master hadoop]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.100 master
192.168.1.101 slave1
192.168.1.102 slave2
192.168.1.103 slave3
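
The same /etc/hosts entries are needed on every node so the hostnames resolve everywhere. One way to push the master's copy out (a sketch, assuming root logins are allowed; you will be prompted for each password, since the SSH keys are only set up in the next step):

for h in slave1 slave2 slave3; do
    scp /etc/hosts root@$h:/etc/hosts
done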

The master needs passwordless SSH login to all slave nodes

[root@master local]# ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
0e:8e:9d:c9:55:01:a1:ed:e2:35:92:c7:c4:6a:a8:53 root@master
The key's randomart image is:
+--[ RSA 2048]----+
|        oo.      |
|       +   .     |
|      . + .      |
|     . * .       |
|    E B S        |
|   o B @ .       |
|  o . B .        |
|   .             |
|                 |
+-----------------+
[root@master local]# ssh-copy-id slave1
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@slave1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'slave1'"
and check to make sure that only the key(s) you wanted were added.
[root@master local]# ssh-copy-id slave2
......
[root@master local]# ssh-copy-id slave3
......
[root@master local]# ssh-copy-id master
......
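
A quick way to confirm that every node is now reachable without a password (a sketch; each command should print the remote hostname without prompting):

for h in master slave1 slave2 slave3; do
    ssh $h hostname
done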

core-site.xml (all four nodes)

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
</configuration>
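
core-site.xml has to be identical on all four nodes. Assuming Hadoop is already unpacked under /usr/local/hadoop on the slaves, one way to distribute the edited file from the master is (a sketch):

for h in slave1 slave2 slave3; do
    scp /usr/local/hadoop/etc/hadoop/core-site.xml $h:/usr/local/hadoop/etc/hadoop/
done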

Configure the slaves file

[root@master hadoop]# cat /usr/local/hadoop/etc/hadoop/slaves 
slave1
slave2
slave3
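
The slaves file only needs to exist on the master, since start-dfs.sh reads it to decide where to launch DataNodes. A quick way to (re)create it (a sketch, assuming the /usr/local/hadoop layout used above):

cat > /usr/local/hadoop/etc/hadoop/slaves <<EOF
slave1
slave2
slave3
EOF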

Install the JDK

[root@master bin]# rpm -ivh ~/jdk-8u91-linux-x64.rpm    (all four nodes)
Preparing...                          ################################# [100%]
Updating / installing...
   1:jdk1.8.0_91-2000:1.8.0_91-fcs    ################################# [100%]
Unpacking JAR files...
    tools.jar...
    plugin.jar...
    javaws.jar...
    deploy.jar...
    rt.jar...
    jsse.jar...
    charsets.jar...
    localedata.jar...
    jfxrt.jar...
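
The JDK has to be installed on the slaves as well. One way to do that from the master, reusing the passwordless SSH set up above (a sketch, assuming the rpm sits in root's home directory):

for h in slave1 slave2 slave3; do
    scp ~/jdk-8u91-linux-x64.rpm $h:~/
    ssh $h "rpm -ivh ~/jdk-8u91-linux-x64.rpm"
done
java -version    # sanity check on each node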

Add to PATH (all four nodes)

[root@master bin]# tail -2 /etc/profile
PATH=$PATH:/usr/local/hadoop/bin
JAVA_HOME=/usr/java/default/
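
Changes to /etc/profile only take effect in new login shells; to pick them up in the current session and confirm the hadoop launcher is found (a sketch):

source /etc/profile
which hadoop    # should print /usr/local/hadoop/bin/hadoop

Note that JAVA_HOME set here without export is not passed on to child processes, but the hadoop-env.sh setting in the next step covers that.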

Set JAVA_HOME (all four nodes)

[root@master hadoop]# grep JAVA_HOME /usr/local/hadoop/etc/hadoop/hadoop-env.sh 
# The only required environment variable is JAVA_HOME.  All others are
# set JAVA_HOME in this file, so that it is correctly defined on
export JAVA_HOME=/usr/java/default
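
With PATH and JAVA_HOME in place, a simple smoke test is to run the launcher itself; if JAVA_HOME were wrong, this would normally fail with a complaint about not finding Java:

hadoop version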

Format the NameNode (master only)

hadoop namenode -format
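
In Hadoop 2.x the hadoop namenode command still works but prints a deprecation warning; the equivalent non-deprecated form is:

hdfs namenode -format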

Start the cluster (master only)

start-dfs.sh
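
Note that in the standard Hadoop 2.x layout start-dfs.sh lives in the sbin directory, which is not on the PATH configured above, so either call it by full path or extend PATH accordingly (a sketch):

/usr/local/hadoop/sbin/start-dfs.sh
# or, in /etc/profile:
# PATH=$PATH:/usr/local/hadoop/bin:/usr/local/hadoop/sbin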

Verify the result

[root@master hadoop]# jps
4290 NameNode
4382 Jps
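
On the master only the NameNode (and possibly a SecondaryNameNode, depending on configuration) is expected; the DataNodes run on the slaves. A quick check from the master (a sketch, using the JDK path configured above since jps may not be on the non-interactive PATH):

for h in slave1 slave2 slave3; do
    ssh $h /usr/java/default/bin/jps    # each slave should list a DataNode process
done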

View the cluster report

[root@master hadoop]# hdfs dfsadmin -report
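
The key thing to look for in the report is that all three DataNodes are listed as live and contribute capacity. A quick filter (a sketch; the exact wording of the report varies between Hadoop versions):

hdfs dfsadmin -report | grep -iE 'live|datanodes'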
