Environment:
CentOS 7.0, JDK 1.8
Set the hostname on each node (plain hostname only lasts until reboot; on CentOS 7 hostnamectl makes it permanent):
hostnamectl set-hostname hadoop1   # on the first node; use hadoop2 / hadoop3 on the other two
Hosts file
Add the following entries to /etc/hosts on every node:
192.168.31.17 hadoop1
192.168.31.210 hadoop2
192.168.31.65 hadoop3
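To confirm the names resolve, a quick check from any node, e.g.:
ping -c 1 hadoop2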
Passwordless SSH login
Generate a key pair on hadoop1 and copy the public key to every node:
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop1
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop2
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop3
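If this worked, ssh should run a remote command without prompting for a password, e.g.:
ssh hadoop2 hostname   # should print hadoop2 with no password prompt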
Disable the firewall (disable alone only takes effect at the next boot, so stop the service as well):
systemctl stop firewalld
systemctl disable firewalld
yum remove firewalld -y
Download and extract Hadoop, then edit the following files under conf/
1. hadoop-env.sh
export JAVA_HOME=/usr/local/jdk/
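It is worth confirming that this path really holds a JDK before going further:
/usr/local/jdk/bin/java -version   # should print a 1.8 version string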
2. core-site.xml
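The post does not show the file contents. A minimal Hadoop 1.x core-site.xml for this cluster would look like the sketch below; the hadoop.tmp.dir path is an assumption, any writable directory works:
<configuration>
  <property>
    <!-- NameNode address; hadoop1 is the master, 9000 is the conventional port -->
    <name>fs.default.name</name>
    <value>hdfs://hadoop1:9000</value>
  </property>
  <property>
    <!-- assumed scratch directory for HDFS and MapReduce data -->
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
</configuration>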

3. hdfs-site.xml
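The contents are missing here as well; with two slave nodes (hadoop2, hadoop3) a replication factor of 2 is a reasonable assumption:
<configuration>
  <property>
    <!-- two DataNodes, so keep two copies of each block -->
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>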

4. mapred-site.xml
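A minimal sketch, assuming the JobTracker runs on hadoop1 (9001 is the conventional port; mapred.job.tracker is the Hadoop 1.x property name):
<configuration>
  <property>
    <!-- JobTracker address for MapReduce jobs -->
    <name>mapred.job.tracker</name>
    <value>hadoop1:9001</value>
  </property>
</configuration>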

5. masters: hadoop1 (in Hadoop 1.x this file names the SecondaryNameNode host)
6. slaves: hadoop2 and hadoop3, one per line
7. Copy the installation to the other servers
scp -r ./hadoop-1.1.2 hadoop2:/root
scp -r ./hadoop-1.1.2 hadoop3:/root
8. Run Hadoop
In Hadoop's bin directory, format the NameNode first: hadoop namenode -format
Then, from the same directory, start the cluster: ./start-all.sh
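To confirm the daemons came up, jps on each node should show roughly the following, assuming the master/slave layout above:
jps   # on hadoop1: NameNode, SecondaryNameNode, JobTracker
jps   # on hadoop2/hadoop3: DataNode, TaskTracker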

A simple test
Create two data files:
echo "hello world" >test1.txt
echo "hello hadoop" >test2.txt
Upload the data files into HDFS (the put below assumes the two files were created in a local directory ../input) and run the Hadoop wordcount example:
bin/hadoop dfs -put ../input in
bin/hadoop jar hadoop-examples-1.1.2.jar wordcount in out


View the results (the output directory is the out passed to wordcount):
bin/hadoop dfs -cat out/*
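Given the two input files, the word counts should come out as follows (ordering may differ):
hadoop  1
hello   2
world   1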

Web UI
http://192.168.31.17:50030/jobtracker.jsp (JobTracker; in Hadoop 1.x the NameNode UI is on port 50070)