Prerequisite: a working JDK, with the Java environment variables configured.
It is generally recommended to install software under the /usr/local/src directory.
For more details, see the official documentation: http://hadoop.apache.org/docs/r1.0.4/cn/cluster_setup.html
- Extract the installation package
tar -zxvf hadoop-2.7.5.tar.gz
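The archive unpacks into a versioned directory (hadoop-2.7.5), while the environment variables below point at the plain /usr/local/src/hadoop path, so the two need to be connected. A sketch, assuming the paths used in this guide:

```shell
cd /usr/local/src
tar -zxvf hadoop-2.7.5.tar.gz
# The archive extracts to hadoop-2.7.5; symlink it to the plain path
# that HADOOP_HOME will point at (a symlink makes a future version
# upgrade a one-line change)
ln -s /usr/local/src/hadoop-2.7.5 /usr/local/src/hadoop
```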
- Configure the Hadoop environment variables
# Open the profile
vim /etc/profile
# Append the following at the end of the file
export JAVA_HOME=/usr/local/src/jdk/jdk1.8
export HADOOP_HOME=/usr/local/src/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
Reload the profile with source /etc/profile, then test whether the Hadoop configuration has taken effect:
root@503ae25fe58d:/usr/local/src/hadoop/sbin# echo ${HADOOP_HOME}
/usr/local/src/hadoop
- Set the JAVA_HOME parameter in hadoop-env.sh (under $HADOOP_HOME/etc/hadoop)
Change it to the absolute path of the JDK:
# The java implementation to use.
export JAVA_HOME=/usr/local/src/jdk/jdk1.8
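This edit can also be scripted with sed. The sketch below demonstrates the substitution on a throw-away copy; for the real edit, point sed at $HADOOP_HOME/etc/hadoop/hadoop-env.sh (the JDK path is the one assumed throughout this guide):

```shell
# Work on a throw-away copy; point sed at
# /usr/local/src/hadoop/etc/hadoop/hadoop-env.sh for the real edit
tmp=$(mktemp)
printf 'export JAVA_HOME=${JAVA_HOME}\n' > "$tmp"   # the shipped default line
# Rewrite the JAVA_HOME line to the absolute JDK path
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/local/src/jdk/jdk1.8|' "$tmp"
grep '^export JAVA_HOME=' "$tmp"   # shows the rewritten line
rm -f "$tmp"
```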
- Configure the core-site.xml file
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/src/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
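fs.defaultFS sets the NameNode RPC address that clients connect to. Once the file is in place, the values Hadoop actually reads can be checked with hdfs getconf (a quick sketch; assumes $HADOOP_HOME/bin is on PATH as configured above):

```shell
# Print the effective configuration values read from core-site.xml
hdfs getconf -confKey fs.defaultFS      # expect hdfs://localhost:9000
hdfs getconf -confKey hadoop.tmp.dir    # expect file:/usr/local/src/hadoop/tmp
```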
- Configure the hdfs-site.xml file
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/src/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/src/hadoop/tmp/dfs/data</value>
    </property>
</configuration>
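dfs.replication is set to 1 because this is a single-node (pseudo-distributed) setup: there is only one DataNode to hold each block. The storage directories can also be pre-created to surface permission problems early — a sketch, assuming the paths above:

```shell
# Pre-create the NameNode and DataNode storage directories.
# (namenode -format would create them anyway, but doing it by hand
#  surfaces ownership/permission problems before the first start)
mkdir -p /usr/local/src/hadoop/tmp/dfs/name \
         /usr/local/src/hadoop/tmp/dfs/data
```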
- Format the distributed filesystem (in Hadoop 2.x, bin/hdfs namenode -format is the preferred form; bin/hadoop namenode -format still works but is deprecated)
bin/hadoop namenode -format
------- output -------
19/08/16 16:25:04 INFO util.GSet: Computing capacity for map NameNodeRetryCache
19/08/16 16:25:04 INFO util.GSet: VM type = 64-bit
19/08/16 16:25:04 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
19/08/16 16:25:04 INFO util.GSet: capacity = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /usr/local/src/hadoop/tmp/dfs/name ? (Y or N) Y
19/08/16 16:25:06 INFO namenode.FSImage: Allocated new BlockPoolId: BP-701930534-172.17.0.9-1565943906442
19/08/16 16:25:07 INFO common.Storage: Storage directory /usr/local/src/hadoop/tmp/dfs/name has been successfully formatted.
19/08/16 16:25:07 INFO namenode.FSImageFormatProtobuf: Saving image file /usr/local/src/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
19/08/16 16:25:07 INFO namenode.FSImageFormatProtobuf: Image file /usr/local/src/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 320 bytes saved in 0 seconds.
19/08/16 16:25:07 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
19/08/16 16:25:07 INFO util.ExitUtil: Exiting with status 0
19/08/16 16:25:07 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at 503ae25fe58d/172.17.0.9
************************************************************/
- Startup may fail with "Cannot assign requested address"; for one fix, see:
https://www.jianshu.com/p/ecfb791dcb3b
- Start Hadoop
root@503ae25fe58d:/usr/local/src/hadoop/sbin# ./start-dfs.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/src/hadoop/logs/hadoop-root-namenode-503ae25fe58d.out
localhost: starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-root-datanode-503ae25fe58d.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/src/hadoop/logs/hadoop-root-secondarynamenode-503ae25fe58d.out
- Check the running daemons with jps
root@503ae25fe58d:/usr/local/src/hadoop/sbin# jps
1585 SecondaryNameNode
1192 NameNode
1802 Jps
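Note that the jps output above lists no DataNode even though start-dfs.sh reported starting one — check logs/hadoop-root-datanode-*.log before going further (this also explains the "no datanode to stop" message later). Once all daemons are up, HDFS can be smoke-tested; the directory name here is just an example:

```shell
# Create a directory in HDFS and list the root to confirm HDFS responds
hdfs dfs -mkdir -p /user/test
hdfs dfs -ls /
# Summary of live/dead DataNodes and capacity
hdfs dfsadmin -report
```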
- Stop Hadoop (the sbin scripts are not on PATH, so they must be invoked with an explicit ./ prefix, as the attempts below show)
root@503ae25fe58d:/usr/local/src/hadoop/sbin# stop-dfs.sh
-bash: stop-dfs.sh: command not found
root@503ae25fe58d:/usr/local/src/hadoop/sbin# .stop-dfs.sh
-bash: .stop-dfs.sh: command not found
root@503ae25fe58d:/usr/local/src/hadoop/sbin# ./stop-dfs.sh
Stopping namenodes on [localhost]
localhost: stopping namenode
localhost: no datanode to stop
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
root@503ae25fe58d:/usr/local/src/hadoop/sbin# jps
2382 Jps
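The failed stop-dfs.sh attempts above happen because only $HADOOP_HOME/bin was added to PATH earlier, while the start/stop scripts live in sbin. Appending sbin as well (same /etc/profile approach as before) lets them be run from any directory:

```shell
# sbin holds start-dfs.sh / stop-dfs.sh; add it alongside bin
echo 'export PATH=$PATH:$HADOOP_HOME/sbin' >> /etc/profile
source /etc/profile
```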