1. Installing Hadoop
Download from the official site: https://hadoop.apache.org/releases.html
# Unpack the tarball into the target directory
tar -zxvf hadoop-3.3.6.tar.gz -C /Users/xing
# Open up permissions on the directory (777 is convenient for a local
# sandbox; chown-ing the tree to your own user is the safer choice)
sudo chmod -R 777 hadoop-3.3.6
Edit hadoop-3.3.6/etc/hadoop/hadoop-env.sh:
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_281.jdk/Contents/Home
export HADOOP_HOME=/Users/xing/hadoop-3.3.6
Configure the Hadoop environment variables
% open -e ~/.bash_profile
Add the following two lines:
export HADOOP_HOME=/Users/xing/hadoop-3.3.6
export PATH=$PATH:$HADOOP_HOME/bin
Then reload the profile:
% source ~/.bash_profile
Verify the installation:
% hadoop version
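If you prefer not to edit the profile by hand, the two export lines can be appended from the shell instead — a sketch, assuming the /Users/xing/hadoop-3.3.6 install path used throughout this guide:

```shell
# Append the Hadoop variables to ~/.bash_profile in one step
# (install path is the one chosen in this guide; adjust to yours)
cat >> ~/.bash_profile <<'EOF'
export HADOOP_HOME=/Users/xing/hadoop-3.3.6
export PATH=$PATH:$HADOOP_HOME/bin
EOF
# Reload so the current shell picks the variables up
. ~/.bash_profile
```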
1.2 Edit the configuration files
Configure hadoop-3.3.6/etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/Users/xing/hadoop-3.3.6/libexec/tmp</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>
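The file can also be written non-interactively with a heredoc. A sketch: CONF_DIR here defaults to a local directory so the snippet can be tried anywhere; for a real install point it at /Users/xing/hadoop-3.3.6/etc/hadoop.

```shell
# Write core-site.xml in one step (CONF_DIR is a placeholder default;
# for the install in this guide use hadoop-3.3.6/etc/hadoop)
CONF_DIR=${CONF_DIR:-./etc/hadoop}
mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/Users/xing/hadoop-3.3.6/libexec/tmp</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>
EOF
```

The hdfs-site.xml and mapred-site.xml files below can be written the same way.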
Configure hadoop-3.3.6/etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>127.0.0.1:50090</value>
  </property>
</configuration>
Configure hadoop-3.3.6/etc/hadoop/mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
1.3 Format HDFS
% hdfs namenode -format
Only needed before the first run.
1.4 Set up passwordless SSH login
Set the hostname to localhost
sudo scutil --set HostName localhost
Configure ssh
ssh-keygen -t rsa   (press Enter at every prompt)
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod og-wx ~/.ssh/authorized_keys
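The three ssh steps can be made idempotent so the section is safe to re-run — a sketch; -N '' gives the key an empty passphrase, matching the press-Enter flow above:

```shell
mkdir -p ~/.ssh && chmod 700 ~/.ssh
# Generate a key pair only if none exists yet (empty passphrase)
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
# Authorize the key unless it is already listed
grep -qxF "$(cat ~/.ssh/id_rsa.pub)" ~/.ssh/authorized_keys 2>/dev/null \
  || cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod og-wx ~/.ssh/authorized_keys
```

Afterwards `ssh localhost` should log you in without a password prompt.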
2. Starting Hadoop
# run from the hadoop-3.3.6/sbin directory
% ./start-dfs.sh
% ./stop-dfs.sh
Start/stop the YARN services
./start-yarn.sh
./stop-yarn.sh
Start/stop all Hadoop services (combines the two pairs above)
./start-all.sh
./stop-all.sh
Check that the daemons started with jps
% jps
17605 DataNode
21943 Jps
21847 NodeManager
21751 ResourceManager
17741 SecondaryNameNode
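A small loop over the jps output makes this check scriptable. A sketch: the daemon names are the ones a pseudo-distributed HDFS + YARN setup should run (NameNode should appear in the list as well).

```shell
# Report up/down for each daemon this setup should be running.
# jps (shipped with the JDK) lists local JVM processes by class name.
status=""
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  if jps 2>/dev/null | grep -q "$d"; then
    status="$status $d:up"
  else
    status="$status $d:down"
  fi
done
echo "daemons:$status"
```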
Web UIs
http://localhost:9870    NameNode (Hadoop 3.x default)
http://localhost:50090   SecondaryNameNode (the address configured above)
http://localhost:8088    YARN ResourceManager