Slave list file: etc/hadoop/slaves
(Note: only the master host needs a copy of this file; the slaves do not.)
Core parameters: etc/hadoop/core-site.xml
-
fs.defaultFS
Specifies the NameNode (IP address or hostname) and its port, here 54310.
-
hadoop.tmp.dir
Specifies where the DataNode slaves store temporary files.
<?xml version="1.0"?>
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://RaspberryPiHadoopMaster:54310</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/hdfs/tmp</value>
</property>
</configuration>
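Once the file is in place, you can confirm the values Hadoop actually resolves with `hdfs getconf` (this reads the same XML files; output shown is what the configuration above should produce):

```shell
# Query the resolved configuration on any node with Hadoop installed
hdfs getconf -confKey fs.defaultFS      # hdfs://RaspberryPiHadoopMaster:54310
hdfs getconf -confKey hadoop.tmp.dir    # /hdfs/tmp
```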
HDFS parameters: etc/hadoop/hdfs-site.xml
-
dfs.replication
Number of replicas stored for each file, normally 3. On a single-node pseudo-cluster it can be set to 1.
-
dfs.blocksize
HDFS block size in bytes. 5242880 is 5 MB, far below the 128 MB default of Hadoop 2.7, which suits small test files on the Pi's limited storage.
<?xml version="1.0"?>
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.blocksize</name>
<value>5242880</value>
</property>
</configuration>
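As a sanity check on the value above: dfs.blocksize is specified in bytes, and 5242880 B is exactly 5 MB:

```shell
# 5 MB expressed in bytes, matching the dfs.blocksize value above
echo $((5 * 1024 * 1024))   # prints 5242880
```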
YARN scheduler parameters: etc/hadoop/yarn-site.xml
The settings below cap CPU and memory usage, tuned for the Raspberry Pi's limited resources.
<?xml version="1.0"?>
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>4</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>1024</value>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>128</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>1024</value>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-vcores</name>
<value>1</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-vcores</name>
<value>4</value>
</property>
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
<description>Whether virtual memory limits will be enforced for containers</description>
</property>
<property>
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>4</value>
<description>Ratio between virtual memory to physical memory when setting memory limits for containers</description>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>RaspberryPiHadoopMaster:8025</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>RaspberryPiHadoopMaster:8030</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>RaspberryPiHadoopMaster:8040</value>
</property>
</configuration>
Deploy SSH keys
(This step cannot be skipped; it only takes a few commands.)
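Run on the master, the commands look roughly like this (the `pi` user and the slave hostnames are assumptions for illustration; substitute the entries from your etc/hadoop/slaves file):

```shell
# Generate a passphrase-less RSA key pair on the master (skip if one already exists)
ssh-keygen -t rsa -b 2048 -N "" -f ~/.ssh/id_rsa
# Push the public key to each slave so that start-dfs.sh can log in
# without a password prompt (hostnames here are hypothetical)
ssh-copy-id pi@RaspberryPiHadoopSlave1
ssh-copy-id pi@RaspberryPiHadoopSlave2
```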
On the master, format the HDFS filesystem:
hdfs namenode -format
(This step cannot be skipped.)
Start HDFS with start-dfs.sh
(On the first run you will be asked to confirm each host's SSH key.)
Upload the raw data file to HDFS
Copy in any English .txt file; here mine is named books.txt.
hdfs dfs -copyFromLocal books.txt /books.txt
hdfs dfs -ls /
Start YARN with start-yarn.sh
Run the wordcount example program
hadoop jar /opt/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar \
wordcount /books.txt /wordcount-result
hdfs dfs -ls /wordcount-result
hdfs dfs -cat /wordcount-result/* | head -n 20
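Conceptually, the wordcount job does the same thing as this local pipeline, shown here on a tiny stand-in file (the real job reads books.txt from HDFS and runs the counting across the cluster):

```shell
# Split text into one word per line, then count occurrences of each word
printf 'to be or not to be\n' > /tmp/books_sample.txt
tr -s ' ' '\n' < /tmp/books_sample.txt | sort | uniq -c | sort -rn
# "to" and "be" each appear twice; "or" and "not" once
```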
Check remaining HDFS disk space
hdfs dfsadmin -report
The same information is also available in the NameNode web UI at http://127.0.0.1:50070