core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://node1:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/your/tmp/dir</value>
<description>Create this tmp directory ahead of time; it is used for temporary files, otherwise they end up under /tmp</description>
</property>
<property>
<name>fs.checkpoint.period</name>
<value>300</value>
<description>The number of seconds between two periodic checkpoints</description>
</property>
<property>
<name>fs.checkpoint.dir</name>
<value>${hadoop.tmp.dir}/dfs/namesecondary</value>
<description>Determines where on the local filesystem the DFS secondary name node should store the temporary images to merge. If this is a comma-delimited list of directories then the image is replicated in all of the directories for redundancy.</description>
</property>
</configuration>
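A quick way to confirm that these values are actually picked up is hdfs getconf. A minimal sketch, assuming Hadoop's bin directory is on PATH and reusing the placeholder path from the block above:
# Create the directory referenced by hadoop.tmp.dir (placeholder path from above)
mkdir -p /your/tmp/dir
# Print the values Hadoop resolves from core-site.xml
hdfs getconf -confKey fs.defaultFS
hdfs getconf -confKey hadoop.tmp.dir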
hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value> <num_replication> </value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/data/hadoop/hdfs/namenode/</value>
</property>
</configuration>
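Here <num_replication> is a placeholder; replace it with an integer no larger than the number of DataNodes in the cluster. A minimal sketch of the first-time setup, assuming the path from dfs.namenode.name.dir above (note that formatting erases existing NameNode metadata, so do it only on a fresh cluster):
# Create the NameNode metadata directory before the first start
mkdir -p /data/hadoop/hdfs/namenode/
# Format the NameNode once, on initial setup only
hdfs namenode -format
# After the cluster is running, confirm the effective replication factor
hdfs getconf -confKey dfs.replication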
yarn-site.xml:
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>node1:8025</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>node1:8030</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>node1:8050</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>12288</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>36864</value>
<description>Maximum memory a single task can request; the default is 8192 MB</description>
</property>
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<property>
<name>yarn.log.server.url</name>
<value>http://node1:19888/jobhistory/logs</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>node1</value>
</property>
</configuration>
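Once the ResourceManager and NodeManagers are up, the memory settings above can be sanity-checked from the command line. A sketch, assuming the default ResourceManager web UI port 8088, which is not set in this file:
# List all registered NodeManagers; the reported memory should match yarn.nodemanager.resource.memory-mb
yarn node -list -all
# The ResourceManager REST API also exposes cluster-wide memory figures
curl http://node1:8088/ws/v1/cluster/metrics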
mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
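The yarn.log.server.url set above points at the MapReduce JobHistoryServer on port 19888, so that daemon must be running for aggregated logs to be viewable. A sketch, assuming a Hadoop 2.x layout (on Hadoop 3.x the equivalent is mapred --daemon start historyserver):
# Start the JobHistoryServer
mr-jobhistory-daemon.sh start historyserver
# Verify the web UI answers on the port referenced by yarn.log.server.url
curl http://node1:19888/jobhistory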
Where Hadoop stores its PID files:
(1) Edit hadoop-env.sh
Change the settings below; if they are not present, simply add them:
export HADOOP_PID_DIR=/ROOT/server/pids_hadoop_hbase
export HADOOP_SECURE_DN_PID_DIR=/ROOT/server/pids_hadoop_hbase
The settings above affect where the PID files of the following processes are stored:
NameNode
DataNode
SecondaryNameNode
(2) Edit mapred-env.sh
Change (or add):
export HADOOP_MAPRED_PID_DIR=/ROOT/server/pids_hadoop_hbase
The setting above affects where the PID file of the following process is stored:
JobHistoryServer
(3) Edit yarn-env.sh
Change it, or add it if the setting does not exist; I did not find an existing PID environment variable in this file, so I simply added it:
export YARN_PID_DIR=/ROOT/server/pids_hadoop_hbase
The setting above affects where the PID files of the following processes are stored:
NodeManager
ResourceManager
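A minimal way to verify this, assuming the daemons are started with the stock start-dfs.sh / start-yarn.sh scripts (the exact PID file names depend on the user running the daemons):
# Create the PID directory on every node before starting any daemon
mkdir -p /ROOT/server/pids_hadoop_hbase
# After start-dfs.sh and start-yarn.sh, the PID files should appear here,
# typically named like hadoop-<user>-namenode.pid or yarn-<user>-resourcemanager.pid
ls /ROOT/server/pids_hadoop_hbase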