https://my.oschina.net/jackieyeah/blog/657032
List the currently running jobs:
hadoop job -list
Kill a job:
hadoop job -kill job_1528518736003_0011
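In Hadoop 2.x the hadoop job subcommands are deprecated in favor of the yarn CLI; the equivalents are:
yarn application -list
yarn application -kill application_1528518736003_0011   # same numeric suffix as the job_ id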
Set permissions on files (recursively):
hadoop fs -chmod -R 755 /*.data
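To confirm the change took effect, list the files and check the permission column:
hadoop fs -ls /*.data   # entries should now show rwxr-xr-x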
Check the Java version:
java -version
Edit ~/.bashrc and add:
export JAVA_HOME=/usr/local/src/jdk1.7.0_79
export HADOOP_HOME=/usr/local/src/hadoop-2.6.1
export PATH="$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin"
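Reload the file and verify that both tools are on the PATH:
source ~/.bashrc
java -version      # expect 1.7.0_79
hadoop version     # expect Hadoop 2.6.1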
Activate the Python environment (presumably the conda env used for streaming jobs):
source activate py27tf
etc/hadoop/hadoop-env.sh
Add:
export JAVA_HOME=/usr/local/src/jdk1.7.0_79
etc/hadoop/yarn-env.sh
export JAVA_HOME=/usr/local/src/jdk1.7.0_79
In etc/hadoop/slaves, add the worker nodes:
vm2
vm3
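One way to write the file in a single step (vm2 and vm3 must be resolvable, e.g. via /etc/hosts):
cat > etc/hadoop/slaves <<EOF
vm2
vm3
EOF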
Create the tmp directory:
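For example, matching the hadoop.tmp.dir value set below:
mkdir -p /usr/local/src/hadoop-2.6.1/tmp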
core-site.xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://vm1:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/src/hadoop-2.6.1/tmp</value>
</property>
Create the data and name directories:
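For example, matching the paths used in hdfs-site.xml below:
mkdir -p /usr/local/src/hadoop-2.6.1/tmp/dfs/name
mkdir -p /usr/local/src/hadoop-2.6.1/tmp/dfs/data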
hdfs-site.xml
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>vm1:9001</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/usr/local/src/hadoop-2.6.1/tmp/dfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/usr/local/src/hadoop-2.6.1/tmp/dfs/data</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
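Note that dfs.replication should not exceed the number of DataNodes; with only vm2 and vm3 in slaves, 2 may be the safer value. The effective setting can be checked with:
hdfs getconf -confKey dfs.replication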
mapred-site.xml
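The Hadoop 2.6.1 tarball ships only a template for this file, so create it first:
cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml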
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
yarn-site.xml
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
  <name>yarn.resourcemanager.address</name>
  <value>vm1:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>vm1:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>vm1:8035</value>
</property>
<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>vm1:8033</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>vm1:8088</value>
</property>
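Optionally, before starting, confirm none of these ports are already in use on vm1 (netstat options vary by distro):
netstat -tlnp | grep -E ':(8030|8032|8033|8035|8088)'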
Copy the hadoop-2.6.1 directory to the other two machines:
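For example, assuming passwordless SSH between the nodes is already configured:
scp -r /usr/local/src/hadoop-2.6.1 vm2:/usr/local/src/
scp -r /usr/local/src/hadoop-2.6.1 vm3:/usr/local/src/
# remember to repeat the ~/.bashrc edits on vm2 and vm3 as well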
Format the NameNode:
hadoop namenode -format
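The hadoop namenode form still works in 2.x but prints a deprecation warning; the preferred form is:
hdfs namenode -format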
Start the cluster:
./sbin/start-all.sh
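To verify the cluster is up, check the daemons on each node with jps and open the default web UIs:
jps   # on vm1: NameNode, SecondaryNameNode, ResourceManager
jps   # on vm2/vm3: DataNode, NodeManager
# NameNode UI:        http://vm1:50070
# ResourceManager UI: http://vm1:8088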