Hadoop 2.x pseudo-distributed installation (single machine):
1. Prerequisites: set a hostname for the machine (use the hostname, not the IP), install the JDK, download and extract the Hadoop .tar.gz package, and configure the Java and Hadoop environment variables in /etc/profile (vi /etc/profile):
export JAVA_HOME=/usr/java/jdk1.7.0_67
export HADOOP_PREFIX=/opt/sxt/hadoop-2.6.5
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin
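After editing /etc/profile, reload it and sanity-check both commands (this assumes the Hadoop package has already been extracted to $HADOOP_PREFIX):
source /etc/profile
java -version
hadoop version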
Set up passwordless SSH login to the local machine:
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
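To verify passwordless login, ssh to the local hostname; the first connection may ask to confirm the host key, but no password prompt should appear:
ssh node01
exit
If a password is still requested, check the permissions (chmod 700 ~/.ssh and chmod 600 ~/.ssh/authorized_keys).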
2. Configuration: edit the *-env.sh files under $HADOOP_PREFIX/etc/hadoop/ (hadoop-env.sh, yarn-env.sh, mapred-env.sh) and set the JAVA_HOME configured above in each of them, as shown below:
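For example, in hadoop-env.sh the JAVA_HOME line would read (path assumed to match the JDK installed in step 1):
export JAVA_HOME=/usr/java/jdk1.7.0_67
Set the same line in yarn-env.sh and mapred-env.sh.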
3. Edit core-site.xml (same directory) and add the following inside the <configuration> element:
<property>
<name>fs.defaultFS</name>
<value>hdfs://node01:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/var/sxt/hadoop/local</value>
</property>
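Since fs.defaultFS refers to the hostname node01, that name must resolve on this machine. A minimal /etc/hosts entry, assuming the machine's IP is 172.168.1.108 as in the browser URL of step 7:
172.168.1.108 node01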
4. Edit hdfs-site.xml and add the following inside the <configuration> element:
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>node01:50090</value>
</property>
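To confirm the settings are picked up, the effective configuration can be queried with the standard getconf subcommand of the hdfs CLI:
hdfs getconf -confKey fs.defaultFS
hdfs getconf -confKey dfs.replication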
5. Edit the slaves file and set its content to the current hostname, i.e. a single line reading node01: vi slaves
6. Format HDFS: hdfs namenode -format
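Formatting creates the NameNode metadata under hadoop.tmp.dir; with the core-site.xml above, the default location can be checked with:
ls /var/sxt/hadoop/local/dfs/name/current
The format output should contain a line like "Storage directory /var/sxt/hadoop/local/dfs/name has been successfully formatted." Do not re-format a cluster that already holds data; it wipes the metadata.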
7. Start HDFS: start-dfs.sh; run jps to check that the three processes NameNode, DataNode, and SecondaryNameNode have started (sample output below); the web UI can then be opened in a browser at http://172.168.1.108:50070
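A typical jps listing on a healthy pseudo-distributed node looks like this (the PIDs will differ):
1221 NameNode
1340 DataNode
1518 SecondaryNameNode
1702 Jps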
8. Test:
Create a directory: hdfs dfs -mkdir /user
List a directory: hdfs dfs -ls /user
Upload a file: hdfs dfs -put file / (file is the absolute path of a local file, / is the target path in HDFS; example: hdfs dfs -put /opt/sxt/hadoop-2.6.5.tar.gz /user/root)
Upload a file with a specific block size: hdfs dfs -D dfs.blocksize=1048576 -put test2 /user
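To verify the upload and the 1 MB block size, fsck can list a file's blocks (path assumes test2 was uploaded to /user as above):
hdfs fsck /user/test2 -files -blocks
A download round-trip is a simple final check:
hdfs dfs -get /user/test2 /tmp/test2.copy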