Environment Setup

Author: tonyzmy | Published 2018-11-30 19:49

1. Install the virtual machines, choosing the GNOME desktop and the development tools, and set up the root user.
2. Configure the environment:

  • Remove the Java packages bundled with the system (a removal loop is sketched below).
      rpm -qa | grep java    # list installed packages whose name contains "java"
      rpm -e --nodeps <package-name>    # remove each Java package found above
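A minimal sketch of the removal, assuming every package matched by the query really should go (review the list before running it):

for pkg in $(rpm -qa | grep java); do    # loop over all packages containing "java"
    rpm -e --nodeps "$pkg"               # remove each one, ignoring dependencies
done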
  • Edit the network interface configuration (e.g. /etc/sysconfig/network-scripts/ifcfg-ens33) and switch to a static address.
TYPE=Ethernet
BOOTPROTO=static    # use a static address
DEFROUTE=yes
IPV6INIT=no
NAME=ens33
UUID=ee3c79b2-594c-41d2-aa3a-b487ebfa936d   # generate a fresh value with uuidgen
DEVICE=ens33
ONBOOT=yes       # bring the interface up at boot
IPADDR=192.168.81.20   # in VMware: Edit -> Virtual Network Editor -> add a VMnet8 network in NAT mode; the subnet and mask are generated automatically, pick an address from the middle of that range
NETMASK=255.255.255.0
GATEWAY=192.168.81.2    # the gateway is shown in the VMnet8 NAT settings
DNS1=202.114.32.3    # on Windows, ipconfig /all shows the local DNS servers
DNS2=202.114.32.1
  • Restart the network and stop the NetworkManager service, then verify connectivity with ping (see the check below).
systemctl restart network
systemctl stop NetworkManager
systemctl disable NetworkManager
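A quick connectivity check against the values configured above; the external hostname is only an example, any reachable site will do:

ip addr show ens33         # should list 192.168.81.20
ping -c 3 192.168.81.2     # is the gateway reachable?
ping -c 3 www.baidu.com    # do DNS resolution and outbound traffic work?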

  • Edit the hostname (takes effect after a reboot):

vim /etc/hostname
hostname master    # sets the hostname temporarily, for the current session only

  • Configure hosts and network (a full hosts listing for all three nodes follows below):

vim /etc/hosts    # one "ip hostname" pair per line, e.g. 192.168.81.20 master
vim /etc/sysconfig/network    # NETWORKING=yes and HOSTNAME=master
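The /etc/hosts entries for the three nodes used throughout this walkthrough (the same addresses reappear in the Windows hosts file at the end):

192.168.81.20 master
192.168.81.21 slave1
192.168.81.22 slave2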
  • Disable SELinux and the firewall (a quick check follows below):
vim /etc/selinux/config   # set SELINUX=disabled; takes effect after a reboot
systemctl stop firewalld
systemctl disable firewalld
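To confirm both are off without waiting for a reboot (setenforce only affects the current session):

setenforce 0                     # switch SELinux to permissive immediately
getenforce                       # should print Permissive (or Disabled after a reboot)
systemctl is-active firewalld    # should print inactive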

  • Set up passwordless SSH trust between the nodes (a verification sketch follows below):

ssh-keygen -t rsa    # key files are generated under /root/.ssh/
cat /root/.ssh/id_rsa.pub > /root/.ssh/authorized_keys    # run on every VM
ssh slave1 cat /root/.ssh/authorized_keys >> /root/.ssh/authorized_keys    # run on master
ssh slave2 cat /root/.ssh/authorized_keys >> /root/.ssh/authorized_keys    # run on master
ssh master cat /root/.ssh/authorized_keys > /root/.ssh/authorized_keys    # run on each slave
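A quick check from master; each command should print the remote hostname without prompting for a password (repeat from each slave to verify the other direction):

ssh slave1 hostname
ssh slave2 hostname
ssh master hostname    # master to itself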

  • Transfer the installation packages to /usr/local/src/ via FTP (scp also works; see the sketch below), then unpack them:

tar -zxvf jdk-8u172-linux-x64.tar.gz
tar -zxvf hadoop-2.6.5.tar.gz
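If no FTP client is handy, scp from the machine holding the downloads does the same job; a sketch, assuming the archives sit in that machine's current directory:

scp jdk-8u172-linux-x64.tar.gz hadoop-2.6.5.tar.gz root@192.168.81.20:/usr/local/src/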

  • Configure the environment variables:

vim ~/.bashrc   # append the following lines

export JAVA_HOME=/usr/local/src/jdk1.8.0_172
export HADOOP_HOME=/usr/local/src/hadoop-2.6.5
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

source ~/.bashrc   # apply the changes; verify with the commands below
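Both commands should now resolve from the updated PATH and print their version banners; if either fails, re-check the paths in ~/.bashrc:

java -version     # should report 1.8.0_172
hadoop version    # should report 2.6.5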
  • The JDK itself needs no further configuration, but Hadoop does (the directories these files reference are created in a sketch after the listings):
//Files to configure under hadoop-2.6.5/etc/hadoop/:
//1.slaves
slave1
slave2
//2.core-site.xml
<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://master:9000</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>file:/usr/local/src/hadoop-2.6.5/tmp</value>
        </property>
</configuration>
//3.hdfs-site.xml
<configuration>
        <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>master:9001</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>file:/usr/local/src/hadoop-2.6.5/dfs/name</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>file:/usr/local/src/hadoop-2.6.5/dfs/data</value>
        </property>
        <property>
                <name>dfs.replication</name>
                <value>3</value>
        </property>
</configuration>
//4.mapred-site.xml (first run: mv mapred-site.xml.template mapred-site.xml)
<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
</configuration>
//5.yarn-site.xml
<configuration>

<!-- Site specific YARN configuration properties -->
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
                <value>org.apache.hadoop.mapreduce.ShuffleHandler</value>
        </property>
        <property>
                <name>yarn.resourcemanager.address</name>
                <value>master:8032</value>
        </property>
        <property>
                <name>yarn.resourcemanager.scheduler.address</name>
                <value>master:8030</value>
        </property>
        <property>
                <name>yarn.resourcemanager.resource-tracker.address</name>
                <value>master:8035</value>
        </property>
        <property>
                <name>yarn.resourcemanager.admin.address</name>
                <value>master:8033</value>
        </property>
        <property>
                <name>yarn.resourcemanager.webapp.address</name>
                <value>master:8088</value>
        </property>
</configuration>
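The paths referenced in core-site.xml and hdfs-site.xml do not exist yet; creating them up front avoids surprises (Hadoop can usually create them itself on format/start, so this is just a precaution):

mkdir -p /usr/local/src/hadoop-2.6.5/tmp
mkdir -p /usr/local/src/hadoop-2.6.5/dfs/name
mkdir -p /usr/local/src/hadoop-2.6.5/dfs/data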

  • Copy the configured Hadoop tree to the slaves (see the note below about the JDK and ~/.bashrc):
scp -rp /usr/local/src/hadoop-2.6.5/ root@slave1:/usr/local/src/
scp -rp /usr/local/src/hadoop-2.6.5/ root@slave2:/usr/local/src/
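The slaves also need the JDK and the same environment variables. If they were not installed there separately, copying them the same way should work (an assumption; adjust the paths if the JDK lives elsewhere):

scp -rp /usr/local/src/jdk1.8.0_172/ root@slave1:/usr/local/src/
scp -rp /usr/local/src/jdk1.8.0_172/ root@slave2:/usr/local/src/
scp -p ~/.bashrc root@slave1:~/    # picked up automatically at the next login
scp -p ~/.bashrc root@slave2:~/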

  • Format the NameNode and start the cluster:

/usr/local/src/hadoop-2.6.5/bin/hadoop namenode -format
start-all.sh

  • Work with files on HDFS:

hadoop fs -ls /
hadoop fs -put <local-file> <hdfs-path>   # e.g. hadoop fs -put anaconda-ks.cfg /
  • Verify that the processes are running (master below; the slaves are checked afterwards):
jps   # normal output on master looks like this
6084 Jps
2553 NameNode
2059 ResourceManager
2732 SecondaryNameNode
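The slaves can be checked over SSH as well; on a healthy cluster each one should show a DataNode and a NodeManager besides Jps (this assumes ~/.bashrc on the slaves exports the same PATH):

ssh slave1 jps
ssh slave2 jps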

  • View in a browser:
YARN (ResourceManager): http://192.168.81.20:8088
HDFS (NameNode): http://192.168.81.20:50070
On Windows, add the following entries to the hosts file (typically C:\Windows\System32\drivers\etc\hosts):
192.168.81.20 master
192.168.81.21 slave1
192.168.81.22 slave2
After that, master:8088 and master:50070 can be opened directly.
