1. Pre-installation preparation
1.1 Checking the server configuration
Check the operating system:
[root@dc-hadoop118 ~]# lsb_release -a
LSB Version: :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
Distributor ID: CentOS
Description: CentOS release 6.5 (Final)
Release: 6.5
Codename: Final
Check the memory:
[root@dc-hadoop118 ~]# free -g
total used free shared buffers cached
Mem: 31 31 0 0 0 28
-/+ buffers/cache: 2 28
Swap: 31 0 30
Check the disks:
[root@dc-hadoop118 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 197G 2.3G 185G 2% /
tmpfs 16G 0 16G 0% /dev/shm
/dev/sda1 2.0G 95M 1.8G 5% /boot
/dev/sda5 1.6T 868G 655G 57% /data1
/dev/sdb1 1.8T 861G 880G 50% /data2
/dev/sdc1 1.8T 860G 881G 50% /data3
/dev/sdd1 1.8T 844G 897G 49% /data4
/dev/sde1 1.8T 849G 892G 49% /data5
/dev/sdf1 1.8T 841G 900G 49% /data6
/dev/sdg1 1.8T 847G 894G 49% /data7
/dev/sdh1 1.8T 853G 888G 50% /data8
Check the CPUs
1. Number of physical CPUs:
[root@dc-hadoop118 ~]# cat /proc/cpuinfo |grep "physical id"|sort |uniq|wc -l
2
2 physical CPUs.
2. Number of logical CPUs:
[root@dc-hadoop118 ~]# cat /proc/cpuinfo |grep "processor"|wc -l
24
24 logical CPUs.
3. Cores per physical CPU:
[root@dc-hadoop118 ~]# cat /proc/cpuinfo |grep "cores"|uniq
cpu cores : 6
Each physical CPU has 6 cores.
4. CPU clock speed:
[root@dc-hadoop118 ~]# cat /proc/cpuinfo |grep MHz|uniq
cpu MHz : 2299.895
5. CPU model:
[root@dc-hadoop118 ~]# cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c
24 Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz
24 logical CPUs; the CPU model is the Intel Xeon E5-2630.
6. Check whether the CPU is running in 64-bit mode:
[root@dc-hadoop118 ~]# getconf LONG_BIT
64
The CPU is running in 64-bit mode.
7. Check the machine model:
[root@dc-hadoop118 ~]# dmidecode | grep "Product Name"
Product Name: PowerEdge R720
Product Name: 0X6FFV
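As a sanity check, the numbers above are mutually consistent: with Hyper-Threading enabled, logical CPUs = physical sockets × cores per socket × threads per core. A tiny sketch (the `expected_logical` helper is ours, not a system tool; the values come from the output above):

```shell
# expected_logical: sockets * cores-per-socket * threads-per-core.
expected_logical() {
  echo $(( $1 * $2 * $3 ))
}

# 2 physical CPUs, 6 cores each, Hyper-Threading on (2 threads/core):
expected_logical 2 6 2   # → 24, matching the logical CPU count above
```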
1.2 Basic settings to change on every server
Step 1. Edit the NIC's IP address configuration file:
vi /etc/sysconfig/network-scripts/ifcfg-eth0
BOOTPROTO=static
IPADDR=192.168.137.2
NETMASK=255.255.255.0
GATEWAY=192.168.137.1
ONBOOT=yes
Step 2. Edit the hostname and default-gateway configuration file:
vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=dc-hadoop1
GATEWAY=192.168.137.1
Step 3. Edit the DNS resolver configuration file:
vi /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4
Step 4. After the three changes above, restart the network service:
service network restart
Then ping www.qq.com from the machine; if it responds, the box has Internet access.
You can then run yum update to refresh packages from the repositories.
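Every node needs the same ifcfg layout with a different address, so a small generator can stamp the template out per node. `make_ifcfg` below is a hypothetical helper, not a system tool:

```shell
# make_ifcfg: emit an ifcfg-eth0 body for a given IP ($1) and gateway ($2),
# matching the hand-edited file shown in Step 1 above.
make_ifcfg() {
  cat <<EOF
DEVICE=eth0
BOOTPROTO=static
IPADDR=$1
NETMASK=255.255.255.0
GATEWAY=$2
ONBOOT=yes
EOF
}

# Print the config for the example node; redirect into
# /etc/sysconfig/network-scripts/ifcfg-eth0 on a real machine.
make_ifcfg 192.168.137.2 192.168.137.1
```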
-----------------------------------------------------------------------------------------------------------------------
Basic server preparation: do some groundwork on every server first.
http://blog.csdn.net/lxpbs8851/article/details/8489141
Raise the open-file and process limits:
vi /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
vi /etc/security/limits.d/90-nproc.conf
* soft nproc 131072
root soft nproc unlimited
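To check that limits lines like the ones above are well-formed, a small parser sketch can pull out a configured value (the `get_limit` helper and `/tmp/limits.test` file are illustrative only, not system tools):

```shell
# get_limit FILE DOMAIN TYPE ITEM: print the value configured for that
# (domain, type, item) triple in a limits.conf-style file.
get_limit() {
  awk -v d="$2" -v t="$3" -v i="$4" '$1==d && $2==t && $3==i {print $4}' "$1"
}

# Build a scratch copy of the entries above and query it:
printf '%s\n' '* soft nofile 65536' '* hard nofile 65536' \
              '* soft nproc 131072' '* hard nproc 131072' > /tmp/limits.test
get_limit /tmp/limits.test '*' soft nofile   # → 65536
```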
1. Turn off the firewall:
service iptables stop;
chkconfig iptables off;
2. Disable SELinux:
vi /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled,
then reboot the machine.
3. Enable public-key authentication for SSH:
vi /etc/ssh/sshd_config
Change PubkeyAuthentication no to PubkeyAuthentication yes,
then restart the sshd service:
service sshd restart
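The SELinux edit in step 2 above can also be done non-interactively with sed; shown here against a scratch copy so it is safe to try (on a real node the target is /etc/selinux/config, and a reboot is still needed afterwards):

```shell
# Apply the SELinux change to a scratch copy of the config line.
echo 'SELINUX=enforcing' > /tmp/selinux.test
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /tmp/selinux.test
cat /tmp/selinux.test   # → SELINUX=disabled
```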
4. Minimize swapping. On each node run:
sysctl -w vm.swappiness=0
Note that sysctl -w does not survive a reboot; to make the setting permanent, also add vm.swappiness = 0 to /etc/sysctl.conf.
5. Set up time synchronization:
yum -y install ntp;
ntpdate time.nist.gov
If this reports an error like "18 Apr 18:32:50 ntpdate[18820]: no server suitable for synchronization found",
run: service ntpdate restart
yum -y install openssh-clients
yum -y install vim
Update /etc/hosts on the whole batch of servers:
vi /etc/hosts
192.168.0.30 dc-hadoop30
192.168.0.31 dc-hadoop31
192.168.0.32 dc-hadoop32
Set the timezone:
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
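The hosts file has to reach every node. A loop like the following pushes it out (shown as a dry run: the echo only prints the scp commands; remove it to actually copy; hostnames are the example nodes from the snippet above):

```shell
# Dry run: print the scp commands that would push /etc/hosts to each node.
NODES="dc-hadoop30 dc-hadoop31 dc-hadoop32"
for h in $NODES; do
  echo scp /etc/hosts root@$h:/etc/hosts
done
```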
1.3 Passwordless SSH from the master node to every slave
prepare1: set up passwordless SSH
Set up passwordless SSH from dc-hadoop1 to the other servers.
On the master node:
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys;
chmod 700 ~/.ssh;
chmod 600 ~/.ssh/authorized_keys;
Test with ssh localhost; it should log in without a password.
On dc-hadoop2 and dc-hadoop3, run ssh-keygen -t rsa as well (this step is required: it creates the /root/.ssh/ directory with the correct permissions).
Then, on dc-hadoop1, run:
scp ~/.ssh/id_rsa.pub root@dc-hadoop139:/root/.ssh/id_rsa.pub
scp ~/.ssh/id_rsa.pub root@dc-hadoop140:/root/.ssh/id_rsa.pub
On each slave node (dc-hadoop2, dc-hadoop3):
cat ~/.ssh/id_rsa.pub>>~/.ssh/authorized_keys;
chmod 600 ~/.ssh/authorized_keys;
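To confirm the setup worked, ssh with BatchMode=yes fails instead of prompting when key-based authentication is broken, which makes it a quick automated check. Shown as a dry run that only prints the commands; remove the echo to actually test:

```shell
# Dry run: print the ssh commands used to verify passwordless login.
# -o BatchMode=yes makes ssh exit with an error rather than ask for a
# password, so any failure means key auth is not working on that node.
SLAVES="dc-hadoop2 dc-hadoop3"
for h in $SLAVES; do
  echo ssh -o BatchMode=yes root@$h hostname
done
```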
---------------------------------------------------------------------------------------------------------------------
If you copy the following files from the master node to another node, that node also becomes a passwordless master:
[root@dc-hadoop110 .ssh]# scp -r * root@60.191.57.130:/root/.ssh/
Passwordless SSH also works for non-root accounts:
==================================================================================
Set up passwordless SSH from the tools account on 142 to the tools account on 55.
On 142:
ssh-keygen -t rsa
cat /home/tools/.ssh/id_rsa.pub >> /home/tools/.ssh/authorized_keys;
chmod 700 /home/tools/.ssh/;
chmod 600 /home/tools/.ssh/authorized_keys;
On 55:
useradd -m tools;
ssh-keygen -t rsa
Back on 142, copy the public key over (note this overwrites 55's own id_rsa.pub):
scp /home/tools/.ssh/id_rsa.pub root@dc-hadoop55:/home/tools/.ssh/id_rsa.pub
Then on 55:
cat /home/tools/.ssh/id_rsa.pub>>/home/tools/.ssh/authorized_keys;
chmod 600 /home/tools/.ssh/authorized_keys;
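On systems that ship it (it comes with openssh-clients), ssh-copy-id collapses the scp/cat/chmod sequence above into one step: it appends the given public key to the remote account's authorized_keys and fixes the permissions. Shown as a dry run that just prints the command:

```shell
# Dry run: the one-command equivalent of the manual key copy above.
echo ssh-copy-id -i /home/tools/.ssh/id_rsa.pub tools@dc-hadoop55
```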
1.4 Installing the JDK
Create an /opt/app directory on every server node:
mkdir /opt/app;
Upload jdk-7u45-linux-x64.tar.gz to /opt/app on dc-hadoop1, unpack it, then copy it to the other nodes:
tar -zxvf jdk-7u45-linux-x64.tar.gz
scp -r /opt/app/jdk1.7.0_45/ root@dc-hadoop139:/opt/app/;
scp -r /opt/app/jdk1.7.0_45/ root@dc-hadoop140:/opt/app/;
Edit the environment variables:
vi /etc/profile
Append the following at the end of the file:
export JAVA_HOME=/opt/app/jdk1.7.0_45
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH
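The export lines above prepend $JAVA_HOME/bin to PATH, so this JDK's java shadows any system java. A quick check of the ordering, simulated without touching /etc/profile:

```shell
# Simulate the PATH ordering from /etc/profile: prepending $JAVA_HOME/bin
# means the new JDK's binaries are found before anything already on PATH.
JAVA_HOME=/opt/app/jdk1.7.0_45
NEW_PATH="$JAVA_HOME/bin:$PATH"
echo "$NEW_PATH" | cut -d: -f1   # → /opt/app/jdk1.7.0_45/bin
```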
prepare2: distribute /etc/hosts and /etc/profile to the other nodes:
scp -r /etc/profile root@dc-hadoop139:/etc/;
scp -r /etc/profile root@dc-hadoop140:/etc/;
source /etc/profile
prepare3:
The purpose of this step is that installing the rpm below automatically generates an /etc/yum.repos.d/cloudera-cdh4.repo file.
That file configures the repository path for installing Hadoop; later we will set up a local repository and install Hadoop via yum.
Copy cloudera-cdh-4-0.x86_64.rpm to the /root directory on every server node:
scp cloudera-cdh-4-0.x86_64.rpm root@dc-hadoop139:/root/;
scp cloudera-cdh-4-0.x86_64.rpm root@dc-hadoop140:/root/;
On every machine in the cluster, run the following commands:
yum --nogpgcheck localinstall cloudera-cdh-4-0.x86_64.rpm
rpm --import http://archive.cloudera.com/cdh4/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera
vi /etc/yum.repos.d/cloudera-cdh4.repo
baseurl=http://archive.cloudera.com/cdh4/redhat/6/x86_64/cdh/4/
Change it to: baseurl=http://archive.cloudera.com/cdh4/redhat/6/x86_64/cdh/4.2.2/
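The baseurl change above can also be scripted with sed; demonstrated against a scratch copy (on a real node the target is /etc/yum.repos.d/cloudera-cdh4.repo):

```shell
# Rewrite the pinned CDH version in a scratch copy of the repo line.
echo 'baseurl=http://archive.cloudera.com/cdh4/redhat/6/x86_64/cdh/4/' > /tmp/repo.test
sed -i 's|cdh/4/|cdh/4.2.2/|' /tmp/repo.test
cat /tmp/repo.test   # the baseurl now ends in cdh/4.2.2/
```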