Main content of this section:
Impala environment deployment
1. System environment:
OS: CentOS Linux release 7.5.1804 (Core)
CPU: 2 cores
Memory: 1 GB
Running user: root
JDK version: 1.8.0_252
Hadoop version: cdh5.16.2
2. Planned roles for each cluster node:
172.26.37.245 node1.hadoop.com---->namenode,zookeeper,journalnode,hadoop-hdfs-zkfc,resourcemanager,historyserver,hbase,hbase-master,hive,hive-metastore,hive-server2,hive-hbase,sqoop,impala,impala-server,impala-state-store,impala-catalog
172.26.37.246 node2.hadoop.com---->datanode,zookeeper,journalnode,nodemanager,hadoop-client,mapreduce,hbase-regionserver,impala,impala-server,hive
172.26.37.247 node3.hadoop.com---->datanode,nodemanager,hadoop-client,mapreduce,hive,mysql-server,impala,impala-server
172.26.37.248 node4.hadoop.com---->namenode,zookeeper,journalnode,hadoop-hdfs-zkfc,hive,hive-server2,impala-shell
3. Environment notes:
This round adds the following components to the existing cluster:
172.26.37.245 node1.hadoop.com---->impala,impala-server,impala-state-store,impala-catalog
172.26.37.246 node2.hadoop.com---->impala,impala-server,hive
172.26.37.247 node3.hadoop.com---->impala,impala-server
172.26.37.248 node4.hadoop.com---->impala-shell
I. Installation
On node1
# yum -y install impala impala-server impala-state-store impala-catalog
On node2
# yum -y install impala impala-server
On node3
# yum -y install impala impala-server
On node4
# yum -y install impala-shell
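As a quick sanity check (just a sketch, assuming yum installed the packages from the CDH repository), list the Impala packages on node1 to node3, and check the shell on node4:
# rpm -qa | grep impala
# impala-shell --version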
II. Configuration
On node1
# cd /etc/impala/conf
# cp /usr/lib/hive/conf/hive-site.xml ./
# cp /etc/hadoop/conf/core-site.xml ./
# cp /etc/hadoop/conf/hdfs-site.xml ./
# cp /usr/lib/hbase/conf/hbase-site.xml ./
Push the Hive configuration to node2 (run on node1; scp is required to copy to a remote host)
# scp /usr/lib/hive/conf/hive-site.xml root@node2.hadoop.com:/usr/lib/hive/conf/hive-site.xml
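Optionally verify that the file arrived on node2 (assuming passwordless SSH for root, which the scp above also relies on):
# ssh root@node2.hadoop.com ls -l /usr/lib/hive/conf/hive-site.xml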
On node2
# cd /etc/impala/conf
# cp /usr/lib/hive/conf/hive-site.xml ./
# cp /etc/hadoop/conf/core-site.xml ./
# cp /etc/hadoop/conf/hdfs-site.xml ./
# cp /usr/lib/hbase/conf/hbase-site.xml ./
On node3
# cd /etc/impala/conf
# cp /usr/lib/hive/conf/hive-site.xml ./
# cp /etc/hadoop/conf/core-site.xml ./
# cp /etc/hadoop/conf/hdfs-site.xml ./
# cp /usr/lib/hbase/conf/hbase-site.xml ./
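After copying, node1, node2 and node3 should each hold the same four files under /etc/impala/conf; a quick check (sketch):
# ls -l /etc/impala/conf/core-site.xml /etc/impala/conf/hdfs-site.xml /etc/impala/conf/hive-site.xml /etc/impala/conf/hbase-site.xml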
On node1, node2, and node3
# cp -p /etc/impala/conf/hdfs-site.xml /etc/impala/conf/hdfs-site.xml.20200705
# vi /etc/impala/conf/hdfs-site.xml
Add the following:
<property>
<name>dfs.client.read.shortcircuit</name>
<value>true</value>
</property>
<property>
<name>dfs.domain.socket.path</name>
<value>/var/run/hdfs-sockets/dn._PORT</value>
</property>
<property>
<name>dfs.client.file-block-storage-locations.timeout.millis</name>
<value>10000</value>
</property>
# mkdir /var/run/hdfs-sockets/
# chown -R hdfs:hdfs /var/run/hdfs-sockets/
# usermod -a -G hadoop impala
# usermod -a -G hdfs impala
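To confirm the ownership and group membership just configured (sketch; the new groups only apply to impala processes started afterwards):
# ls -ld /var/run/hdfs-sockets/
# id impala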
# vi /etc/impala/conf/hdfs-site.xml
Add the following:
<property>
<name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
<value>true</value>
</property>
# cp -p /etc/hadoop/conf/hdfs-site.xml /etc/hadoop/conf/hdfs-site.xml.20200705
# vi /etc/hadoop/conf/hdfs-site.xml
Add the following:
<property>
<name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
<value>true</value>
</property>
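The short-circuit read and block-metadata settings are only read at DataNode startup, so restart the HDFS DataNode on the DataNode hosts (node2 and node3 in this layout) before starting Impala (a sketch, assuming the CDH packaged service name hadoop-hdfs-datanode):
# service hadoop-hdfs-datanode restart
# service hadoop-hdfs-datanode status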
III. Starting the services
On node1
# service impala-state-store start
# service impala-state-store status
# service impala-catalog start
# service impala-catalog status
# service impala-server start
# service impala-server status
On node2
# service impala-server start
# service impala-server status
On node3
# service impala-server start
# service impala-server status
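Once the daemons are up, the listening ports can be checked (a sketch; 21000/22000 belong to impalad, 24000 to the statestore on node1, and 25000/25010/25020 are the debug web UIs of impalad, statestore and catalog; use ss -tnlp if netstat is not available):
# netstat -tnlp | grep -E '21000|22000|24000|25000'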
IV. Testing
On node4
# vi /etc/default/impala
Add the following:
IMPALA_CATALOG_SERVICE_HOST=node1.hadoop.com
IMPALA_STATE_STORE_HOST=node1.hadoop.com
IMPALA_STATE_STORE_PORT=24000
IMPALA_BACKEND_PORT=22000
IMPALA_LOG_DIR=/var/log/impala
IMPALA_CATALOG_ARGS=" -log_dir=${IMPALA_LOG_DIR}"
IMPALA_STATE_STORE_ARGS=" -log_dir=${IMPALA_LOG_DIR} -state_store_port=${IMPALA_STATE_STORE_PORT}"
IMPALA_SERVER_ARGS=" \
-log_dir=${IMPALA_LOG_DIR} \
-catalog_service_host=${IMPALA_CATALOG_SERVICE_HOST} \
-state_store_port=${IMPALA_STATE_STORE_PORT} \
-use_statestore \
-state_store_host=${IMPALA_STATE_STORE_HOST} \
-be_port=${IMPALA_BACKEND_PORT}"
ENABLE_CORE_DUMPS=false
# sudo -u hdfs impala-shell
[not connected] > connect node1.hadoop.com:21000
[node1.hadoop.com:21000] >
[node1.hadoop.com:21000] >
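A minimal smoke test from the shell (a sketch; these statements only read metadata and built-in functions, no user tables are required):
[node1.hadoop.com:21000] > show databases;
[node1.hadoop.com:21000] > select version();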