1. Environment
Host IP          Hostname
xxx.xxx.xxx.xxx po-master1
xxx.xxx.xxx.xxx po-master2
xxx.xxx.xxx.xxx po-slave1
xxx.xxx.xxx.xxx po-slave2
xxx.xxx.xxx.xxx po-slave3
2. Set the hostname (run on each of the five hosts)
vi /etc/sysconfig/network    # set HOSTNAME=<hostname> so it persists across reboots
hostname <hostname>          # apply for the current session
For example: hostname po-master1
3. Configure host mappings (run all five commands on each of the five hosts)
echo "xxx.xxx.xxx.xxx po-master1">>/etc/hosts
echo "xxx.xxx.xxx.xxx po-master2">>/etc/hosts
echo "xxx.xxx.xxx.xxx po-slave1">>/etc/hosts
echo "xxx.xxx.xxx.xxx po-slave2">>/etc/hosts
echo "xxx.xxx.xxx.xxx po-slave3">>/etc/hosts
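The five echo commands above can be made idempotent so that re-running the setup never duplicates entries. A minimal sketch, writing to a temporary file instead of /etc/hosts (the IPs are the same placeholders as above; point HOSTS_FILE at /etc/hosts on a real node):

```shell
# Append each mapping only if its hostname is not already present.
HOSTS_FILE=$(mktemp)
add_hosts() {
  for entry in "xxx.xxx.xxx.xxx po-master1" "xxx.xxx.xxx.xxx po-master2" \
               "xxx.xxx.xxx.xxx po-slave1" "xxx.xxx.xxx.xxx po-slave2" \
               "xxx.xxx.xxx.xxx po-slave3"; do
    host=${entry##* }                                   # hostname is the last field
    grep -q " $host\$" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
  done
}
add_hosts
add_hosts    # running it a second time adds nothing new
```

The grep guard is what makes repeated provisioning runs safe.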
4. Install the JDK (run on po-master1)
4.1. Download the JDK archive: jdk-8u102-linux-x64.tar.gz
Note: this guide puts it under /soft/java; adjust the location as needed.
4.2. Install
cd /soft/java
tar -zxvf jdk-8u102-linux-x64.tar.gz
ln -s -f jdk1.8.0_102/ jdk    # create a symlink so JAVA_HOME can stay stable across versions
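The ln -s -f pattern above decouples JAVA_HOME from the JDK version: /soft/java/jdk always points at the current release. A small sketch of the same idea against a temporary directory (the version directory names are illustrative):

```shell
base=$(mktemp -d)
mkdir "$base/jdk1.8.0_102"                  # stand-in for the unpacked JDK directory
ln -sfn "$base/jdk1.8.0_102" "$base/jdk"    # stable path to use as JAVA_HOME
# Upgrading later is just repointing the link; nothing that references
# $base/jdk needs to change:
mkdir "$base/jdk1.8.0_121"
ln -sfn "$base/jdk1.8.0_121" "$base/jdk"
```

The -n flag makes ln replace the link itself rather than creating a link inside the old target directory.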
5. Open firewall ports (required on all five hosts)
/sbin/iptables -I INPUT -p tcp --dport 80 -j ACCEPT
/sbin/iptables -A INPUT -s 0.0.0.0/0 -p tcp --dport 22 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 22 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 50010 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 1004 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 50075 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 1006 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 50020 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 8020 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 50070 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 50470 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 50090 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 50495 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 8485 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 8480 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 8032 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 8030 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 8031 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 8033 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 8088 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 8040 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 8042 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 8041 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 10020 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 13562 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 19888 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 60000 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 60010 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 60020 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 60030 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 2181 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 2888 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 3888 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 8080 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 8085 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 9090 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 9095 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 9083 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 10000 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 16000 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 3181 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 4181 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 8019 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 9010 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 11000 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 11001 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 14000 -j ACCEPT
/sbin/iptables -A INPUT -s x.x.x.x -p tcp --dport 14001 -j ACCEPT
/etc/rc.d/init.d/iptables save
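With this many ports, a loop is less error-prone than hand-writing each rule. A hedged sketch that only collects the generated commands in a file for review (drop the echo redirection and run each line, or pipe the file through sh, to actually apply them; x.x.x.x is the same placeholder source address as above):

```shell
SRC="x.x.x.x"    # placeholder: the peer host's address
PORTS="22 50010 1004 50075 1006 50020 8020 50070 50470 50090 50495 \
       8485 8480 8032 8030 8031 8033 8088 8040 8042 8041 10020 13562 19888 \
       60000 60010 60020 60030 2181 2888 3888 8080 8085 9090 9095 9083 \
       10000 16000 3181 4181 8019 9010 11000 11001 14000 14001"
rules=$(mktemp)
for p in $PORTS; do
  echo "/sbin/iptables -A INPUT -s $SRC -p tcp --dport $p -j ACCEPT" >> "$rules"
done
```

Reviewing the generated file before applying it also makes it easy to spot missing or duplicated ports.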
Note: for a full explanation of these ports, see the CDH documentation:
https://www.cloudera.com/documentation/cdh/5-1-x/CDH5-Installation-Guide/cdh5ig_ports_cdh5.html
Verify (optional):
/etc/init.d/iptables status
6. Configure passwordless SSH between all five hosts
6.1. Generate the key pair and collect the public keys
If a key pair already exists, do not overwrite it:
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Append the public key of each of the five hosts to authorized_keys on every host.
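A sketch of the key setup, run here against a temporary directory so nothing in ~/.ssh is touched; on a real host you would use ~/.ssh and then append every other host's id_rsa.pub to this file as well:

```shell
sshdir=$(mktemp -d)
# Generate a key pair non-interactively; skip if one already exists.
if [ ! -f "$sshdir/id_rsa" ]; then
  ssh-keygen -t rsa -N "" -f "$sshdir/id_rsa" -q
fi
# Authorize our own key (repeat this cat with each peer host's public key).
cat "$sshdir/id_rsa.pub" >> "$sshdir/authorized_keys"
chmod 600 "$sshdir/authorized_keys"    # sshd rejects group/world-writable key files
```

The chmod matters: sshd silently ignores authorized_keys with loose permissions, which looks like "passwordless login not working".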
6.2. Enable public-key authentication
vi /etc/ssh/sshd_config
Uncomment the following lines:
HostKey /etc/ssh/ssh_host_rsa_key
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
6.3. Restart sshd
/etc/init.d/sshd restart
6.4. Test SSH
ssh po-master1
ssh po-master2
ssh po-slave1
ssh po-slave2
ssh po-slave3
7. Distribute the JDK to the other hosts
scp -rp /soft/java/ root@po-master2:/soft/java
scp -rp /soft/java/ root@po-slave1:/soft/java
scp -rp /soft/java/ root@po-slave2:/soft/java
scp -rp /soft/java/ root@po-slave3:/soft/java
8. Configure environment variables (run on each of the five hosts)
Use single quotes so $PATH and $JAVA_HOME are expanded at login, not at echo time:
echo 'export JAVA_HOME=/soft/java/jdk' >> /etc/profile
echo 'export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:/usr/bin/' >> /etc/profile
echo 'export CLASSPATH=.:$JAVA_HOME/lib' >> /etc/profile
. /etc/profile
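One subtlety worth demonstrating: with double quotes the shell expands $PATH and $JAVA_HOME at echo time, freezing their current values into /etc/profile; with single quotes the literal text is written and expansion happens at each login. A sketch against a temporary file:

```shell
profile=$(mktemp)    # stand-in for /etc/profile
echo 'export JAVA_HOME=/soft/java/jdk' >> "$profile"
# Single quotes: the file contains the literal string $PATH, expanded later.
echo 'export PATH=$PATH:$JAVA_HOME/bin' >> "$profile"
```

If double quotes had been used, the second line would bake in whatever PATH happened to be at install time, and JAVA_HOME (empty at that point) would silently vanish from it.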
9. Configure the NTP server and clients (skipped here because the Alibaba Cloud hosts are already time-synced)
10. Set up MySQL
10.1. Upload the MySQL archive (this guide uses /soft/mysql)
10.2. Extract
cd /soft/mysql
tar -zxvf mysql-5.7.17-linux-glibc2.5-x86_64.tar.gz -C /usr/local
10.3. Rename the directory
cd /usr/local
mv mysql-5.7.17-linux-glibc2.5-x86_64/ mysql
10.4. Create the data directory
mkdir /usr/local/mysql/data
yum install libaio
10.5. Install MySQL
cd /usr/local/mysql/bin
./mysql_install_db --user=root --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data
10.5.1. Download the yum repository package from the official site
https://dev.mysql.com/downloads/repo/yum/
10.5.2. Install the yum repository
yum localinstall mysql57-community-release-el6-9.noarch.rpm
10.5.3. Install MySQL via yum
yum install mysql-community-server
10.5.4. Create the mysql group and user
groupadd mysql
useradd mysql -g mysql
10.5.5. Edit the config file to enable the binary log
vi /etc/my.cnf    (add the following under the [mysqld] section)
server-id=1
log-bin=/home/mysql/log/logbin.log
10.5.6. Start the service
service mysqld start
10.5.7. Look up the generated temporary root password
grep 'temporary password' /var/log/mysqld.log
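The grep above prints the whole log line; the password itself is the last whitespace-separated field. A sketch against a fabricated sample log line (the timestamp and password here are made up for illustration):

```shell
log=$(mktemp)    # stand-in for /var/log/mysqld.log
# Fabricated example of the line mysqld writes on first start:
echo '2017-03-01T00:00:00.000000Z 1 [Note] A temporary password is generated for root@localhost: Abc123?xyz' > "$log"
# Extract just the password: it is the last field on the matching line.
pw=$(grep 'temporary password' "$log" | awk '{print $NF}')
```

This is convenient when scripting the first login, instead of copying the password by hand.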
10.5.8. Log in with the temporary password and set a new one
mysql -u root -p
ALTER USER 'root'@'localhost' IDENTIFIED BY 'xxxxxx';
For example:
ALTER USER 'root'@'localhost' IDENTIFIED BY 'xxxxxx';
Query OK, 0 rows affected (0.01 sec)
Note: MySQL's validate_password plugin is installed by default. It requires that passwords contain at least one upper-case letter, one lower-case letter, one digit, and one special character, and that the total password length is at least 8 characters.
10.5.9. Grant privileges (covering the other four hosts)
grant all privileges on oozie.* to 'oozie'@'localhost' IDENTIFIED BY 'xxxxxx' WITH GRANT OPTION;
grant all privileges on oozie.* to 'oozie'@'xxx.xxx.xxx.xxx' IDENTIFIED BY 'xxxxxx' WITH GRANT OPTION;
grant all privileges on oozie.* to 'oozie'@'xxx.xxx.xxx.xxx' IDENTIFIED BY 'xxxxxx' WITH GRANT OPTION;
grant all privileges on oozie.* to 'oozie'@'xxx.xxx.xxx.xxx' IDENTIFIED BY 'xxxxxx' WITH GRANT OPTION;
grant all privileges on oozie.* to 'oozie'@'xxx.xxx.xxx.xxx' IDENTIFIED BY 'xxxxxx' WITH GRANT OPTION;
GRANT all privileges on *.* to 'root'@'localhost' IDENTIFIED BY 'xxxxxx' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'root'@'xxx.xxx.xxx.xxx' IDENTIFIED BY 'xxxxxx' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'root'@'xxx.xxx.xxx.xxx' IDENTIFIED BY 'xxxxxx' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'root'@'xxx.xxx.xxx.xxx' IDENTIFIED BY 'xxxxxx' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'root'@'xxx.xxx.xxx.xxx' IDENTIFIED BY 'xxxxxx' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'root'@'xxx.xxx.xxx.xxx' IDENTIFIED BY 'xxxxxx' WITH GRANT OPTION;
grant all privileges on hive.* to 'hive'@'localhost' IDENTIFIED BY 'xxxxxx' WITH GRANT OPTION;
grant all privileges on hive.* to 'hive'@'xxx.xxx.xxx.xxx' IDENTIFIED BY 'xxxxxx' WITH GRANT OPTION;
grant all privileges on hive.* to 'hive'@'xxx.xxx.xxx.xxx' IDENTIFIED BY 'xxxxxx' WITH GRANT OPTION;
grant all privileges on hive.* to 'hive'@'xxx.xxx.xxx.xxx' IDENTIFIED BY 'xxxxxx' WITH GRANT OPTION;
grant all privileges on hive.* to 'hive'@'xxx.xxx.xxx.xxx' IDENTIFIED BY 'xxxxxx' WITH GRANT OPTION;
flush privileges;
Notes on account management in newer versions:
https://dev.mysql.com/doc/refman/5.7/en/adding-users.html
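The per-host GRANT statements above differ only in the host part, so they can be generated instead of typed. A sketch that writes the SQL to a file for review before feeding it to mysql (the host list and password are placeholders; on a real cluster, list all five addresses):

```shell
sql=$(mktemp)
PASS='xxxxxx'    # placeholder password
for h in localhost xxx.xxx.xxx.xxx; do          # add the remaining host IPs here
  for scope_user in "oozie.*:oozie" "hive.*:hive"; do
    scope=${scope_user%%:*}                     # e.g. oozie.*
    user=${scope_user##*:}                      # e.g. oozie
    echo "GRANT ALL PRIVILEGES ON $scope TO '$user'@'$h' IDENTIFIED BY '$PASS' WITH GRANT OPTION;" >> "$sql"
  done
done
echo "FLUSH PRIVILEGES;" >> "$sql"
# Review the file, then apply with: mysql -u root -p < "$sql"
```

Generating the statements makes it hard to forget one of the five hosts.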
10.5.10. Create the databases
create database hive DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
create database oozie DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
11. Install Cloudera Manager
11.1. Download
URL: http://archive-primary.cloudera.com/cm5/cm/5/
Place the archive under /soft/bigdata/clouderamanager
cd /soft/bigdata/clouderamanager
tar -xvf cloudera-manager-wheezy-cm5.10.0_amd64.tar.gz
11.2. Create the service user (on all nodes)
useradd --system --home=/soft/bigdata/clouderamanager/cm-5.10.0/run/cloudera-scm-server --no-create-home --shell=/bin/false --comment "Cloudera SCM User" cloudera-scm
11.3. Set the server host and port in the agent config
cd /soft/bigdata/clouderamanager/cm-5.10.0/etc/cloudera-scm-agent
vi config.ini
server_host=po-master1
server_port=7182
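Instead of editing config.ini by hand on every node, the server_host setting can be rewritten with sed after distribution. A sketch on a temporary copy (the two keys mirror the agent's config.ini entries shown above):

```shell
cfg=$(mktemp)
# Stand-in for the agent's config.ini with its shipped defaults:
printf 'server_host=localhost\nserver_port=7182\n' > "$cfg"
# Point the agent at the Cloudera Manager server host:
sed -i 's/^server_host=.*/server_host=po-master1/' "$cfg"
```

Run the same sed against the real config.ini on each node after the scp distribution step below.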
11.4. Download the MySQL JDBC driver
Download mysql-connector-java-5.1.7-bin.jar and place it in
/soft/bigdata/clouderamanager/cm-5.10.0/share/cmf/lib
11.5. Create the database for Cloudera Manager 5
/soft/bigdata/clouderamanager/cm-5.10.0/share/cmf/schema/scm_prepare_database.sh mysql scm -hlocalhost -uroot -pxxxxxx --scm-host localhost scm xxxxxx scm
The format is: scm_prepare_database.sh <db type> <database> <server> <user> <password> --scm-host <host running Cloudera Manager Server>; the meaning of the last three arguments is unclear to the author, who copied them verbatim from the official documentation.
Then start the Cloudera Manager 5 server.
11.6. Distribute CDH to the other hosts
scp -rp /soft/bigdata/clouderamanager root@po-master2:/soft/bigdata
scp -rp /soft/bigdata/clouderamanager root@po-slave1:/soft/bigdata
scp -rp /soft/bigdata/clouderamanager root@po-slave2:/soft/bigdata
scp -rp /soft/bigdata/clouderamanager root@po-slave3:/soft/bigdata
11.7. Prepare the Parcels used to install CDH 5
Place them in /soft/bigdata/clouderamanager/cloudera/parcel-repo; the path must end with cloudera/parcel-repo.
Official source:
http://archive.cloudera.com/cdh5/parcels
Download the following two files:
• CDH-5.10.0-1.cdh5.10.0.p0.41-el6.parcel
• manifest.json
Open manifest.json and find the hash value for CDH-5.10.0-1.cdh5.10.0.p0.41-el6.parcel:
"hash": "52f95da433f203a05c2fd33eb0f144e6a5c9d558"
echo '52f95da433f203a05c2fd33eb0f144e6a5c9d558' >> CDH-5.10.0-1.cdh5.10.0.p0.41-el6.parcel.sha
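Picking the hash out of manifest.json by eye is error-prone; it can be extracted by matching the parcel name and taking the hash that follows. A sketch against a trimmed-down sample of the manifest (the real file lists many parcels, but the parcelName/hash pairing is the same):

```shell
manifest=$(mktemp)
cat > "$manifest" <<'EOF'
{ "parcels": [
  { "parcelName": "CDH-5.10.0-1.cdh5.10.0.p0.41-el6.parcel",
    "hash": "52f95da433f203a05c2fd33eb0f144e6a5c9d558" }
] }
EOF
# Take the hash on the line after the el6 parcel's name.
hash=$(grep -A1 'el6.parcel' "$manifest" | sed -n 's/.*"hash": "\([0-9a-f]*\)".*/\1/p')
echo "$hash" > "${manifest}.sha"    # on a real node: > CDH-...-el6.parcel.sha
```

This guards against the classic mistake of copying the hash of a neighboring parcel entry.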
11.8. Start the services
/soft/bigdata/clouderamanager/cm-5.10.0/etc/init.d/cloudera-scm-server start    (on the master node)
/soft/bigdata/clouderamanager/cm-5.10.0/etc/init.d/cloudera-scm-agent start     (on all nodes)
11.9. Log in
http://po-master1:7180
The default username and password are both admin.
Click Continue.
Choose the free edition and click Continue.
Select the hosts.
Click "More Options" and change the parcel path to
/soft/bigdata/clouderamanager/cloudera/parcel-repo
Restart the services on all nodes:
/soft/bigdata/clouderamanager/cm-5.10.0/etc/init.d/cloudera-scm-server restart    (on the master node)
/soft/bigdata/clouderamanager/cm-5.10.0/etc/init.d/cloudera-scm-agent restart     (on all nodes)
Select the options shown and click Continue.
Wait for the installation...
When the installation finishes, click Continue.
During installation, a warning may appear:
Transparent hugepage compaction is enabled, which can cause significant performance problems. Run "echo never > /sys/kernel/mm/transparent_hugepage/defrag" to disable it, and add the same command to an init script such as /etc/rc.local so that it is reapplied on reboot. The following hosts are affected:
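The warning's fix can be scripted. This sketch only assembles the rc.local line against temporary stand-ins so it is safe to run off-cluster; on a real host you would write to /sys/kernel/mm/transparent_hugepage/defrag and append to /etc/rc.local directly:

```shell
rc_local=$(mktemp)    # stand-in for /etc/rc.local
THP_CMD='echo never > /sys/kernel/mm/transparent_hugepage/defrag'
# Disable now (guarded, so this sketch is a no-op on machines without the sysfs file)...
if [ -w /sys/kernel/mm/transparent_hugepage/defrag ]; then
  sh -c "$THP_CMD"
fi
# ...and persist across reboots, idempotently:
grep -qF "$THP_CMD" "$rc_local" || echo "$THP_CMD" >> "$rc_local"
```

The grep -qF guard means running the setup twice does not double the line in rc.local.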
Choose Custom Services and select the services you need.
Wait for the installation.
An error may occur during installation:
it is caused by a missing JDBC driver; copy the driver jar into the lib directory to resolve it.
Configure NameNode HA
Open the HDFS page and click "Enable High Availability".
Enter the NameService name (here: nameservice1) and click Continue.
Set the JournalNode path to /opt/dfs/jn.
Troubleshooting:
Fatal error during KafkaServer startup. Prepare to shutdownkafka.common.InconsistentBrokerIdException: Configured broker.id 52 doesn’t match stored broker.id 102 in meta.properties. If you moved your data, make sure your configured broker.id matches. If you intend to create a new broker, you should remove all data in your data directories (log.dirs).at kafka.server.KafkaServer.getBrokerId(KafkaServer.scala:648)at kafka.server.KafkaServer.startup(KafkaServer.scala:187)at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)at kafka.Kafka$.main(Kafka.scala:67)at com.cloudera.kafka.wrap.Kafka$.main(Kafka.scala:76)at com.cloudera.kafka.wrap.Kafka.main(Kafka.scala)
Fix: check meta.properties under /var/local/kafka/data to see which broker.id Kafka has stored, and make it match the configured one.
[main]: Metastore Thrift Server threw an exception…javax.jdo.JDOFatalInternalException: Error creating transactional connection factoryat org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:587)at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:788)at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:333)at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:202)at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)at java.lang.reflect.Method.invoke(Method.java:498)at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)at java.security.AccessController.doPrivileged(Native Method)at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:411)at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:440)at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:335)at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:291)at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)at org.apache.hadoop.hive.metastore.RawStoreProxy.(RawStoreProxy.java:57)at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:648)at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:626)at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:679)at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:484)at org.apache.hadoop.hive.metastore.RetryingHMSHandler.(RetryingHMSHandler.java:78)at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5989)at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5984)at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6236)at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:6161)at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)at java.lang.reflect.Method.invoke(Method.java:498)at org.apache.hadoop.util.RunJar.run(RunJar.java:221)at org.apache.hadoop.util.RunJar.main(RunJar.java:136)NestedThrowablesStackTrace:java.lang.reflect.InvocationTargetExceptionat sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)at java.lang.reflect.Constructor.newInstance(Constructor.java:423)at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:325)at 
org.datanucleus.store.AbstractStoreManager.registerConnectionFactory(AbstractStoreManager.java:282)at org.datanucleus.store.AbstractStoreManager.(AbstractStoreManager.java:240)at org.datanucleus.store.rdbms.RDBMSStoreManager.(RDBMSStoreManager.java:286)at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)at java.lang.reflect.Constructor.newInstance(Constructor.java:423)at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)at org.datanucleus.NucleusContext.createStoreManagerForProperties(NucleusContext.java:1187)at org.datanucleus.NucleusContext.initialise(NucleusContext.java:356)at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:775)at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:333)at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:202)at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)at java.lang.reflect.Method.invoke(Method.java:498)at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)at java.security.AccessController.doPrivileged(Native Method)at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)at 
org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:411)at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:440)at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:335)at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:291)at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)at org.apache.hadoop.hive.metastore.RawStoreProxy.(RawStoreProxy.java:57)at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:648)at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:626)at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:679)at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:484)at org.apache.hadoop.hive.metastore.RetryingHMSHandler.(RetryingHMSHandler.java:78)at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5989)at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5984)at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6236)at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:6161)at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)at java.lang.reflect.Method.invoke(Method.java:498)at org.apache.hadoop.util.RunJar.run(RunJar.java:221)at org.apache.hadoop.util.RunJar.main(RunJar.java:136)Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke 
the “BONECP” plugin to create a ConnectionPool gave an error : The specified datastore driver (“com.mysql.jdbc.Driver”) was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:259)at org.datanucleus.store.rdbms.ConnectionFactoryImpl.initialiseDataSources(ConnectionFactoryImpl.java:131)at org.datanucleus.store.rdbms.ConnectionFactoryImpl.(ConnectionFactoryImpl.java:85)… 54 moreCaused by: org.datanucleus.store.rdbms.connectionpool.DatastoreDriverNotFoundException: The specified datastore driver (“com.mysql.jdbc.Driver”) was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.at org.datanucleus.store.rdbms.connectionpool.AbstractConnectionPoolFactory.loadDriver(AbstractConnectionPoolFactory.java:58)at org.datanucleus.store.rdbms.connectionpool.BoneCPConnectionPoolFactory.createConnectionPool(BoneCPConnectionPoolFactory.java:54)at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:238)… 56 more
Fix: place the MySQL driver jar in /opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hive/lib
SERVER[po-master1] E0103: Could not load service classes, Cannot load JDBC driver class ‘com.mysql.jdbc.Driver’org.apache.oozie.service.ServiceException: E0103: Could not load service classes, Cannot load JDBC driver class ‘com.mysql.jdbc.Driver’at org.apache.oozie.service.Services.loadServices(Services.java:309)at org.apache.oozie.service.Services.init(Services.java:213)at org.apache.oozie.servlet.ServicesLoader.contextInitialized(ServicesLoader.java:46)at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4236)at org.apache.catalina.core.StandardContext.start(StandardContext.java:4739)at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780)at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583)at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:944)at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:779)at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:505)at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322)at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325)at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069)at org.apache.catalina.core.StandardHost.start(StandardHost.java:822)at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061)at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)at org.apache.catalina.core.StandardService.start(StandardService.java:525)at org.apache.catalina.core.StandardServer.start(StandardServer.java:759)at org.apache.catalina.startup.Catalina.start(Catalina.java:595)at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)at java.lang.reflect.Method.invoke(Method.java:498)at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)Caused by: org.apache.openjpa.persistence.PersistenceException: Cannot load JDBC driver class ‘com.mysql.jdbc.Driver’at org.apache.openjpa.jdbc.sql.DBDictionaryFactory.newDBDictionary(DBDictionaryFactory.java:102)at org.apache.openjpa.jdbc.conf.JDBCConfigurationImpl.getDBDictionaryInstance(JDBCConfigurationImpl.java:603)at org.apache.openjpa.jdbc.meta.MappingRepository.endConfiguration(MappingRepository.java:1518)at org.apache.openjpa.lib.conf.Configurations.configureInstance(Configurations.java:531)at org.apache.openjpa.lib.conf.Configurations.configureInstance(Configurations.java:456)at org.apache.openjpa.lib.conf.PluginValue.instantiate(PluginValue.java:120)at org.apache.openjpa.conf.MetaDataRepositoryValue.instantiate(MetaDataRepositoryValue.java:68)at org.apache.openjpa.lib.conf.ObjectValue.instantiate(ObjectValue.java:83)at org.apache.openjpa.conf.OpenJPAConfigurationImpl.newMetaDataRepositoryInstance(OpenJPAConfigurationImpl.java:967)at org.apache.openjpa.conf.OpenJPAConfigurationImpl.getMetaDataRepositoryInstance(OpenJPAConfigurationImpl.java:958)at org.apache.openjpa.kernel.AbstractBrokerFactory.makeReadOnly(AbstractBrokerFactory.java:644)at org.apache.openjpa.kernel.AbstractBrokerFactory.newBroker(AbstractBrokerFactory.java:203)at org.apache.openjpa.kernel.DelegatingBrokerFactory.newBroker(DelegatingBrokerFactory.java:156)at org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:227)at org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:154)at org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:60)at 
org.apache.oozie.service.JPAService.getEntityManager(JPAService.java:514)at org.apache.oozie.service.JPAService.init(JPAService.java:215)at org.apache.oozie.service.Services.setServiceInternal(Services.java:386)at org.apache.oozie.service.Services.setService(Services.java:372)at org.apache.oozie.service.Services.loadServices(Services.java:305)… 26 moreCaused by: org.apache.commons.dbcp.SQLNestedException: Cannot load JDBC driver class ‘com.mysql.jdbc.Driver’at org.apache.commons.dbcp.BasicDataSource.createConnectionFactory(BasicDataSource.java:1429)at org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1371)at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)at org.apache.openjpa.lib.jdbc.DelegatingDataSource.getConnection(DelegatingDataSource.java:110)at org.apache.openjpa.lib.jdbc.DecoratingDataSource.getConnection(DecoratingDataSource.java:87)at org.apache.openjpa.jdbc.sql.DBDictionaryFactory.newDBDictionary(DBDictionaryFactory.java:91)… 46 moreCaused by: java.lang.ClassNotFoundException: com.mysql.jdbc.Driverat org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1680)at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1526)at org.apache.commons.dbcp.BasicDataSource.createConnectionFactory(BasicDataSource.java:1420)… 51 more
Fix: place the mysql-connector-java.jar (mysql-connector-java-5.1.39.jar) driver in /var/lib/oozie
[main]: Query for candidates of org.apache.hadoop.hive.metastore.model.MDatabase and subclasses resulted in no possible candidatesRequired table missing : “DBS
“ in Catalog “” Schema “”. DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable “datanucleus.autoCreateTables”org.datanucleus.store.rdbms.exceptions.MissingTableException: Required table missing : “DBS
“ in Catalog “” Schema “”. DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable “datanucleus.autoCreateTables”at org.datanucleus.store.rdbms.table.AbstractTable.exists(AbstractTable.java:485)at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesValidation(RDBMSStoreManager.java:3380)at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.addClassTablesAndValidate(RDBMSStoreManager.java:3190)at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2841)at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:122)at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:1605)at org.datanucleus.store.AbstractStoreManager.addClass(AbstractStoreManager.java:954)at org.datanucleus.store.rdbms.RDBMSStoreManager.getDatastoreClass(RDBMSStoreManager.java:679)at org.datanucleus.store.rdbms.query.RDBMSQueryUtils.getStatementForCandidates(RDBMSQueryUtils.java:408)at org.datanucleus.store.rdbms.query.JDOQLQuery.compileQueryFull(JDOQLQuery.java:947)at org.datanucleus.store.rdbms.query.JDOQLQuery.compileInternal(JDOQLQuery.java:370)at org.datanucleus.store.query.Query.executeQuery(Query.java:1744)at org.datanucleus.store.query.Query.executeWithArray(Query.java:1672)at org.datanucleus.store.query.Query.execute(Query.java:1654)at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:221)at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.ensureDbInit(MetaStoreDirectSql.java:185)at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.(MetaStoreDirectSql.java:136)at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:340)at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:291)at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)at 
org.apache.hadoop.hive.metastore.RawStoreProxy.(RawStoreProxy.java:57)at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:648)at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:626)at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:679)at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:484)at org.apache.hadoop.hive.metastore.RetryingHMSHandler.(RetryingHMSHandler.java:78)at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5989)at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5984)at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6236)at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:6161)at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)at java.lang.reflect.Method.invoke(Method.java:498)at org.apache.hadoop.util.RunJar.run(RunJar.java:221)at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Fix: this error requires a change to the Hive configuration; search for autoCreateSchema and set it to true.
SERVER[po-master1] E0103: Could not load service classes, Cannot create PoolableConnectionFactory (Table ‘oozie.VALIDATE_CONN’ doesn’t exist)org.apache.oozie.service.ServiceException: E0103: Could not load service classes, Cannot create PoolableConnectionFactory (Table ‘oozie.VALIDATE_CONN’ doesn’t exist)at org.apache.oozie.service.Services.loadServices(Services.java:309)at org.apache.oozie.service.Services.init(Services.java:213)at org.apache.oozie.servlet.ServicesLoader.contextInitialized(ServicesLoader.java:46)at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4236)at org.apache.catalina.core.StandardContext.start(StandardContext.java:4739)at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780)at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583)at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:944)at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:779)at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:505)at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322)at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325)at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069)at org.apache.catalina.core.StandardHost.start(StandardHost.java:822)at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061)at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)at org.apache.catalina.core.StandardService.start(StandardService.java:525)at org.apache.catalina.core.StandardServer.start(StandardServer.java:759)at org.apache.catalina.startup.Catalina.start(Catalina.java:595)at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)at java.lang.reflect.Method.invoke(Method.java:498)at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
Fix: in the web UI, click Oozie, then Actions, then "Create Oozie Database Tables".
Finally, export the environment variables and everything is ready to test:
export ZK_HOME=/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/zookeeper/
export HBASE_HOME=/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hbase/
export HADOOP_HOME=/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/
export HIVE_HOME=/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hive/
export SQOOP_HOME=/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/sqoop/
export OOZIE_HOME=/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/oozie/
export PATH=$PATH:$HOME/bin:$JAVA_HOME:$JAVA_HOME/bin:/usr/bin/:$HADOOP_HOME/bin:$HIVE_HOME/bin:$SQOOP_HOME/bin:$OOZIE_HOME/bin:$ZK_HOME/bin:$HBASE_HOME/bin
Installation complete.