This article is based on the hive-1.2.2 source code.
The Metastore module lives under the metastore directory.
Entry point
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
Flow analysis
1. It starts from the main function:
public static void main(String[] args) throws Throwable {
  ...
  startMetaStoreThreads(conf, startLock, startCondition, startedServing);
  startMetaStore(cli.port, ShimLoader.getHadoopThriftAuthBridge(), conf, startLock,
      startCondition, startedServing);
  ...
}
As shown, two methods are called: first some background threads are started, then the MetaStore itself.
2. startMetaStoreThreads starts a few compactor threads; we skip it here.
3. The main flow is in startMetaStore, where we can see:
3.1. MetaStore uses Thrift to handle RPC connections and supports the TCompactProtocol and TBinaryProtocol protocols.
3.2. MetaStore can enable SASL support; the SASL implementation is based on Kerberos and logs in with the keytab on the MetaStore node.
3.3. MetaStore's default handler class is HMSHandler (the baseHandler below).
public static void startMetaStore(int port, HadoopThriftAuthBridge bridge,
    HiveConf conf, Lock startLock, Condition startCondition,
    AtomicBoolean startedServing) throws Throwable {
  ...
  boolean useFramedTransport = conf.getBoolVar(ConfVars.METASTORE_USE_THRIFT_FRAMED_TRANSPORT);
  boolean useCompactProtocol = conf.getBoolVar(ConfVars.METASTORE_USE_THRIFT_COMPACT_PROTOCOL);
  useSasl = conf.getBoolVar(HiveConf.ConfVars.METASTORE_USE_THRIFT_SASL);
  ...
  if (useCompactProtocol) {
    protocolFactory = new TCompactProtocol.Factory();
    inputProtoFactory = new TCompactProtocol.Factory(maxMessageSize, maxMessageSize);
  } else {
    protocolFactory = new TBinaryProtocol.Factory();
    inputProtoFactory = new TBinaryProtocol.Factory(true, true, maxMessageSize, maxMessageSize);
  }
  HMSHandler baseHandler = new HiveMetaStore.HMSHandler("new db based metaserver", conf,
      false);
  IHMSHandler handler = newRetryingHMSHandler(baseHandler, conf);
  ...
  saslServer = bridge.createServer(
      conf.getVar(HiveConf.ConfVars.METASTORE_KERBEROS_KEYTAB_FILE),
      conf.getVar(HiveConf.ConfVars.METASTORE_KERBEROS_PRINCIPAL));
  ...
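Because the server side is plain Thrift, any Thrift client that speaks a matching transport and protocol can call it directly. A minimal sketch, assuming a metastore on localhost:9083 running with the defaults (no SASL, unframed transport, TBinaryProtocol); ThriftHiveMetastore comes from the generated api package described below:

import java.util.List;
import org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class RawMetastoreClient {
  public static void main(String[] args) throws Exception {
    // Plain socket transport matches the server's default (useFramedTransport = false)
    TTransport transport = new TSocket("localhost", 9083);
    transport.open();
    // The protocol must match the server's protocolFactory (TBinaryProtocol by default)
    ThriftHiveMetastore.Client client =
        new ThriftHiveMetastore.Client(new TBinaryProtocol(transport));
    List<String> databases = client.get_all_databases();
    System.out.println(databases);
    transport.close();
  }
}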
4. Following the code further, we find:
4.1. The Thrift-related definitions are in
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api
This package is auto-generated; ThriftHiveMetastore.java in it defines the interface that the Metastore exposes to the outside.
4.2. The Metastore's database operations live in ObjectStore.java, which backs the interfaces exposed over Thrift. Database access goes through JDO, using the DataNucleus framework (see the sketch after this list).
4.3. The JDO model is defined in
metastore/src/model/org/apache/hadoop/hive/metastore/model
where package.jdo defines the mapping between the model objects and the database tables.
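As an illustration of 4.2, this is roughly the JDO style ObjectStore uses to look up a database; a simplified sketch modeled on ObjectStore.getMDatabase, where the PersistenceManager comes from the PersistenceManagerFactory that ObjectStore builds from the javax.jdo.* connection properties:

import javax.jdo.PersistenceManager;
import javax.jdo.Query;
import org.apache.hadoop.hive.metastore.model.MDatabase;

// MDatabase is mapped by package.jdo to the DBS table, so DataNucleus
// translates this JDOQL filter into a WHERE clause on the DBS.NAME column.
static MDatabase lookupDatabase(PersistenceManager pm, String name) {
  Query query = pm.newQuery(MDatabase.class, "name == dbName");
  query.declareParameters("java.lang.String dbName");
  query.setUnique(true);
  return (MDatabase) query.execute(name);
}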
Authorization
private void drop_database_core(RawStore ms,
    final String name, final boolean deleteData, final boolean cascade)
    throws NoSuchObjectException, InvalidOperationException, MetaException,
    IOException, InvalidObjectException, InvalidInputException {
  ...
  firePreEvent(new PreDropDatabaseEvent(db, this));
  ...
}
...
private void firePreEvent(PreEventContext event) throws MetaException {
  for (MetaStorePreEventListener listener : preListeners) {
    try {
      listener.onEvent(event);
    } catch (NoSuchObjectException e) {
      throw new MetaException(e.getMessage());
    } catch (InvalidOperationException e) {
      throw new MetaException(e.getMessage());
    }
  }
}
1.1. HiveMetaStore.java contains a number of event hooks; for example, drop_database_core calls firePreEvent. Metastore authorization is built on such events: a MetaStorePreEventListener denies an operation by throwing InvalidOperationException (see the sketch after this list).
1.2. The events are defined in
metastore/src/java/org/apache/hadoop/hive/metastore/events
1.3. For configuring Metastore authorization, refer to Storage Based Authorization in the Metastore Server.
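To make the mechanism concrete, here is a hypothetical listener (not part of the Hive codebase) that vetoes every DROP DATABASE; such a class would be registered via the hive.metastore.pre.event.listeners property:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.metastore.MetaStorePreEventListener;
import org.apache.hadoop.hive.metastore.api.InvalidOperationException;
import org.apache.hadoop.hive.metastore.api.MetaException;
import org.apache.hadoop.hive.metastore.api.NoSuchObjectException;
import org.apache.hadoop.hive.metastore.events.PreEventContext;

public class DenyDropDatabaseListener extends MetaStorePreEventListener {
  public DenyDropDatabaseListener(Configuration config) {
    super(config);
  }

  @Override
  public void onEvent(PreEventContext context)
      throws MetaException, NoSuchObjectException, InvalidOperationException {
    // firePreEvent turns this exception into a MetaException,
    // so the drop_database call fails on the client side
    if (context.getEventType() == PreEventContext.PreEventType.DROP_DATABASE) {
      throw new InvalidOperationException("drop database is disabled by policy");
    }
  }
}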
Metric
public void alter_database(final String dbName, final Database db)
    throws NoSuchObjectException, TException, MetaException {
  startFunction("alter_database" + dbName);
  ...
}
...
private String startFunction(String function, String extraLogInfo) {
  incrementCounter(function);
  logInfo((getIpAddress() == null ? "" : "source:" + getIpAddress() + " ") +
      function + extraLogInfo);
  try {
    Metrics.startScope(function);
  } catch (IOException e) {
    LOG.debug("Exception when starting metrics scope"
        + e.getClass().getName() + " " + e.getMessage(), e);
  }
  return function;
}
Many operations in HiveMetaStore.java call startFunction, which records execution information for the operation into the metrics: it increments a per-operation counter and opens a metrics scope.
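startFunction is paired with an endFunction call that closes the metrics scope and records whether the call succeeded. The handler methods generally follow this pattern (a simplified, paraphrased sketch, not copied from the source):

startFunction("create_database");  // increment counter, open metrics scope
boolean success = false;
Exception ex = null;
try {
  // ... the actual work against the RawStore ...
  success = true;
} catch (MetaException e) {
  ex = e;
  throw e;
} finally {
  endFunction("create_database", success, ex);  // close scope, record the result
}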
Miscellaneous
1.Warehouse
metastore/src/java/org/apache/hadoop/hive/metastore/Warehouse.java
This class talks to Hive's actual storage (usually HDFS); see the first sketch at the end of this section.
2.HiveMetaStoreClient
metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
This class is the MetaStore client that ships with Hive; see the second sketch below.
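As an illustration of Warehouse, a small sketch that resolves a database's storage path (assuming fs.defaultFS and hive.metastore.warehouse.dir are configured):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.Warehouse;

public class WarehousePathDemo {
  public static void main(String[] args) throws Exception {
    Warehouse wh = new Warehouse(new HiveConf());
    // Resolved under hive.metastore.warehouse.dir,
    // e.g. hdfs://namenode/user/hive/warehouse/demo_db.db
    Path dbPath = wh.getDefaultDatabasePath("demo_db");
    System.out.println(dbPath);
  }
}

And a minimal HiveMetaStoreClient usage sketch (assuming a metastore is listening on localhost:9083):

import java.util.List;
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;

public class MetastoreClientDemo {
  public static void main(String[] args) throws Exception {
    HiveConf conf = new HiveConf();
    // Point the client at the metastore's thrift endpoint
    conf.setVar(HiveConf.ConfVars.METASTOREURIS, "thrift://localhost:9083");
    HiveMetaStoreClient client = new HiveMetaStoreClient(conf);
    List<String> databases = client.getAllDatabases();
    System.out.println(databases);
    client.close();
  }
}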