ShardingSphere-JDBC Configuration File Loading
While running the JDBC feature demos, a question came up: different features use different configuration files, so how are these configuration files loaded, and what do they ultimately produce?
Analysis of the configuration file loading flow:

- Generate a YamlRootConfiguration.
- Parse the config:
  - 2.1 Parse the data source connection info.
  - 2.2 Parse the rule info and generate the RuleConfiguration objects.
- Generate the ShardingSphereDataSource via ShardingSphereDataSourceFactory (a minimal usage sketch follows this list).
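Before stepping through each stage, here is a minimal usage sketch of the entry point that triggers the whole flow, assuming the 5.x YAML factory API; the config/sharding.yaml path is hypothetical.

```java
import java.io.File;
import javax.sql.DataSource;
import org.apache.shardingsphere.driver.api.yaml.YamlShardingSphereDataSourceFactory;

public final class YamlLoadDemo {

    public static void main(final String[] args) throws Exception {
        // Hypothetical path; point it at the sharding YAML shown later in this article.
        File yamlFile = new File("config/sharding.yaml");
        // One call covers the whole flow analyzed below:
        // YAML -> YamlRootConfiguration -> DataSource map + RuleConfigurations -> ShardingSphereDataSource.
        DataSource dataSource = YamlShardingSphereDataSourceFactory.createDataSource(yamlFile);
        System.out.println(dataSource.getClass().getName());
    }
}
```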
YamlRootConfiguration
public final class YamlRootConfiguration implements YamlConfiguration {
    private String schemaName;
    private Map<String, Map<String, Object>> dataSources = new HashMap<>(); // multiple data sources
    private Collection<YamlRuleConfiguration> rules = new LinkedList<>(); // one YamlRuleConfiguration per rule configured in the file
    private YamlModeConfiguration mode; // what does this control?
    private Properties props = new Properties(); // the props configured in the configuration file
}
The corresponding configuration in the configuration file:
dataSources:
  ds_0:
    dataSourceClassName: com.zaxxer.hikari.HikariDataSource
    driverClassName: com.mysql.jdbc.Driver
    jdbcUrl: jdbc:mysql://localhost:3306/demo_ds_0?serverTimezone=UTC&useSSL=false&useUnicode=true&characterEncoding=UTF-8
    username: root
    password: root1234
  ds_1:
    dataSourceClassName: com.zaxxer.hikari.HikariDataSource
    driverClassName: com.mysql.jdbc.Driver
    jdbcUrl: jdbc:mysql://localhost:3306/demo_ds_1?serverTimezone=UTC&useSSL=false&useUnicode=true&characterEncoding=UTF-8
    username: root
    password: root1234
Comparing the configuration file with YamlRootConfiguration shows that the entries in the file map directly onto the fields of YamlRootConfiguration; in other words, the file contents can be parsed and stored in a YamlRootConfiguration for use in the next step.
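To make that mapping concrete, here is a minimal sketch (not ShardingSphere's actual loader, which parses the YAML through its own YamlEngine wrapper) that reads just the data-source snippet above with plain SnakeYAML into the same Map<String, Map<String, Object>> shape as the dataSources field; the resource name is hypothetical.

```java
import java.io.InputStream;
import java.util.Map;
import org.yaml.snakeyaml.Yaml;

public final class DataSourceMappingDemo {

    @SuppressWarnings("unchecked")
    public static void main(final String[] args) {
        // Hypothetical classpath resource containing only the dataSources snippet above
        // (custom tags such as !SHARDING need ShardingSphere's own YAML engine).
        InputStream in = DataSourceMappingDemo.class.getResourceAsStream("/sharding.yaml");
        Map<String, Object> root = new Yaml().load(in);
        // Each entry under dataSources (ds_0, ds_1) becomes a map of raw pool properties,
        // matching the Map<String, Map<String, Object>> field in YamlRootConfiguration.
        Map<String, Map<String, Object>> dataSources = (Map<String, Map<String, Object>>) root.get("dataSources");
        dataSources.forEach((name, props) -> System.out.println(name + " -> " + props.get("jdbcUrl")));
    }
}
```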
Parsing the data sources and starting them
public Map<String, DataSource> swapToDataSources(final Map<String, Map<String, Object>> yamlDataSources) {
    return DataSourceConverter.getDataSourceMap(yamlDataSources.entrySet().stream().collect(Collectors.toMap(Entry::getKey, entry -> swapToDataSourceConfiguration(entry.getValue()))));
}

public static Map<String, DataSource> getDataSourceMap(final Map<String, DataSourceConfiguration> dataSourceConfigMap) {
    return dataSourceConfigMap.entrySet().stream().collect(Collectors.toMap(Entry::getKey,
        entry -> entry.getValue().createDataSource(), (oldValue, currentValue) -> oldValue, LinkedHashMap::new));
}
@SneakyThrows(ReflectiveOperationException.class)
public DataSource createDataSource() {
    DataSource result = (DataSource) Class.forName(dataSourceClassName).getConstructor().newInstance();
    Method[] methods = result.getClass().getMethods();
    Map<String, Object> allProps = new HashMap<>(props);
    allProps.putAll((Map) customPoolProps);
    for (Entry<String, Object> entry : allProps.entrySet()) {
        if (SKIPPED_PROPERTY_NAMES.contains(entry.getKey())) {
            continue;
        }
        try {
            Optional<Method> setterMethod = findSetterMethod(methods, entry.getKey());
            if (setterMethod.isPresent() && null != entry.getValue()) {
                setDataSourceField(setterMethod.get(), result, entry.getValue());
            }
        } catch (final IllegalArgumentException ex) {
            throw new ShardingSphereConfigurationException("Incorrect configuration item: the property %s of the dataSource, because %s", entry.getKey(), ex.getMessage());
        }
    }
    return JDBCParameterDecoratorHelper.decorate(result); // connection pool configuration
}
public HikariDataSource decorate(final HikariDataSource dataSource) {
    Map<String, String> urlProps = new ConnectionUrlParser(dataSource.getJdbcUrl()).getQueryMap();
    addJDBCProperty(dataSource, urlProps, "useServerPrepStmts", Boolean.TRUE.toString());
    addJDBCProperty(dataSource, urlProps, "cachePrepStmts", Boolean.TRUE.toString());
    addJDBCProperty(dataSource, urlProps, "prepStmtCacheSize", "200000");
    addJDBCProperty(dataSource, urlProps, "prepStmtCacheSqlLimit", "2048");
    addJDBCProperty(dataSource, urlProps, "useLocalSessionState", Boolean.TRUE.toString());
    addJDBCProperty(dataSource, urlProps, "rewriteBatchedStatements", Boolean.TRUE.toString());
    addJDBCProperty(dataSource, urlProps, "cacheResultSetMetadata", Boolean.FALSE.toString());
    addJDBCProperty(dataSource, urlProps, "cacheServerConfiguration", Boolean.TRUE.toString());
    addJDBCProperty(dataSource, urlProps, "elideSetAutoCommits", Boolean.TRUE.toString());
    addJDBCProperty(dataSource, urlProps, "maintainTimeStats", Boolean.FALSE.toString());
    addJDBCProperty(dataSource, urlProps, "netTimeoutForStreamingResults", "0");
    addJDBCProperty(dataSource, urlProps, "tinyInt1isBit", Boolean.FALSE.toString());
    addJDBCProperty(dataSource, urlProps, "useSSL", Boolean.FALSE.toString());
    addJDBCProperty(dataSource, urlProps, "serverTimezone", "UTC");
    HikariDataSource result = new HikariDataSource(dataSource);
    dataSource.close();
    return result;
}
- The data source connection info is extracted from the configuration file on its own.
- Each data source corresponds to one DataSourceConfiguration.
- The data source is started from its DataSourceConfiguration (a simplified sketch of this reflection step follows the list).
- The started dataSource is then decorated with some global settings (which, in my view, could be turned into dynamic configuration).
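To make the reflection step concrete, here is a simplified, hedged sketch of the same technique outside ShardingSphere: instantiate the pool class by name, then push each raw property through a matching setter. The class, helper, and property names here are illustrative, not the project's code.

```java
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;
import javax.sql.DataSource;

public final class ReflectiveDataSourceDemo {

    // Mirrors the idea of createDataSource(): no hard dependency on a concrete pool type.
    public static DataSource create(final String dataSourceClassName, final Map<String, Object> props) throws ReflectiveOperationException {
        DataSource result = (DataSource) Class.forName(dataSourceClassName).getConstructor().newInstance();
        Method[] methods = result.getClass().getMethods();
        for (Map.Entry<String, Object> entry : props.entrySet()) {
            // jdbcUrl -> setJdbcUrl, username -> setUsername, ...
            String setterName = "set" + Character.toUpperCase(entry.getKey().charAt(0)) + entry.getKey().substring(1);
            Optional<Method> setter = Arrays.stream(methods)
                    .filter(each -> each.getName().equals(setterName) && 1 == each.getParameterCount())
                    .findFirst();
            if (setter.isPresent() && null != entry.getValue()) {
                setter.get().invoke(result, entry.getValue());
            }
        }
        return result;
    }

    public static void main(final String[] args) throws ReflectiveOperationException {
        Map<String, Object> props = new LinkedHashMap<>();
        props.put("jdbcUrl", "jdbc:mysql://localhost:3306/demo_ds_0");
        props.put("username", "root");
        props.put("password", "root1234");
        // Requires HikariCP on the classpath, as in the configuration above.
        System.out.println(create("com.zaxxer.hikari.HikariDataSource", props).getClass().getName());
    }
}
```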
Parsing the rules
public Collection<RuleConfiguration> swapToRuleConfigurations(final Collection<YamlRuleConfiguration> yamlRuleConfigs) {
    Collection<RuleConfiguration> result = new LinkedList<>();
    Collection<Class<?>> ruleConfigTypes = yamlRuleConfigs.stream().map(YamlRuleConfiguration::getRuleConfigurationType).collect(Collectors.toList());
    for (Entry<Class<?>, YamlRuleConfigurationSwapper> entry : OrderedSPIRegistry.getRegisteredServicesByClass(YamlRuleConfigurationSwapper.class, ruleConfigTypes).entrySet()) {
        result.addAll(swapToRuleConfigurations(yamlRuleConfigs, entry.getKey(), entry.getValue()));
    }
    return result;
}

private Collection<RuleConfiguration> swapToRuleConfigurations(final Collection<YamlRuleConfiguration> yamlRuleConfigs,
                                                               final Class<?> ruleConfigType, final YamlRuleConfigurationSwapper swapper) {
    return yamlRuleConfigs.stream().filter(
        each -> each.getRuleConfigurationType().equals(ruleConfigType)).map(each -> (RuleConfiguration) swapper.swapToObject(each)).collect(Collectors.toList());
}

public static <T extends OrderedSPI<?>> Map<Class<?>, T> getRegisteredServicesByClass(final Class<T> orderedSPIClass, final Collection<Class<?>> types) {
    Collection<T> registeredServices = getRegisteredServices(orderedSPIClass);
    Map<Class<?>, T> result = new LinkedHashMap<>(registeredServices.size(), 1); // why is the load factor set to 1?
    for (T each : registeredServices) {
        types.stream().filter(type -> each.getTypeClass() == type).forEach(type -> result.put(type, each));
    }
    return result;
}
public static <T extends OrderedSPI<?>> Collection<T> getRegisteredServices(final Class<T> orderedSPIClass) {
    return getRegisteredServices(orderedSPIClass, Comparator.naturalOrder()); // sort by the natural (numeric) order value
}

public static <T extends OrderedSPI<?>> Collection<T> getRegisteredServices(final Class<T> orderedSPIClass, final Comparator<Integer> comparator) {
    Map<Integer, T> result = new TreeMap<>(comparator); // a TreeMap keeps the services sorted by their order
    for (T each : ShardingSphereServiceLoader.getSingletonServiceInstances(orderedSPIClass)) {
        Preconditions.checkArgument(!result.containsKey(each.getOrder()), "Found same order `%s` with `%s` and `%s`", each.getOrder(), result.get(each.getOrder()), each);
        result.put(each.getOrder(), each);
    }
    return result.values();
}
Parsing flow:

- Extract the RuleConfigurationType from each YamlRuleConfiguration.
- Based on the RuleConfigurationType, apply the registered YamlRuleConfigurationSwapper implementations in order to convert the YAML into the standard configuration (a minimal sketch of the ordered-SPI registration follows this list).
- Finally, a collection of ShardingRuleConfiguration objects is produced.
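As a minimal illustration of the ordered-SPI registration used above (none of these classes are ShardingSphere's): each service declares an order value, and a TreeMap keyed by that order yields the implementations in a deterministic sequence.

```java
import java.util.Collection;
import java.util.Map;
import java.util.ServiceLoader;
import java.util.TreeMap;

// Illustrative stand-in for an ordered SPI: each implementation declares its order.
interface OrderedService {
    int getOrder();
}

public final class OrderedSpiDemo {

    // Load all implementations registered via META-INF/services and return them sorted by order.
    public static Collection<OrderedService> loadOrdered() {
        Map<Integer, OrderedService> ordered = new TreeMap<>();
        for (OrderedService each : ServiceLoader.load(OrderedService.class)) {
            if (ordered.containsKey(each.getOrder())) {
                throw new IllegalStateException("Duplicated order: " + each.getOrder());
            }
            ordered.put(each.getOrder(), each);
        }
        return ordered.values();
    }
}
```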
YamlRuleConfigurationSwapper
| SPI Name | Description |
| --- | --- |
| YamlRuleConfigurationSwapper | Converts YAML configuration into the standard user configuration |

| Known Implementation | Description |
| --- | --- |
| ReadwriteSplittingRuleAlgorithmProviderConfigurationYamlSwapper | Converts the algorithm-based read/write splitting configuration into the standard read/write splitting configuration |
| DatabaseDiscoveryRuleAlgorithmProviderConfigurationYamlSwapper | Converts the algorithm-based database discovery configuration into the standard database discovery configuration |
| ShardingRuleAlgorithmProviderConfigurationYamlSwapper | Converts the algorithm-based sharding configuration into the standard sharding configuration |
| EncryptRuleAlgorithmProviderConfigurationYamlSwapper | Converts the algorithm-based encryption configuration into the standard encryption configuration |
| ReadwriteSplittingRuleConfigurationYamlSwapper | Converts the read/write splitting YAML configuration into the standard read/write splitting configuration |
| DatabaseDiscoveryRuleConfigurationYamlSwapper | Converts the database discovery YAML configuration into the standard database discovery configuration |
| AuthorityRuleConfigurationYamlSwapper | Converts the authority rule YAML configuration into the standard authority rule configuration |
| ShardingRuleConfigurationYamlSwapper | Converts the sharding YAML configuration into the standard sharding configuration |
| EncryptRuleConfigurationYamlSwapper | Converts the encryption YAML configuration into the standard encryption configuration |
| ShadowRuleConfigurationYamlSwapper | Converts the shadow database YAML configuration into the standard shadow database configuration |
The purpose of each swapper is documented in configuration.cn.md.

Question: why do the swappers have to be registered in order?
ShardingRuleConfiguration
public final class ShardingRuleConfiguration implements SchemaRuleConfiguration, DistributedRuleConfiguration {
    private Collection<ShardingTableRuleConfiguration> tables = new LinkedList<>();
    private Collection<ShardingAutoTableRuleConfiguration> autoTables = new LinkedList<>();
    private Collection<String> bindingTableGroups = new LinkedList<>();
    private Collection<String> broadcastTables = new LinkedList<>();
    private ShardingStrategyConfiguration defaultDatabaseShardingStrategy;
    private ShardingStrategyConfiguration defaultTableShardingStrategy;
    private KeyGenerateStrategyConfiguration defaultKeyGenerateStrategy;
    private String defaultShardingColumn;
    private Map<String, ShardingSphereAlgorithmConfiguration> shardingAlgorithms = new LinkedHashMap<>();
    private Map<String, ShardingSphereAlgorithmConfiguration> keyGenerators = new LinkedHashMap<>();
}
The corresponding configuration:
rules:
- !SHARDING
  tables:
    t_order:
      actualDataNodes: ds_${0..1}.t_order_${0..1}
      tableStrategy:
        standard:
          shardingColumn: order_id
          shardingAlgorithmName: t_order_inline
      keyGenerateStrategy:
        column: order_id
        keyGeneratorName: snowflake
    t_order_item:
      actualDataNodes: ds_${0..1}.t_order_item_${0..1}
      tableStrategy:
        standard:
          shardingColumn: order_id
          shardingAlgorithmName: t_order_item_inline
      keyGenerateStrategy:
        column: order_item_id
        keyGeneratorName: snowflake
  bindingTables:
    - t_order,t_order_item
  broadcastTables:
    - t_address
  defaultDatabaseStrategy:
    standard:
      shardingColumn: user_id
      shardingAlgorithmName: database_inline
  defaultTableStrategy:
    none:
  shardingAlgorithms:
    database_inline:
      type: INLINE
      props:
        algorithm-expression: ds_${user_id % 2}
    t_order_inline:
      type: INLINE
      props:
        algorithm-expression: t_order_${order_id % 2}
    t_order_item_inline:
      type: INLINE
      props:
        algorithm-expression: t_order_item_${order_id % 2}
  keyGenerators:
    snowflake:
      type: SNOWFLAKE
      props:
        worker-id: 123
Comparing the configuration file with ShardingRuleConfiguration again shows that the entries in the file map directly onto its fields.
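For reference, the swapper essentially produces what you would get by building this configuration through the Java API by hand. Below is a hedged sketch of roughly the t_order portion; constructor shapes and package locations follow the 5.0.x API and may differ in other versions, and t_order_item is omitted for brevity.

```java
import java.util.Properties;
import org.apache.shardingsphere.infra.config.algorithm.ShardingSphereAlgorithmConfiguration;
import org.apache.shardingsphere.sharding.api.config.ShardingRuleConfiguration;
import org.apache.shardingsphere.sharding.api.config.rule.ShardingTableRuleConfiguration;
import org.apache.shardingsphere.sharding.api.config.strategy.keygen.KeyGenerateStrategyConfiguration;
import org.apache.shardingsphere.sharding.api.config.strategy.sharding.NoneShardingStrategyConfiguration;
import org.apache.shardingsphere.sharding.api.config.strategy.sharding.StandardShardingStrategyConfiguration;

public final class ShardingRuleConfigDemo {

    public static ShardingRuleConfiguration build() {
        ShardingRuleConfiguration result = new ShardingRuleConfiguration();
        // t_order: table sharding by order_id and snowflake key generation, as in the YAML above.
        ShardingTableRuleConfiguration orderTable = new ShardingTableRuleConfiguration("t_order", "ds_${0..1}.t_order_${0..1}");
        orderTable.setTableShardingStrategy(new StandardShardingStrategyConfiguration("order_id", "t_order_inline"));
        orderTable.setKeyGenerateStrategy(new KeyGenerateStrategyConfiguration("order_id", "snowflake"));
        result.getTables().add(orderTable);
        result.getBindingTableGroups().add("t_order,t_order_item");
        result.getBroadcastTables().add("t_address");
        result.setDefaultDatabaseShardingStrategy(new StandardShardingStrategyConfiguration("user_id", "database_inline"));
        result.setDefaultTableShardingStrategy(new NoneShardingStrategyConfiguration());
        // Named algorithms referenced by the strategies above.
        Properties databaseInline = new Properties();
        databaseInline.setProperty("algorithm-expression", "ds_${user_id % 2}");
        result.getShardingAlgorithms().put("database_inline", new ShardingSphereAlgorithmConfiguration("INLINE", databaseInline));
        Properties snowflake = new Properties();
        snowflake.setProperty("worker-id", "123");
        result.getKeyGenerators().put("snowflake", new ShardingSphereAlgorithmConfiguration("SNOWFLAKE", snowflake));
        return result;
    }
}
```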
Generating the ShardingSphereDataSource
public static DataSource createDataSource(final String schemaName, final ModeConfiguration modeConfig,
                                          final Map<String, DataSource> dataSourceMap, final Collection<RuleConfiguration> configs, final Properties props) throws SQLException {
    return new ShardingSphereDataSource(Strings.isNullOrEmpty(schemaName) ? DefaultSchema.LOGIC_NAME : schemaName, modeConfig, dataSourceMap, configs, props);
}
The parsed dataSource map and RuleConfiguration collection are passed to the ShardingSphereDataSource constructor to build the final data source; a usage sketch of the factory call is shown below.
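A usage sketch of that factory call, assuming dataSourceMap and ruleConfigs are the results of the parsing steps above; the schema name logic_db is just an example, and passing null for the mode falls back to Memory mode, as handled by createContextManager() further down.

```java
import java.sql.SQLException;
import java.util.Collection;
import java.util.Map;
import java.util.Properties;
import javax.sql.DataSource;
import org.apache.shardingsphere.driver.api.ShardingSphereDataSourceFactory;
import org.apache.shardingsphere.infra.config.RuleConfiguration;

public final class DataSourceAssemblyDemo {

    // dataSourceMap and ruleConfigs are assumed to come from the data source and rule parsing steps above.
    public static DataSource assemble(final Map<String, DataSource> dataSourceMap,
                                      final Collection<RuleConfiguration> ruleConfigs) throws SQLException {
        // null ModeConfiguration -> "Memory" mode (see createContextManager() below).
        return ShardingSphereDataSourceFactory.createDataSource("logic_db", null, dataSourceMap, ruleConfigs, new Properties());
    }
}
```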
public ShardingSphereDataSource(final String schemaName, final ModeConfiguration modeConfig, final Map<String, DataSource> dataSourceMap,
                                final Collection<RuleConfiguration> ruleConfigs, final Properties props) throws SQLException {
    this.schemaName = schemaName;
    contextManager = createContextManager(schemaName, modeConfig, dataSourceMap, ruleConfigs, props);
}
The constructor populates the contextManager member field of ShardingSphereDataSource:
private ContextManager createContextManager(final String schemaName, final ModeConfiguration modeConfig, final Map<String, DataSource> dataSourceMap,
                                            final Collection<RuleConfiguration> ruleConfigs, final Properties props) throws SQLException {
    Map<String, Map<String, DataSource>> dataSourcesMap = Collections.singletonMap(schemaName, dataSourceMap);
    Map<String, Collection<RuleConfiguration>> schemaRuleConfigs = Collections.singletonMap(
        schemaName, ruleConfigs.stream().filter(each -> each instanceof SchemaRuleConfiguration).collect(Collectors.toList()));
    Collection<RuleConfiguration> globalRuleConfigs = ruleConfigs.stream().filter(each -> each instanceof GlobalRuleConfiguration).collect(Collectors.toList());
    ContextManagerBuilder builder = TypedSPIRegistry.getRegisteredService(ContextManagerBuilder.class, null == modeConfig ? "Memory" : modeConfig.getType(), new Properties()); // factory pattern: obtain the ContextManagerBuilder matching the mode
    return builder.build(modeConfig, dataSourcesMap, schemaRuleConfigs, globalRuleConfigs, props, null == modeConfig || modeConfig.isOverwrite());
}
@Override
public ContextManager build(final ModeConfiguration modeConfig, final Map<String, Map<String, DataSource>> dataSourcesMap,
                            final Map<String, Collection<RuleConfiguration>> schemaRuleConfigs, final Collection<RuleConfiguration> globalRuleConfigs,
                            final Properties props, final boolean isOverwrite) throws SQLException {
    MetaDataContexts metaDataContexts = new MetaDataContextsBuilder(dataSourcesMap, schemaRuleConfigs, globalRuleConfigs, props).build(null);
    TransactionContexts transactionContexts = createTransactionContexts(metaDataContexts);
    ContextManager result = new MemoryContextManager();
    result.init(metaDataContexts, transactionContexts);
    return result;
}
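The TypedSPIRegistry call in createContextManager() is a lookup-by-type factory. As a final illustrative sketch (these are not ShardingSphere's classes), the idea is to pick the registered implementation whose type string matches the configured mode, defaulting to "Memory" when no mode is configured.

```java
import java.util.ServiceLoader;

// Illustrative stand-in for a typed SPI: each implementation names the mode it supports.
interface TypedBuilder {
    String getType();
}

public final class TypedSpiDemo {

    // Return the implementation registered for the requested type, e.g. "Memory".
    public static TypedBuilder lookup(final String type) {
        for (TypedBuilder each : ServiceLoader.load(TypedBuilder.class)) {
            if (each.getType().equalsIgnoreCase(type)) {
                return each;
            }
        }
        throw new IllegalArgumentException("No builder registered for type: " + type);
    }
}
```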