For high concurrency at the data storage layer, the first idea that usually comes to mind is read-write splitting. When a site handles heavy traffic and reads and writes are unevenly distributed, the storage is split into a master and a slave: all writes are routed to the master, all reads to the slave, and the master replicates to the slave. If one slave is not enough, more can be added, for example one master with three slaves. When data changes on the write library it is replicated to every slave, with some replication lag (unless the business requires reading a value immediately after writing it, a small lag is acceptable). Let's discuss the ways read-write splitting can be implemented:
Option 1:
Put the routing in a proxy layer, e.g. MySQL-Proxy, which is transparent to the entire application. MySQL officially advises against using it in production.
Drawbacks: degraded performance; no transaction support.
Option 2:
Use AbstractRoutingDataSource + AOP + annotations to choose the datasource at the DAO layer.
If MyBatis is used, the read-write split can live in the ORM layer: a MyBatis plugin can intercept SQL statements so that every insert/update/delete goes to the master library and every select goes to a slave library, transparently to the DAO layer. The plugin can pick the target library either from an annotation or by analyzing whether the statement is a read or a write. One problem remains: this alone does not support transactions, so we also need to override DataSourceTransactionManager, sending read-only transactions to the read library and transactions that both read and write to the write library.
Option 3:
Use AbstractRoutingDataSource + AOP + annotations to choose the datasource at the service layer; this supports transactions.
Drawback: when methods inside a class call each other via this.xx(), AOP does not intercept the inner call, so special handling is needed.
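The self-invocation pitfall can be demonstrated without Spring. Below, a JDK dynamic proxy stands in for the Spring AOP proxy (the `OrderService` names are made up for illustration): only calls that enter through the proxy reference are intercepted, while a call made through `this` inside the target bypasses the proxy entirely.

```java
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicInteger;

interface OrderService {
    void read();
    void readWrite();
}

class OrderServiceImpl implements OrderService {
    public void read() { /* would hit the read datasource */ }
    public void readWrite() {
        // Internal call: goes through `this`, NOT through the proxy,
        // so the interceptor never sees it.
        this.read();
    }
}

public class SelfInvocationDemo {
    public static void main(String[] args) {
        AtomicInteger intercepted = new AtomicInteger();
        OrderService target = new OrderServiceImpl();
        OrderService proxy = (OrderService) Proxy.newProxyInstance(
                OrderService.class.getClassLoader(),
                new Class<?>[]{OrderService.class},
                (Object p, Method m, Object[] a) -> {
                    intercepted.incrementAndGet(); // datasource switch would happen here
                    return m.invoke(target, a);
                });
        proxy.readWrite(); // only the outer call is intercepted
        System.out.println("intercepted=" + intercepted.get()); // prints intercepted=1
    }
}
```

This is why a service-layer annotation on a method that is only ever reached via `this` silently fails to switch the datasource.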
Let's now walk through the implementation of Option 2:
Implementation: https://github.com/mygudou/smartbatis
- 1 Implement a new MyBatis plugin. Here we implement the Interceptor interface so that the update and query methods of every Executor implementation are intercepted. First, let's see how the target library is chosen via an annotation; the annotation approach suits scenarios with loose transaction requirements.
@Intercepts({
        @Signature(type = Executor.class, method = "update",
                args = { MappedStatement.class, Object.class }),
        @Signature(type = Executor.class, method = "query",
                args = { MappedStatement.class, Object.class, RowBounds.class, ResultHandler.class })
})
@Slf4j
public class AnnotationInterceptor implements Interceptor {

    private static final Map<String, DataSourceType> cache = new ConcurrentHashMap<String, DataSourceType>();

    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        Object[] args = invocation.getArgs();
        MappedStatement mappedStatement = (MappedStatement) args[0];
        String id = mappedStatement.getId();
        DataSourceType curDataSourceType = DataSourceType.WRITE;
        if (cache.containsKey(id)) {
            curDataSourceType = cache.get(id);
        } else {
            Method method = getMappedInterfaceMethod(id);
            if (method != null && method.isAnnotationPresent(DataSource.class)) {
                curDataSourceType = method.getAnnotation(DataSource.class).type();
                log.debug("@@ROUTING_DATASOURCE {}", curDataSourceType);
            }
            cache.put(id, curDataSourceType);
        }
        CurrentDataSourceHoler.setCurrentDataSource(curDataSourceType);
        log.debug("@@CURRENT_DATASOURCE {}", CurrentDataSourceHoler.getCurrentDataSource());
        return invocation.proceed();
    }

    @Override
    public Object plugin(Object target) {
        if (target instanceof Executor) {
            return Plugin.wrap(target, this);
        }
        return target;
    }

    @Override
    public void setProperties(Properties properties) {}

    // Resolve the mapper interface method from the MappedStatement id,
    // which is "fully.qualified.MapperInterface.methodName".
    private Method getMappedInterfaceMethod(String id) {
        String[] items = id.split("\\.");
        ArrayList<String> nameList = new ArrayList<String>(Arrays.asList(items));
        if (nameList.size() < 2) {
            return null;
        }
        String methodName = nameList.get(nameList.size() - 1);
        nameList.remove(nameList.size() - 1);
        String className = StringUtils.join(nameList, ".");
        return ReflectUtil.getMethodByName(ReflectUtil.getClass(className), methodName);
    }
}
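The interceptor relies on a `@DataSource` annotation and a `DataSourceType` enum that the post never shows. The definitions below are a plausible sketch of what the repository contains, and the `UserMapper` interface is a hypothetical usage example:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

enum DataSourceType { READ, WRITE }

// Marks a mapper method with the library it should be routed to;
// WRITE is the default because it is always the safe choice.
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@interface DataSource {
    DataSourceType type() default DataSourceType.WRITE;
}

// Hypothetical mapper: this query is explicitly safe to serve from a slave.
interface UserMapper {
    @DataSource(type = DataSourceType.READ)
    String selectNameById(long id);
}
```

Any mapper method without the annotation falls through to the write library, matching the interceptor's `DataSourceType.WRITE` default above.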
- 2 The following variant analyzes the statement itself instead of an annotation. Note that the generated-key lookup for auto-increment ids (SELECT LAST_INSERT_ID()) must also use the write library.
@Intercepts({
        @Signature(type = Executor.class, method = "update",
                args = { MappedStatement.class, Object.class }),
        @Signature(type = Executor.class, method = "query",
                args = { MappedStatement.class, Object.class, RowBounds.class, ResultHandler.class })
})
public class DynamicPlugin implements Interceptor {

    protected static final Logger logger = LoggerFactory.getLogger(DynamicPlugin.class);

    private static final String REGEX = ".*insert\\u0020.*|.*delete\\u0020.*|.*update\\u0020.*";

    private static final Map<String, DataSourceType> cacheMap = new ConcurrentHashMap<String, DataSourceType>();

    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        boolean synchronizationActive = TransactionSynchronizationManager.isSynchronizationActive();
        if (!synchronizationActive) {
            Object[] objects = invocation.getArgs();
            MappedStatement ms = (MappedStatement) objects[0];
            DataSourceType dynamicDataSourceGlobal = null;
            if ((dynamicDataSourceGlobal = cacheMap.get(ms.getId())) == null) {
                // Read methods
                if (ms.getSqlCommandType().equals(SqlCommandType.SELECT)) {
                    // !selectKey fetches the generated primary key
                    // (SELECT LAST_INSERT_ID()), so it must hit the master
                    if (ms.getId().contains(SelectKeyGenerator.SELECT_KEY_SUFFIX)) {
                        dynamicDataSourceGlobal = DataSourceType.WRITE;
                    } else {
                        BoundSql boundSql = ms.getSqlSource().getBoundSql(objects[1]);
                        String sql = boundSql.getSql().toLowerCase(Locale.CHINA).replaceAll("[\\t\\n\\r]", " ");
                        if (sql.matches(REGEX)) {
                            dynamicDataSourceGlobal = DataSourceType.WRITE;
                        } else {
                            dynamicDataSourceGlobal = DataSourceType.READ;
                        }
                    }
                } else {
                    dynamicDataSourceGlobal = DataSourceType.WRITE;
                }
                logger.warn("Method [{}] uses [{}] strategy, SqlCommandType [{}]..",
                        ms.getId(), dynamicDataSourceGlobal.name(), ms.getSqlCommandType().name());
                cacheMap.put(ms.getId(), dynamicDataSourceGlobal);
            }
            CurrentDataSourceHoler.setCurrentDataSource(dynamicDataSourceGlobal);
        }
        return invocation.proceed();
    }

    @Override
    public Object plugin(Object target) {
        if (target instanceof Executor) {
            return Plugin.wrap(target, this);
        }
        return target;
    }

    @Override
    public void setProperties(Properties properties) {
        // no properties needed
    }
}
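The routing decision in `intercept` ultimately reduces to the `REGEX` test. Extracted into a standalone helper (a sketch; the class name `SqlRouter` is made up), the behavior is easy to verify: any statement containing "insert ", "delete " or "update " is forced to the master.

```java
import java.util.Locale;

public class SqlRouter {

    // Same pattern as DynamicPlugin: \u0020 is a literal space.
    private static final String REGEX =
            ".*insert\\u0020.*|.*delete\\u0020.*|.*update\\u0020.*";

    /** Returns true when the SQL must be executed on the write library. */
    public static boolean needsMaster(String sql) {
        String normalized = sql.toLowerCase(Locale.ROOT).replaceAll("[\\t\\n\\r]", " ");
        return normalized.matches(REGEX);
    }
}
```

Plain selects fall through to the read library, which is why the !selectKey special case above is needed: `SELECT LAST_INSERT_ID()` would otherwise be classified as a read.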
- 3 Next we override the datasource, wrapping both the read and write libraries into a single dynamic datasource.
public class DynamicDataSource extends AbstractRoutingDataSource {

    @Getter @Setter
    private Object writeDataSource;

    @Getter @Setter
    private List<Object> readDataSourceList;

    private int readDataSourceSize;

    private final AtomicInteger counter = new AtomicInteger(0);

    @Override
    public void afterPropertiesSet() {
        if (writeDataSource == null) {
            throw new IllegalArgumentException("Property 'writeDataSource' is required");
        }
        setDefaultTargetDataSource(writeDataSource);
        Map<Object, Object> dataSourceMap = new HashMap<Object, Object>();
        dataSourceMap.put(DataSourceType.WRITE.name(), writeDataSource);
        if (readDataSourceList == null) {
            readDataSourceSize = 0;
        } else {
            for (int i = 0; i < readDataSourceList.size(); i++) {
                dataSourceMap.put(DataSourceType.READ.name() + i, readDataSourceList.get(i));
            }
            readDataSourceSize = readDataSourceList.size();
        }
        setTargetDataSources(dataSourceMap);
        super.afterPropertiesSet();
    }

    @Override
    protected Object determineCurrentLookupKey() {
        DataSourceType dataSourceType = CurrentDataSourceHoler.getCurrentDataSource();
        if (dataSourceType == DataSourceType.READ && readDataSourceSize > 0) {
            // Round-robin over the slaves; floorMod keeps the index
            // non-negative even after the counter overflows, unlike a
            // plain % combined with a racy reset at Integer.MAX_VALUE.
            int index = Math.floorMod(counter.getAndIncrement(), readDataSourceSize);
            return DataSourceType.READ.name() + index;
        }
        return DataSourceType.WRITE.name();
    }

    @Override
    public <T> T unwrap(Class<T> aClass) throws SQLException {
        return null;
    }

    @Override
    public boolean isWrapperFor(Class<?> aClass) throws SQLException {
        return false;
    }
}
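The slave selection in `determineCurrentLookupKey` is a plain round-robin. Isolated into its own class (a sketch; `ReadRoundRobin` is not part of the repository), the overflow behavior can be tested directly: `Math.floorMod` stays non-negative even after the counter wraps past `Integer.MAX_VALUE`, where a raw `%` on a negative counter would produce a negative index and an invalid lookup key.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ReadRoundRobin {

    private final AtomicInteger counter;

    public ReadRoundRobin(int start) {
        this.counter = new AtomicInteger(start);
    }

    /** Picks the next slave index in [0, size); safe across int overflow. */
    public int next(int size) {
        return Math.floorMod(counter.getAndIncrement(), size);
    }
}
```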
- 4 As the logic stands, each database request selects the CurrentDataSource inside the MyBatis plugin, so the current datasource clearly belongs in a ThreadLocal, giving every thread its own isolated routing state.
public class CurrentDataSourceHoler {

    // initialValue() (rather than a static block) guarantees that every
    // thread starts out on WRITE; a static block would only set the value
    // for the single thread that happens to load this class.
    private static final ThreadLocal<DataSourceType> currentDataSource =
            new ThreadLocal<DataSourceType>() {
                @Override
                protected DataSourceType initialValue() {
                    return DataSourceType.WRITE;
                }
            };

    public static void setCurrentDataSource(DataSourceType dataSourceType) {
        currentDataSource.set(dataSourceType);
    }

    public static DataSourceType getCurrentDataSource() {
        return currentDataSource.get();
    }

    public static void clearDataSource() {
        currentDataSource.remove();
    }
}
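Why the initial value matters: a plain `ThreadLocal` yields `null` on any thread that has never called `set`, which would break routing on fresh request threads from a pool. A runnable sketch (class and field names are made up) comparing the two:

```java
import java.util.concurrent.atomic.AtomicReference;

public class ThreadLocalDefaultDemo {

    enum DataSourceType { READ, WRITE }

    // No initial value: a brand-new thread sees null.
    static final ThreadLocal<DataSourceType> bare = new ThreadLocal<DataSourceType>();

    // With an initial value: every thread starts at WRITE.
    static final ThreadLocal<DataSourceType> withDefault =
            ThreadLocal.withInitial(() -> DataSourceType.WRITE);

    /** Runs holder.get() on a fresh thread and reports what it saw. */
    static DataSourceType seenByFreshThread(ThreadLocal<DataSourceType> holder) {
        AtomicReference<DataSourceType> seen = new AtomicReference<DataSourceType>();
        Thread t = new Thread(() -> seen.set(holder.get()));
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return seen.get();
    }
}
```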
- 5 Finally, configure transactions at the Spring level. Obviously only read-only transactions may use the read library; any transaction that performs writes must use the write library.
public class DynamicDataSourceTransactionManager extends DataSourceTransactionManager {

    /**
     * Route read-only transactions to the read library and
     * read-write transactions to the write library.
     */
    @Override
    protected void doBegin(Object transaction, TransactionDefinition definition) {
        // Pick the datasource before the connection is bound to the transaction
        if (definition.isReadOnly()) {
            CurrentDataSourceHoler.setCurrentDataSource(DataSourceType.READ);
        } else {
            CurrentDataSourceHoler.setCurrentDataSource(DataSourceType.WRITE);
        }
        super.doBegin(transaction, definition);
    }

    /**
     * Clear the thread-local datasource after the transaction completes.
     */
    @Override
    protected void doCleanupAfterCompletion(Object transaction) {
        super.doCleanupAfterCompletion(transaction);
        CurrentDataSourceHoler.clearDataSource();
    }
}
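Putting the pieces together: `doBegin` consults `TransactionDefinition.isReadOnly()`, so in application code a service method declared `@Transactional(readOnly = true)` runs against a slave while any read-write transaction sticks to the master for its whole lifetime. The routing rule itself is simple enough to simulate without Spring (a sketch; `ReadOnlyRoutingDemo` and its members are made up stand-ins for the classes above):

```java
public class ReadOnlyRoutingDemo {

    enum DataSourceType { READ, WRITE }

    // Stand-in for CurrentDataSourceHoler: per-thread routing state
    // defaulting to the safe WRITE library.
    static final ThreadLocal<DataSourceType> holder =
            ThreadLocal.withInitial(() -> DataSourceType.WRITE);

    // Mirrors DynamicDataSourceTransactionManager.doBegin.
    static void begin(boolean readOnly) {
        holder.set(readOnly ? DataSourceType.READ : DataSourceType.WRITE);
    }

    // Mirrors doCleanupAfterCompletion: drop the thread-local entry
    // so pooled threads do not leak routing state into the next request.
    static void cleanup() {
        holder.remove();
    }
}
```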