The demo in this article is developed with Spring STS (Spring Tool Suite).
Prerequisite: the read/write-split (master/slave) databases are already set up.
1. Create a new Spring Boot project.
2. After the project is created, I like to add a configuration class at the same level as the Application entry class (see the sketch after the annotation notes below).
The annotations it uses are explained one by one:
@EnableWebMvc enables Spring MVC.
@Configuration lets Spring Boot pick this class up as a configuration class at startup (it tells the Spring container that the class plays the role of an XML configuration file).
@ComponentScan enables component scanning for annotated beans.
@MapperScan(basePackages = "com.wz.mail.mapper") scans the DAO (mapper) interfaces.
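A minimal sketch of what such a configuration class could look like, assuming the package com.wz.mail and the class name MailConfiguration (both are illustrative, since the original screenshot is not reproduced here):

package com.wz.mail;

import org.mybatis.spring.annotation.MapperScan;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;

@EnableWebMvc
@Configuration
@ComponentScan
@MapperScan(basePackages = "com.wz.mail.mapper")
public class MailConfiguration {
    // No extra beans are needed here; the annotations above do the work
}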
3. About the Spring Boot configuration file: instead of application.properties I prefer application.yml (the hierarchy is easier to read). Its content is as follows:
## context-path is the application context path; also set the port and the session timeout
server:
  context-path: /mail-producer
  port: 8001
  session:
    timeout: 900

## Spring settings
spring:
  http:
    encoding:
      charset: UTF-8
  ## jackson: dates are serialized with this pattern by default; non_null means null properties are filtered out
  jackson:
    date-format: yyyy-MM-dd HH:mm:ss
    time-zone: GMT+8
    default-property-inclusion: NON_NULL

## Druid is used as the data source; master and slave share almost the same settings, only the database host/IP differs
druid:
  type: com.alibaba.druid.pool.DruidDataSource
  master:
    url: jdbc:mysql://localhost:3306/mail?characterEncoding=UTF-8&autoReconnect=true&zeroDateTimeBehavior=convertToNull&useUnicode=true
    driver-class-name: com.mysql.jdbc.Driver
    username: root
    password: root
    initialSize: 5
    minIdle: 1
    #maxIdle: 10
    maxActive: 100
    maxWait: 60000
    timeBetweenEvictionRunsMillis: 60000
    minEvictableIdleTimeMillis: 300000
    validationQuery: SELECT 1 FROM DUAL
    testWhileIdle: true
    testOnBorrow: false
    testOnReturn: false
    poolPreparedStatements: true
    maxPoolPreparedStatementPerConnectionSize: 20
    filters: stat,wall,log4j
    useGlobalDataSourceStat: true
  slave:
    url: jdbc:mysql://localhost:3306/mail?characterEncoding=UTF-8&autoReconnect=true&zeroDateTimeBehavior=convertToNull&useUnicode=true
    driver-class-name: com.mysql.jdbc.Driver
    username: root
    password: root
    initialSize: 5
    minIdle: 1
    #maxIdle: 10
    maxActive: 100
    maxWait: 60000
    timeBetweenEvictionRunsMillis: 60000
    minEvictableIdleTimeMillis: 300000
    validationQuery: SELECT 1 FROM DUAL
    testWhileIdle: true
    testOnBorrow: false
    testOnReturn: false
    poolPreparedStatements: true
    maxPoolPreparedStatementPerConnectionSize: 20
    filters: stat,wall,log4j
    useGlobalDataSourceStat: true

## MyBatis mapper XML locations
mybatis:
  mapper-locations: classpath:com/wz/mail/mapping/*.xml
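For reference, a minimal mapper interface that @MapperScan and mapper-locations would pick up might look like the following (the MailMapper name and its countMails() query are illustrative assumptions; the matching SQL would live in com/wz/mail/mapping/MailMapper.xml):

package com.wz.mail.mapper;

public interface MailMapper {

    // Mapped to a SELECT COUNT(*) statement defined in com/wz/mail/mapping/MailMapper.xml
    int countMails();
}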
4. We have now configured two data sources, and both must be loaded when the project starts.
(1) First, register the two data sources as beans:
package com.wz.mail.config;

import java.sql.SQLException;

import javax.sql.DataSource;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.jdbc.DataSourceBuilder;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.boot.web.servlet.ServletRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.transaction.annotation.EnableTransactionManagement;

import com.alibaba.druid.support.http.StatViewServlet;
import com.alibaba.druid.support.http.WebStatFilter;

@Configuration               // see the explanation above
@EnableTransactionManagement // enable Spring's annotation-driven transaction management
public class DataSourceConfiguration {

    private static final Logger LOGGER = LoggerFactory.getLogger(DataSourceConfiguration.class);

    // Bound to druid.type in application.yml, i.e. the value from the config file is injected into dataSourceType
    @Value("${druid.type}")
    private Class<? extends DataSource> dataSourceType;

    @Bean(name = "masterDataSource")
    @Primary // prefer the master data source (it supports both writes and reads)
    @ConfigurationProperties(prefix = "druid.master") // bind every property under druid.master in application.yml to the masterDataSource bean managed by Spring
    public DataSource masterDataSource() throws SQLException {
        DataSource masterDataSource = DataSourceBuilder.create().type(dataSourceType).build();
        LOGGER.info("========MASTER: {}=========", masterDataSource);
        return masterDataSource;
    }

    @Bean(name = "slaveDataSource")
    @ConfigurationProperties(prefix = "druid.slave")
    public DataSource slaveDataSource() {
        DataSource slaveDataSource = DataSourceBuilder.create().type(dataSourceType).build();
        LOGGER.info("========SLAVE: {}=========", slaveDataSource);
        return slaveDataSource;
    }

    // Servlet used by the Druid monitoring console
    @Bean
    public ServletRegistrationBean druidServlet() {
        ServletRegistrationBean reg = new ServletRegistrationBean();
        reg.setServlet(new StatViewServlet());
        reg.addUrlMappings("/druid/*");
        reg.addInitParameter("allow", "localhost");
        reg.addInitParameter("deny", "/deny");
        LOGGER.info(" druid console manager init : {} ", reg);
        return reg;
    }

    // Web statistics filter for the Druid console
    @Bean
    public FilterRegistrationBean filterRegistrationBean() {
        FilterRegistrationBean filterRegistrationBean = new FilterRegistrationBean();
        filterRegistrationBean.setFilter(new WebStatFilter());
        filterRegistrationBean.addUrlPatterns("/*");
        filterRegistrationBean.addInitParameter("exclusions", "*.js,*.gif,*.jpg,*.png,*.css,*.ico,/druid/*");
        LOGGER.info(" druid filter register : {} ", filterRegistrationBean);
        return filterRegistrationBean;
    }
}
Now start the project. As soon as the log prints both data sources, they were initialized successfully. The log looks like this:
2017-07-27 22:35:06.844 INFO 14424 --- [ main] c.w.mail.config.DataSourceConfiguration : ========MASTER: {
CreateTime:"2017-07-27 22:35:06",
ActiveCount:0,
PoolingCount:0,
CreateCount:0,
DestroyCount:0,
CloseCount:0,
ConnectCount:0,
Connections:[
]
}=========
2017-07-27 22:35:07.178 INFO 14424 --- [ main] c.w.mail.config.DataSourceConfiguration : ========SLAVE: {
CreateTime:"2017-07-27 22:35:07",
ActiveCount:0,
PoolingCount:0,
CreateCount:0,
DestroyCount:0,
CloseCount:0,
ConnectCount:0,
Connections:[
]
}=========
Open http://localhost:8001/mail-producer/druid in a browser; the Druid monitoring console should now be reachable.
Next, MyBatis must be wired to the data sources through the classic SqlSessionFactory: both data sources are handed to the SqlSessionFactory to manage. But how do we tell the master data source from the slave one?
Read/write splitting means there are two data sources: write operations go to the master library, read operations go to the slave library. In other words, two connection pools have to be started.
At usage time, a custom annotation on a method can mark whether the call is a read or a write.
The idea:
After configuring the two data sources (as above), they must be distinguished as master and slave.
Both can be injected into the application through the MyBatis configuration, but to implement read/write splitting, i.e. to decide when to write and when to read, we need a marker of our own. This marker must switch between master and slave on the fly and must be handled in a thread-safe way: under concurrency a mixed-up marker could send a write to the slave, which breaks the MySQL (MariaDB) replication mechanism and causes server errors. A ThreadLocal solves this problem.
Then a custom annotation is defined: a method carrying the annotation is treated as read-only, a method without it as a write.
package com.wz.mail.config;

public class DataBaseContextHolder {

    // Distinguishes the master and slave data sources
    public enum DataBaseType {
        MASTER, SLAVE
    }

    // Thread-local variable holding the data source type of the current thread
    private static final ThreadLocal<DataBaseType> contextHolder = new ThreadLocal<DataBaseType>();

    // Set the data source type for the current thread
    public static void setDataBaseType(DataBaseType dataBaseType) {
        if (dataBaseType == null) {
            throw new NullPointerException();
        }
        contextHolder.set(dataBaseType);
    }

    // Get the data source type of the current thread, defaulting to MASTER
    public static DataBaseType getDataBaseType() {
        return contextHolder.get() == null ? DataBaseType.MASTER : contextHolder.get();
    }

    // Clear the data source type from the current thread
    public static void clearDataBaseType() {
        contextHolder.remove();
    }
}
Both data sources are then handed to the SqlSessionFactory. Next comes a MyBatis configuration class, the equivalent of a traditional mybatis.xml.
The data sources must be configured first and only then injected into the SqlSessionFactory (a strict ordering dependency).
How do we make sure that, inside the MyBatis configuration class, the data sources are loaded before the SqlSessionFactory is built? The code is as follows:
package com.wz.mail.config;

import java.util.HashMap;
import java.util.Map;

import javax.annotation.Resource;
import javax.sql.DataSource;

import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.boot.autoconfigure.MybatisAutoConfiguration;
import org.springframework.boot.autoconfigure.AutoConfigureAfter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

/**
 * @author wz
 */
@Configuration
@AutoConfigureAfter({DataSourceConfiguration.class}) // load MybatisConfiguration only after DataSourceConfiguration has been processed
public class MybatisConfiguration extends MybatisAutoConfiguration {

    @Resource(name = "masterDataSource")
    private DataSource masterDataSource;

    @Resource(name = "slaveDataSource")
    private DataSource slaveDataSource;

    @Bean(name = "sqlSessionFactory")
    public SqlSessionFactory sqlSessionFactory() throws Exception {
        // Hand a routing DataSource (Spring's AbstractRoutingDataSource) to the SqlSessionFactory so master/slave switching happens per call
        return super.sqlSessionFactory(roundRobinDataSourceProxy());
    }

    public AbstractRoutingDataSource roundRobinDataSourceProxy() {
        ReadWriteSplitRoutingDataSource proxy = new ReadWriteSplitRoutingDataSource();
        Map<Object, Object> targetDataSources = new HashMap<Object, Object>();
        targetDataSources.put(DataBaseContextHolder.DataBaseType.MASTER, masterDataSource);
        targetDataSources.put(DataBaseContextHolder.DataBaseType.SLAVE, slaveDataSource);
        // The master is the default data source
        proxy.setDefaultTargetDataSource(masterDataSource);
        // Register both target data sources, keyed by DataBaseType
        proxy.setTargetDataSources(targetDataSources);
        return proxy;
    }
}
package com.wz.mail.config;

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

// Routing data source: Spring's AbstractRoutingDataSource picks the target data source by the key returned here
class ReadWriteSplitRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        // MASTER or SLAVE, taken from the current thread
        return DataBaseContextHolder.getDataBaseType();
    }
}
Now define the custom read-only annotation; it marks the places where the default master data source should be swapped for the read-only (slave) one.
package com.wz.mail.config;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Target({ElementType.METHOD, ElementType.TYPE}) // may be placed on methods (and types)
@Retention(RetentionPolicy.RUNTIME)             // retained at runtime so the aspect can see it
public @interface ReadOnlyConnection {
}
package com.wz.mail.config;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.core.Ordered;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class ReadOnlyConnectionInterceptor implements Ordered {

    public static final Logger LOGGER = LoggerFactory.getLogger(ReadOnlyConnectionInterceptor.class);

    // Around advice whose pointcut is the @ReadOnlyConnection annotation
    @Around("@annotation(readOnlyConnection)")
    public Object proceed(ProceedingJoinPoint proceedingJoinPoint, ReadOnlyConnection readOnlyConnection) throws Throwable {
        try {
            LOGGER.info("---------------set database connection read only---------------");
            // Switch the current thread to the slave (read-only) data source
            DataBaseContextHolder.setDataBaseType(DataBaseContextHolder.DataBaseType.SLAVE);
            // Run the annotated method to completion
            Object result = proceedingJoinPoint.proceed();
            return result;
        } finally {
            // Always clear the thread-local afterwards
            DataBaseContextHolder.clearDataBaseType();
            LOGGER.info("---------------clear database connection---------------");
        }
    }

    @Override
    public int getOrder() {
        return 0;
    }
}
The code is done; put @ReadOnlyConnection on the read-only methods.
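A minimal sketch of how the annotation would be used on a service (the MailService class, the mailMapper field and its countMails() query are illustrative assumptions, not part of the original demo):

package com.wz.mail.service;

import javax.annotation.Resource;

import org.springframework.stereotype.Service;

import com.wz.mail.config.ReadOnlyConnection;
import com.wz.mail.mapper.MailMapper;

@Service
public class MailService {

    @Resource
    private MailMapper mailMapper;

    // Annotated: routed to the slave (read-only) data source
    @ReadOnlyConnection
    public int countMails() {
        return mailMapper.countMails();
    }

    // Not annotated: routed to the master (default) data source
    public void saveMail(/* ... */) {
        // write operations go here
    }
}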
Start a test; the log prints the following:
2017-07-30 21:35:13.499 INFO 8604 --- [nio-8001-exec-1] c.w.m.c.ReadOnlyConnectionInterceptor : ---------------set database connection 2 read only---------------
2017-07-30 21:35:13.735 INFO 8604 --- [nio-8001-exec-1] com.alibaba.druid.pool.DruidDataSource : {dataSource-1} inited
2017-07-30 21:35:13.761 INFO 8604 --- [nio-8001-exec-1] c.w.m.c.ReadOnlyConnectionInterceptor : ---------------clear database connection---------------
Test output: total rows = 2
The log shows that the read-only (slave) data source was used and that the thread-local holder was cleared afterwards.