Because this project handles a large volume of data and heavy traffic, the database layer is designed around read/write splitting in order to meet the performance requirements.
A quick overview of how read/write splitting is implemented here:

- Configure and load the database information, i.e. load the data sources.
- Extend AbstractRoutingDataSource and override determineCurrentLookupKey(); this method decides at runtime which data source to use and therefore performs the actual switching. This is the key step.
- Use an AOP aspect to intercept read and write operations and set a read/write flag accordingly.
Enough talk, on to the code.
- Configure and load the database information
(1) Put the database settings in the yml file; here it is MySQL with one master and two read replicas:

```yaml
datasource:
  # number of read replicas
  readSize: 2
  type: com.alibaba.druid.pool.DruidDataSource  # use the Druid data source
  # master (write)
  write:
    url: jdbc:mysql://localhost:3306/lingguan_xiaowang?useUnicode=true&characterEncoding=utf-8&useSSL=false
    username: username
    password: password
    driver-class-name: com.mysql.jdbc.Driver
    filters: stat
    maxActive: 20
    initialSize: 1
    maxWait: 60000
    minIdle: 1
    timeBetweenEvictionRunsMillis: 60000
    minEvictableIdleTimeMillis: 300000
    validationQueryTimeout: 900000
    validationQuery: SELECT SYSDATE() from dual
    testWhileIdle: true
    testOnBorrow: false
    testOnReturn: false
    poolPreparedStatements: true
    maxOpenPreparedStatements: 20
  # read replicas
  read1:
    url: jdbc:mysql://localhost:3306/slave1?useUnicode=true&characterEncoding=utf-8&useSSL=false
    username: username
    password: password
    driver-class-name: com.mysql.jdbc.Driver
    filters: stat
    maxActive: 20
    initialSize: 1
    maxWait: 60000
    minIdle: 1
    timeBetweenEvictionRunsMillis: 60000
    minEvictableIdleTimeMillis: 300000
    validationQueryTimeout: 900000
    validationQuery: SELECT SYSDATE() from dual
    testWhileIdle: true
    testOnBorrow: false
    testOnReturn: false
    poolPreparedStatements: true
    maxOpenPreparedStatements: 20
  read2:
    url: jdbc:mysql://localhost:3306/slave2?useUnicode=true&characterEncoding=utf-8&useSSL=false
    username: username
    password: password
    driver-class-name: com.mysql.jdbc.Driver
    filters: stat
    maxActive: 20
    initialSize: 1
    maxWait: 60000
    minIdle: 1
    timeBetweenEvictionRunsMillis: 60000
    minEvictableIdleTimeMillis: 300000
    validationQueryTimeout: 900000
    validationQuery: SELECT SYSDATE() from dual
    testWhileIdle: true
    testOnBorrow: false
    testOnReturn: false
    poolPreparedStatements: true
    maxOpenPreparedStatements: 20
```
(2) Read the yml configuration and build the dynamic routing DataSource
```java
import com.gome.store.util.mybatis.DataSourceType;
import com.gome.store.util.mybatis.MyAbstractRoutingDataSource;
import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.jdbc.DataSourceBuilder;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.core.env.Environment;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

import javax.sql.DataSource;
import java.util.HashMap;
import java.util.Map;

/**
 * @Description Parses the data source configuration entries
 * @Author wangjie<https://my.oschina.net/xiaowangqiongyou>
 * @Date 2017/7/31
 */
@Configuration
@EnableTransactionManagement
public class DataSourceConfiguration {

    protected Logger logger = LoggerFactory.getLogger(this.getClass());

    @Value("${datasource.type}")
    private Class<? extends DataSource> dataSourceType;

    @Autowired
    private Environment env;

    @Value("${datasource.readSize}")
    private String dataSourceSize;

    @Bean(name = "writeDataSource")
    @Primary
    @ConfigurationProperties(prefix = "datasource.write")
    public DataSource writeDataSource() {
        logger.info("-------------------- writeDataSource init ---------------------");
        return DataSourceBuilder.create().type(dataSourceType).build();
    }

    /**
     * One bean is required per read replica.
     *
     * @author wangjie
     * @date 2017/7/31
     */
    @Bean(name = "readDataSource1")
    @ConfigurationProperties(prefix = "datasource.read1")
    public DataSource readDataSourceOne() {
        logger.info("-------------------- readDataSourceOne init ---------------------");
        return DataSourceBuilder.create().type(dataSourceType).build();
    }

    @Bean(name = "readDataSource2")
    @ConfigurationProperties(prefix = "datasource.read2")
    public DataSource readDataSourceTwo() {
        logger.info("-------------------- readDataSourceTwo init ---------------------");
        return DataSourceBuilder.create().type(dataSourceType).build();
    }

    @Bean
    public MyAbstractRoutingDataSource dataSource(@Qualifier("writeDataSource") DataSource writeDataSource,
                                                  @Qualifier("readDataSource1") DataSource readDataSourceOne,
                                                  @Qualifier("readDataSource2") DataSource readDataSourceTwo) {
        int size = Integer.parseInt(dataSourceSize);
        Map<Object, Object> targetDataSources = new HashMap<>();
        // write
        targetDataSources.put(DataSourceType.write.getType(), writeDataSource);
        // read
        targetDataSources.put(DataSourceType.read.getType() + 0, readDataSourceOne);
        targetDataSources.put(DataSourceType.read.getType() + 1, readDataSourceTwo);

        MyAbstractRoutingDataSource dataSource = new MyAbstractRoutingDataSource(size);
        dataSource.setTargetDataSources(targetDataSources);     // inherited from AbstractRoutingDataSource
        dataSource.setDefaultTargetDataSource(writeDataSource); // the write data source is the default
        return dataSource;
    }

    @Bean
    public SqlSessionFactory sqlSessionFactory(MyAbstractRoutingDataSource ds) throws Exception {
        SqlSessionFactoryBean fb = new SqlSessionFactoryBean();
        fb.setDataSource(ds); // set the routing data source (required, otherwise an error is thrown)
        // The next two calls are only needed for *.xml mapper files; if the persistence layer
        // uses annotations only, they can be omitted.
        fb.setTypeAliasesPackage(env.getProperty("mybatis.typeAliasesPackage")); // base package for type aliases
        fb.setMapperLocations(new PathMatchingResourcePatternResolver()
                .getResources(env.getProperty("mybatis.mapperLocations")));
        return fb.getObject();
    }

    /**
     * Configure the transaction manager.
     */
    @Bean
    public DataSourceTransactionManager transactionManager(MyAbstractRoutingDataSource dataSource) throws Exception {
        return new DataSourceTransactionManager(dataSource);
    }
}
```
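The two `mybatis.*` properties read via `env.getProperty(...)` above also have to exist in the yml file. A minimal sketch, assuming hypothetical package and mapper locations (both values are placeholders, not taken from the original project):

```yaml
mybatis:
  # placeholders -- adjust to your own project layout
  typeAliasesPackage: com.gome.store.entity
  mapperLocations: classpath:mapper/*.xml
```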
Notes:
- @Primary: when several beans of the same type are candidates for injection, marks the one chosen by default instead of letting @Autowired fail with an ambiguity error (commonly used with multiple data sources); see the sketch after these notes.
- @Qualifier: injects a bean by name; used when one type has several beans (here there are multiple DataSource instances, so they must be injected by name).
- The SqlSessionFactory and the transaction manager are both built on top of the routing data source (the transaction manager can be dropped if transactions are not needed).
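For illustration only, a minimal sketch of how the two annotations resolve injection among the DataSource beans defined above (the ReportService class is hypothetical and not part of this project):

```java
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Component;

@Component
public class ReportService {

    // No qualifier: Spring injects the @Primary bean, i.e. "writeDataSource".
    @Autowired
    private DataSource defaultDataSource;

    // @Qualifier selects a bean by name: "readDataSource1" is injected here.
    @Autowired
    @Qualifier("readDataSource1")
    private DataSource readReplicaDataSource;
}
```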
- Write the helper classes that control the data source type
(1) Data source type enum

```java
/**
 * @Description Data source type: read (replica) or write (master)
 * @Author wangjie<https://my.oschina.net/xiaowangqiongyou>
 * @Date 2017/7/31
 */
public enum DataSourceType {

    read("read", "replica"),
    write("write", "master");

    private String type;
    private String name;

    DataSourceType(String type, String name) {
        this.type = type;
        this.name = name;
    }

    public String getType() {
        return type;
    }

    public String getName() {
        return name;
    }
}
```
(2) Data source context holder

```java
/**
 * @Description Thread-local flag recording whether the current operation is a read or a write
 * @Author wangjie<https://my.oschina.net/xiaowangqiongyou>
 * @Date 2017/7/31
 */
public class DataSourceContextHolder {

    private static final ThreadLocal<String> local = new ThreadLocal<>();

    public static ThreadLocal<String> getLocal() {
        return local;
    }

    /**
     * Reads may be served by several replicas.
     */
    public static void read() {
        local.set(DataSourceType.read.getType());
    }

    /**
     * Writes always go to the single master.
     */
    public static void write() {
        local.set(DataSourceType.write.getType());
    }

    public static String getJdbcType() {
        return local.get();
    }

    /**
     * Clear the data source type.
     */
    public static void clearDataSourceType() {
        local.remove();
    }
}
```
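Because the flag lives in a ThreadLocal, a value set on a pooled thread stays there until it is overwritten or cleared. A minimal usage sketch (a hypothetical caller; in this project the flag is normally set by the AOP aspect shown further below):

```java
DataSourceContextHolder.read();
try {
    // execute read-only queries here; connections obtained on this thread
    // will be routed to one of the read replicas
} finally {
    // clear the flag so the next task reusing this thread starts clean
    DataSourceContextHolder.clearDataSourceType();
}
```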
- Extend AbstractRoutingDataSource
```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;
import org.springframework.util.StringUtils;

import java.util.concurrent.atomic.AtomicInteger;

/**
 * @Description Switches between the configured data sources
 * @Author wangjie<https://my.oschina.net/xiaowangqiongyou>
 * @Date 2017/7/31
 */
public class MyAbstractRoutingDataSource extends AbstractRoutingDataSource {

    protected Logger logger = LoggerFactory.getLogger(this.getClass());

    private final int dataSourceNumber;

    private AtomicInteger count = new AtomicInteger(0);

    public MyAbstractRoutingDataSource(int dataSourceNumber) {
        this.dataSourceNumber = dataSourceNumber;
    }

    @Override
    protected Object determineCurrentLookupKey() {
        Object resultObject = null;
        String typeKey = DataSourceContextHolder.getJdbcType();
        // Only the master participates in transactions; an empty typeKey means the master data source is used.
        if (StringUtils.isEmpty(typeKey)) {
            resultObject = DataSourceType.write.getType();
        } else {
            // Simple round-robin load balancing across the read replicas.
            int number = count.getAndAdd(1);
            int lookupKey = number % dataSourceNumber;
            resultObject = DataSourceType.read.getType() + lookupKey;
        }
        logger.info("determineCurrentLookupKey:" + resultObject);
        return resultObject;
    }
}
```
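To make the key generation concrete, a small sketch (assuming a test class placed in the same package, since determineCurrentLookupKey() is protected) of what successive calls return with readSize = 2:

```java
MyAbstractRoutingDataSource ds = new MyAbstractRoutingDataSource(2);

DataSourceContextHolder.read();
ds.determineCurrentLookupKey();   // "read0" -> matches targetDataSources key "read0"
ds.determineCurrentLookupKey();   // "read1" -> matches targetDataSources key "read1"
ds.determineCurrentLookupKey();   // "read0" again: simple round robin

DataSourceContextHolder.clearDataSourceType();
ds.determineCurrentLookupKey();   // "write" -> the master, which is also the default target
```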
- The AOP aspect
```java
import com.gome.store.util.mybatis.DataSourceContextHolder;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.reflect.MethodSignature;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

/**
 * @Description Aspect that switches the data source.
 * Note: if a service-layer method is annotated with @Transactional, the transaction is bound
 * (and MyAbstractRoutingDataSource.determineCurrentLookupKey() is invoked) before this aspect runs,
 * so DataSourceContextHolder.getJdbcType() is still empty and the master is used.
 * @Author wangjie<https://my.oschina.net/xiaowangqiongyou>
 * @Date 2017/7/31
 */
@Aspect
@Component
public class DataSourceAop {

    private static Logger logger = LoggerFactory.getLogger(DataSourceAop.class);

    // @Before("execution(* com.ggj.encrypt.modules.*.dao..*.find*(..)) or execution(* com.ggj.encrypt.modules.*.dao..*.get*(..)) or execution(* com.ggj.encrypt.modules.*.dao..*.select*(..))")
    @Before("execution(* com.gome.store.dao..*.query*(..)) or execution(* com.gome.store.dao..*.get*(..)) or execution(* com.gome.store.dao..*.select*(..))")
    public void setReadDataSourceType() {
        DataSourceContextHolder.read();
        logger.info("dataSource switched to: read");
    }

    // Alternative approach: decide based on the readOnly attribute of @Transactional.
    // @Around("@annotation(org.springframework.transaction.annotation.Transactional)")
    // public void setWriteDataSourceType(ProceedingJoinPoint joinPoint) throws Throwable {
    //     Transactional datasource = ((MethodSignature) joinPoint.getSignature()).getMethod().getAnnotation(Transactional.class);
    //     if (datasource.readOnly()) {
    //         DataSourceContextHolder.read();
    //         logger.info("dataSource switched to: read");
    //     } else {
    //         DataSourceContextHolder.write();
    //         logger.info("dataSource switched to: write");
    //     }
    //     joinPoint.proceed();
    // }
}
```
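One thing the aspect above does not do is reset the flag after the read finishes; on pooled server threads a stale "read" value could otherwise linger for later calls. A minimal sketch (not part of the original code) of an @After advice inside the same class that clears it, reusing the same pointcut:

```java
// Hypothetical addition to DataSourceAop (requires: import org.aspectj.lang.annotation.After;).
// Clears the thread-local flag once the dao call returns, so the next request handled by this
// thread falls back to the default (write) data source.
@After("execution(* com.gome.store.dao..*.query*(..)) or execution(* com.gome.store.dao..*.get*(..)) or execution(* com.gome.store.dao..*.select*(..))")
public void clearDataSourceType() {
    DataSourceContextHolder.clearDataSourceType();
}
```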
- Test
```java
@Override
public User queryUsersByUsername(String userName) {
    return userDao.queryByName(userName);
}

@Transactional
@Override
public void updateUserInfoById(User user) {
    userDao.updateBySelective(user);
    userDao.updateById(user.getId());
}
```
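How these two methods are expected to route, sketched with a hypothetical caller that holds the service bean (the variable names are illustrative only):

```java
// Hypothetical caller; userService is the bean implementing the two methods above.
User user = userService.queryUsersByUsername("alice");
// userDao.queryByName(...) matches the query* pointcut -> the @Before advice sets the read flag
// -> determineCurrentLookupKey() returns "read0" or "read1" (round robin)

userService.updateUserInfoById(user);
// @Transactional: per the note in DataSourceAop, the connection is bound when the transaction
// starts, the flag is still empty -> determineCurrentLookupKey() returns "write",
// so both updates hit the master
```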
Closing remarks
This is only a basic implementation of database read/write splitting; many details are left out, such as handling a database instance going down, load balancing beyond simple round robin, performance statistics, SQL black/white lists, and so on.