0. Background
Shard the record table by month to improve performance. Once a single MySQL table grows past the million-row mark, performance starts to degrade, and splitting the table is a good remedy.
Here are the steps.
1. Add the pom dependencies
<!-- sharding -->
<dependency>
    <groupId>org.apache.shardingsphere</groupId>
    <artifactId>sharding-jdbc-spring-boot-starter</artifactId>
    <version>4.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.shardingsphere</groupId>
    <artifactId>sharding-jdbc-spring-namespace</artifactId>
    <version>4.1.1</version>
</dependency>
<!-- distributed (XA) transactions -->
<dependency>
    <groupId>org.apache.shardingsphere</groupId>
    <artifactId>sharding-transaction-xa-core</artifactId>
    <version>4.1.1</version>
</dependency>
2. Modify the yml configuration
The original record table is split by time, one table per month, named like record_202411 and record_202412.
Note that the original datasource node sits directly under spring:
spring:
  # datasource configuration
  datasource:
    type: com.alibaba.druid.pool.DruidDataSource
    name: gproduce
    initialSize: 10
    minIdle: 10
    maxActive: 50
    maxWait: 60000
    timeBetweenEvictionRunsMillis: 60000
    minEvictableIdleTimeMillis: 300000
    validationQuery: SELECT 1 FROM DUAL
    testWhileIdle: true
    testOnBorrow: false
    testOnReturn: false
    poolPreparedStatements: true
    maxPoolPreparedStatementPerConnectionSize: 20
    filters: stat,wall
    connectionProperties: druid.stat.mergeSql=true
    removeAbandoned: true
    removeAbandonedTimeout: 120
    driverClassName: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://127.0.0.1:3306/gproduce?useUnicode=true&characterEncoding=utf-8&serverTimezone=UTC
    username: root
    password: 123456
After sharding, the datasource must move under spring.shardingsphere:
spring:
  servlet:
    multipart:
      max-file-size: 100MB
      max-request-size: 200MB
  shardingsphere:
    # sharding configuration
    sharding:
      default-data-source-name: ds0    # note: the default data source when databases are not sharded
      default-database-strategy:       # default database sharding strategy
        inline:
          sharding-column: id
          algorithm-expression: ds0
      tables:                          # data sharding rules
        record:                        # logical table name
          actual-data-nodes: ds0.record$->{1..2}  # data source name + table name (see the Inline expression syntax)
          table-strategy:              # table sharding strategy
            standard:                  # standard strategy with custom algorithm classes
              precise-algorithm-class-name: com.tajhzl.gproduce.config.MonthPreciseShardingAlgorithm  # precise sharding algorithm class, used for = and IN
              sharding-column: time    # sharding column name
              range-algorithm-class-name: com.tajhzl.gproduce.config.MonthRangeShardingAlgorithm      # range sharding algorithm class, used for BETWEEN
          key-generator:               # key generator
            column: id
            type: SNOWFLAKE            # SnowflakeShardingKeyGenerator
            props:
              worker:
                id: 100                # the SNOWFLAKE algorithm's worker.id
              max:
                tolerate:
                  time:
                    difference:
                      milliseconds: 20 # the SNOWFLAKE algorithm's max.tolerate.time.difference.milliseconds
    datasource:
      names: ds0
      ds0:
        type: com.alibaba.druid.pool.DruidDataSource
        name: gproduce
        initialSize: 5
        minIdle: 5
        maxActive: 14
        maxWait: 60000
        timeBetweenEvictionRunsMillis: 60000
        minEvictableIdleTimeMillis: 300000
        validationQuery: SELECT 1 FROM DUAL
        testWhileIdle: true
        testOnBorrow: false
        testOnReturn: false
        poolPreparedStatements: true
        maxPoolPreparedStatementPerConnectionSize: 20
        filters: stat
        connectionProperties: druid.stat.mergeSql=true
        removeAbandoned: true
        removeAbandonedTimeout: 120
        driverClassName: com.mysql.cj.jdbc.Driver
        url: jdbc:mysql://127.0.0.1:3306/gproduce?useUnicode=true&characterEncoding=utf-8&serverTimezone=UTC&allowMultiQueries=true
        username: root
        password: 123456
3. Add the sharding algorithm classes
The precise sharding algorithm, used on inserts:
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Collection;
import java.util.Date;
import java.util.Locale;

import org.apache.shardingsphere.api.sharding.standard.PreciseShardingAlgorithm;
import org.apache.shardingsphere.api.sharding.standard.PreciseShardingValue;
import org.joda.time.LocalDate;

import lombok.extern.slf4j.Slf4j;

/**
 * Precise sharding (used when inserting data)
 */
@Slf4j
public class MonthPreciseShardingAlgorithm implements PreciseShardingAlgorithm<Date> {

    @Override
    public String doSharding(Collection<String> availableTargetNames, PreciseShardingValue<Date> shardingValue) {
        // Logical table name
        String logicTableName = shardingValue.getLogicTableName();
        // Sharding key value; time types (LocalDateTime etc.) are converted to
        // java.sql.Timestamp automatically, so java.util.Date can receive them directly
        Object object = shardingValue.getValue();
        String yyyyMM = "202401";
        try {
            // Note: month is MM, not mm
            SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd");
            Date date = simpleDateFormat.parse(object.toString());
            try {
                yyyyMM = LocalDate.fromDateFields(date).toString("yyyyMM", Locale.CHINA);
            } catch (Exception e) {
                log.warn("@@ sharding by creation time failed, falling back to the default table");
            }
        } catch (ParseException e1) {
            e1.printStackTrace();
        }
        // Return the physical table to route to
        return logicTableName + "_" + yyyyMM;
    }
}
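The heart of the precise algorithm is a single mapping: format the sharding key's date as yyyyMM and append it to the logical table name. A minimal, dependency-free sketch of that mapping using java.time (the class and method names here are illustrative, not project code):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class PreciseSuffixDemo {
    // Map a date to its physical table name, e.g. 2024-11-15 -> record_202411
    static String physicalTable(String logicTable, LocalDate date) {
        return logicTable + "_" + date.format(DateTimeFormatter.ofPattern("yyyyMM"));
    }

    public static void main(String[] args) {
        System.out.println(physicalTable("record", LocalDate.of(2024, 11, 15)));
        // prints: record_202411
    }
}
```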
The range sharding algorithm, used on queries:
import java.text.ParseException;
import java.util.Collection;
import java.util.Date;
import java.util.Locale;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.commons.lang3.time.DateUtils;
import org.apache.shardingsphere.api.sharding.standard.RangeShardingAlgorithm;
import org.apache.shardingsphere.api.sharding.standard.RangeShardingValue;
import org.joda.time.LocalDate;

import com.google.common.collect.Range;
import com.google.common.collect.Sets;

import lombok.extern.slf4j.Slf4j;

/**
 * Range sharding (used when querying data)
 */
@Slf4j
public class MonthRangeShardingAlgorithm implements RangeShardingAlgorithm<Date> {

    @Override
    public Collection<String> doSharding(Collection<String> availableTargetNames, RangeShardingValue<Date> shardingValue) {
        Collection<String> tableSet = Sets.newConcurrentHashSet();
        String logicTableName = shardingValue.getLogicTableName();
        // Sharding key range; time types (LocalDateTime etc.) are converted to
        // java.sql.Timestamp automatically, so java.util.Date can receive them directly
        Range<Date> dates = shardingValue.getValueRange();
        // Lower bound of the range
        Object lowDateObj = dates.lowerEndpoint();
        // Upper bound of the range
        Object upperDateObj = dates.upperEndpoint();
        try {
            // TimeUtils is a project-local helper that parses a string into a Date
            Date upperDate = TimeUtils.strToDate(upperDateObj.toString(), "yyyy-MM-dd HH:mm:ss");
            Date lowDate = TimeUtils.strToDate(lowDateObj.toString(), "yyyy-MM-dd HH:mm:ss");
            AtomicInteger i = new AtomicInteger(0);
            // Step forward one day at a time until the upper bound is passed;
            // the Set deduplicates table names that fall in the same month
            while (DateUtils.addDays(lowDate, i.get()).compareTo(upperDate) <= 0) {
                tableSet.add(logicTableName + "_"
                        + LocalDate.fromDateFields(DateUtils.addDays(lowDate, i.getAndAdd(1)))
                                   .toString("yyyyMM", Locale.CHINA));
            }
        } catch (ParseException e) {
            log.warn("@@ range sharding failed: {}", e.getMessage());
        }
        return tableSet;
    }
}
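The day-stepping loop above works, but it visits every day in the range just to collect one table name per month. The same month enumeration can be sketched more directly with java.time.YearMonth (an alternative sketch with illustrative names, not the project's code):

```java
import java.time.LocalDate;
import java.time.YearMonth;
import java.time.format.DateTimeFormatter;
import java.util.LinkedHashSet;
import java.util.Set;

public class RangeSuffixDemo {
    // Collect one physical table name per calendar month between low and high, inclusive
    static Set<String> physicalTables(String logicTable, LocalDate low, LocalDate high) {
        Set<String> tables = new LinkedHashSet<>();
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyyMM");
        for (YearMonth m = YearMonth.from(low); !m.isAfter(YearMonth.from(high)); m = m.plusMonths(1)) {
            tables.add(logicTable + "_" + m.format(fmt));
        }
        return tables;
    }

    public static void main(String[] args) {
        System.out.println(physicalTables("record",
                LocalDate.of(2024, 11, 20), LocalDate.of(2025, 1, 5)));
        // prints: [record_202411, record_202412, record_202501]
    }
}
```

This runs in O(months) instead of O(days) and makes the intent (one table per month) explicit.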
4. Usage
From now on, every database operation against the record table must carry the sharding key: an insert must supply a time value, and a select/update/delete must constrain time to a range.
ShardingSphere then routes each statement to the corresponding table(s) by time, and merges results across tables automatically.
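In practice that contract means SQL is still written against the logical table name record, but the time column must always appear. A sketch of the two statement shapes as they would be issued (all column names other than time are illustrative):

```java
public class RecordSqlShapes {
    // An insert must bind a concrete time so the precise algorithm can pick one month table
    static final String INSERT_SQL =
            "INSERT INTO record (id, value, time) VALUES (?, ?, ?)";

    // Reads must bound time so the range algorithm can limit the month tables scanned
    static final String SELECT_SQL =
            "SELECT id, value, time FROM record WHERE time BETWEEN ? AND ?";

    public static void main(String[] args) {
        System.out.println(INSERT_SQL);
        System.out.println(SELECT_SQL);
    }
}
```

A query without a time predicate still works, but ShardingSphere has to fan out to every month table and merge, which defeats the purpose of sharding.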