## 1. Reproducing the Problem: the Classic Multi-Threaded Counter Trap
While troubleshooting a production incident last week, I found that an e-commerce platform's real-time inventory counter had drifted badly during a flash sale. The logs showed 300 items actually sold, but the database counter had only been updated 287 times. This kind of count inconsistency is, at its core, a textbook case of lost updates under concurrent modification.
Suppose we have a simple product table:
```sql
CREATE TABLE product (
    id BIGINT PRIMARY KEY,
    name VARCHAR(100),
    stock_count INT -- inventory counter
);
```
When 10 threads concurrently decrement the stock of product 1 by reading the value into the application, subtracting 1, and writing it back, the execution you imagine looks linear:
```
Thread 1: read stock_count=100 → compute 99 → write 99
Thread 2: read stock_count=99  → compute 98 → write 98
...
```
What can actually happen is interleaved execution:
```
Thread 1: read stock_count=100
Thread 2: read stock_count=100
Thread 1: write 99
Thread 2: write 99   -- Thread 2's update overwrites Thread 1's
```
Key point: a single MySQL `UPDATE product SET stock_count = stock_count - 1` statement is atomic, but the application-level "read, compute, write" combination is not.
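The same race is easy to reproduce in plain Java, no database required. A minimal sketch (class and counts invented for illustration) pits an unsynchronized counter against `AtomicInteger`:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class LostUpdateDemo {
    static int plain; // plain field: "read -> compute -> write" is three separate steps

    static void runAndWait(Runnable task, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) pool.submit(task);
        pool.shutdown();
        try { pool.awaitTermination(30, TimeUnit.SECONDS); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    // 10 threads x 10_000 unsynchronized decrements: some decrements get lost
    static int unsafeDecrement() {
        plain = 100_000;
        runAndWait(() -> { for (int i = 0; i < 10_000; i++) plain--; }, 10);
        return plain;
    }

    // same workload, but each decrement is one atomic CAS: nothing is lost
    static int safeDecrement() {
        AtomicInteger atomic = new AtomicInteger(100_000);
        runAndWait(() -> { for (int i = 0; i < 10_000; i++) atomic.decrementAndGet(); }, 10);
        return atomic.get(); // always 0
    }

    public static void main(String[] args) {
        System.out.println("plain=" + unsafeDecrement() + ", atomic=" + safeDecrement());
    }
}
```

On most runs `plain` ends well above 0 because concurrent decrements overwrite each other, while the atomic version always lands exactly on 0.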
In a MyBatis-Plus service layer, we might write:
```java
public void deductStock(Long productId) {
    Product product = productMapper.selectById(productId);
    product.setStockCount(product.getStockCount() - 1);
    productMapper.updateById(product);
}
```
This code has three fatal problems:

1. **A lost-update window.** Between `selectById` and `updateById`, another thread can commit its own decrement, which our write then silently overwrites. Even REPEATABLE READ cannot prevent this: MVCC isolation controls what a transaction *reads*, not whether a later blind write clobbers a committed one.
2. **The stock is written as an absolute value.** The UPDATE generated by default is:

```sql
UPDATE product SET name=?, stock_count=? WHERE id=?
```

so `stock_count` is set to a number computed from a possibly stale snapshot instead of being decremented in place.
3. **Other columns get clobbered too.** Every non-null field read in that snapshot (here `name`) is written back, silently undoing any concurrent change to those columns.
The first remedy is pessimistic locking: take the row lock up front with `SELECT ... FOR UPDATE`, so the entire read-compute-write sequence runs while holding it. Note this only works inside a transaction; outside one, the lock is released immediately:

```java
@Transactional
public void deductStock(Long productId) {
    // SELECT ... FOR UPDATE holds an exclusive row lock until commit
    Product product = productMapper.selectByIdWithLock(productId);
    product.setStockCount(product.getStockCount() - 1);
    productMapper.updateById(product);
}
```
The custom mapper method:
```xml
<select id="selectByIdWithLock" resultType="Product">
    SELECT * FROM product WHERE id = #{id} FOR UPDATE
</select>
```
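The effect can be mimicked in-process with a `ReentrantLock` standing in for the InnoDB row lock. This sketch (names invented for illustration) shows why no update is lost once the whole read-compute-write sequence holds the lock:

```java
import java.util.concurrent.*;
import java.util.concurrent.locks.ReentrantLock;

public class PessimisticLockDemo {
    static int stock;
    static final ReentrantLock rowLock = new ReentrantLock(); // stands in for the InnoDB row lock

    // each thread's read-compute-write runs entirely under the lock,
    // which is what SELECT ... FOR UPDATE buys you inside a transaction
    static int deductAll(int start, int threads, int perThread) {
        stock = start;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < perThread; i++) {
                    rowLock.lock();                   // BEGIN; SELECT ... FOR UPDATE
                    try { stock = stock - 1; }        // compute and write
                    finally { rowLock.unlock(); }     // COMMIT releases the row lock
                }
            });
        }
        pool.shutdown();
        try { pool.awaitTermination(30, TimeUnit.SECONDS); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        rowLock.lock();                               // final read, with visibility
        try { return stock; } finally { rowLock.unlock(); }
    }

    public static void main(String[] args) {
        System.out.println(deductAll(100_000, 10, 10_000)); // 0: nothing lost, but fully serialized
    }
}
```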
Under a 100-thread load test this is correct but slow: every deduction for the same product serializes on the row lock, so throughput is bounded by lock hold time.
The second remedy is optimistic locking. Add a version field to the entity:
```java
@Version
private Integer version;
```
The update logic:
```java
public void deductStock(Long productId) {
    Product product = productMapper.selectById(productId);
    product.setStockCount(product.getStockCount() - 1);
    // with @Version, MyBatis-Plus appends "AND version = ?" and bumps the version
    int affected = productMapper.updateById(product);
    if (affected == 0) {
        throw new OptimisticLockException("stock update conflict");
    }
}
```
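The version-column mechanics can be simulated with an `AtomicReference` holding an immutable (stock, version) row. In this sketch (all names invented), `compareAndSet` plays the role of `UPDATE ... WHERE version = ?`, and a conflict simply triggers a re-read and retry:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicReference;

public class VersionedStockDemo {
    // immutable row snapshot, as selectById would return it
    record Row(int stock, int version) {}

    static final AtomicReference<Row> row = new AtomicReference<>();

    // mimics: UPDATE ... SET stock_count = ?, version = version + 1
    //         WHERE id = ? AND version = ?   -> returns affected rows (0 or 1)
    static int conditionalUpdate(Row expected, Row next) {
        return row.compareAndSet(expected, next) ? 1 : 0;
    }

    // select, compute, conditionally update; on version conflict, re-read and retry
    static boolean deduct() {
        while (true) {
            Row r = row.get();                    // SELECT stock_count, version
            if (r.stock() <= 0) return false;     // sold out
            Row next = new Row(r.stock() - 1, r.version() + 1);
            if (conditionalUpdate(r, next) == 1) return true;
        }
    }

    static int run(int initial, int threads, int perThread) {
        row.set(new Row(initial, 0));
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++)
            pool.submit(() -> { for (int i = 0; i < perThread; i++) deduct(); });
        pool.shutdown();
        try { pool.awaitTermination(30, TimeUnit.SECONDS); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return row.get().stock();
    }

    public static void main(String[] args) {
        System.out.println(run(100_000, 10, 10_000)); // 0: every decrement applied exactly once
    }
}
```

No thread ever blocks; a losing thread just pays for another round-trip, which is exactly the trade-off of the `@Version` approach.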
Under load, optimistic locking avoids blocking entirely, but on a hot row many requests fail and must retry, burning round-trips; it shines when conflicts are rare.
The third remedy is the simplest: push the arithmetic into SQL so the decrement is one atomic statement:

```java
@Update("UPDATE product SET stock_count = stock_count - 1 WHERE id = #{id}")
int deductStockDirect(@Param("id") Long productId);
```
Under load this is the best-performing database-side option: the read, compute, and write all happen inside the storage engine under its row lock, with no application round-trip in between.
When multiple processes must coordinate, a Redis distributed lock can guard the critical section. The naive version is easy to get wrong in two ways: releasing a lock you never acquired, and deleting a lock that expired and now belongs to someone else. A corrected sketch:

```java
public void deductStock(Long productId) {
    String lockKey = "product:" + productId;
    String token = UUID.randomUUID().toString(); // identifies this holder
    Boolean locked = redisTemplate.opsForValue()
            .setIfAbsent(lockKey, token, 10, TimeUnit.SECONDS);
    if (!Boolean.TRUE.equals(locked)) {
        throw new RuntimeException("failed to acquire lock");
    }
    try {
        productMapper.deductStockDirect(productId);
    } finally {
        // only release our own lock; a check-and-delete Lua script is safer still
        if (token.equals(redisTemplate.opsForValue().get(lockKey))) {
            redisTemplate.delete(lockKey);
        }
    }
}
```
Under load this adds Redis round-trips to every request and turns contention into failed acquisitions that must be retried or rejected.
To go faster than the database allows, deduct against a Redis counter first and sync to the database asynchronously; the database-side statement keeps a stock floor so the counter can never go negative:

```sql
-- deduct the Redis counter first,
-- then sync to the database asynchronously
UPDATE product SET stock_count = stock_count - #{count}
WHERE id = #{id} AND stock_count >= #{count}
```
Or decouple the write path entirely with a message queue:

```java
// hand the deduction off to an MQ consumer
public void deductStock(Long productId) {
    mqTemplate.send("stock-deduct", productId);
}
```
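Why queueing helps: a single consumer applies deductions for a product serially, so lost updates disappear by construction. A minimal in-JVM sketch with a `BlockingQueue` standing in for the MQ (all names invented):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncDeductDemo {
    // stands in for the "stock-deduct" topic and its single consumer
    static int runSale(int initialStock, int requests) throws InterruptedException {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>(); // the MQ
        AtomicInteger stock = new AtomicInteger(initialStock);

        // one consumer => deductions apply serially, no lost updates by construction
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    int msg = queue.take();
                    if (msg < 0) break;                         // poison pill ends the demo
                    stock.updateAndGet(s -> s > 0 ? s - 1 : s); // floor check, like stock_count >= 1
                }
            } catch (InterruptedException ignored) { }
        });
        consumer.start();

        for (int i = 0; i < requests; i++) queue.put(1);        // producers just enqueue
        queue.put(-1);
        consumer.join();
        return stock.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runSale(100, 150)); // 0: excess requests are rejected, never negative
    }
}
```

The cost is that callers no longer learn synchronously whether their deduction succeeded; the result has to flow back through another channel.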
Which remedy fits depends on the workload: the atomic UPDATE or optimistic locking for ordinary traffic, pessimistic locking when conflicts are frequent and throughput is modest, and Redis-first deduction for genuine hot spots.
Optimistic retries are tedious to hand-roll, so pair the versioned UPDATE with Spring Retry:

```java
@Retryable(value = OptimisticLockException.class, maxAttempts = 3)
public void safeDeductStock(Long productId) {
    Product product = productMapper.selectById(productId); // read the current version
    int affected = productMapper.deductStockWithVersion(
            productId, product.getVersion());
    if (affected == 0) {
        throw new OptimisticLockException("version mismatch");
    }
}

// Mapper method
@Update("UPDATE product SET stock_count = stock_count - 1, version = version + 1 "
        + "WHERE id = #{id} AND version = #{version}")
int deductStockWithVersion(@Param("id") Long id, @Param("version") int version);
```
For batch deductions, a single statement built with the `<foreach>` tag beats N round-trips:

```xml
<update id="batchDeductStock">
    UPDATE product
    SET stock_count = CASE id
    <foreach collection="list" item="item">
        WHEN #{item.id} THEN stock_count - #{item.count}
    </foreach>
    END
    WHERE id IN
    <foreach collection="list" item="item" open="(" separator="," close=")">
        #{item.id}
    </foreach>
</update>
```
A common transactional mistake is mixing snapshot reads with current reads:

```java
@Transactional
public void deductStock(Long productId) {
    Product product = productMapper.selectById(productId); // snapshot read (MVCC)
    // ... other business logic
    productMapper.deductStockDirect(productId); // current read (latest committed row)
}
```
The problem: under REPEATABLE READ, `selectById` sees the transaction's MVCC snapshot while the UPDATE operates on the latest committed row, so any stock check made against the snapshot can already be stale by the time the UPDATE runs.

Caching adds a second flavor of staleness. The typical error flow: update the database counter, forget the cache, and keep serving the old count to readers. The fix is to evict the entry alongside the update:
```java
@CacheEvict(cacheNames = "product", key = "#productId")
public void deductStock(Long productId) {
    // ...
}
```
Whichever scheme you pick, load-test it before the event; even a trivial JMeter plan will surface lost updates:

```
Thread Group: 100 threads, ramp-up 10s
Loop Controller: forever
HTTP Request: POST /stock/deduct?productId=1
```
Once the inventory service itself is clustered, in-process locks no longer help and the coordination has to live in a shared store such as Redis.
A simple distributed counter implementation:
```java
public class DistributedCounter {
    private final RedissonClient redisson;

    public DistributedCounter(RedissonClient redisson) { this.redisson = redisson; }

    // CAS loop: retry until the deduction applies or stock runs out
    public boolean tryDeduct(String key, int delta) {
        RAtomicLong counter = redisson.getAtomicLong(key);
        while (true) {
            long current = counter.get();
            if (current < delta) return false; // insufficient stock
            if (counter.compareAndSet(current, current - delta)) return true;
        }
    }
}
```
In real projects, just use Redis's atomic `INCRBY`/`DECRBY` for the deduction:

```java
redisTemplate.opsForValue().increment("product:stock:" + productId, -1);
```
This is extremely fast (10,000+ TPS is realistic), but you now own the dual-write consistency problem between Redis and the database. We typically guarantee eventual consistency with a scheduled reconciliation job plus alerting.
Finally, a real case: before a major promotion we loaded hot products' stock into Redis, implemented the deduction as an atomic Lua script, kept the database purely as backup storage, and rode out 50,000 QPS of flash-sale traffic. The key was that the Lua script makes the check-and-decrement a single atomic step on the Redis server, so stock can never be oversold.
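What such a script guarantees can be illustrated in plain Java. This sketch (invented names) carries the Lua logic in comments and reproduces its atomicity with a CAS loop; on a real Redis server the whole script runs atomically, so no loop is needed there:

```java
import java.util.concurrent.atomic.AtomicLong;

public class LuaDeductSim {
    // Shape of the Lua script the real system would EVAL atomically on Redis:
    //   local s = tonumber(redis.call('GET', KEYS[1]) or '0')
    //   if s >= tonumber(ARGV[1]) then
    //       redis.call('DECRBY', KEYS[1], ARGV[1]); return 1
    //   else return 0 end
    // Here a CAS loop reproduces the same "check then decrement, atomically" contract.
    static boolean tryDeduct(AtomicLong stock, long n) {
        while (true) {
            long cur = stock.get();
            if (cur < n) return false;                   // insufficient stock: reject
            if (stock.compareAndSet(cur, cur - n)) return true;
        }
    }

    // run `requests` single-unit deductions against `initial` stock; return units sold
    static long sell(long initial, int requests) {
        AtomicLong stock = new AtomicLong(initial);
        long sold = 0;
        for (int i = 0; i < requests; i++) if (tryDeduct(stock, 1)) sold++;
        return sold;
    }

    public static void main(String[] args) {
        System.out.println(sell(100, 150)); // 100: the last 50 requests are rejected
    }
}
```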