With public health emergencies occurring frequently, an efficient digital epidemic prevention and control system has become a necessity. Traditional paper-based records suffer from delayed data, frequent errors, and poor traceability, and manual processing of massive volumes of information falls short precisely when an outbreak peaks. Our team built this system on the Spring Boot + Vue stack; after three major iterations it is now deployed in multiple communities and handles over 100,000 epidemic records per day.

The system uses the classic front-end/back-end separation, which lets the team develop in parallel: backend engineers focus on business logic while the frontend team builds the UI at the same time. In production we pay particular attention to idempotent API design and data-consistency guarantees, so the system stays stable under high-concurrency reporting (for example, during city-wide nucleic acid testing).
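The idempotency guarantee mentioned above can be sketched as a deduplication guard keyed by a client-supplied request ID. This is a minimal in-memory illustration, not the system's actual implementation; a clustered deployment would use something like Redis `SET NX` with a TTL so duplicates are rejected across all server instances.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: reject duplicate submissions that carry the same
// client-generated requestId. Thread-safe via a concurrent set.
public class IdempotencyGuard {
    private final Set<String> seen = ConcurrentHashMap.newKeySet();

    /** Returns true only for the first submission with a given requestId. */
    public boolean tryAccept(String requestId) {
        return seen.add(requestId); // Set.add is atomic and returns false on duplicates
    }
}
```

A retried upload with the same request ID is then a no-op rather than a second database row.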
Key design principles: every data-modifying operation must write an audit log, and core business tables carry a version column for optimistic locking, preventing lost updates when concurrent writers overwrite each other.
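The version-field optimistic locking described above boils down to a compare-and-set: the update succeeds only if the row's version still matches what the writer originally read. A minimal in-memory sketch (the names here are illustrative, not from the actual codebase):

```java
import java.util.concurrent.atomic.AtomicReference;

// In SQL this corresponds to:
//   UPDATE t SET data = ?, version = version + 1 WHERE id = ? AND version = ?
// where an affected-row count of 0 means another writer got there first.
public class OptimisticRow {
    public record State(String data, int version) {}

    private final AtomicReference<State> row = new AtomicReference<>(new State("init", 0));

    public State read() {
        return row.get();
    }

    /** Returns false if another writer bumped the version since we read. */
    public boolean update(String newData, int expectedVersion) {
        State cur = row.get();
        if (cur.version() != expectedVersion) return false; // stale read detected
        return row.compareAndSet(cur, new State(newData, expectedVersion + 1));
    }
}
```

A caller whose `update` returns false re-reads the row and retries, rather than silently overwriting newer data.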
The report_id is generated with the snowflake algorithm, so that auto-increment IDs do not leak the size of the dataset. In production we found that the original location_info field stored only a latitude/longitude string and could not support efficient geo queries. The improved schema:
```sql
ALTER TABLE epidemic_report
  ADD COLUMN geo_point POINT SRID 4326 AFTER location_info,
  ADD SPATIAL INDEX idx_geo(geo_point);

-- Backfill: convert the existing coordinate strings into geometry values.
-- Note the explicit SRID argument, which must match the column's SRID 4326.
UPDATE epidemic_report
SET geo_point = ST_GeomFromText(CONCAT('POINT(',
        SUBSTRING_INDEX(location_info, ',', 1), ' ',
        SUBSTRING_INDEX(location_info, ',', -1), ')'), 4326)
WHERE geo_point IS NULL;
```
After this change, spatial queries such as "find all reports within 3 km" execute quickly:
```sql
SELECT * FROM epidemic_report
WHERE ST_Distance_Sphere(geo_point,
        ST_GeomFromText('POINT(116.404 39.915)', 4326)) < 3000;
```
Early versions oversold supplies during material allocation. After comparison testing of several approaches, we settled on a distributed lock combined with a database transaction:
```java
@Transactional
public MaterialAllocationResult allocateMaterial(Long materialId, int requestNum) {
    // Acquire a Redisson distributed lock, keyed per material
    RLock lock = redissonClient.getLock("material_lock:" + materialId);
    lock.lock(5, TimeUnit.SECONDS); // auto-release after 5s to avoid deadlock on crash
    try {
        Material material = materialMapper.selectById(materialId);
        if (material.getStockQuantity() - material.getAllocatedNum() >= requestNum) {
            material.setAllocatedNum(material.getAllocatedNum() + requestNum);
            materialMapper.updateById(material);
            return new MaterialAllocationResult(true, "Allocation successful");
        }
        return new MaterialAllocationResult(false, "Insufficient stock");
    } finally {
        // Guard against the lease having already expired
        if (lock.isHeldByCurrentThread()) {
            lock.unlock();
        }
    }
}
```
Lesson learned: database optimistic locking alone still oversells in a clustered deployment; only combining it with a distributed lock eliminated the problem completely. But keep the lock granularity small (here, one lock per material), or system throughput suffers.
The choice of Spring Boot 2.7.x was carefully validated. For caching we use a multi-level strategy:
```java
@Cacheable(value = "noticeCache",
        key = "#priorityLevel + ':' + #pageNum",
        cacheManager = "caffeineCacheManager")
public Page<Notice> getNoticesByPriority(int priorityLevel, int pageNum) {
    // Database query logic
}

// Bean method name must match the cacheManager referenced in @Cacheable above
@Bean
public CacheManager caffeineCacheManager() {
    CaffeineCacheManager cacheManager = new CaffeineCacheManager();
    cacheManager.setCaffeine(Caffeine.newBuilder()
            .expireAfterWrite(10, TimeUnit.MINUTES)
            .maximumSize(1000));
    return cacheManager;
}
```
Profiling with Chrome Performance showed that initial render time was dominated by the ECharts components. Before and after optimization:
| Metric | Before | After |
|---|---|---|
| First-screen load time | 2.8 s | 1.2 s |
| Average FPS | 45 | 58 |
| Memory usage | 210 MB | 150 MB |
Authentication evolved from the initial session-based scheme to a JWT dual-token design:
```mermaid
sequenceDiagram
    participant Client
    participant Server
    Client->>Server: Login (username + password)
    Server->>Client: access_token (30 min TTL) + refresh_token (7 day TTL)
    loop Normal access
        Client->>Server: Call API with access_token
        Server->>Client: Business data
    end
    alt Token expired
        Client->>Server: Request new access_token with refresh_token
        Server->>Client: New access_token
    end
```
Key implementation:
```java
@Slf4j
public class JwtTokenProvider {

    public String generateToken(UserDetails userDetails) {
        Map<String, Object> claims = new HashMap<>();
        claims.put("roles", userDetails.getAuthorities().stream()
                .map(GrantedAuthority::getAuthority)
                .collect(Collectors.toList()));
        return Jwts.builder()
                .setClaims(claims)
                .setSubject(userDetails.getUsername())
                .setIssuedAt(new Date())
                .setExpiration(new Date(System.currentTimeMillis() + jwtExpirationInMs))
                .signWith(SignatureAlgorithm.HS512, jwtSecret)
                .compact();
    }

    public boolean validateToken(String token) {
        try {
            Jwts.parser().setSigningKey(jwtSecret).parseClaimsJws(token);
            return true;
        } catch (SignatureException ex) {
            log.error("Invalid JWT signature");
        } catch (MalformedJwtException ex) {
            log.error("Invalid JWT token");
        } catch (ExpiredJwtException ex) {
            // The common case in the dual-token flow: triggers the refresh path
            log.error("Expired JWT token");
        }
        return false;
    }
}
```
Sensitive fields such as national ID numbers and phone numbers are stored AES-encrypted:
```java
@Converter
public class CryptoConverter implements AttributeConverter<String, String> {

    private static final String ALGORITHM = "AES/CBC/PKCS5Padding";
    // Placeholders only: both values must be exactly 16 bytes. In production,
    // load the key from a secrets manager or KMS, never from source code.
    private static final byte[] KEY = "0123456789abcdef".getBytes(StandardCharsets.UTF_8);
    private static final byte[] IV = "fedcba9876543210".getBytes(StandardCharsets.UTF_8);

    @Override
    public String convertToDatabaseColumn(String attribute) {
        try {
            Cipher cipher = Cipher.getInstance(ALGORITHM);
            cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(KEY, "AES"), new IvParameterSpec(IV));
            return Base64.getEncoder().encodeToString(
                    cipher.doFinal(attribute.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    @Override
    public String convertToEntityAttribute(String dbData) {
        try {
            Cipher cipher = Cipher.getInstance(ALGORITHM);
            cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(KEY, "AES"), new IvParameterSpec(IV));
            return new String(cipher.doFinal(Base64.getDecoder().decode(dbData)),
                    StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```
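One caveat with the converter above: a fixed IV means identical plaintexts always encrypt to identical ciphertexts, which leaks equality (two residents with the same phone number are visible as such in the database). A hedged alternative sketch generates a random IV per encryption and prepends it to the ciphertext; the key below is again a placeholder, not a real secret:

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Random-IV AES-CBC: the IV travels with the ciphertext (first 16 bytes),
// so no fixed IV needs to be shared or stored separately.
public class AesWithRandomIv {
    private static final byte[] KEY = "0123456789abcdef".getBytes(StandardCharsets.UTF_8); // placeholder
    private static final SecureRandom RANDOM = new SecureRandom();

    public static String encrypt(String plaintext) throws Exception {
        byte[] iv = new byte[16];
        RANDOM.nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(KEY, "AES"), new IvParameterSpec(iv));
        byte[] ct = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);           // IV first
        System.arraycopy(ct, 0, out, iv.length, ct.length);   // then ciphertext
        return Base64.getEncoder().encodeToString(out);
    }

    public static String decrypt(String encoded) throws Exception {
        byte[] in = Base64.getDecoder().decode(encoded);
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(KEY, "AES"),
                new IvParameterSpec(Arrays.copyOfRange(in, 0, 16)));
        return new String(cipher.doFinal(Arrays.copyOfRange(in, 16, in.length)),
                StandardCharsets.UTF_8);
    }
}
```

The trade-off is that equality queries against the encrypted column no longer work, so this suits fields that are only ever decrypted for display.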
On the frontend, the heatmap layer polls the backend every five minutes:

```javascript
const fetchHeatmapData = async () => {
  try {
    const res = await axios.get('/api/epidemic/heatmap', {
      params: {
        level: zoomLevel.value,
        bounds: mapBounds.value
      }
    });
    heatmapLayer.setData(res.data);
  } catch (err) {
    console.error('Failed to fetch heatmap data:', err);
  }
};

// Note: in Vue 3 the cleanup belongs in onUnmounted; returning a function
// from onMounted (React useEffect style) has no effect.
let timer;
onMounted(() => {
  timer = setInterval(fetchHeatmapData, 5 * 60 * 1000);
});
onUnmounted(() => clearInterval(timer));
```
```java
@GetMapping("/heatmap")
public List<HeatmapPoint> getHeatmapData(
        @RequestParam int level,
        @RequestParam String bounds) {
    Geometry area = geometryParser.parse(bounds);
    // Last 7 days of reports; coarser aggregation when zoomed in further
    return epidemicReportMapper.selectHeatmapData(
            area,
            LocalDateTime.now().minusDays(7),
            level > 10 ? 500 : 1000);
}
```
```xml
<select id="selectHeatmapData" resultType="HeatmapPoint">
  SELECT
    ST_X(geo_point) AS lng,
    ST_Y(geo_point) AS lat,
    COUNT(*) AS count
  FROM epidemic_report
  WHERE
    ST_Within(geo_point, ST_GeomFromText(#{area}, 4326))
    AND submit_time >= #{startTime}
  GROUP BY
    FLOOR(ST_X(geo_point) / #{precision}),
    FLOOR(ST_Y(geo_point) / #{precision})
</select>
```
Spring StateMachine manages the material distribution lifecycle:
```java
@Configuration
@EnableStateMachineFactory
public class MaterialStateMachineConfig
        extends EnumStateMachineConfigurerAdapter<MaterialState, MaterialEvent> {

    @Autowired
    private MaterialService materialService;

    @Override
    public void configure(StateMachineStateConfigurer<MaterialState, MaterialEvent> states)
            throws Exception {
        states.withStates()
                .initial(MaterialState.IN_STOCK)
                .states(EnumSet.allOf(MaterialState.class));
    }

    @Override
    public void configure(StateMachineTransitionConfigurer<MaterialState, MaterialEvent> transitions)
            throws Exception {
        transitions
                .withExternal()
                .source(MaterialState.IN_STOCK)
                .target(MaterialState.RESERVED)
                .event(MaterialEvent.RESERVE)
                .and()
                .withExternal()
                .source(MaterialState.RESERVED)
                .target(MaterialState.DISTRIBUTED)
                .event(MaterialEvent.DISTRIBUTE)
                .action(distributeAction());
    }

    @Bean
    public Action<MaterialState, MaterialEvent> distributeAction() {
        return context -> {
            Long materialId = (Long) context.getMessageHeader("materialId");
            materialService.confirmDistribution(materialId);
        };
    }
}
```
Example of key business-metric instrumentation:
```java
@RestController
@RequestMapping("/api/epidemic")
public class EpidemicReportController {

    private final Counter reportCounter = Counter.build()
            .name("epidemic_report_total")
            .help("Total number of epidemic reports")
            .labelNames("type")
            .register();

    @PostMapping
    public ResponseEntity<?> submitReport(@RequestBody ReportDTO dto) {
        reportCounter.labels(dto.getReportType()).inc();
        // Report-handling logic
        return ResponseEntity.ok().build();
    }
}
```
These metrics feed the Grafana dashboards. For request tracing across the call chain we use MDC:
```java
@Slf4j
@Aspect
@Component
public class LoggingAspect {

    @Around("execution(* com..controller.*.*(..))")
    public Object logRequest(ProceedingJoinPoint joinPoint) throws Throwable {
        String traceId = UUID.randomUUID().toString();
        MDC.put("traceId", traceId);
        try {
            log.info("Start request: {}", joinPoint.getSignature());
            Object result = joinPoint.proceed();
            log.info("Complete request");
            return result;
        } finally {
            MDC.clear();
        }
    }
}
```
Log-collection options compared:
| Option | Pros | Cons |
|---|---|---|
| ELK | Full-featured, powerful visualization | Resource-hungry |
| Loki | Lightweight, low cost | Simpler query capabilities |
| Commercial logging service | Works out of the box | Expensive |
Initial deployment:
```dockerfile
FROM openjdk:11-jre
COPY target/epidemic-system.jar /app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]
```
For high availability we moved to Kubernetes. Deployment example:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: epidemic-backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: epidemic-backend
  template:
    metadata:
      labels:
        app: epidemic-backend
    spec:
      containers:
        - name: backend
          image: registry.example.com/epidemic-system:1.2.0
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:
                name: epidemic-config
          resources:
            limits:
              cpu: "2"
              memory: 2Gi
            requests:
              cpu: "1"
              memory: 1Gi
          livenessProbe:
            httpGet:
              path: /actuator/health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
```
HPA autoscaling configuration:
```yaml
apiVersion: autoscaling/v2   # v2beta2 is deprecated since Kubernetes 1.23
kind: HorizontalPodAutoscaler
metadata:
  name: epidemic-backend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: epidemic-backend
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```
Across three major versions, several key decisions shaped the system's direction:

Technical debt management: to ship quickly, the first version concatenated MyBatis dynamic SQL directly. As the business grew more complex we introduced QueryDSL; type-safe query construction greatly improved maintainability.

Cache strategy: we evolved from the initial "cache everything" approach to today's layered caching.

Front-end/back-end collaboration: after defining API contracts with the OpenAPI specification, integration efficiency rose by 40%. We use Swagger Codegen to generate a TypeScript client, which keeps the frontend type-safe.

Performance inflection point: once QPS passed 500, the database connection pool became the bottleneck. The following tuning broke through it:
```yaml
spring:
  datasource:
    hikari:
      maximum-pool-size: 20
      minimum-idle: 5
      connection-timeout: 30000
      idle-timeout: 600000
      max-lifetime: 1800000
```
Today the system handles more than 500,000 requests per day; during the most recent city-wide nucleic acid testing round, peak QPS reached 1,200. The upfront architectural planning, particularly read/write splitting and the caching strategy, carried the system smoothly through the traffic peak.