This travel information website uses a front-end/back-end separated architecture: the user interface is built with Vue.js, and the business logic is implemented on Spring Boot. The system targets two pain points, scattered travel information and inefficient lookups, by integrating core modules for attraction profiles, route planning, and user reviews.
In terms of technology choices, the Spring Boot + Vue combination reached a 42% share in Statista's 2023 developer survey and has become a mainstream choice for admin and back-office systems. The architecture's strengths: Spring Boot's auto-configuration and starter dependencies simplify Java EE development, while Vue's reactive data binding and component-based development substantially speed up front-end work.
Scenic-spot management uses a tree-shaped classification supporting multi-level regions (country → province → city). The table design:
```sql
CREATE TABLE `scenic_spot` (
  `id` bigint NOT NULL AUTO_INCREMENT,
  `name` varchar(100) NOT NULL COMMENT 'spot name',
  `location` point NOT NULL COMMENT 'geographic coordinates',
  `cover_img` varchar(255) COMMENT 'cover image URL',
  `description` text COMMENT 'detailed description',
  `open_time` varchar(100) COMMENT 'opening hours',
  `ticket_info` varchar(255) COMMENT 'ticket information',
  `status` tinyint DEFAULT 1 COMMENT 'status (0 = delisted, 1 = listed)',
  PRIMARY KEY (`id`),
  SPATIAL KEY `idx_location` (`location`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
```
Note: spatial indexes on InnoDB tables require MySQL 5.7 or later; this index is what makes fast "nearby spots" retrieval possible.
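With the index in place, a "spots within N meters" query can run in SQL, or distances can be cross-checked application-side with the haversine formula. A hedged sketch; the SQL in the comment and the class below are illustrative, not taken from the project:

```java
public class GeoDistance {
    // The spatial index supports queries along these lines (illustrative SQL):
    //   SELECT id, name FROM scenic_spot
    //   WHERE ST_Distance_Sphere(location, ST_GeomFromText('POINT(? ?)')) < 5000;
    // (coordinate order depends on the SRID used when inserting points)

    /** Great-circle distance in kilometres via the haversine formula. */
    public static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
        double r = 6371.0; // mean Earth radius, km
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * r * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // Roughly the Beijing-Shanghai distance, ~1067 km
        System.out.println(GeoDistance.haversineKm(39.9042, 116.4074, 31.2304, 121.4737));
    }
}
```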
A hybrid recommendation strategy driven by user behavior is used. Pseudocode for the recommendation service:
```java
public List<ScenicSpot> recommend(Long userId) {
    // Fetch the base recommendation lists
    List<ScenicSpot> cfItems = cfRecommender.recommend(userId);           // collaborative filtering
    List<ScenicSpot> contentItems = contentRecommender.recommend(userId); // content-based
    // Blend the two result sets, informed by the user's recent views
    List<ScenicSpot> hybridList = hybridStrategy.merge(
            cfItems, contentItems, getRecentViews(userId));
    // Apply business-rule filtering (e.g. drop delisted spots)
    return businessRuleFilter.apply(hybridList);
}
```
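One plausible shape for the `hybridStrategy.merge` step above is a weighted rank union that drops recently viewed items. The 0.6/0.4 weights and the use of raw IDs instead of ScenicSpot objects are assumptions made for this sketch:

```java
import java.util.*;
import java.util.stream.Collectors;

public class HybridMerge {
    /**
     * Rank-weighted union: items near the top of either list score higher,
     * CF results weighted 0.6 and content-based 0.4 (illustrative weights);
     * anything the user viewed recently is filtered out.
     */
    public static List<Long> merge(List<Long> cf, List<Long> content, Set<Long> recentViews) {
        Map<Long, Double> score = new LinkedHashMap<>();
        for (int i = 0; i < cf.size(); i++) {
            score.merge(cf.get(i), 0.6 * (cf.size() - i), Double::sum);
        }
        for (int i = 0; i < content.size(); i++) {
            score.merge(content.get(i), 0.4 * (content.size() - i), Double::sum);
        }
        recentViews.forEach(score::remove);
        return score.entrySet().stream()
                .sorted(Map.Entry.<Long, Double>comparingByValue().reversed())
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Spot 2 appears in both lists, so it rises; spot 3 was viewed recently
        System.out.println(merge(List.of(1L, 2L, 3L), List.of(2L, 4L), Set.of(3L)));
    }
}
```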
Key points of the Elasticsearch index design:
```json
{
  "settings": {
    "analysis": {
      "analyzer": {
        "pinyin_analyzer": {
          "tokenizer": "my_pinyin"
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "name": {
        "type": "text",
        "analyzer": "ik_max_word",
        "fields": {
          "pinyin": {
            "type": "text",
            "analyzer": "pinyin_analyzer"
          }
        }
      },
      "location": {
        "type": "geo_point"
      }
    }
  }
}
```
The search API is further tuned for performance on top of this index design.
WebSocket message protocol design:
```protobuf
message CommentMsg {
  string msgId = 1;
  int64 userId = 2;
  string avatar = 3;
  string content = 4;
  int64 spotId = 5;
  int64 timestamp = 6;
}
```
Messages are stored in a MongoDB sharded cluster, hash-sharded by spot ID. To prevent message storms, throttling controls are applied.
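One simple throttling control is a fixed-window counter per (user, spot) pair. A sketch under assumed parameters; the class name, window size, and the in-process map are illustrative (a deployment with multiple app instances would likely keep these counters in Redis instead):

```java
import java.util.concurrent.ConcurrentHashMap;

public class CommentThrottle {
    private final int limit;          // max messages per window
    private final long windowMillis;  // window length
    // key = "userId:spotId", value = {windowStartMillis, countInWindow}
    private final ConcurrentHashMap<String, long[]> windows = new ConcurrentHashMap<>();

    public CommentThrottle(int limit, long windowMillis) {
        this.limit = limit;
        this.windowMillis = windowMillis;
    }

    /** Returns true if the message may be broadcast, false if throttled. */
    public boolean tryAcquire(long userId, long spotId, long nowMillis) {
        String key = userId + ":" + spotId;
        long[] w = windows.compute(key, (k, v) -> {
            if (v == null || nowMillis - v[0] >= windowMillis) {
                return new long[]{nowMillis, 1}; // start a fresh window
            }
            v[1]++; // same window: count one more message
            return v;
        });
        return w[1] <= limit;
    }

    public static void main(String[] args) {
        CommentThrottle t = new CommentThrottle(2, 1000);
        System.out.println(t.tryAcquire(42, 7, 0));   // true
        System.out.println(t.tryAcquire(42, 7, 100)); // true
        System.out.println(t.tryAcquire(42, 7, 200)); // false: over the limit
    }
}
```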
Core Docker Compose service configuration:
```yaml
version: '3.8'
services:
  app:
    image: travel-app:${TAG}
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 2G
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/actuator/health"]
      interval: 30s
      timeout: 10s
      retries: 3
  redis:
    image: redis:6-alpine
    command: redis-server --save 60 1 --loglevel warning
    volumes:
      - redis_data:/data
volumes:
  redis_data:
```
JVM settings (for a server with 8 GB of memory):
```
-XX:+UseG1GC
-XX:MaxGCPauseMillis=200
-XX:InitiatingHeapOccupancyPercent=45
-XX:MetaspaceSize=256m
-XX:MaxMetaspaceSize=512m
-Xms4g -Xmx4g
```
Nginx tuning snippet:
```nginx
worker_processes auto;
worker_rlimit_nofile 100000;

events {
    worker_connections 4000;
    use epoll;
    multi_accept on;
}

http {
    open_file_cache max=200000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
}
```
Symptom: scenic-spot ticket inventory is oversold under concurrent bookings.
Solutions:
Option 1: database row lock plus a version-checked update.

```java
@Transactional
public boolean bookTicket(Long spotId, Integer num) {
    // Row-lock the spot record (SELECT ... FOR UPDATE) inside the transaction
    ScenicSpot spot = spotMapper.selectForUpdate(spotId);
    if (spot.getInventory() < num) {
        return false; // not enough inventory left
    }
    // Version-checked UPDATE acts as a second safety net (optimistic lock)
    int rows = spotMapper.reduceInventory(spotId, num, spot.getVersion());
    return rows > 0;
}
```
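To see why the version column prevents oversell, the version-checked update can be modeled in plain Java. The assumed semantics of `reduceInventory` (decrement only if the caller's version still matches, then bump the version) are mirrored here; the class is a stand-in, not project code:

```java
public class InventoryDemo {
    // In-memory stand-in for one scenic_spot row, mirroring the assumed SQL:
    //   UPDATE scenic_spot SET inventory = inventory - ?, version = version + 1
    //   WHERE id = ? AND version = ?
    private int inventory;
    private int version;

    public InventoryDemo(int inventory) {
        this.inventory = inventory;
    }

    public synchronized int getVersion() {
        return version;
    }

    public synchronized int getInventory() {
        return inventory;
    }

    /** Succeeds only if the caller read the current version, like a matched UPDATE. */
    public synchronized boolean reduce(int num, int expectedVersion) {
        if (version != expectedVersion || inventory < num) {
            return false;
        }
        inventory -= num;
        version++;
        return true;
    }

    public static void main(String[] args) {
        InventoryDemo spot = new InventoryDemo(10);
        int stale = spot.getVersion();
        System.out.println(spot.reduce(3, stale));             // true
        System.out.println(spot.reduce(3, stale));             // false: version moved on
        System.out.println(spot.reduce(3, spot.getVersion())); // true: re-read first
    }
}
```

A second writer holding a stale version is rejected and must re-read, so two concurrent bookings can never both decrement from the same snapshot.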
Option 2: Redis distributed lock. The lock must be released only by its owner, and only if it was actually acquired.

```java
public boolean bookWithLock(Long spotId) {
    String lockKey = "spot:" + spotId;
    String token = UUID.randomUUID().toString(); // identifies this lock holder
    Boolean locked = redisTemplate.opsForValue()
            .setIfAbsent(lockKey, token, 10, TimeUnit.SECONDS);
    if (!Boolean.TRUE.equals(locked)) {
        return false; // another request holds the lock
    }
    try {
        // business logic
        return true;
    } finally {
        // Release only our own lock; in production a Lua script should do
        // this compare-and-delete atomically
        if (token.equals(redisTemplate.opsForValue().get(lockKey))) {
            redisTemplate.delete(lockKey);
        }
    }
}
```
Route-level lazy loading:

```javascript
const SpotDetail = () => import('./views/SpotDetail.vue')
```
Chunk splitting in the build configuration:

```javascript
configureWebpack: {
  optimization: {
    splitChunks: {
      chunks: 'all',
      maxSize: 244 * 1024 // 244 KB per chunk
    }
  }
}
```
Image lazy loading (e.g. via a `v-lazy` directive):

```html
<img v-lazy="imageUrl" alt="scenic spot photo">
```
Server-side XSS filtering:

```java
@Configuration // plain MVC configuration; @RestControllerAdvice is not appropriate here
public class XssFilter implements WebMvcConfigurer {
    @Override
    public void addArgumentResolvers(List<HandlerMethodArgumentResolver> resolvers) {
        resolvers.add(new StringEscapeResolver());
    }

    private static class StringEscapeResolver implements HandlerMethodArgumentResolver {
        // supportsParameter / resolveArgument: HTML-escape incoming String parameters
    }
}
```
Sensitive fields are encrypted transparently at the persistence layer:

```java
@Column(columnDefinition = "varchar(255) comment 'phone number'")
@Convert(converter = AesEncryptConverter.class)
private String phone;
```
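A minimal sketch of what an `AesEncryptConverter` might do internally, using AES-GCM with a random IV prepended to the ciphertext. The hard-coded key and class name are demo assumptions; a real converter would implement `AttributeConverter<String, String>` and load its key from secure configuration:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;

public class AesDemo {
    // Demo-only hard-coded AES-128 key; never ship a key in source
    private static final byte[] KEY = "0123456789abcdef".getBytes(StandardCharsets.UTF_8);
    private static final int IV_LEN = 12;    // 96-bit GCM nonce
    private static final int TAG_BITS = 128; // GCM authentication tag length

    public static String encrypt(String plain) {
        try {
            byte[] iv = new byte[IV_LEN];
            new SecureRandom().nextBytes(iv);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(KEY, "AES"),
                   new GCMParameterSpec(TAG_BITS, iv));
            byte[] ct = c.doFinal(plain.getBytes(StandardCharsets.UTF_8));
            byte[] out = new byte[IV_LEN + ct.length];
            System.arraycopy(iv, 0, out, 0, IV_LEN);
            System.arraycopy(ct, 0, out, IV_LEN, ct.length);
            return Base64.getEncoder().encodeToString(out); // IV || ciphertext
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static String decrypt(String encoded) {
        try {
            byte[] in = Base64.getDecoder().decode(encoded);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(KEY, "AES"),
                   new GCMParameterSpec(TAG_BITS, in, 0, IV_LEN));
            return new String(c.doFinal(in, IV_LEN, in.length - IV_LEN),
                              StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        String enc = AesDemo.encrypt("13800138000");
        System.out.println(enc + " -> " + AesDemo.decrypt(enc));
    }
}
```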
Prometheus scrape configuration for the key metrics:
```yaml
- job_name: 'spring_app'
  metrics_path: '/actuator/prometheus'
  scrape_interval: 15s
  static_configs:
    - targets: ['app:8080']
```
Grafana dashboards are built on top of these metrics.
ELK stack configuration highlights:
```
# Logstash grok pattern
match => {
  "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{NUMBER:pid} --- \[%{DATA:thread}\] %{DATA:class} : %{GREEDYDATA:msg}"
}
```
Logs are archived on a scheduled retention policy. For offline analysis, user behavior data is processed with PySpark:
```python
# Analyze user behavior with PySpark
from pyspark.sql import SparkSession
from pyspark.sql.functions import avg, count

spark = SparkSession.builder.appName("user-behavior").getOrCreate()
df = spark.read.parquet("hdfs://user_behavior/*.parquet")
result = df.groupBy("spot_id").agg(
    count("*").alias("view_count"),
    avg("stay_duration").alias("avg_stay"),
)
```