This music song-request system is a mobile application targeting Android, built on a Vue3 frontend and a Spring Boot backend. It implements the core features of browsing songs online, requesting songs, favoriting them, and live singing. The system uses a decoupled frontend/backend architecture: the frontend is written with the Vue3 Composition API, the backend exposes RESTful APIs built on Spring Boot, and MySQL stores song and user data.

During development I found that music applications have unusually strict real-time requirements; plain HTTP struggles with audio-stream delivery in particular. The system therefore introduces the WebSocket protocol for real-time communication and uses FFmpeg for audio transcoding, so that files in different formats all play back smoothly.
The frontend uses Vue3 as its main framework, which offers several advantages over Vue2: the Composition API for better logic reuse, Proxy-based reactivity, and improved TypeScript support. Key dependencies include howler.js for audio playback.
```javascript
// Typical player component
import { ref } from 'vue'
import { Howl } from 'howler'

const player = ref(null)

function initPlayer(src) {
  player.value = new Howl({
    src: [src],
    html5: true,             // stream via HTML5 Audio rather than Web Audio
    format: ['mp3', 'aac'],
    onend: () => console.log('playback finished')
  })
}
```
The backend uses Spring Boot 2.7.x; its main modules cover the RESTful API layer, WebSocket messaging, security, and the audio service. Audio handling needs dedicated configuration:
```java
@Configuration
public class WebConfig implements WebMvcConfigurer {
    @Override
    public void configureContentNegotiation(ContentNegotiationConfigurer configurer) {
        // Register HLS media types so playlists and segments are served correctly
        configurer.mediaType("m3u8", MediaType.valueOf("application/x-mpegURL"))
                  .mediaType("ts", MediaType.valueOf("video/MP2T"));
    }
}
```
Music playback is the system's core feature; we implemented two playback modes: on-demand playback of transcoded HLS streams over HTTP, and real-time low-latency streaming over WebSocket. Audio processing pipeline:
```bash
# FFmpeg transcoding example: MP3 → AAC HLS segments
ffmpeg -i input.mp3 -c:a aac -b:a 128k -hls_time 10 -hls_list_size 0 output.m3u8
```
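On the backend the same command can be assembled programmatically before being handed to a `ProcessBuilder`; a sketch, assuming ffmpeg is installed on the server (the class name is illustrative):

```java
import java.util.List;

// Builds the FFmpeg HLS transcode command shown above. Pass the resulting
// list to new ProcessBuilder(...) to actually run it.
public class HlsCommand {
    public static List<String> build(String input, String output) {
        return List.of(
                "ffmpeg", "-i", input,
                "-c:a", "aac",          // AAC audio codec
                "-b:a", "128k",         // 128 kbps audio bitrate
                "-hls_time", "10",      // 10-second segments
                "-hls_list_size", "0",  // keep every segment in the playlist
                output);
    }
}
```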
The real-time intercom feature is implemented over WebSocket: the browser captures microphone audio and streams it to the server in small chunks. Key code snippet:
```javascript
// Audio capture. ScriptProcessorNode is deprecated (AudioWorklet is the
// modern replacement), but it remains the simplest cross-browser sketch.
// `socket` is an already-connected Socket.IO client.
async function startCapture(socket) {
  const audioContext = new AudioContext()
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true })
  const source = audioContext.createMediaStreamSource(stream)
  const processor = audioContext.createScriptProcessor(1024, 1, 1)
  source.connect(processor)
  processor.connect(audioContext.destination)
  processor.onaudioprocess = (e) => {
    // Float32 PCM samples from the first (mono) channel
    const audioData = e.inputBuffer.getChannelData(0)
    socket.emit('audio-chunk', audioData)
  }
}
```
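The Float32 samples emitted above eventually need to be converted to 16-bit PCM before an Android `AudioTrack` can play them. A minimal stand-alone sketch of that conversion (the class name is illustrative):

```java
// Convert Float32 PCM samples in [-1, 1] to signed 16-bit PCM.
public class PcmConvert {
    public static short[] floatTo16BitPcm(float[] samples) {
        short[] out = new short[samples.length];
        for (int i = 0; i < samples.length; i++) {
            // Clamp to [-1, 1], then scale to the signed 16-bit range
            float s = Math.max(-1f, Math.min(1f, samples[i]));
            out[i] = (short) (s < 0 ? s * 32768 : s * 32767);
        }
        return out;
    }
}
```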
For mobile network conditions we applied optimizations such as route-level lazy loading and resource compression. Measured results:
| Optimization | First paint (3G) | First paint (4G) |
|---|---|---|
| None | 4.8s | 2.1s |
| Lazy loading | 3.2s | 1.5s |
| All optimizations | 1.9s | 0.8s |
A tiered caching scheme is used: the browser Cache API holds audio assets on the client, while Redis caches hot data on the server. Cache update mechanism:
```javascript
// Cache-first strategy backed by the Cache API
const cacheStrategy = {
  checkCache(url) {
    return caches.match(url)
      .then(response => response || this.fetchAndCache(url))
  },
  fetchAndCache(url) {
    return fetch(url)
      .then(res => {
        const clone = res.clone() // a Response body can only be read once
        caches.open('audio-cache').then(cache => cache.put(url, clone))
        return res
      })
  }
}
```
Common problems on low-end Android devices include high playback latency and audio stutter. Solution:
```java
// Android low-latency playback configuration
AudioAttributes attributes = new AudioAttributes.Builder()
        .setUsage(AudioAttributes.USAGE_VOICE_COMMUNICATION)
        .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
        .build();
AudioFormat format = new AudioFormat.Builder()
        .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
        .setSampleRate(44100)
        .setChannelMask(AudioFormat.CHANNEL_OUT_MONO)
        .build();
int minBufferSize = AudioTrack.getMinBufferSize(
        44100, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack track = new AudioTrack.Builder()
        .setAudioAttributes(attributes)
        .setAudioFormat(format)
        .setBufferSizeInBytes(minBufferSize)
        // PERFORMANCE_MODE_LOW_LATENCY requires API level 26+
        .setPerformanceMode(AudioTrack.PERFORMANCE_MODE_LOW_LATENCY)
        .build();
```
Audio behavior also differs across Android versions, so newer audio APIs are gated on the runtime API level (`Build.VERSION.SDK_INT`), with conservative fallbacks on older devices.
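That version gating can be sketched as a small helper. The constant values below mirror the platform's `AudioTrack` constants, and `PERFORMANCE_MODE_LOW_LATENCY` was added in API 26; the class and method names are illustrative:

```java
// Choose an AudioTrack performance mode based on the device's API level.
// Constants mirror AudioTrack.PERFORMANCE_MODE_NONE / PERFORMANCE_MODE_LOW_LATENCY.
public class AudioCompat {
    public static final int PERFORMANCE_MODE_NONE = 0;
    public static final int PERFORMANCE_MODE_LOW_LATENCY = 1;

    public static int pickPerformanceMode(int sdkInt) {
        // Low-latency mode only exists from Android 8.0 (API 26) onward
        return sdkInt >= 26 ? PERFORMANCE_MODE_LOW_LATENCY : PERFORMANCE_MODE_NONE;
    }
}
```

In the real app the argument would be `Build.VERSION.SDK_INT`; keeping the check in a pure function makes it unit-testable off-device.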
To keep audio assets from being scraped, the server encrypts audio responses with AES/CBC, prepending a fresh random IV to each payload. Encryption flow example:
```java
// Server-side encryption: AES/CBC with a random IV prepended to the payload.
// Note: an AES key must be exactly 16/24/32 bytes and should be loaded from
// configuration, never hard-coded.
public ResponseEntity<byte[]> getEncryptedAudio(String id) throws Exception {
    String key = "0123456789abcdef";               // demo 16-byte AES-128 key
    byte[] audioData = audioService.getAudioData(id);
    byte[] iv = new byte[16];
    new SecureRandom().nextBytes(iv);              // fresh IV per response
    Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
    cipher.init(Cipher.ENCRYPT_MODE,
            new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), "AES"),
            new IvParameterSpec(iv));
    byte[] encrypted = cipher.doFinal(audioData);
    ByteArrayOutputStream output = new ByteArrayOutputStream();
    output.write(iv);                              // client reads the IV first
    output.write(encrypted);
    return ResponseEntity.ok()
            .header("Content-Type", "audio/mpeg")
            .header("Content-Disposition", "attachment; filename=\"encrypted.mp3\"")
            .body(output.toByteArray());
}
```
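The client must reverse this layout: read the 16-byte IV prefix, then decrypt the remainder. A self-contained sketch of both directions using only the JDK (class and method names are illustrative):

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;
import java.util.Arrays;

// Counterpart of the controller above. Payload layout:
// [16-byte IV][AES/CBC ciphertext]
public class AudioCrypto {
    public static byte[] encrypt(byte[] data, byte[] key) throws Exception {
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        byte[] ct = cipher.doFinal(data);
        byte[] out = new byte[16 + ct.length];
        System.arraycopy(iv, 0, out, 0, 16);       // IV goes first
        System.arraycopy(ct, 0, out, 16, ct.length);
        return out;
    }

    public static byte[] decrypt(byte[] payload, byte[] key) throws Exception {
        byte[] iv = Arrays.copyOfRange(payload, 0, 16);   // split the IV off
        byte[] ct = Arrays.copyOfRange(payload, 16, payload.length);
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        return cipher.doFinal(ct);
    }
}
```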
API security is based on stateless JWT authentication on top of Spring Security. Spring Security configuration:
```java
// Note: WebSecurityConfigurerAdapter is deprecated as of Spring Security 5.7
// (the version shipped with Spring Boot 2.7); new code should declare a
// SecurityFilterChain bean instead.
@Configuration
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.csrf().disable()
            .authorizeRequests()
            .antMatchers("/api/auth/**").permitAll()
            .antMatchers("/ws/**").permitAll()
            .anyRequest().authenticated()
            .and()
            .addFilter(new JwtAuthenticationFilter(authenticationManager()))
            .addFilter(new JwtAuthorizationFilter(authenticationManager()))
            .sessionManagement()
            .sessionCreationPolicy(SessionCreationPolicy.STATELESS);
    }
}
```
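The `JwtAuthenticationFilter` and `JwtAuthorizationFilter` referenced above are not shown; at their core they sign and verify an HMAC-SHA256 signature over the token's header and payload. A minimal JDK-only sketch of that step (a real deployment would use a JWT library such as jjwt; names here are illustrative):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// HMAC-SHA256 signing/verification as used for the JWT signature segment.
public class TokenSigner {
    public static String sign(String headerAndPayload, String secret) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] sig = mac.doFinal(headerAndPayload.getBytes(StandardCharsets.UTF_8));
        // JWT uses unpadded base64url for the signature segment
        return Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
    }

    public static boolean verify(String headerAndPayload, String signature, String secret) throws Exception {
        return sign(headerAndPayload, secret).equals(signature);
    }
}
```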
Deployment uses Docker Compose:
```yaml
version: '3'
services:
  app:
    build: .
    ports:
      - "8080:8080"
    environment:
      - SPRING_PROFILES_ACTIVE=prod
    depends_on:
      - db
      - redis
  db:
    image: mysql:8.0
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=music_app
    volumes:
      - db_data:/var/lib/mysql
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
volumes:
  db_data:
```
Monitoring relies on Spring Boot Actuator endpoints:
```properties
# application-prod.properties
management.endpoints.web.exposure.include=health,info,metrics,prometheus
management.metrics.export.prometheus.enabled=true
management.endpoint.health.show-details=always
```
Frontend error and performance monitoring uses Sentry:
```javascript
import * as Sentry from "@sentry/vue";

Sentry.init({
  app,
  dsn: "your-dsn",
  integrations: [
    new Sentry.BrowserTracing({
      routingInstrumentation: Sentry.vueRouterInstrumentation(router),
    }),
  ],
  tracesSampleRate: 0.2,
});
```
Planned improvements for the next version include AI-assisted pitch correction ("auto-tune"). Preliminary research:
```python
# Pseudocode sketch
def auto_tune(audio):
    # 1. Pitch detection
    pitches = detect_pitch(audio)
    # 2. Pitch correction
    corrected = []
    for pitch in pitches:
        nearest = round_to_scale(pitch)
        corrected.append(adjust_pitch(pitch, nearest))
    # 3. Smoothing
    smoothed = smooth_transitions(corrected)
    # 4. Resynthesis
    return synthesize_audio(audio, smoothed)
```
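The `round_to_scale` step can be made concrete: snap a detected frequency to the nearest equal-temperament semitone relative to A4 = 440 Hz. A sketch in Java, the backend language (the class name is illustrative):

```java
// Snap a frequency (Hz) to the nearest equal-temperament semitone,
// using A4 = 440 Hz as the reference pitch.
public class PitchSnap {
    public static double roundToScale(double freqHz) {
        // Distance from A4 in (fractional) semitones
        double semitones = 12.0 * (Math.log(freqHz / 440.0) / Math.log(2));
        long nearest = Math.round(semitones);
        // Back to a frequency on the chromatic grid
        return 440.0 * Math.pow(2, nearest / 12.0);
    }
}
```

A slightly sharp note such as 452 Hz is less than half a semitone above A4, so it snaps back to 440 Hz.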
While building this system I found a few points in mobile audio processing that deserve special attention. First, latency control: the round trip from the user singing to hearing playback must stay under 200 ms. Second, battery optimization: continuous microphone and speaker use drains power quickly. Finally, compatibility testing: audio subsystem implementations vary widely across Android devices. I recommend establishing a full physical-device test matrix early in the project, covering the mainstream brands and Android versions.