In web application development, uploading large files has always been a headache. It is especially painful when users need to upload an entire folder: the traditional single-file upload approach runs into request-size limits and performance bottlenecks. I recently hit exactly this requirement in an internal enterprise document-management system, where users needed to bulk-upload folders.
The core pain points were the limits servers place on single large requests, the sheer number of files in a folder, and the cost of restarting a failed upload from scratch.
After evaluating the options, we settled on .NET Core plus chunked uploads. Below is a detailed walkthrough of the implementation.
The overall upload flow has three main stages:

```
[front end] --> [split into chunks] --> [upload] --> [.NET Core back end] --> [reassemble] --> [store]
```
The front end has a few key tasks. The first is reading the contents of a folder via `<input type="file" webkitdirectory>`:

```javascript
// Example: reading folder contents
document.getElementById('folderInput').addEventListener('change', function (e) {
  const files = e.target.files;
  // process the file list...
});
```
The back end exposes three API endpoints:

- `/api/upload/init` - initialize an upload session
- `/api/upload/chunk` - upload a single chunk
- `/api/upload/complete` - finish the upload

Each request carries metadata identifying the chunk: a `fileId`, the `chunkNumber`, and the `totalChunks` count.
First, create a new ASP.NET Core Web API project:

```bash
dotnet new webapi -n FileUploadDemo
```

Then add the required NuGet package:

```bash
dotnet add package Microsoft.AspNetCore.Http.Features
```
Create `FileUploadController.cs`:

```csharp
[ApiController]
[Route("api/[controller]")]
public class FileUploadController : ControllerBase
{
    private readonly IWebHostEnvironment _env;

    public FileUploadController(IWebHostEnvironment env)
    {
        _env = env;
    }

    [HttpPost("init")]
    public IActionResult InitUpload([FromBody] UploadInfo info)
    {
        // Validate and initialize the upload session,
        // then return an upload token and other metadata.
        return Ok();
    }

    [HttpPost("chunk")]
    public async Task<IActionResult> UploadChunk(IFormFile file, [FromForm] ChunkInfo info)
    {
        // Receive and store one chunk (implemented below).
        return Ok();
    }

    [HttpPost("complete")]
    public IActionResult CompleteUpload([FromBody] CompleteInfo info)
    {
        // Merge all chunks and return the final file info.
        return Ok();
    }
}

// DTOs referenced above; field names mirror the form data sent by the front end.
public record UploadInfo(string FileId, string FileName, long FileSize, int TotalChunks);
public record ChunkInfo(string FileId, int ChunkNumber, int TotalChunks);
public record CompleteInfo(string FileId, string FileName, int TotalChunks);
```
The key chunk-receiving logic:

```csharp
[HttpPost("chunk")]
public async Task<IActionResult> UploadChunk(IFormFile file, [FromForm] ChunkInfo info)
{
    var tempPath = Path.Combine(_env.ContentRootPath, "Temp");
    Directory.CreateDirectory(tempPath);

    // Each chunk is stored as "<fileId>.<chunkNumber>" until the merge step.
    var chunkPath = Path.Combine(tempPath, $"{info.FileId}.{info.ChunkNumber}");
    using (var stream = new FileStream(chunkPath, FileMode.Create))
    {
        await file.CopyToAsync(stream);
    }

    return Ok(new { info.ChunkNumber });
}
```
Once all chunks have arrived, merge them into the final file:

```csharp
private void MergeFileChunks(string fileId, string fileName, int totalChunks)
{
    var tempPath = Path.Combine(_env.ContentRootPath, "Temp");
    var uploadsPath = Path.Combine(_env.ContentRootPath, "Uploads");
    Directory.CreateDirectory(uploadsPath); // ensure the target folder exists

    var finalPath = Path.Combine(uploadsPath, fileName);
    using (var finalStream = new FileStream(finalPath, FileMode.Create))
    {
        for (int i = 1; i <= totalChunks; i++)
        {
            var chunkPath = Path.Combine(tempPath, $"{fileId}.{i}");

            // Stream each chunk into the final file instead of buffering it
            // in memory, then delete the chunk.
            using (var chunkStream = System.IO.File.OpenRead(chunkPath))
            {
                chunkStream.CopyTo(finalStream);
            }
            System.IO.File.Delete(chunkPath);
        }
    }
}
```
The front-end markup is minimal:

```html
<div class="upload-container">
  <input type="file" id="folderInput" webkitdirectory directory multiple>
  <button id="uploadBtn">Start upload</button>
  <div class="progress-container">
    <div class="progress-bar"></div>
    <div class="progress-text">0%</div>
  </div>
</div>
```
The uploader class splits every file into chunks and keeps up to three chunk uploads in flight at once:

```javascript
class FolderUploader {
  constructor() {
    this.chunkSize = 2 * 1024 * 1024; // 2 MB
    this.parallelUploads = 3;
    this.uploadQueue = [];
    this.activeUploads = 0;
  }

  async uploadFolder(files) {
    // Initialize the upload session (not shown: POSTs to /api/upload/init).
    const session = await this.initUploadSession(files);

    // Build the upload queue.
    for (const file of files) {
      this.prepareFileChunks(file, session);
    }

    // Start uploading.
    this.processUploadQueue();
  }

  prepareFileChunks(file, session) {
    const chunkCount = Math.ceil(file.size / this.chunkSize);
    for (let i = 0; i < chunkCount; i++) {
      const start = i * this.chunkSize;
      const end = Math.min(start + this.chunkSize, file.size);
      const chunk = file.slice(start, end);
      this.uploadQueue.push({
        file,
        chunk,
        chunkNumber: i + 1,
        totalChunks: chunkCount,
        sessionId: session.id
      });
    }
  }

  processUploadQueue() {
    // Kick off uploads WITHOUT awaiting them inside the loop; awaiting
    // here would serialize everything. The finally-handler refills the
    // pipeline, so `parallelUploads` chunks really run concurrently.
    while (this.uploadQueue.length > 0 && this.activeUploads < this.parallelUploads) {
      const item = this.uploadQueue.shift();
      this.activeUploads++;
      this.uploadChunk(item)
        .catch(error => {
          console.error('Upload failed:', error);
          this.uploadQueue.unshift(item); // put it back for another attempt
        })
        .finally(() => {
          this.activeUploads--;
          this.processUploadQueue();
        });
    }
  }

  async uploadChunk(item) {
    const formData = new FormData();
    formData.append('file', item.chunk, item.file.name);
    formData.append('fileId', item.sessionId + '_' + item.file.name);
    formData.append('chunkNumber', item.chunkNumber);
    formData.append('totalChunks', item.totalChunks);

    const response = await fetch('/api/upload/chunk', {
      method: 'POST',
      body: formData
    });
    if (!response.ok) {
      throw new Error('Chunk upload failed');
    }
    this.updateProgress(item);
  }

  updateProgress(item) {
    // Update the progress UI.
  }
}
```
To support resumable uploads, the back end reports which chunks it already holds when a session is (re)initialized:

```csharp
[HttpPost("init")]
public IActionResult InitUpload([FromBody] UploadInfo info)
{
    var tempPath = Path.Combine(_env.ContentRootPath, "Temp");
    Directory.CreateDirectory(tempPath); // GetFiles throws if the folder is missing

    var filePattern = $"{info.FileId}.*";
    var existingChunks = Directory.GetFiles(tempPath, filePattern)
        .Select(f => int.Parse(Path.GetExtension(f).TrimStart('.')))
        .ToList();

    return Ok(new {
        existingChunks,
        // other metadata...
    });
}
```
The front end can then skip any chunks in the returned `existingChunks` list.
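A small helper can do that filtering before the queue is processed. This is my own sketch, not part of the original `FolderUploader`; it assumes the queue items for one file and the `existingChunks` array from `/api/upload/init`:

```javascript
// Hypothetical helper: keep only the chunks that still need uploading,
// given the list of chunk numbers the server already has.
function filterPendingChunks(queueItems, existingChunks) {
  const done = new Set(existingChunks);
  return queueItems.filter(item => !done.has(item.chunkNumber));
}
```

With `existingChunks = [1, 3]`, only chunk 2 of a three-chunk file stays queued.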
As an optional optimization, chunks can be gzip-compressed in the browser before upload using the `CompressionStream` API (the server must decompress them before merging):

```javascript
async function compressChunk(chunk) {
  const stream = chunk.stream();
  const compressedStream = stream.pipeThrough(new CompressionStream('gzip'));
  return await new Response(compressedStream).blob();
}
```
On the server side, restrict uploads to an allow-list of file types:

```csharp
private bool IsFileTypeAllowed(string fileName)
{
    var allowedExtensions = new[] { ".pdf", ".docx", ".xlsx" };
    var extension = Path.GetExtension(fileName).ToLowerInvariant();
    return allowedExtensions.Contains(extension);
}
```
Symptom: a few failed chunks leave the whole file impossible to merge.

Fix: retry failed chunks with a growing backoff:

```javascript
async function uploadWithRetry(item, maxRetries = 3) {
  let attempts = 0;
  while (attempts < maxRetries) {
    try {
      await uploadChunk(item);
      return;
    } catch (error) {
      attempts++;
      if (attempts >= maxRetries) throw error;
      // Wait a little longer after each failure.
      await new Promise(resolve => setTimeout(resolve, 1000 * attempts));
    }
  }
}
```
Symptom: server memory spikes when uploading large files.

Fix: raise the request-size limits so large bodies are accepted, and stream chunks to disk (as in `UploadChunk` above) rather than buffering them:

```csharp
// Configure in Program.cs
builder.WebHost.ConfigureKestrel(serverOptions => {
    serverOptions.Limits.MaxRequestBodySize = 1073741824; // 1 GB
});
builder.Services.Configure<FormOptions>(x => {
    x.MultipartBodyLengthLimit = 1073741824; // 1 GB
});
```
Symptom: path handling differs across operating systems.

Fix: normalize file names by replacing invalid characters:

```csharp
var safeFileName = Path.GetInvalidFileNameChars()
    .Aggregate(fileName, (current, c) => current.Replace(c, '_'));
```
IIS deployment:

```xml
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxAllowedContentLength="1073741824" />
    </requestFiltering>
  </security>
</system.webServer>
```

Kestrel configuration:

```csharp
builder.WebHost.ConfigureKestrel(serverOptions => {
    serverOptions.Limits.MaxRequestBodySize = 1073741824; // 1 GB
});
```
To keep an upload history, integrate a database:

```csharp
public class UploadRecord
{
    public int Id { get; set; }
    public string FileName { get; set; }
    public string FilePath { get; set; }
    public long FileSize { get; set; }
    public DateTime UploadTime { get; set; }
    public string UploadedBy { get; set; }
}

// Inject the DbContext into the controller:
private readonly AppDbContext _context;

// After an upload completes, save a record:
await _context.UploadRecords.AddAsync(new UploadRecord {
    FileName = fileName,
    FilePath = filePath,
    FileSize = new FileInfo(filePath).Length,
    UploadTime = DateTime.UtcNow,
    UploadedBy = User.Identity.Name
});
await _context.SaveChangesAsync();
```
In a multi-server environment, chunks written to one node's local disk are invisible to the others, so the final file should land in shared or cloud storage:

```csharp
// Azure Blob Storage example; connectionString comes from configuration.
public async Task UploadToBlobStorage(Stream stream, string blobName)
{
    var blobServiceClient = new BlobServiceClient(connectionString);
    var containerClient = blobServiceClient.GetBlobContainerClient("uploads");
    var blobClient = containerClient.GetBlobClient(blobName);
    await blobClient.UploadAsync(stream, true);
}
```
Putting this scheme into a real project taught me a few lessons:

**Chunk size**: in our tests, 2-5 MB chunks performed best across most network environments. Smaller chunks add HTTP request overhead; larger ones fail more often under network jitter.

**Progress display**: don't update the UI after every completed chunk, or the interface stutters. Throttle progress updates with `requestAnimationFrame`.
**Memory management**: always use streaming when handling very large files. We once hit a server out-of-memory error caused by buffering an entire file.

**File naming**: prefix temp file names with a GUID to avoid collisions, especially when multiple users upload files with the same name at the same time.

**Cleanup**: run a background service that periodically deletes temp files from abandoned uploads, or the disk fills up.

**Client-side validation**: check file size and type in the browser before uploading; it saves bandwidth and gives instant feedback.
**Logging**: record the key events of every upload in detail; this is invaluable when debugging, but keep sensitive data out of the logs.

**Testing**: specifically test unreliable networks, simulating slow connections and interrupted-then-resumed uploads. We used Fiddler to emulate various network conditions.