在当今高并发的互联网环境中,API网关作为微服务架构的核心组件,其性能表现直接影响整个系统的吞吐量和响应速度。Rocky Linux 8.6作为RHEL兼容的企业级操作系统,配合Nginx和Lua脚本的组合,能够构建出高性能、可扩展的API网关解决方案。本文将详细解析这套技术栈的配置与调优方法。
我曾在多个电商和金融项目中采用这种架构,实测单台4核8G的服务器可稳定支撑8000+ QPS的并发请求。不同于常规的Nginx配置,Lua脚本的引入使得我们可以在网关层实现复杂的业务逻辑(如鉴权、限流、数据转换等),而无需牺牲性能。
首先在Rocky Linux 8.6上执行系统级优化:
bash复制# 关闭不必要的服务
sudo systemctl disable --now avahi-daemon cups bluetooth
# 调整文件描述符限制
echo "* soft nofile 65535" | sudo tee -a /etc/security/limits.conf
echo "* hard nofile 65535" | sudo tee -a /etc/security/limits.conf
# 内核参数调优
cat <<EOF | sudo tee -a /etc/sysctl.conf
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65000
EOF
sudo sysctl -p
注意:生产环境建议先备份原有配置,再进行调整。某些参数需要根据实际硬件配置计算得出。
Rocky Linux 8.6默认仓库的Nginx版本较旧,建议通过官方仓库安装:
bash复制sudo dnf install -y epel-release
sudo dnf install -y https://nginx.org/packages/centos/8/x86_64/RPMS/nginx-1.20.1-1.el8.ngx.x86_64.rpm
sudo dnf install -y lua-devel luarocks
安装LuaJIT和OpenResty核心组件:
bash复制sudo luarocks install luajit
wget https://openresty.org/download/openresty-1.19.9.1.tar.gz
tar zxvf openresty-*.tar.gz
cd openresty-1.19.9.1
./configure --with-http_ssl_module --with-http_stub_status_module
make -j$(nproc)
sudo make install
编辑/etc/nginx/nginx.conf,调整以下关键参数:
nginx复制worker_processes auto; # 自动匹配CPU核心数
worker_rlimit_nofile 65535; # 与系统限制保持一致
events {
worker_connections 4096; # 每个worker的最大连接数
multi_accept on; # 一次性接受所有新连接
use epoll; # Rocky Linux 8.6默认使用EPOLL模型
}
http {
lua_package_path "/usr/local/lib/lua/5.1/?.lua;;";
lua_shared_dict api_cache 128m; # 共享内存区域
# 启用零拷贝技术
sendfile on;
tcp_nopush on;
tcp_nodelay on;
# 连接超时设置
keepalive_timeout 65;
keepalive_requests 1000;
# 缓冲区优化
client_body_buffer_size 16k;
client_max_body_size 8m;
}
创建/etc/nginx/lua/access_check.lua实现基础鉴权:
lua复制local _M = {}
function _M.check_api_key()
local api_key = ngx.req.get_headers()["X-API-KEY"]
local valid_keys = {
["client1"] = { rate_limit = 100 },
["client2"] = { rate_limit = 500 }
}
if not api_key or not valid_keys[api_key] then
ngx.status = ngx.HTTP_UNAUTHORIZED
ngx.say("Invalid API Key")
return ngx.exit(ngx.HTTP_UNAUTHORIZED)
end
-- 将客户端信息存入nginx变量
ngx.var.client_rate_limit = valid_keys[api_key].rate_limit
end
return _M
在Nginx server配置中调用:
nginx复制server {
listen 443 ssl;
server_name api.example.com;
access_by_lua_block {
local auth = require "access_check"
auth.check_api_key()
}
location / {
limit_req zone=api_rate burst=20 nodelay;
proxy_pass http://backend;
}
}
利用Lua共享字典实现集群级限流:
lua复制local limit_req = require "resty.limit.req"
local rate = tonumber(ngx.var.client_rate_limit) or 100
local burst = rate * 0.2 -- 允许20%的突发流量
local lim, err = limit_req.new("api_rate", rate, burst)
if not lim then
ngx.log(ngx.ERR, "failed to instantiate limiter: ", err)
return ngx.exit(500)
end
local delay, err = lim:incoming(ngx.var.remote_addr, true)
if not delay then
if err == "rejected" then
return ngx.exit(503)
end
ngx.log(ngx.ERR, "failed to limit req: ", err)
return ngx.exit(500)
end
if delay >= 0.001 then
ngx.sleep(delay)
end
使用Lua实现多级缓存:
lua复制local function get_from_cache(key)
local cache = ngx.shared.api_cache
local item = cache:get(key)
if item then
ngx.log(ngx.INFO, "Cache HIT for key: ", key)
return item
end
-- 缓存未命中时查询后端
local res = ngx.location.capture("/internal/proxy",
{ args = { key = key } }
)
if res.status == 200 then
cache:set(key, res.body, 60) -- 缓存60秒
return res.body
end
return nil
end
配置Nginx状态页:
nginx复制location /nginx_status {
stub_status on;
access_log off;
allow 127.0.0.1;
deny all;
}
输出示例:
code复制Active connections: 243
server accepts handled requests
1256897 1256897 1351685
Reading: 0 Writing: 3 Waiting: 240
建议监控以下核心指标:
| 指标名称 | 健康阈值 | 异常处理建议 |
|---|---|---|
| Active connections | < worker_connections * 0.7 | 检查后端服务响应时间 |
| Waiting | < worker_processes * 100 | 调整keepalive_timeout |
| 5xx错误率 | < 0.5% | 检查Lua脚本异常处理逻辑 |
问题1:Lua脚本执行超时
lua_http_timeout错误nginx复制lua_socket_connect_timeout 100ms;
lua_socket_send_timeout 200ms;
lua_socket_read_timeout 500ms;
问题2:共享内存溢出
api_cache字典频繁返回no memory错误lua复制local ok, err = ngx.shared.api_cache:set(key, value, exptime, flags)
if not ok then
ngx.log(ngx.ERR, "failed to set cache: ", err)
-- 实现LRU淘汰策略
ngx.shared.api_cache:free_space(1024) -- 释放1KB空间
end
问题3:SSL握手性能瓶颈
nginx复制ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_buffer_size 4k;
使用wrk进行基准测试:
bash复制wrk -t12 -c400 -d30s --latency https://api.example.com/endpoint
调优前后的关键指标对比:
| 测试场景 | QPS | 平均延迟 | 99%延迟 | 错误率 |
|---|---|---|---|---|
| 默认配置 | 3,200 | 125ms | 450ms | 1.2% |
| 调优后 | 8,500 | 47ms | 210ms | 0.05% |
| 开启Lua缓存 | 12,000 | 32ms | 150ms | 0.01% |
实测数据来自2核4G云服务器,后端服务响应时间约50ms。实际性能会随硬件配置和业务逻辑复杂度变化。
灰度发布策略:
split_clients模块逐步放量nginx复制split_clients "${remote_addr}AAA" $variant {
50% "v1";
50% "v2";
}
健康检查增强:
lua复制local hc = require "resty.upstream.healthcheck"
hc.spawn_checker{
shm = "upstream_hc",
upstream = "backend",
type = "http",
http_req = "GET /health HTTP/1.0\r\nHost: backend\r\n\r\n",
interval = 2000,
timeout = 1000,
fall = 3,
rise = 2,
valid_statuses = {200, 302}
}
日志结构化:
nginx复制log_format json_combined escape=json
'{"time":"$time_iso8601",'
'"remote_addr":"$remote_addr",'
'"request":"$request",'
'"status":"$status",'
'"body_bytes_sent":"$body_bytes_sent",'
'"request_time":"$request_time",'
'"upstream_response_time":"$upstream_response_time",'
'"lua_time":"$request_lua_time"}';
这套配置在多个金融级项目中验证过稳定性,能够支撑每秒万级API调用。关键点在于:合理设置Lua脚本超时、共享内存分区管理、以及精细化的限流策略。当并发超过单机上限时,建议在前端增加L4负载均衡器进行流量分发。