In modern cloud-native environments, the combination of Kubernetes and Docker has become the de facto standard for infrastructure. The real power of this pairing shows up in deployments of critical services such as Nginx, which acts both as the gatekeeper for inbound traffic and as the router in front of backend services. Take a typical production-grade e-commerce system as an example: Nginx serves static assets and load-balances at the front, the middle tier runs Node.js applications in Docker containers, the databases run in StatefulSets, and everything is wired together through Kubernetes Services.
Key insight: do not treat Nginx as merely a web server; in a Kubernetes architecture it is the core component for layer-7 traffic management.
Docker plays the role of the standardized delivery unit here. Each business module (for example the user service or the order service) is packaged as an image with specific tags.
In practice we use multi-stage builds to shrink the image. For example, the Dockerfile for a Node.js application:
```dockerfile
FROM node:18-bullseye AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM nginx:1.25-alpine
COPY --from=builder /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
```
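A minimal build-and-smoke-test sketch for a Dockerfile like the one above; the image name, registry, and port mapping are illustrative placeholders, not values from the project:

```bash
# Build the image (tag and registry are hypothetical)
docker build -t registry.example.com/shop/frontend:1.0.0 .

# Smoke-test locally before pushing: the container's port 80 on localhost:8080
docker run --rm -p 8080:80 registry.example.com/shop/frontend:1.0.0
```

Because the final stage is based on nginx:1.25-alpine, the resulting image contains only the built static assets and Nginx itself, not the Node.js toolchain.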
In a Kubernetes environment, Nginx typically appears in one of three forms:
| Deployment mode | Configuration | Typical use | Performance notes |
|---|---|---|---|
| Standalone Deployment | nginx.conf mounted from a ConfigMap | Serving static assets | Tune worker_processes |
| Ingress Controller | Helm install with customized templates | Cluster entry routing | Enable the epoll event driver |
| Sidecar container | Volume shared within the Pod | Log collection / request rewriting | Cap the CPU quota |
The second mode is the most common in production. An example command for installing the Ingress-Nginx controller:
```bash
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.type=LoadBalancer
```
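Once the controller is running, routing rules are declared as Ingress resources. A sketch for the e-commerce example above; the hostname and the backend Service names (`user-service`, `frontend`) are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop-ingress
  annotations:
    # Controller-specific tuning, e.g. allow larger request bodies for uploads
    nginx.ingress.kubernetes.io/proxy-body-size: "8m"
spec:
  ingressClassName: nginx
  rules:
  - host: shop.example.com        # hypothetical domain
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: user-service    # hypothetical API Service
            port:
              number: 8000
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend        # hypothetical static-asset Service
            port:
              number: 80
```

The controller watches these resources and regenerates its internal nginx.conf on change, so routing updates require no manual reloads.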
When Nginx needs to reverse-proxy to backend services, Kubernetes offers two service-discovery mechanisms: environment variables of the form SERVICE_NAME_SERVICE_HOST injected into each Pod, and cluster DNS names of the form `<service-name>.<namespace>.svc.cluster.local`. An example of resolving the DNS name dynamically in the Nginx configuration:
```nginx
resolver kube-dns.kube-system.svc.cluster.local valid=5s;

server {
    location /api {
        set $upstream user-service.default.svc.cluster.local;
        proxy_pass http://$upstream:8000;
    }
}
```
The complete journey of an external request:

1. The client resolves the domain and reaches the cloud provider's load balancer.
2. The load balancer forwards the traffic to the Ingress-Nginx controller's Service.
3. The controller matches the Ingress rules and selects a backend Service.
4. kube-proxy routes the connection to a backend Pod endpoint, possibly on another node.
5. The Pod handles the request and the response returns along the same path.

Network performance bottlenecks most often appear at step 4; enabling the topology-aware hints feature is recommended to optimize routing.
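Topology-aware hints are opted into per Service via an annotation (this is the Kubernetes 1.21-1.26 syntax; from 1.27 the `service.kubernetes.io/topology-mode` annotation replaces it). A sketch, assuming a `user-service` backend:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: user-service             # hypothetical backend Service
  annotations:
    # Ask the EndpointSlice controller to populate zone hints,
    # so kube-proxy prefers endpoints in the client's zone
    service.kubernetes.io/topology-aware-hints: "auto"
spec:
  selector:
    app: user-service
  ports:
  - port: 8000
    targetPort: 8000
```

The hints only take effect when endpoints are spread evenly enough across zones; otherwise the control plane silently falls back to cluster-wide routing.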
A reference Deployment for the edge Nginx layer:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-edge
spec:
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values: ["nginx"]
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: nginx
        image: nginx:1.25-alpine
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: "2"
            memory: "1Gi"
          requests:
            cpu: "500m"
            memory: "512Mi"
        volumeMounts:
        - name: nginx-config
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: nginx-config
        configMap:
          name: nginx-edge-config
```
Nginx parameters that deserve special attention in the ConfigMap (note that `worker_processes` and the `events` block are main-context directives, so they must live in the top-level nginx.conf rather than in a conf.d include):
```nginx
worker_processes auto;          # match the number of CPU cores
worker_rlimit_nofile 65535;     # file-descriptor limit

events {
    worker_connections 4096;
    use epoll;                  # event-driven I/O on Linux kernels
    multi_accept on;
}

http {
    keepalive_timeout 30s;
    keepalive_requests 100;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    # static-asset cache settings
    open_file_cache max=1000 inactive=20s;
    open_file_cache_valid 30s;
}
```
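The Deployment above references this configuration through the ConfigMap named `nginx-edge-config`. One way to create it from a local file (the filename and namespace are illustrative):

```bash
# Package the local nginx.conf into the ConfigMap the Deployment mounts
kubectl create configmap nginx-edge-config \
  --from-file=nginx.conf=./nginx.conf \
  --namespace default
```

After editing the ConfigMap, the Nginx Pods must be restarted (or the config reloaded) for changes to take effect, since the kubelet only syncs mounted ConfigMap volumes eventually.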
| Symptom | Diagnostic command | Resolution |
|---|---|---|
| 502 Bad Gateway | `kubectl logs -l app=nginx` | Check whether the backend Service's Endpoints are ready |
| DNS resolution failures | `dig @10.96.0.10 service.ns.svc` | Verify the CoreDNS service status |
| Connection timeouts | `kubectl describe ep <svc-name>` | Check the NetworkPolicy rules |
| CPU spikes | `kubectl top pod` | Adjust the worker_processes count |
Collecting Nginx logs with a sidecar is recommended:
```yaml
- name: log-tailer
  image: fluent/fluentd:v1.16
  volumeMounts:
  - name: nginx-logs
    mountPath: /var/log/nginx
  - name: fluentd-config        # fluent.conf delivered via a ConfigMap
    mountPath: /fluentd/etc
  env:
  # FLUENTD_CONF names a config file under /fluentd/etc;
  # the fluentd image does not read inline config from this variable
  - name: FLUENTD_CONF
    value: fluent.conf
```

where the mounted `fluent.conf` tails the access log:

```
<source>
  @type tail
  path /var/log/nginx/access.log
  pos_file /var/log/nginx/access.log.pos
  tag nginx.access
  <parse>
    @type nginx
  </parse>
</source>
```
To run Nginx as a non-root user, fix ownership and permissions in the image:

```dockerfile
FROM nginx:1.25-alpine
RUN chown -R nginx:nginx /var/cache/nginx && \
    chmod -R 755 /var/log/nginx
USER nginx
```
and enforce it at the Pod level with a securityContext:

```yaml
securityContext:
  runAsNonRoot: true
  runAsUser: 101        # the nginx user in the alpine image is uid 101
  fsGroup: 2000
  seccompProfile:
    type: RuntimeDefault
```
Recommended NetworkPolicy configuration:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nginx-ingress-policy
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0
    ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443
```
This architecture has been validated under Singles' Day (Double 11) levels of traffic. On one e-commerce platform, a single Nginx Ingress Controller instance sustained 15,000 RPS with average latency under 50 ms. The keys are configuring worker_connections and the keepalive parameters sensibly, and ensuring the Pods are spread evenly across physical nodes.