Series: A Frontend Engineer's Path to Modern Web Practice - Day 23
Estimated reading time: 12 minutes
Difficulty: ⭐⭐⭐⭐☆
In yesterday's article we built a complete CI/CD pipeline, automating everything from code commit to deployment. Today we dive into containerized deployment and learn how Docker and Kubernetes give frontend applications better environment consistency, scalability, and maintainability.
"Isn't containerization a backend thing? The frontend is just static files, so why would it need Docker?" That is the first reaction of many frontend engineers. Let's look at the pain points that show up in practice:
Deployment pain points in real-world scenarios:
What containerization brings:
According to the 2024 Cloud Native Survey report:
Containerization does more than solve the classic "it works on my machine" problem; it is also a key enabler of DevOps culture and cloud-native architecture.
Let's first look at a typical traditional frontend deployment setup:
Development environment (local)
├── Node.js 18.17.0
├── npm 9.8.1
├── nginx (system default)
└── macOS / Windows
Test environment (cloud)
├── Node.js 16.20.0
├── npm 8.19.4
├── nginx 1.18
└── Ubuntu 20.04
Production environment (cloud)
├── Node.js 20.10.0
├── npm 10.2.3
├── nginx 1.24
└── Ubuntu 22.04
Problems caused by this environment drift:
Containerization solves the consistency problem through the following mechanisms:
Image:
Container:
Core advantages:
Traditional virtual machines vs. containers
VM architecture:
Hardware → Host OS → Hypervisor → [Guest OS + App] × N
Startup time: minutes
Resource footprint: gigabytes
Isolation: full isolation
Container architecture:
Hardware → Host OS → Container Runtime → [App + Dependencies] × N
Startup time: milliseconds
Resource footprint: megabytes
Isolation: process-level isolation
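To get a feel for the startup-time difference yourself, here is a rough sketch you can run on any machine with Docker installed (the image and numbers are illustrative, not a benchmark):
# Containers start in (milli)seconds because no guest OS has to boot
time docker run --rm alpine:3.19 echo "hello from a container"
# Compare the footprint of a small base image with a typical multi-GB VM image
docker image ls alpine:3.19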
Let's start with a simple React application:
# Dockerfile - basic version
FROM node:20-alpine
# Set the working directory
WORKDIR /app
# Copy the package manifests
COPY package*.json ./
# Install dependencies
# Note: --only=production skips devDependencies, which can break the build step
# below if your build tooling lives there
RUN npm ci --only=production
# Copy the source code
COPY . .
# Build the application
RUN npm run build
# Install serve to host the static files
RUN npm install -g serve
# Expose the port
EXPOSE 3000
# Start the app
CMD ["serve", "-s", "build", "-l", "3000"]
Problems with this approach:
# Dockerfile - optimized version (multi-stage build)
# ============================================
# Stage 1: build stage
# ============================================
FROM node:20-alpine AS builder
# Build-time environment variables
ARG NODE_ENV=production
ARG REACT_APP_API_URL
ARG REACT_APP_VERSION
ENV NODE_ENV=${NODE_ENV}
ENV REACT_APP_API_URL=${REACT_APP_API_URL}
ENV REACT_APP_VERSION=${REACT_APP_VERSION}
WORKDIR /app
# Copy the package manifests and install dependencies
# This exploits Docker layer caching: dependencies are reinstalled only when package.json changes
COPY package*.json ./
RUN npm ci --no-audit --prefer-offline
# Copy the source code and build
COPY . .
RUN npm run build
# Drop dev dependencies to shrink this stage
# (optional here, since only /app/build is copied into the final stage)
RUN npm prune --production
# ============================================
# Stage 2: production stage
# ============================================
FROM nginx:1.25-alpine AS production
# Install curl for the health check
RUN apk add --no-cache curl
# Copy the custom nginx configuration
COPY nginx.conf /etc/nginx/nginx.conf
COPY nginx-default.conf /etc/nginx/conf.d/default.conf
# Copy the build artifacts from the build stage
COPY --from=builder /app/build /usr/share/nginx/html
# Give the non-root nginx user ownership of the runtime paths
RUN chown -R nginx:nginx /usr/share/nginx/html && \
chmod -R 755 /usr/share/nginx/html && \
chown -R nginx:nginx /var/cache/nginx && \
chown -R nginx:nginx /var/log/nginx && \
touch /var/run/nginx.pid && \
chown -R nginx:nginx /var/run/nginx.pid
# Switch to the unprivileged user
USER nginx
# Expose the port
EXPOSE 8080
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD curl -f http://localhost:8080/ || exit 1
# Start nginx in the foreground
CMD ["nginx", "-g", "daemon off;"]
Optimization highlights:
# nginx-default.conf
server {
listen 8080;
server_name _;
root /usr/share/nginx/html;
index index.html;
# Gzip compression
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_types text/plain text/css text/xml text/javascript
application/x-javascript application/xml+rss
application/json application/javascript;
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'" always;
# Static asset caching
location ~* \.(jpg|jpeg|png|gif|ico|css|js|svg|woff|woff2|ttf|eot)$ {
expires 1y;
add_header Cache-Control "public, immutable";
access_log off;
}
# SPA routing support
location / {
try_files $uri $uri/ /index.html;
# Do not cache index.html
add_header Cache-Control "no-cache, no-store, must-revalidate";
add_header Pragma "no-cache";
add_header Expires "0";
}
# API proxy (optional)
location /api/ {
proxy_pass http://backend-service:3000/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
}
# Health check endpoint
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
# Error pages
error_page 404 /index.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
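Before baking this configuration into an image, it can be syntax-checked in a throwaway container; a sketch, assuming the file sits in the current directory:
# Mount the config into a temporary nginx container and run the built-in check
# Note: nginx resolves proxy_pass hostnames at load time, so the optional /api/
# block needs backend-service to be resolvable (or temporarily commented out)
docker run --rm \
  -v "$(pwd)/nginx-default.conf:/etc/nginx/conf.d/default.conf:ro" \
  nginx:1.25-alpine nginx -t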
# docker-compose.yml - full development environment
version: '3.9'
services:
# Frontend application
frontend:
build:
context: .
dockerfile: Dockerfile
target: production
args:
NODE_ENV: development
REACT_APP_API_URL: http://localhost:3000/api
REACT_APP_VERSION: ${GIT_COMMIT_SHA:-dev}
ports:
- "8080:8080"
volumes:
# Dev mode: mount the source for hot reload
# (note: this only takes effect if the image runs a dev server; the nginx production target ignores these mounts)
- ./src:/app/src:ro
- ./public:/app/public:ro
environment:
- NODE_ENV=development
networks:
- app-network
depends_on:
backend:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
interval: 10s
timeout: 5s
retries: 3
start_period: 30s
restart: unless-stopped
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
# Backend API (example)
backend:
image: node:20-alpine
working_dir: /app
command: npm run dev
ports:
- "3000:3000"
volumes:
- ./backend:/app
environment:
- NODE_ENV=development
- DATABASE_URL=postgresql://user:pass@postgres:5432/mydb
networks:
- app-network
depends_on:
postgres:
condition: service_healthy
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/health"]
interval: 10s
timeout: 5s
retries: 5
# Database
postgres:
image: postgres:16-alpine
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: pass
POSTGRES_DB: mydb
ports:
- "5432:5432"
volumes:
- postgres-data:/var/lib/postgresql/data
networks:
- app-network
healthcheck:
test: ["CMD-SHELL", "pg_isready -U user"]
interval: 10s
timeout: 5s
retries: 5
# Redis cache
redis:
image: redis:7-alpine
ports:
- "6379:6379"
volumes:
- redis-data:/data
networks:
- app-network
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 3s
retries: 3
# Nginx reverse proxy (production simulation)
nginx:
image: nginx:1.25-alpine
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
- ./nginx/ssl:/etc/nginx/ssl:ro
networks:
- app-network
depends_on:
- frontend
- backend
restart: unless-stopped
networks:
app-network:
driver: bridge
volumes:
postgres-data:
driver: local
redis-data:
driver: local
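Day-to-day usage of this Compose file boils down to a handful of commands; a quick sketch using the Compose v2 CLI:
# Build images and start the whole stack in the background
docker compose up -d --build
# Check service status and health
docker compose ps
# Tail the frontend logs
docker compose logs -f frontend
# Tear everything down (add -v to also remove the named volumes)
docker compose down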
Docker Compose best practices:
Use depends_on together with condition to guarantee service startup order.
Now let's put this into practice. Suppose we want to deploy a React e-commerce frontend to a Kubernetes cluster with high availability and automatic scaling.
# Project structure
frontend-app/
├── src/
├── public/
├── package.json
├── Dockerfile
├── .dockerignore
├── nginx.conf
├── nginx-default.conf
└── k8s/
├── deployment.yaml
├── service.yaml
├── ingress.yaml
├── hpa.yaml
└── configmap.yaml
# .dockerignore - keep unnecessary files out of the build context
node_modules
npm-debug.log
.env
.env.local
.git
.gitignore
README.md
.dockerignore
.vscode
.idea
coverage
.eslintcache
build
dist
*.md
# .github/workflows/docker-build.yml
name: Build and Push Docker Image
on:
push:
branches: [main, develop]
tags:
- 'v*'
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
build-and-push:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Check out code
uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to the Container Registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract Docker metadata
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=ref,event=branch
type=ref,event=pr
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=sha,prefix={{branch}}-
- name: Build and push the Docker image
uses: docker/build-push-action@v5
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
build-args: |
NODE_ENV=production
REACT_APP_API_URL=${{ secrets.API_URL }}
REACT_APP_VERSION=${{ github.sha }}
- name: Scan the image for vulnerabilities
uses: aquasecurity/trivy-action@master
with:
image-ref: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
format: 'sarif'
output: 'trivy-results.sarif'
- name: Upload the scan results
uses: github/codeql-action/upload-sarif@v2
if: always()
with:
sarif_file: 'trivy-results.sarif'
# k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend-app
namespace: production
labels:
app: frontend
version: v1
spec:
replicas: 3
revisionHistoryLimit: 10
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
version: v1
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8080"
prometheus.io/path: "/metrics"
spec:
# Pod anti-affinity: spread replicas across nodes
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- frontend
topologyKey: kubernetes.io/hostname
# Init container: wait until the backend service is ready
initContainers:
- name: wait-for-backend
image: busybox:1.36
command: ['sh', '-c', 'until nc -z backend-service 3000; do echo waiting for backend; sleep 2; done']
containers:
- name: frontend
image: ghcr.io/your-org/frontend-app:latest
imagePullPolicy: Always
ports:
- name: http
containerPort: 8080
protocol: TCP
# Environment variables
env:
- name: NODE_ENV
value: "production"
- name: REACT_APP_API_URL
valueFrom:
configMapKeyRef:
name: frontend-config
key: api-url
- name: REACT_APP_VERSION
value: "v1.0.0"
# Resource requests and limits
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "256Mi"
cpu: "200m"
# Liveness probe - is the container still running?
livenessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
# Readiness probe - is the container ready to receive traffic?
readinessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 10
periodSeconds: 5
timeoutSeconds: 3
successThreshold: 1
failureThreshold: 3
# Startup probe - give the application extra time to start
startupProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 0
periodSeconds: 5
timeoutSeconds: 3
successThreshold: 1
failureThreshold: 12
# Security settings
securityContext:
runAsNonRoot: true
runAsUser: 101
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop:
- ALL
# Mount the configuration files
volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/conf.d
readOnly: true
- name: tmp
mountPath: /tmp
- name: cache
mountPath: /var/cache/nginx
# Volume definitions
volumes:
- name: nginx-config
configMap:
name: nginx-config
- name: tmp
emptyDir: {}
- name: cache
emptyDir: {}
# Grace period for graceful shutdown
terminationGracePeriodSeconds: 30
# Image pull secret (for a private registry)
# imagePullSecrets:
# - name: ghcr-secret
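Applying the Deployment and watching the rollout can be done directly with kubectl; a sketch (the full deploy script later in this article automates these steps):
# Apply the manifest and wait for the rolling update to finish
kubectl apply -f k8s/deployment.yaml -n production
kubectl rollout status deployment/frontend-app -n production
# Inspect the resulting Pods and how they are spread across nodes
kubectl get pods -n production -l app=frontend -o wide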
# k8s/service.yaml
apiVersion: v1
kind: Service
metadata:
name: frontend-service
namespace: production
labels:
app: frontend
spec:
type: ClusterIP
selector:
app: frontend
ports:
- name: http
port: 80
targetPort: http
protocol: TCP
sessionAffinity: None
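Since the Service is ClusterIP-only, a quick way to test it from a workstation is port-forwarding; a sketch:
# Forward local port 8080 to the Service's port 80
kubectl port-forward svc/frontend-service 8080:80 -n production
# In another terminal, hit the health endpoint served by nginx
curl -f http://localhost:8080/health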
# k8s/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: frontend-ingress
namespace: production
annotations:
kubernetes.io/ingress.class: "nginx"
cert-manager.io/cluster-issuer: "letsencrypt-prod"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
# Enable gzip compression
nginx.ingress.kubernetes.io/enable-compression: "true"
# Rate limiting
nginx.ingress.kubernetes.io/limit-rps: "100"
# CORS settings
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, OPTIONS"
nginx.ingress.kubernetes.io/cors-allow-origin: "https://example.com"
spec:
tls:
- hosts:
- www.example.com
- example.com
secretName: frontend-tls
rules:
- host: www.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: frontend-service
port:
number: 80
- host: example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: frontend-service
port:
number: 80
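Once cert-manager has issued the certificate, the Ingress and TLS secret can be verified like this (a sketch; example.com is the placeholder domain from the manifest):
# Confirm the Ingress got an address and the TLS secret exists
kubectl get ingress frontend-ingress -n production
kubectl get secret frontend-tls -n production
# Verify HTTPS is served and HTTP gets redirected
curl -I https://www.example.com
curl -I http://www.example.com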
# k8s/hpa.yaml - autoscaling
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: frontend-hpa
namespace: production
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: frontend-app
minReplicas: 3
maxReplicas: 10
metrics:
# Scale on CPU utilization
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70
# Scale on memory utilization
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 80
behavior:
scaleUp:
stabilizationWindowSeconds: 60
policies:
- type: Percent
value: 50
periodSeconds: 60
- type: Pods
value: 2
periodSeconds: 60
selectPolicy: Max
scaleDown:
stabilizationWindowSeconds: 300
policies:
- type: Percent
value: 10
periodSeconds: 60
- type: Pods
value: 1
periodSeconds: 60
selectPolicy: Min
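The HPA relies on the metrics-server (or an equivalent metrics pipeline) being installed in the cluster; its behaviour can be observed like this (a sketch):
# Watch current vs. target utilization and the replica count
kubectl get hpa frontend-hpa -n production --watch
# See recent scaling decisions and events
kubectl describe hpa frontend-hpa -n production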
# k8s/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: frontend-config
namespace: production
data:
api-url: "https://api.example.com"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-config
namespace: production
data:
default.conf: |
server {
listen 8080;
server_name _;
root /usr/share/nginx/html;
index index.html;
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_types text/plain text/css text/xml text/javascript
application/x-javascript application/xml+rss
application/json application/javascript;
location ~* \.(jpg|jpeg|png|gif|ico|css|js|svg|woff|woff2)$ {
expires 1y;
add_header Cache-Control "public, immutable";
}
location / {
try_files $uri $uri/ /index.html;
add_header Cache-Control "no-cache";
}
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
}
#!/bin/bash
# scripts/deploy-to-k8s.sh
set -e
# Variables
NAMESPACE=${1:-production}
IMAGE_TAG=${2:-latest}
DEPLOYMENT_NAME="frontend-app"
echo "🚀 Starting deployment to Kubernetes..."
echo "Namespace: $NAMESPACE"
echo "Image Tag: $IMAGE_TAG"
# Check the kubectl connection
echo "📡 Checking connectivity to the Kubernetes cluster..."
if ! kubectl cluster-info &> /dev/null; then
echo "❌ Unable to connect to the Kubernetes cluster"
exit 1
fi
# Create the namespace if it does not exist
echo "📦 Ensuring the namespace exists..."
kubectl create namespace $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -
# Apply ConfigMaps
echo "⚙️ Applying ConfigMaps..."
kubectl apply -f k8s/configmap.yaml -n $NAMESPACE
# Apply all Kubernetes resources first, then pin the image tag
# (running "kubectl set image" before the apply would let the apply reset the image to the one in the manifest)
echo "📝 Applying Kubernetes resources..."
kubectl apply -f k8s/ -n $NAMESPACE
# Update the Deployment image
echo "🔄 Updating the Deployment image..."
kubectl set image deployment/$DEPLOYMENT_NAME \
frontend=ghcr.io/your-org/frontend-app:$IMAGE_TAG \
-n $NAMESPACE
# Wait for the rollout to complete
echo "⏳ Waiting for the rollout to complete..."
kubectl rollout status deployment/$DEPLOYMENT_NAME -n $NAMESPACE --timeout=5m
# Check Pod status
echo "🔍 Checking Pod status..."
kubectl get pods -n $NAMESPACE -l app=frontend
# Check the Service and Ingress
echo "🌐 Checking the Service and Ingress..."
kubectl get svc,ingress -n $NAMESPACE
# Run health checks
echo "🏥 Running health checks..."
PODS=$(kubectl get pods -n $NAMESPACE -l app=frontend -o jsonpath='{.items[*].metadata.name}')
for POD in $PODS; do
echo "Checking Pod: $POD"
kubectl exec $POD -n $NAMESPACE -- curl -f http://localhost:8080/health || {
echo "❌ Health check failed for Pod $POD"
exit 1
}
done
# Get the Ingress URL
INGRESS_URL=$(kubectl get ingress frontend-ingress -n $NAMESPACE -o jsonpath='{.spec.rules[0].host}')
echo "✅ Deployment complete!"
echo "🔗 Application URL: https://$INGRESS_URL"
echo ""
echo "📊 View logs:"
echo " kubectl logs -f deployment/$DEPLOYMENT_NAME -n $NAMESPACE"
echo ""
echo "🔄 Roll back (if needed):"
echo " kubectl rollout undo deployment/$DEPLOYMENT_NAME -n $NAMESPACE"
#!/bin/bash
# scripts/rollback.sh - rollback script
set -e
NAMESPACE=${1:-production}
DEPLOYMENT_NAME="frontend-app"
REVISION=${2:-0} # 0 means roll back to the previous revision
echo "🔄 Starting rollback..."
echo "Namespace: $NAMESPACE"
echo "Deployment: $DEPLOYMENT_NAME"
if [ "$REVISION" -eq 0 ]; then
echo "Rolling back to the previous revision"
kubectl rollout undo deployment/$DEPLOYMENT_NAME -n $NAMESPACE
else
echo "Rolling back to revision: $REVISION"
kubectl rollout undo deployment/$DEPLOYMENT_NAME -n $NAMESPACE --to-revision=$REVISION
fi
# Wait for the rollback to complete
echo "⏳ Waiting for the rollback to complete..."
kubectl rollout status deployment/$DEPLOYMENT_NAME -n $NAMESPACE --timeout=5m
# Check status
echo "🔍 Checking Pod status..."
kubectl get pods -n $NAMESPACE -l app=frontend
echo "✅ Rollback complete!"
# Summary of optimization techniques
# 1. Use a lightweight base image
FROM node:20-alpine AS builder # alpine is roughly an order of magnitude smaller than the debian-based image
# 2. Multi-stage builds
# The build stage contains all tooling; the runtime stage only ships what it needs
# 3. Combine RUN instructions to reduce the number of layers
RUN apk add --no-cache curl && \
npm ci && \
npm run build && \
npm prune --production
# 4. Clean up files that are not needed at runtime
RUN rm -rf /var/cache/apk/* \
/tmp/* \
/root/.npm
# 5. Use .dockerignore to keep unnecessary files out of the build context
# node_modules, .git, tests, etc.
# Resulting size in this example:
# Before optimization: 1.2GB
# After optimization: 45MB (a roughly 96% reduction)
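To verify the effect on your own image, compare sizes and inspect which layers contribute most; a sketch (the image name is an example):
# Compare the final image size across tags
docker image ls my-frontend
# Break the size down by layer to spot expensive RUN/COPY steps
docker history my-frontend:optimized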
# Kubernetes Pod Security Context
securityContext:
# Pod-level security settings
runAsNonRoot: true
runAsUser: 1000
runAsGroup: 3000
fsGroup: 2000
seccompProfile:
type: RuntimeDefault
# Container-level security settings
containers:
- name: frontend
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop:
- ALL
runAsNonRoot: true
runAsUser: 101
# Network Policy - restrict network traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: frontend-network-policy
namespace: production
spec:
podSelector:
matchLabels:
app: frontend
policyTypes:
- Ingress
- Egress
ingress:
# Only allow traffic from the ingress controller
- from:
- namespaceSelector:
matchLabels:
name: ingress-nginx
ports:
- protocol: TCP
port: 8080
egress:
# Allow access to the backend API
- to:
- podSelector:
matchLabels:
app: backend
ports:
- protocol: TCP
port: 3000
# Allow DNS lookups
- to:
- namespaceSelector:
matchLabels:
name: kube-system
ports:
- protocol: UDP
port: 53
# Prometheus ServiceMonitor
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: frontend-monitor
namespace: production
labels:
app: frontend
spec:
selector:
matchLabels:
app: frontend
endpoints:
- port: http
path: /metrics
interval: 30s
// Integrating Prometheus metrics in the frontend app (optional)
// metrics-middleware.ts
import { register, Counter, Histogram } from 'prom-client';
// HTTP request counter
const httpRequestCounter = new Counter({
name: 'frontend_http_requests_total',
help: 'Total number of HTTP requests',
labelNames: ['method', 'status', 'path']
});
// Response time histogram
const httpRequestDuration = new Histogram({
name: 'frontend_http_request_duration_seconds',
help: 'Duration of HTTP requests in seconds',
labelNames: ['method', 'status', 'path'],
buckets: [0.1, 0.5, 1, 2, 5]
});
export function metricsMiddleware(req, res, next) {
const start = Date.now();
res.on('finish', () => {
const duration = (Date.now() - start) / 1000;
httpRequestCounter.inc({
method: req.method,
status: res.statusCode,
path: req.route?.path || req.path
});
httpRequestDuration.observe(
{
method: req.method,
status: res.statusCode,
path: req.route?.path || req.path
},
duration
);
});
next();
}
// Metrics endpoint
export async function metricsHandler(req, res) {
res.set('Content-Type', register.contentType);
res.end(await register.metrics());
}
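These metrics are served by a Node process rather than by the nginx container in the Deployment above, so the port-forward target below is a placeholder for whichever Pod actually exposes /metrics; a rough sketch for spot-checking the endpoint before Prometheus scrapes it:
# Port-forward to the workload that serves the metrics endpoint (placeholder name)
kubectl port-forward deployment/frontend-app 8080:8080 -n production
# In another terminal, fetch the raw Prometheus exposition output
curl -s http://localhost:8080/metrics | head -n 20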
# Blue-green deployment - two separate Deployments
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend-app-blue
namespace: production
spec:
replicas: 3
selector:
matchLabels:
app: frontend
version: blue
template:
metadata:
labels:
app: frontend
version: blue
spec:
containers:
- name: frontend
image: ghcr.io/your-org/frontend-app:v1.0.0
# ... remaining settings
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend-app-green
namespace: production
spec:
replicas: 3
selector:
matchLabels:
app: frontend
version: green
template:
metadata:
labels:
app: frontend
version: green
spec:
containers:
- name: frontend
image: ghcr.io/your-org/frontend-app:v2.0.0
# ... remaining settings
---
# Switch traffic by changing the Service selector
apiVersion: v1
kind: Service
metadata:
name: frontend-service
namespace: production
spec:
selector:
app: frontend
version: blue # change to green to switch traffic to the new version
ports:
- port: 80
targetPort: 8080
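Switching traffic from blue to green is then a single change to the Service selector; a sketch using kubectl patch:
# Point the Service at the green Deployment
kubectl patch service frontend-service -n production \
  -p '{"spec":{"selector":{"app":"frontend","version":"green"}}}'
# Roll back instantly by patching the selector back to blue
kubectl patch service frontend-service -n production \
  -p '{"spec":{"selector":{"app":"frontend","version":"blue"}}}'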
# Canary release with Flagger (requires Flagger to be installed)
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: frontend-canary
namespace: production
spec:
targetRef:
apiVersion: apps/v1
kind: Deployment
name: frontend-app
service:
port: 80
targetPort: 8080
analysis:
interval: 1m
threshold: 5
maxWeight: 50
stepWeight: 10
metrics:
- name: request-success-rate
thresholdRange:
min: 99
interval: 1m
- name: request-duration
thresholdRange:
max: 500
interval: 1m
webhooks:
- name: load-test
url: http://load-tester.test/
timeout: 5s
metadata:
type: cmd
cmd: "hey -z 1m -q 10 -c 2 http://frontend-service.production/"
The core value of containerization: containerization solves the classic "it works on my machine" problem. Docker gives us environment consistency, portability, and fast deployments, and in our example a multi-stage build cut the image size by roughly 96%.
Kubernetes orchestration: Kubernetes adds enterprise-grade capabilities such as autoscaling, self-healing, rolling updates, and service discovery, making it a key building block of cloud-native architecture.
Key hands-on takeaways:
Cost optimization: how do you optimize cloud spend further once you are containerized? Consider Spot Instances, resource sharing, and scaling down during off-peak hours.
Service mesh: as the number of microservices grows, consider a service mesh such as Istio or Linkerd for finer-grained traffic control, security, and observability.
GitOps workflow: use ArgoCD or Flux for declarative Kubernetes deployments and take infrastructure as code to its logical conclusion.
Hands-on challenge: build a complete containerized deployment for your own frontend project: