2025 iThome Ironman Contest · AI & Data · 進擊的 n8n series, Day 21
Day 21: n8n on GKE (Private Cluster + Internal Load Balancer) Deployment Details

Yesterday we went over the deployment considerations for n8n on GKE; today, let's put them into practice!

  1. Terraform: create the GKE private cluster, node SA, and Workload Identity
    Filename: main.tf
terraform {
  required_version = ">= 1.6.0"
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 5.39"
    }
  }
}

provider "google" {
  project = var.project_id
  region  = var.region
}

# ----------- Node Service Account -----------
resource "google_service_account" "gke_nodes" {
  account_id   = "sa-gke-nodes"
  display_name = "GKE Node Service Account for n8n"
}

# Minimum required permissions (Logging, Monitoring, and pulling the n8n image from Artifact Registry)
resource "google_project_iam_member" "nodes_log_writer" {
  project = var.project_id
  role    = "roles/logging.logWriter"
  member  = "serviceAccount:${google_service_account.gke_nodes.email}"
}
resource "google_project_iam_member" "nodes_metric_writer" {
  project = var.project_id
  role    = "roles/monitoring.metricWriter"
  member  = "serviceAccount:${google_service_account.gke_nodes.email}"
}
resource "google_project_iam_member" "nodes_metadata_writer" {
  project = var.project_id
  role    = "roles/stackdriver.resourceMetadata.writer"
  member  = "serviceAccount:${google_service_account.gke_nodes.email}"
}
resource "google_project_iam_member" "nodes_artifact_reader" {
  project = var.project_id
  role    = "roles/artifactregistry.reader"
  member  = "serviceAccount:${google_service_account.gke_nodes.email}"
}

# ----------- GKE Private Cluster -----------
resource "google_container_cluster" "n8n" {
  name     = "n8n-private-cluster"
  location = var.zone

  networking_mode              = "VPC_NATIVE"
  remove_default_node_pool     = true
  initial_node_count           = 1
  network                      = var.vpc
  subnetwork                   = var.subnet

  release_channel {
    channel = "REGULAR"
  }

  # Enable Workload Identity
  workload_identity_config {
    workload_pool = "${var.project_id}.svc.id.goog"
  }

  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = false
    master_ipv4_cidr_block  = "172.16.0.0/28"
  }

  ip_allocation_policy {}

  logging_service    = "logging.googleapis.com/kubernetes"
  monitoring_service = "monitoring.googleapis.com/kubernetes"

  # Recommended: restrict access to the control plane with Master Authorized Networks
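  # A sketch (commented out; the CIDR below is a hypothetical office/VPN
  # range, so adjust it before uncommenting):
  # master_authorized_networks_config {
  #   cidr_blocks {
  #     cidr_block   = "203.0.113.0/24"
  #     display_name = "office"
  #   }
  # }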
}

# ----------- Node Pool -----------
resource "google_container_node_pool" "default" {
  name     = "np-n8n"
  cluster  = google_container_cluster.n8n.name
  location = var.zone

  node_count = 2

  node_config {
    machine_type   = "e2-standard-4"
    service_account = google_service_account.gke_nodes.email
    oauth_scopes   = ["https://www.googleapis.com/auth/cloud-platform"]
    tags           = ["gke-n8n-nodes"]
  }

  management {
    auto_repair  = true
    auto_upgrade = true
  }
}

output "cluster_name" { value = google_container_cluster.n8n.name }
output "node_sa"     { value = google_service_account.gke_nodes.email }

Filename: variables.tf

variable "project_id" { type = string }
variable "region"     { type = string   default = "asia-east1" }
variable "zone"       { type = string   default = "asia-east1-b" }
variable "vpc"        { type = string   default = "default" }         # 依你環境調整
variable "subnet"     { type = string   default = "default" }         # 依你環境調整

Initialize and create

terraform init
terraform apply -auto-approve -var="project_id=<YOUR_PROJECT_ID>"
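
One caveat before deploying anything: the nodes are private, so they have no Internet egress by default, and pulling n8nio/n8n from Docker Hub will fail unless your VPC already has Cloud NAT (or you mirror the image into Artifact Registry). A minimal sketch, with hypothetical router/NAT names:

gcloud compute routers create n8n-router \
  --network=default --region=asia-east1

gcloud compute routers nats create n8n-nat \
  --router=n8n-router --region=asia-east1 \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges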

Fetch the kubeconfig

gcloud container clusters get-credentials n8n-private-cluster --zone asia-east1-b --project <YOUR_PROJECT_ID>
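
A quick sanity check that the credentials work and both nodes have registered; the INTERNAL-IP column should show private addresses only:

kubectl get nodes -o wide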

  2. Kubernetes: deploy n8n (with Workload Identity and an Internal LB)
    If you need n8n to access GCS or Secret Manager as a GSA:
    first create a Google Service Account (e.g. sa-n8n-wi) and grant it the appropriate roles (such as roles/storage.objectViewer).

Then bind it to the KSA n8n-wi.

Create the GSA and bind the identity

# Create the GSA (skip if it already exists)
gcloud iam service-accounts create sa-n8n-wi --display-name="n8n Workload Identity SA"

# Grant read access to GCS objects
gcloud projects add-iam-policy-binding <YOUR_PROJECT_ID> \
  --member="serviceAccount:sa-n8n-wi@<YOUR_PROJECT_ID>.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"

K8s resources (filename: n8n.yaml)

apiVersion: v1
kind: Namespace
metadata:
  name: n8n

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: n8n-wi
  namespace: n8n
  annotations:
    iam.gke.io/gcp-service-account: "sa-n8n-wi@<YOUR_PROJECT_ID>.iam.gserviceaccount.com"

---
apiVersion: v1
kind: Secret
metadata:
  name: n8n-db-secret
  namespace: n8n
type: Opaque
stringData:
  db-password: "changeMeStrong!"
  encryption-key: "pleaseChangeMe_32bytes_min"

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n-main
  namespace: n8n
spec:
  replicas: 1
  selector:
    matchLabels:
      app: n8n
  template:
    metadata:
      labels:
        app: n8n
    spec:
      serviceAccountName: n8n-wi
      containers:
      - name: n8n
        image: n8nio/n8n:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5678
        env:
        # --- Basic settings ---
        - name: N8N_PORT
          value: "5678"
        - name: N8N_PROTOCOL
          value: "http"
        - name: WEBHOOK_URL
          value: "http://n8n-ilb.n8n.svc.cluster.local/"  # 內部呼叫;若前面有 Gateway/Proxy 可改
        - name: N8N_ENCRYPTION_KEY
          valueFrom:
            secretKeyRef:
              name: n8n-db-secret
              key: encryption-key

        # --- DB (Cloud SQL - Private IP) ---
        - name: DB_TYPE
          value: "postgresdb"
        - name: DB_POSTGRESDB_HOST
          value: "<CLOUD_SQL_PRIVATE_IP_OR_DNS>"   # 例 10.XX.XX.XX 或 private DNS
        - name: DB_POSTGRESDB_PORT
          value: "5432"
        - name: DB_POSTGRESDB_DATABASE
          value: "n8n"
        - name: DB_POSTGRESDB_USER
          value: "n8n"
        - name: DB_POSTGRESDB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: n8n-db-secret
              key: db-password

        # --- If you've enabled Queue Mode, add the Redis variables ---
        # - name: N8N_EXECUTIONS_MODE
        #   value: "queue"
        # - name: QUEUE_BULL_REDIS_HOST
        #   value: "redis-n8n-master"
        # - name: QUEUE_BULL_REDIS_PORT
        #   value: "6379"

        readinessProbe:
          httpGet:
            path: /
            port: 5678
          initialDelaySeconds: 15
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /
            port: 5678
          initialDelaySeconds: 30
          periodSeconds: 10
        resources:
          requests:
            cpu: "250m"
            memory: "512Mi"
          limits:
            cpu: "1000m"
            memory: "1Gi"

      # ---- (Optional) Without a Private IP, use a Cloud SQL Auth Proxy sidecar ----
      # - name: cloud-sql-proxy
      #   image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.11.4
      #   args:
      #     - "--port=5432"
      #     - "<PROJECT:REGION:INSTANCE>"   # 例:myproj:asia-east1:n8n-postgres
      #     - "--structured-logs"
      #   securityContext:
      #     runAsNonRoot: true
      #   resources:
      #     requests:
      #       cpu: "50m"
      #       memory: "64Mi"
      #     limits:
      #       cpu: "200m"
      #       memory: "256Mi"

---
apiVersion: v1
kind: Service
metadata:
  name: n8n-ilb
  namespace: n8n
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: n8n
  ports:
    - name: http
      port: 80
      targetPort: 5678
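
A note on the Secret above: its stringData values are placeholders. Rather than committing real credentials to Git, you can generate the Secret imperatively instead (a sketch; create the namespace first and drop the Secret block from n8n.yaml):

kubectl create namespace n8n --dry-run=client -o yaml | kubectl apply -f -
kubectl -n n8n create secret generic n8n-db-secret \
  --from-literal=db-password="$(openssl rand -base64 24)" \
  --from-literal=encryption-key="$(openssl rand -base64 32)" \
  --dry-run=client -o yaml | kubectl apply -f -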

Apply

# Deploy (this creates the namespace, KSA, Secret, Deployment, and Service)
kubectl apply -f n8n.yaml

# Bind KSA → GSA (Workload Identity). The annotation is already set in
# n8n.yaml, so this is only needed to change or re-apply it afterwards;
# note it fails if run before the ServiceAccount exists.
kubectl annotate serviceaccount n8n-wi \
  --namespace n8n \
  iam.gke.io/gcp-service-account=sa-n8n-wi@<YOUR_PROJECT_ID>.iam.gserviceaccount.com \
  --overwrite

# Watch for the ILB IP (it will only ever receive a private IP)
kubectl -n n8n get svc n8n-ilb -w
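
Two quick checks once the pod is Running. First, confirm Workload Identity resolves to the GSA from inside the pod (a sketch; assumes the busybox wget bundled in the n8n image). Then, from a VM in the same VPC, hit the ILB address reported above:

# Should print sa-n8n-wi@<YOUR_PROJECT_ID>.iam.gserviceaccount.com
kubectl -n n8n exec deploy/n8n-main -c n8n -- \
  wget -qO- --header "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email"

# From a VM inside the VPC; replace <ILB_IP> with the EXTERNAL-IP shown above
curl -I "http://<ILB_IP>/"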

That completes the basic deployment! See you tomorrow!


Previous post
Day 20: n8n on GKE (Private Cluster + Internal Load Balancer) Deployment Overview
Next post
Day 22: Redis Deployment and Queue Mode Configuration