With so many containers to configure, how do you manage them all? Let's build a template to deploy containers quickly~
Image source: Docker (@Docker) / Twitter
In the previous post we laid out the Kubernetes resources to deploy; today we'll convert them into a Helm chart~
Let's build our own templates!
First, create a chart:
helm create test-web
This generates the basic chart structure:
$ tree test-web
test-web
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml
3 directories, 10 files
charts/: other charts this chart depends on
templates/: the chart's templates
Chart.yaml: basic chart information
values.yaml: the chart's configuration file
Next, we'll build the templates for the frontend and the backend.
The complete files are available on GitHub under Day 27 - test-web/
The backend resources are PersistentVolumeClaim, Deployment, and Service.
First add a definition to _helpers.tpl; test-web.backend stands for the chart fullname plus -backend.
templates/_helpers.tpl
{{- define "test-web.backend" -}}
  {{- printf "%s-backend" (include "test-web.fullname" .) -}}
{{- end -}}
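As a quick check of what this renders: assuming the default `test-web.fullname` helper that `helm create` generates (which collapses to the release name when the release name already contains the chart name), a release installed as `test-web` would produce:

```yaml
# Hypothetical render for a release named "test-web"
metadata:
  name: test-web-backend
```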
Create a backend directory under templates to hold the backend templates:
mkdir ./templates/backend
Original manifest: Day 27 - backend.yaml#L1
Next, build the pvc.yaml template from that manifest~
First define the conditions for creating the PVC:
enabled: only create the PVC when enabled
existingClaim: if an existing claim is specified, skip creating one
templates/backend/pvc.yaml
{{- if and .Values.persistence.backend.enabled (not .Values.persistence.backend.existingClaim) }}
...
...
...
{{- end }}
values.yaml
persistence:
  backend:
    enabled: true
    existingClaim: ""
Set a variable $backend_pvc to make the following steps easier:
templates/backend/pvc.yaml
{{- $backend_pvc := .Values.persistence.backend -}}
Basic PVC settings
templates/backend/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata.name: use the backend name defined in _helpers.tpl
templates/backend/pvc.yaml
metadata:
  name: {{ template "test-web.backend" . }}
metadata.labels: apply the helm chart's internal labels
templates/backend/pvc.yaml
metadata:
  labels: {{- include "test-web.labels" . | nindent 4 }}
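`nindent 4` behaves like `indent 4` but prepends a newline, which is why the include can sit on the same line as `labels:`. Assuming `test-web.labels` is the standard helper that `helm create` generates, the output looks roughly like:

```yaml
metadata:
  labels:
    helm.sh/chart: test-web-0.1.0
    app.kubernetes.io/name: test-web
    app.kubernetes.io/instance: test-web
    app.kubernetes.io/managed-by: Helm
```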
metadata.annotations: keep the PVC resource when the release is uninstalled
templates/backend/pvc.yaml
metadata:
  annotations:
  {{- if eq $backend_pvc.resourcePolicy "keep" }}
    helm.sh/resource-policy: keep
  {{- end }}
values.yaml
persistence:
  backend:
    resourcePolicy: keep
spec.storageClassName: apply the storageClass setting if one is provided
templates/backend/pvc.yaml
spec:
  {{- if eq "-" $backend_pvc.storageClass }}
  storageClassName: ""
  {{- else }}
  storageClassName: {{ $backend_pvc.storageClass }}
  {{- end }}
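This follows a convention used by many community charts: setting `storageClass: "-"` renders `storageClassName: ""`, which turns off dynamic provisioning so the claim only binds to a pre-created PV, while any other non-empty value names a StorageClass directly. In values form:

```yaml
persistence:
  backend:
    # storageClass: "-"         # -> storageClassName: "" (no dynamic provisioning)
    storageClass: "nfs-client"  # -> storageClassName: nfs-client
```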
values.yaml
persistence:
  backend:
    storageClass: "nfs-client"
spec.accessModes: set the PVC access mode
templates/backend/pvc.yaml
spec:
  accessModes:
    - {{ $backend_pvc.accessMode }}
values.yaml
persistence:
  backend:
    accessMode: ReadWriteMany
spec.resources: set the resources the PVC requests
templates/backend/pvc.yaml
spec:
  resources:
    requests:
      storage: {{ $backend_pvc.size }}
values.yaml
persistence:
  backend:
    size: 1Gi
Original manifest: Day 27 - backend.yaml#L17
Next, build the deployment.yaml template~
Define shortcuts to the configuration paths:
templates/backend/deployment.yaml
{{- $backend := .Values.backend -}}
{{- $backend_pvc := .Values.persistence.backend -}}
Basic Deployment settings
Use the name defined in _helpers.tpl, plus the label app: backend.
templates/backend/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "test-web.backend" . }}
  labels:
    {{- include "test-web.labels" . | nindent 4 }}
    app: backend
spec.selector, spec.template.metadata: use the helm chart's labels plus the custom app: backend
templates/backend/deployment.yaml
spec:
  selector:
    matchLabels:
      {{- include "test-web.selectorLabels" . | nindent 6 }}
      app: backend
  template:
    metadata:
      labels:
        {{- include "test-web.selectorLabels" . | nindent 8 }}
        app: backend
spec.strategy: set the deployment strategy
templates/backend/deployment.yaml
spec:
  replicas: 1
  strategy:
    type: {{ $backend.updateStrategy.type }}
    {{- if eq $backend.updateStrategy.type "RollingUpdate" }}
    rollingUpdate:
      maxSurge: {{ $backend.updateStrategy.rollingUpdate.maxSurge }}
      maxUnavailable: {{ $backend.updateStrategy.rollingUpdate.maxUnavailable }}
    {{- else }}
    rollingUpdate: null
    {{- end }}
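The `else` branch matters when switching strategies on an existing Deployment: rendering `rollingUpdate: null` explicitly clears any rollingUpdate parameters left over from a previous revision. With `type: Recreate` in values, the block would render as:

```yaml
spec:
  replicas: 1
  strategy:
    type: Recreate
    rollingUpdate: null
```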
values.yaml
backend:
  updateStrategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
imagePullSecrets: the secret used to pull images
templates/backend/deployment.yaml
spec:
  template:
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
values.yaml
imagePullSecrets:
  - test-web-robot
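`with` does two things here: it skips the whole block when `imagePullSecrets` is empty, and it rebinds `.` to the list so `toYaml .` serializes it. With the values above, the rendered result is:

```yaml
imagePullSecrets:
  - test-web-robot
```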
initContainers: set up the init container; its volume applies the subPath setting
templates/backend/deployment.yaml
spec:
  template:
    spec:
      initContainers:
        - name: init-backend
          image: {{ $backend.init.image.repository }}:{{ $backend.init.image.tag }}
          command: ["/bin/sh"]
          args:
            - "-c"
            - "[ ! -e /app/data/todos.json ] && echo [] > /app/data/todos.json || true"
          volumeMounts:
            - name: data
              mountPath: /app/data
              subPath: {{ $backend_pvc.subPath }}
values.yaml
backend:
  init:
    image:
      repository: alpine
      tag: "3.16.2"
persistence:
  backend:
    subPath: ""
containers: set up the container; the volume applies the subPath setting, and the app inside picks up the externally configured container port
templates/backend/deployment.yaml
spec:
  template:
    spec:
      containers:
        - name: backend
          image: "{{ $backend.container.image.repository }}:{{ $backend.container.image.tag | default .Chart.AppVersion }}"
          resources:
            {{- toYaml $backend.container.resources | nindent 12 }}
          livenessProbe:
            tcpSocket:
              port: {{ $backend.container.port }}
            initialDelaySeconds: 5
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 3
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /
              port: {{ $backend.container.port }}
            initialDelaySeconds: 5
            timeoutSeconds: 2
            successThreshold: 1
            failureThreshold: 3
            periodSeconds: 10
          env:
            - name: HOST_PORT
              value: {{ $backend.container.port | quote }}
          ports:
            - containerPort: {{ $backend.container.port }}
              name: backend
          volumeMounts:
            - name: data
              mountPath: /app/data
              subPath: {{ $backend_pvc.subPath }}
values.yaml
backend:
  container:
    port: 80
volumes: wire in the PVC defined earlier
templates/backend/deployment.yaml
spec:
  template:
    spec:
      volumes:
        - name: data
        {{- if $backend_pvc.enabled }}
          persistentVolumeClaim:
            claimName: {{ $backend_pvc.existingClaim | default (include "test-web.backend" .) }}
        {{- else }}
          emptyDir: {}
        {{- end }}
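`default` picks `existingClaim` when it is non-empty and otherwise falls back to the PVC this chart creates. With persistence enabled, no existing claim, and (assuming) a release named `test-web`, the volume renders roughly as:

```yaml
volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-web-backend
```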
values.yaml
persistence:
  backend:
    enabled: true
    existingClaim: ""
restartPolicy: set the restart policy
templates/backend/deployment.yaml
spec:
  template:
    spec:
      restartPolicy: Always
Original manifest: Day 27 - backend.yaml#L94
Next, build the service.yaml template~
templates/backend/service.yaml
{{- $backend := .Values.backend -}}
{{- $backend_svc := .Values.service.backend -}}
Basic Service settings
templates/backend/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "test-web.backend" . }}
  labels:
    {{- include "test-web.labels" . | nindent 4 }}
selector: match the helm chart's labels plus the custom app: backend
templates/backend/service.yaml
spec:
  selector:
    {{- include "test-web.selectorLabels" . | nindent 4 }}
    app: backend
type, ports
templates/backend/service.yaml
spec:
  type: {{ $backend_svc.type }}
  ports:
    - name: http
      protocol: TCP
      port: {{ $backend_svc.port }}
      targetPort: {{ $backend.container.port }}
      {{- if eq $backend_svc.type "NodePort" }}
      nodePort: {{ $backend_svc.nodePort }}
      {{- end }}
values.yaml
service:
  backend:
    type: ClusterIP
    port: 80
    # nodePort: 30081
Other settings
templates/backend/service.yaml
spec:
  sessionAffinity: None
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
The frontend resources are Deployment, Service, and Ingress.
First add a definition to _helpers.tpl; test-web.frontend stands for the chart fullname plus -frontend.
templates/_helpers.tpl
{{- define "test-web.frontend" -}}
  {{- printf "%s-frontend" (include "test-web.fullname" .) -}}
{{- end -}}
Create a frontend directory under templates to hold the frontend templates:
mkdir ./templates/frontend
Original manifest: Day 27 - frontend.yaml#L1
Next, build the deployment.yaml template~
Define shortcuts to the configuration paths:
templates/frontend/deployment.yaml
{{- $frontend := .Values.frontend -}}
{{- $frontend_pvc := .Values.persistence.frontend -}}
The Deployment is mostly the same as the backend's, so parts are omitted~
templates/frontend/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "test-web.frontend" . }}
  labels:
    {{- include "test-web.labels" . | nindent 4 }}
    app: frontend
spec:
  selector:
    matchLabels:
      {{- include "test-web.selectorLabels" . | nindent 6 }}
      app: frontend
  replicas: 1
  strategy:
    type: {{ $frontend.updateStrategy.type }}
    {{- if eq $frontend.updateStrategy.type "RollingUpdate"  }}
    rollingUpdate:
      maxSurge: {{ $frontend.updateStrategy.rollingUpdate.maxSurge }}
      maxUnavailable: {{ $frontend.updateStrategy.rollingUpdate.maxUnavailable }}
    {{- else }}
    rollingUpdate: null
    {{- end }}
  template:
    metadata:
      labels:
        {{- include "test-web.selectorLabels" . | nindent 8 }}
        app: frontend
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      restartPolicy: Always
values.yaml
frontend:
  updateStrategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
containers: set up the container; the env automatically picks up the backend Service's name
spec:
  template:
    spec:
      containers:
        - name: frontend
          image: "{{ $frontend.container.image.repository }}:{{ $frontend.container.image.tag | default .Chart.AppVersion }}"
          resources:
            {{- toYaml $frontend.container.resources | nindent 12 }}
          livenessProbe:
            tcpSocket:
              port: {{ $frontend.container.port }}
            initialDelaySeconds: 5
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 3
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /
              port: {{ $frontend.container.port }}
            initialDelaySeconds: 5
            timeoutSeconds: 2
            successThreshold: 1
            failureThreshold: 3
            periodSeconds: 10
          env:
            - name: BACKEND_URL
              value: "{{ include "test-web.backend" . }}"
          ports:
            - containerPort: {{ $frontend.container.port }}
              name: frontend
values.yaml
frontend:
  container:
    image:
      repository: harbor.example.domain.com/test-web/test-frontend
      tag: "dev"
    port: 80
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 100Mi
Original manifest: Day 27 - frontend.yaml#L61
Next, build the service.yaml template~
Define shortcuts to the configuration paths:
templates/frontend/service.yaml
{{- $frontend := .Values.frontend -}}
{{- $frontend_svc := .Values.service.frontend -}}
The Service is mostly the same as the backend's; only the selector changes to app: frontend
templates/frontend/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "test-web.frontend" . }}
  labels:
    {{- include "test-web.labels" . | nindent 4 }}
spec:
  selector:
    {{- include "test-web.selectorLabels" . | nindent 4 }}
    app: frontend
  type: {{ $frontend_svc.type }}
  sessionAffinity: None
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  ports:
    - name: http
      protocol: TCP
      port: {{ $frontend_svc.port }}
      targetPort: {{ $frontend.container.port }}
      {{- if eq $frontend_svc.type "NodePort" }}
      nodePort: {{ $frontend_svc.nodePort }}
      {{- end }}
values.yaml
service:
  frontend:
    type: ClusterIP
    port: 80
    # nodePort: 30080
Original manifest: Day 27 - frontend.yaml#L81
Next, build the ingress.yaml template~
First define the condition for creating the Ingress:
enabled: only create the Ingress when enabled
templates/frontend/ingress.yaml
{{- if .Values.ingress.enabled -}}
...
...
...
{{- end }}
values.yaml
ingress:
  enabled: true
Pull in the chart fullname and the frontend name from _helpers.tpl:
templates/frontend/ingress.yaml
{{- $fullName := include "test-web.fullname" . -}}
{{- $frontendName := include "test-web.frontend" . -}}
Define shortcuts to the configuration paths:
templates/frontend/ingress.yaml
{{- $ingress := .Values.ingress -}}
{{- $frontend_svc := .Values.service.frontend -}}
Set the apiVersion according to the Kubernetes version:
templates/frontend/ingress.yaml
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
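`.Capabilities.KubeVersion.GitVersion` reports the version of the cluster helm is talking to (or whatever `--kube-version` is passed to `helm template`), so on a v1.25 cluster, for example, the first branch wins:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
```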
Basic Ingress settings
templates/frontend/ingress.yaml
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "test-web.labels" . | nindent 4 }}
    {{- toYaml $ingress.labels | nindent 4 }}
rules: iterate over the hosts array from values; each path's backend automatically maps to the frontend Service name and port
templates/frontend/ingress.yaml
spec:
  rules:
    {{- range $ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            {{- if and .pathType (semverCompare ">=1.18-0" $.Capabilities.KubeVersion.GitVersion) }}
            pathType: {{ .pathType }}
            {{- end }}
            backend:
              {{- if semverCompare ">=1.19-0" $.Capabilities.KubeVersion.GitVersion }}
              service:
                name: {{ $frontendName }}
                port:
                  number: {{ $frontend_svc.port }}
              {{- else }}
              serviceName: {{ $frontendName }}
              servicePort: {{ $frontend_svc.port }}
              {{- end }}
          {{- end }}
    {{- end }}
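With the sample values below, a v1.19+ cluster, and (assuming) a release named `test-web`, the rules block renders roughly as:

```yaml
spec:
  rules:
    - host: "test-web.example.domain.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: test-web-frontend
                port:
                  number: 80
```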
values.yaml
ingress:
  labels:
    environment: production
    method: traefik
  hosts:
    - host: test-web.example.domain.com
      paths:
        - path: /
          pathType: Prefix
You can test the chart with the --debug and --dry-run flags:
$ helm install --debug --dry-run --namespace test-web test-web ./test-web
Once everything looks right, install it~
$ helm install --namespace test-web test-web ./test-web
NAME: test-web
LAST DEPLOYED: Tue Oct 11 19:51:58 2022
NAMESPACE: test-web
STATUS: deployed
REVISION: 1
TEST SUITE: None
The final directory structure:
$ tree test-web/
test-web/
├── charts
├── Chart.yaml
├── templates
│   ├── backend
│   │   ├── deployment.yaml
│   │   ├── pvc.yaml
│   │   └── service.yaml
│   ├── frontend
│   │   ├── deployment.yaml
│   │   ├── ingress.yaml
│   │   └── service.yaml
│   └── _helpers.tpl
└── values.yaml
4 directories, 9 files
Finally, let's package the chart and push it to Harbor~ First install the helm-push plugin:
helm plugin install https://github.com/chartmuseum/helm-push
$ helm repo add --username admin test-web https://harbor.example.domain.com/chartrepo/test-web
Password: 
"test-web" has been added to your repositories
$ helm cm-push test-web/ test-web

Once the push succeeds, the chart shows up on Harbor's Helm Charts page.
Finally done with the helm chart...
When I first saw the templates, they looked like complete gibberish...