Day 14 Kubernetes Persistence - GlusterFS + Heketi Installation and Basic Operations

Today's focus (TAG): kubernetes, k8s, PV, persistent volume, PVC, persistent volume claim, Pod use PVC, GlusterFS, Heketi, SC, StorageClass
Today we will test persistent storage for Kubernetes on bare metal, using two open-source projects: GlusterFS (the storage layer) and Heketi (a management front end for GlusterFS), and integrate them with the Kubernetes cluster built in the previous posts. If things break or fail partway through the installation, refer to the storage-cluster repair notes at the bottom of this article, together with the Kubernetes rebuild steps from the previous days. On the GlusterFS side, the main requirement is that the data disk is clean (reportedly it wants a whole partition or disk to itself). The Heketi side rarely causes trouble: copy the template, change the account/password parameters, and it is good to go. Resource types with replica requirements are designed around the StorageClass configuration. For more advanced usage, see the official site (even though the project has been archived); for detailed operations, see the reference article below.

References

https://blog.csdn.net/chengyuming123/java/article/details/86539986

Hardware Used

Network Switch

  • Quantity: 1
  • Model: D-Link 1210-28 (L2 Switch)

Bare Metal

Master Node

  • Quantity: 1
  • Ubuntu: 16.04 / 18.04
  • Docker Version: 19.03
  • CPU: E3-1230_v3 x 1
  • RAM: 16GB
  • OS_Disk: 120 GB (SSD)
  • Data_Disk: 500 GB (HDD)
  • Network: 1Gbps

Worker Node

  • Quantity: 2
  • Ubuntu: 16.04 / 18.04
  • Docker Version: 19.03
  • CPU: E3-1230_v3
  • RAM: 8 GB
  • OS_Disk: 120 GB (SSD)
  • Data_Disk: 250 GB (HDD)
  • Network: 1Gbps

GlusterFS + Heketi on Kubernetes

Environment Requirements

  1. Ubuntu 18.04
  2. Kubernetes 1.15.6 (based on Docker)
  3. 2 disks per node (OS/Data)
  4. 3 schedulable Kubernetes nodes (the master's taint can be removed to make it schedulable)

Node layout (reference settings)

Name          Role    LAN IP      OS_Disk     Data_Disk   Data_Disk device
sdn-k8s-b2-1  Master  10.0.0.224  120G (SSD)  500G (HDD)  /dev/sdb
sdn-k8s-b2-2  Worker  10.0.0.225  120G (SSD)  250G (HDD)  /dev/sdb
sdn-k8s-b2-3  Worker  10.0.0.226  120G (SSD)  250G (HDD)  /dev/sdb

Prepare the data disk on each node

  1. Check the Data_Disk device path
  • Command
sudo fdisk -l
  • System output (screenshot omitted)
  2. Wipe the data on Data_Disk
  • Command
sudo wipefs -a <disk-device-path>
  • Force wipe
sudo wipefs -af <disk-device-path>
  • Example
sudo wipefs -a /dev/sdb
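  • Optional sanity check (my addition, not in the original notes): with no options, wipefs only prints the signatures it finds, so empty output here means the disk is clean
sudo wipefs /dev/sdb
lsblk -f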

Install the Heketi environment

  1. Download the binary release (v10.0.0)
wget https://github.com/heketi/heketi/releases/download/v10.0.0/heketi-v10.0.0.linux.amd64.tar.gz
  2. Extract and move the binaries into the system path
tar -zxvf heketi-v10.0.0.linux.amd64.tar.gz heketi/
cp heketi/heketi /usr/bin/
cp heketi/heketi-cli /usr/bin/
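  • A quick check that the binaries are reachable on the PATH (assuming the release supports the usual --version flag):
heketi-cli --version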

Install the GlusterFS client (the version inside the DaemonSet pod is 7.1)

  1. Install the GlusterFS client
sudo add-apt-repository ppa:gluster/glusterfs-7
sudo apt-get update
sudo apt-get install glusterfs-client
  2. Verify the installation
  • Command
glusterfs --version

  • System output
root@sdn-k8s-b2-1:~# glusterfs --version
glusterfs 7.7
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
  3. Fix the mount.glusterfs helper (ref)
  • Test command
mount.glusterfs -V
  • Error output
root@sdn-k8s-b2-1:~# mount.glusterfs -V
/sbin/mount.glusterfs: 650: shift: can't shift that many

  • Fix
nano /sbin/mount.glusterfs
  • Change the shebang
Original
#!/bin/sh
After the change
#!/bin/bash

  4. Fix an unidentified (and infuriating) error
  • (clears leftover state; this fixes the GlusterFS DaemonSet pods failing to start later)
rm -rf /etc/glusterfs /var/lib/glusterd

Enable the kernel modules

  1. Load the modules
  • Commands
sudo modprobe dm_thin_pool
sudo modprobe dm_mirror
sudo modprobe dm_snapshot
  2. Check the loaded modules
  • Commands
lsmod | grep dm_thin_pool
lsmod | grep dm_mirror
lsmod | grep dm_snapshot
  • Output
ubuntu@sdn-k8s-b2-1:~$ lsmod | grep dm_thin_pool
dm_thin_pool           69632  0
dm_persistent_data     73728  1 dm_thin_pool
dm_bio_prison          20480  1 dm_thin_pool

ubuntu@sdn-k8s-b2-1:~$ lsmod | grep dm_mirror
dm_mirror              24576  0
dm_region_hash         20480  1 dm_mirror
dm_log                 20480  2 dm_region_hash,dm_mirror

ubuntu@sdn-k8s-b2-1:~$ lsmod | grep dm_snapshot
dm_snapshot            40960  0
dm_bufio               28672  2 dm_persistent_data,dm_snapshot
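  • Note: modprobe does not persist across reboots. One common way (my addition, not in the original notes) to load these modules automatically at boot is to register them with systemd-modules-load:
printf "dm_thin_pool\ndm_mirror\ndm_snapshot\n" | sudo tee /etc/modules-load.d/glusterfs.conf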

Host ports used by Gluster

Port         Purpose
2222         sshd inside the GlusterFS pod
24007        Gluster daemon
24008        GlusterFS management
49152-49251  Ports that each brick may use
  • Check command
netstat -tupln | grep <port>
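  • For example, to confirm nothing already occupies the Gluster daemon port before deploying:
netstat -tupln | grep 24007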

Untaint the Kubernetes master so Gluster can be deployed on it (single-node untaint)

kubectl taint nodes --all node-role.kubernetes.io/master-
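  • To untaint only a single node instead of all of them, name it explicitly (using the master from the node table above):
kubectl taint nodes sdn-k8s-b2-1 node-role.kubernetes.io/master-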

Install the main services on the Master node

Download the official Gluster-on-Kubernetes integration project

git clone https://github.com/gluster/gluster-kubernetes.git
  • Enter the project directory
cd gluster-kubernetes/deploy

Modify the Gluster deployment YAML

nano kube-templates/glusterfs-daemonset.yaml
  • Changes
# Comment out
- "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh readiness; else systemctl status glusterd.service; fi"
# Add
- "systemctl status glusterd.service"
  • The file after the changes
---
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: glusterfs
  labels:
    glusterfs: daemonset
  annotations:
    description: GlusterFS DaemonSet
    tags: glusterfs
spec:
  template:
    metadata:
      name: glusterfs
      labels:
        glusterfs: pod
        glusterfs-node: pod
    spec:
      nodeSelector:
        storagenode: glusterfs
      hostNetwork: true
      containers:
      - image: gluster/gluster-centos:latest
        imagePullPolicy: IfNotPresent
        name: glusterfs
        env:
        # set GLUSTER_BLOCKD_STATUS_PROBE_ENABLE to "1" so the
        # readiness/liveness probe validate gluster-blockd as well
        - name: GLUSTER_BLOCKD_STATUS_PROBE_ENABLE
          value: "1"
        - name: GB_GLFS_LRU_COUNT
          value: "15"
        - name: TCMU_LOGDIR
          value: "/var/log/glusterfs/gluster-block"
        resources:
          requests:
            memory: 100Mi
            cpu: 100m
        volumeMounts:
        - name: glusterfs-heketi
          mountPath: "/var/lib/heketi"
        - name: glusterfs-run
          mountPath: "/run"
        - name: glusterfs-lvm
          mountPath: "/run/lvm"
        - name: glusterfs-etc
          mountPath: "/etc/glusterfs"
        - name: glusterfs-logs
          mountPath: "/var/log/glusterfs"
        - name: glusterfs-config
          mountPath: "/var/lib/glusterd"
        - name: glusterfs-dev-disk
          mountPath: "/dev/disk"
        - name: glusterfs-dev-mapper
          mountPath: "/dev/mapper"
        - name: glusterfs-misc
          mountPath: "/var/lib/misc/glusterfsd"
        - name: glusterfs-cgroup
          mountPath: "/sys/fs/cgroup"
          readOnly: true
        - name: glusterfs-ssl
          mountPath: "/etc/ssl"
          readOnly: true
        - name: kernel-modules
          mountPath: "/usr/lib/modules"
          readOnly: true
        securityContext:
          capabilities: {}
          privileged: true
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            #- "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh readiness; else systemctl status glusterd.service; fi"  //注释
            - "systemctl status glusterd.service"  //添加
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            #- "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh readiness; else systemctl status glusterd.service; fi"  //注释
            - "systemctl status glusterd.service"   //添加
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
      volumes:
      - name: glusterfs-heketi
        hostPath:
          path: "/var/lib/heketi"
      - name: glusterfs-run
      - name: glusterfs-lvm
        hostPath:
          path: "/run/lvm"
      - name: glusterfs-etc
        hostPath:
          path: "/etc/glusterfs"
      - name: glusterfs-logs
        hostPath:
          path: "/var/log/glusterfs"
      - name: glusterfs-config
        hostPath:
          path: "/var/lib/glusterd"
      - name: glusterfs-dev-disk
        hostPath:
          path: "/dev/disk"
      - name: glusterfs-dev-mapper
        hostPath:
          path: "/dev/mapper"
      - name: glusterfs-misc
        hostPath:
          path: "/var/lib/misc/glusterfsd"
      - name: glusterfs-cgroup
        hostPath:
          path: "/sys/fs/cgroup"
      - name: glusterfs-ssl
        hostPath:
          path: "/etc/ssl"
      - name: kernel-modules
        hostPath:
          path: "/usr/lib/modules"
  1. Edit topology.json, the description of the Gluster cluster, which is then handed to Heketi.
nano topology.json
  • Fields to modify
  • manage: the Kubernetes node name
  • storage: the node's LAN IP
  • devices: the device path of Data_Disk on that node (it should match the disk prepared earlier)
  • Paste after modifying
{
"clusters": [
{
  "nodes": [
    {
      "node": {
        "hostnames": {
          "manage": [
            "sdn-k8s-b2-1"
          ],
          "storage": [
            "10.0.0.224"
          ]
        },
        "zone": 1
      },
      "devices": [
        "/dev/sdb1"
      ]
    },
    {
      "node": {
        "hostnames": {
          "manage": [
            "sdn-k8s-b2-2"
          ],
          "storage": [
            "10.0.0.225"
          ]
        },
        "zone": 1
      },
      "devices": [
        "/dev/sdb1"
      ]
    },
    {
      "node": {
        "hostnames": {
          "manage": [
            "sdn-k8s-b2-3"
          ],
          "storage": [
            "10.0.0.226"
          ]
        },
        "zone": 1
      },
      "devices": [
        "/dev/sdb1"
      ]
    }
  ]
}
]
}

Configure the Heketi API authentication template

  • Copy the configuration template
cp heketi.json.template heketi.json
  • Edit the configuration
    • use_auth: true
    • jwt > admin > key: admin-key
    • jwt > user > key: user-key
  • heketi.json after editing (the keys shown match the bullets above; substitute your own)
{
	"_port_comment": "Heketi Server Port Number",
	"port" : "8080",

	"_use_auth": "Enable JWT authorization. Please enable for deployment",
	"use_auth" : false,

	"_jwt" : "Private keys for access",
	"jwt" : {
		"_admin" : "Admin has access to all APIs",
		"admin" : {
			"key" : ""
		},
		"_user" : "User only has access to /volumes endpoint",
		"user" : {
			"key" : ""
		}
	},

	"_glusterfs_comment": "GlusterFS Configuration",
	"glusterfs" : {

		"_executor_comment": "Execute plugin. Possible choices: mock, kubernetes, ssh",
		"executor" : "${HEKETI_EXECUTOR}",

		"_db_comment": "Database file name",
		"db" : "/var/lib/heketi/heketi.db",

		"kubeexec" : {
			"rebalance_on_expansion": true
		},

		"sshexec" : {
			"rebalance_on_expansion": true,
			"keyfile" : "/etc/heketi/private_key",
			"port" : "${SSH_PORT}",
			"user" : "${SSH_USER}",
			"sudo" : ${SSH_SUDO}
		}
	},

	"backup_db_to_kube_secret": false
}

Fix gk-deploy for kubectl version differences (ref)

  • Edit command
nano ./gk-deploy
  • Change
Delete every occurrence of the parameter
--show-all
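  • Equivalent one-liner (a shortcut, assuming --show-all only appears as a kubectl flag inside the script):
sed -i 's/ --show-all//g' ./gk-deploy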

Deploy the environment

  • Prepare the Kubernetes environment

  • Run the deployment command

./gk-deploy -g --admin-key <admin-key> --user-key <user-key> topology.json
  • Get the Gluster cluster ID (see the note below)
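  • The screenshot showing the cluster ID is omitted here; it can also be read back later with heketi-cli, once $HEKETI_CLI_SERVER is set as in the Heketi API section below:
heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret <admin-key> cluster list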

Verify the deployment status

  • Node labels
kubectl get node --show-labels | grep -E "NAME|node"

  • GlusterFS DaemonSet pods

  • Heketi Deployment
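  • The original screenshots are omitted; roughly equivalent checks (the glusterfs=pod label comes from the DaemonSet template above, and gk-deploy names the Deployment heketi):
kubectl get pods -l glusterfs=pod -o wide
kubectl get deployment heketi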

Repairing a failed deployment

  • Get the GlusterFS version inside a DaemonSet pod
kubectl exec <pod> -- glusterfs --version
  • If a pod stays Not Ready and never comes up (ref)
On the failing node:
-----
rm -rf /etc/glusterfs /var/lib/glusterd

Poking at the Heketi API for testing

  • Check Heketi status
export HEKETI_CLI_SERVER=$(kubectl get svc/heketi --template 'http://{{.spec.clusterIP}}:{{(index .spec.ports 0).port}}')
curl $HEKETI_CLI_SERVER/hello
  • System output
Hello from Heketi

  • Set the environment variables (ref)
echo "export HEKETI_CLI_SERVER=http://$(kubectl get svc heketi -n heketi -o go-template='{{.spec.clusterIP}}'):8080" >> /etc/profile.d/heketi.sh
echo "alias heketi-cli='heketi-cli --user admin --secret <heketi-admin-key>'" >> ~/.bashrc
source /etc/profile.d/heketi.sh
source ~/.bashrc
echo $HEKETI_CLI_SERVER
  • Work around a Heketi API authorization issue
export HEKETI_CLI_USER=admin
export HEKETI_CLI_KEY=<admin-password>

Test the environment

  • List cluster information with Heketi
heketi-cli cluster list
heketi-cli cluster info <glusterfs-cluster-id>
  • List the node/device topology of the cluster
heketi-cli topology info
  • List node information
heketi-cli node list
heketi-cli node info <glusterfs-node-id>
  • Have Heketi create a volume through GlusterFS
    • --size=n => n GB
    • --replica=3 => number of replicas
heketi-cli volume create --size=2 --replica=3
  • Inspect volume status
heketi-cli volume list
heketi-cli volume info <gluster-volume-id>
  • Delete a volume
heketi-cli volume delete <gluster-volume-id>

Test the environment on Kubernetes (ref)

  • Generate the key
echo -n "mypassword" | base64
  • Build the Secret (using the key)
apiVersion: v1
kind: Secret
type: kubernetes.io/glusterfs
metadata:
  name: heketi-secret
  namespace: heketi
data:
   # base64 encoded password.
   key: aXRyaS1kZXYtYWRtaW4=
  • Deploy the Secret into Kubernetes
kubectl create -f heketi-secret.yaml
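  • A quick check that the Secret landed in the namespace the StorageClass will reference:
kubectl get secret heketi-secret -n heketi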

  • Build the StorageClass

    • resturl: the Heketi endpoint reported by the system; edit to match your cluster
    • clusterid: the Gluster cluster ID obtained earlier
    • restuser: admin
    • secretName / secretNamespace: the Secret created above, holding the admin key used at deploy time
  • Write the YAML file

storageclass-gluster-heketi.yaml
-----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi-storageclass
parameters:
  resturl: "http://10.100.158.73:8080"
  clusterid: "b3a19fbbde8a5703ec5423fa4745a274"
  restauthenabled: "true"
  restuser: "admin"
  secretName: "heketi-secret"
  secretNamespace: "heketi"
  volumetype: "replicate:3"
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
  • Deploy
kubectl apply -f storageclass-gluster-heketi.yaml
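  • Optionally confirm the StorageClass was registered:
kubectl get storageclass gluster-heketi-storageclass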
  • Build the PVC
pvc-gluster-heketi.yaml
-----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gluster-heketi
spec:
  storageClassName: gluster-heketi-storageclass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  • Deploy
kubectl apply -f pvc-gluster-heketi.yaml 
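  • The PVC should reach Bound once Heketi provisions the backing volume:
kubectl get pvc pvc-gluster-heketi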
  • Deploy a Pod that uses the GlusterFS-backed PVC
pod-use-pvc-glusterfs-heketi.yaml
-----
apiVersion: v1
kind: Pod 
metadata: 
  name: ubuntu-use-pvc 
spec:
  containers: 
  - name: pod-use-pvc 
    image: ubuntu 
    command: 
    - sleep 
    - "600000"
    imagePullPolicy: IfNotPresent 
    volumeMounts: 
    - name: gluster-volume 
      mountPath: "/testSpeed" 
      readOnly: false 
  volumes: 
  - name: gluster-volume 
    persistentVolumeClaim: 
      claimName: pvc-gluster-heketi 
  • Deploy
kubectl apply -f pod-use-pvc-glusterfs-heketi.yaml
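  • Once the pod is Running, the Gluster volume should be visible at the mount path (a quick check):
kubectl exec ubuntu-use-pvc -- df -h /testSpeed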

Recovering from a broken or stuck disk

  1. Repair method 1 (ref)
  2. Repair method 2
    1. Reset the node
      kubeadm reset

    2. Reboot the node
      init 6

    3. Wipe the disk
      sudo wipefs -af /dev/<device>

