2021 iThome 鐵人賽 DAY 5

Yesterday we gave a brief introduction to Prometheus; today we will install it in our cluster using Helm. For an introduction to Helm itself, please refer to other articles. We won't cover it here and will go straight to installing Prometheus.

Prometheus

Installation and Deployment

  1. Install helm

https://helm.sh/docs/intro/install/

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm
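
To double-check that the install script worked, a quick sanity check (standard Helm, not part of the original steps):

helm version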
  2. Initialize a Helm chart repository (prometheus)

https://github.com/prometheus-community/helm-charts

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm search repo prometheus
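
If the search comes back empty or shows stale versions, refreshing the local repo index first usually helps (standard Helm usage):

# refresh the local index of all configured chart repositories
helm repo update
helm search repo prometheus-community/kube-prometheus-stack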
  3. Pull the Helm chart and edit its configuration
helm pull prometheus-community/kube-prometheus-stack
tar -xvf kube-prometheus-stack-12.12.1.tgz
cd kube-prometheus-stack
vim values.yaml
  • prometheus storageSpec

    Change the storageClassName field to the Storage Class you created (a quick way to locate these fields is shown right after the two snippets below).

## Deploy a Prometheus instance
##
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: {Your Storage Class}
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi
  • alertmanager storage

    Change the storageClassName field to the Storage Class you created.

## Configuration for alertmanager
## ref: https://prometheus.io/docs/alerting/alertmanager/
##
alertmanager:
  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        spec:
          storageClassName: prometheus-storage-class
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi
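
values.yaml for this chart is quite long, so as a small aid (my own suggestion, assuming you unpacked the chart as above) you can jump straight to the relevant sections by searching for the field names:

# print line numbers for the storage-related fields mentioned above
grep -n -E "storageSpec|volumeClaimTemplate|storageClassName" values.yaml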
  4. Create a StorageClass for Prometheus
cd ..
mkdir prometheus-storage-class
cd prometheus-storage-class
  • create rbac, storageclass, nfs-client-provisioner
cat <<EOF >./rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
EOF

cat <<EOF >./storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: prometheus-storage-class
provisioner: fuseim.pri/ifs # or choose another name; must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
EOF

cat <<EOF >./nfs-client-provisioner.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.0.1.236 # your NFS server IP (the k8s master set up in the NFS step below)
            - name: NFS_PATH
              value: /var/nfsshare/
      volumes:
        - name: nfs-client-root
          nfs:
            server: {IP} # your NFS server IP; must match the NFS_SERVER value above
            path: /var/nfsshare/
EOF
  • apply rbac, storageclass, nfs-client-provisioner
kubectl apply -f rbac.yaml
kubectl apply -f storageclass.yaml
kubectl apply -f nfs-client-provisioner.yaml
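
Before moving on, it is worth confirming that the StorageClass exists and the provisioner pod is running; the names below come from the manifests above:

# the StorageClass defined in storageclass.yaml
kubectl get storageclass prometheus-storage-class
# the provisioner Deployment (deployed to the default namespace above)
kubectl get pods -n default -l app=nfs-client-provisioner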
  5. Install and configure NFS

    This step installs NFS, which the nfs-client-provisioner above uses as the backing storage for our storage class. If you already have a storage class, you can skip this step.

  • NFS node (k8s master)
apt-get install nfs-kernel-server nfs-common
mkdir /var/nfsshare
chmod -R 777 /var/nfsshare/
echo "/var/nfsshare    *(rw,sync,no_root_squash,no_all_squash)" >> /etc/exports 
/etc/init.d/nfs-kernel-server restart
  • k8s worker nodes
apt-get install nfs-common
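
From a worker node you can check that the export is actually visible (showmount ships with nfs-common; replace the IP with your NFS server, i.e. the k8s master here):

# /var/nfsshare should be listed in the output
showmount -e 10.0.1.236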
  6. Create a k8s namespace for Prometheus
kubectl create ns mornitor
  7. Package the configured chart and install it
cd ..
helm package kube-prometheus-stack
helm install kube-prometheus-stack-12.12.1.tgz --name-template prometheus -n mornitor
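
Assuming the release name prometheus and the mornitor namespace from the command above, you can watch the stack come up with:

# release status should become "deployed"
helm list -n mornitor
# wait for the operator, prometheus, alertmanager and grafana pods to be Running
kubectl get pods -n mornitor -w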
  8. Port-forward the Grafana service
kubectl port-forward --address=0.0.0.0 svc/prometheus-grafana -n mornitor 30001:80

You can then open your host's IP:30001 in a browser to reach the Grafana UI.

Grafana default credentials
account: admin
password: prom-operator
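
If you ever change the password via values.yaml, it can also be read back from the secret the chart creates; the secret name prometheus-grafana below assumes the release name prometheus used above:

# decode the admin password stored by the Grafana sub-chart
kubectl get secret prometheus-grafana -n mornitor -o jsonpath="{.data.admin-password}" | base64 --decode; echo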

  9. Port-forward the Prometheus server
kubectl port-forward --address=0.0.0.0 svc/prometheus-kube-prometheus-prometheus -n mornitor 30002:9090

You can then open your host's IP:30002 in a browser to reach the Prometheus UI.
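
A quick way to confirm Prometheus is really scraping targets is to query its HTTP API through the same port-forward; the built-in up metric reports 1 for every healthy target:

# returns a JSON result with one "up" sample per scrape target
curl -s 'http://localhost:30002/api/v1/query?query=up'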

Another day done :)

