Yesterday we gave a quick introduction to Prometheus; today we will install it in our cluster. We'll use Helm for the installation. For an introduction to Helm itself, see other articles; here we go straight to installing Prometheus.
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm version   # verify the installation
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm search repo prometheus
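If the search or the pull below cannot find the chart, refreshing the local repo index usually helps (this is a standard Helm step, not something the original post ran):

```shell
# Refresh the locally cached index of all added chart repositories
helm repo update
```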
helm pull prometheus-community/kube-prometheus-stack
tar -xvf kube-prometheus-stack-12.12.1.tgz
cd kube-prometheus-stack
vim values.yaml
Prometheus storageSpec
In values.yaml, find the Prometheus `storageSpec` section and change the `storageClassName` field to the Storage Class you created:
## Deploy a Prometheus instance
##
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: {Your Storage Class}
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi
Alertmanager storage
Similarly, find the Alertmanager `storage` section and change `storageClassName` to your own Storage Class:
## Configuration for alertmanager
## ref: https://prometheus.io/docs/alerting/alertmanager/
##
alertmanager:
  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        spec:
          storageClassName: prometheus-storage-class
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi
cd ..
mkdir prometheus-storage-class
cd prometheus-storage-class
cat <<EOF >./rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
EOF
cat <<EOF >./storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: prometheus-storage-class
provisioner: fuseim.pri/ifs # or choose another name; must match the deployment's PROVISIONER_NAME env
parameters:
  archiveOnDelete: "false"
EOF
cat <<EOF >./nfs-client-provisioner.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.0.1.236
            - name: NFS_PATH
              value: /var/nfsshare/
      volumes:
        - name: nfs-client-root
          nfs:
            server: {IP} # must match the NFS_SERVER value above
            path: /var/nfsshare/
EOF
kubectl apply -f rbac.yaml
kubectl apply -f storageclass.yaml
kubectl apply -f nfs-client-provisioner.yaml
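After applying the three manifests, it is worth checking that the provisioner is running and that dynamic provisioning actually works before installing the chart. A minimal check, assuming the provisioner was deployed to the `default` namespace; the PVC name `test-claim` is just a throwaway example:

```shell
# Confirm the provisioner pod is up and the storage class exists
kubectl get pods -l app=nfs-client-provisioner
kubectl get storageclass prometheus-storage-class

# Create a throwaway PVC against the storage class; it should
# reach the Bound state once the provisioner picks it up
cat <<EOF | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: prometheus-storage-class
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Mi
EOF
kubectl get pvc test-claim
kubectl delete pvc test-claim   # clean up
```

If the PVC stays in Pending, check the provisioner pod's logs with `kubectl logs -l app=nfs-client-provisioner`.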
This step installs an NFS server and uses it to back our storage class; if you already have a storage class, you can skip it.
apt-get install nfs-kernel-server nfs-common
mkdir /var/nfsshare
chmod -R 777 /var/nfsshare/
echo "/var/nfsshare *(rw,sync,no_root_squash,no_all_squash)" >> /etc/exports
/etc/init.d/nfs-kernel-server restart
Every other node that needs to mount the share also needs the NFS client:
apt-get install nfs-common
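Before pointing the provisioner at the share, you can confirm the export is actually visible from a node. This is a sketch assuming the export above; replace `10.0.1.236` with your NFS server's IP:

```shell
# List the exports offered by the NFS server
showmount -e 10.0.1.236

# Optionally do a trial mount and write (run as root)
mkdir -p /mnt/nfs-test
mount -t nfs 10.0.1.236:/var/nfsshare /mnt/nfs-test
touch /mnt/nfs-test/hello && rm /mnt/nfs-test/hello
umount /mnt/nfs-test
```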
kubectl create ns mornitor
helm package kube-prometheus-stack
helm install prometheus kube-prometheus-stack-12.12.1.tgz -n mornitor
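As an alternative to repackaging the chart after editing values.yaml (not what this post did, just another route Helm supports), you can install straight from the unpacked chart directory or from the repo with your values file:

```shell
# Install from the unpacked chart directory, picking up the edited values.yaml
helm install prometheus ./kube-prometheus-stack -n mornitor

# Or install from the repo, overriding only the values file
helm install prometheus prometheus-community/kube-prometheus-stack \
  -n mornitor -f kube-prometheus-stack/values.yaml
```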
kubectl port-forward --address=0.0.0.0 svc/prometheus-grafana -n mornitor 30001:80
You can then browse to your host's IP:30001 to reach the Grafana UI.
Grafana default credentials:
account: admin
password: prom-operator
kubectl port-forward --address=0.0.0.0 svc/prometheus-kube-prometheus-prometheus -n mornitor 30002:9090
You can then browse to your host's IP:30002 to reach the Prometheus UI.
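Besides the web UI, Prometheus's HTTP API can confirm scraping is working while the port-forward is active (run from the forwarding machine; `up` is a built-in metric every scraped target reports):

```shell
# List scrape targets and their health
curl -s http://localhost:30002/api/v1/targets

# Run a simple instant query; every healthy target reports up == 1
curl -s 'http://localhost:30002/api/v1/query?query=up'
```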
Another day done :)