Woohoo, only 8 days left on the countdown, watching the contestants who started earlier hit day 30 one after another~
I'd love to wrap up early too~
Good luck to everyone taking part!
Today is all about the various Sets.
k8s runs through day 24 (tomorrow and the day after),
then day 25~day 29 circles back to ansible.
Elastic and scalable:
in a ticketing system, different stars draw different crowds; when Mayday announces a concert, you set the replicas higher.
How you scale depends on whether your Application is Stateful or Stateless.
Stateless is the simpler case; you can either:
set the replicas in a Deployment (recommended), or
define a ReplicaSet yourself.
Bare Pods: Pods created directly from a PodSpec; they do not come back after a Node restart, so this is not recommended.
Job: after a Node restart, a new Pod is created and keeps running (see the sketch below).
DaemonSet: schedules Pods that permanently serve those Nodes; when a Node leaves the cluster, the Pods the DaemonSet placed there are deleted.
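Since Jobs came up above, here is a minimal Job sketch, based on the classic pi example from the official docs (the name and command are just for illustration):

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never # the Job controller creates a new Pod rather than restarting one in place
  backoffLimit: 4 # give up after 4 failed retries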
Reference: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
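Before scaling, the manifest above has to be applied first; for example (this is the same manifest as the official docs example, or save the YAML above locally and apply that):

$ kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
$ kubectl get pods -l app=nginx # should list 3 pods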
$ kubectl scale deployment/nginx-deployment --replicas=4
$ kubectl get deployments # check whether DESIRED and CURRENT have become 4
$ kubectl expose deployment nginx-deployment --type=NodePort
# you can instead expose a LoadBalancer service, so the load is spread across the pods
$ kubectl expose deployment nginx-deployment --type=LoadBalancer --port=8080 --target-port=80 --name nginx-load-balancer
# --target-port matches the containerPort (80) of the nginx above
# to see whether the LoadBalancer is doing its job, watch whether client IPs are spread across the pods
$ kubectl describe services nginx-load-balancer
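To sanity-check the NodePort service itself, a quick sketch (the node IP and port are placeholders to fill in):

$ kubectl get service nginx-deployment # note the assigned node port, e.g. 80:3xxxx/TCP
$ curl http://<node-ip>:<node-port>    # should return the nginx welcome page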
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#autoscale
# between 1 and 10 pods, targeting an average cpu utilization of 50% across all pods
$ kubectl autoscale deployment wordpress --cpu-percent=50 --min=1 --max=10
# of course this can also be written in a yaml file (a sketch follows)
$ kubectl apply -f ./wordpress-deployment.yaml
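A minimal sketch of what that YAML could contain, assuming the target Deployment is named wordpress (it follows the same pattern as the hpa-example.yaml further down):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: wordpress
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: wordpress # hypothetical Deployment name
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50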
# simulate some load
$ kubectl run -i --tty load-generator --image=busybox /bin/sh
Hit enter for command prompt
$ while true; do wget -q -O- http://wordpress.default.svc.cluster.local; done
# check the HPA status
(Horizontal Pod Autoscaler: automatically adjusts the replica count to hold the configured cpu utilization)
$ kubectl get hpa
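The output looks roughly like this (the values here are illustrative only):

NAME        REFERENCE              TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
wordpress   Deployment/wordpress   68%/50%   1         10        4          2m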
An HPA can automatically scale a Deployment, Replication Controller, or ReplicaSet.
hpa-example.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hpa-example
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: hpa-example
    spec:
      containers:
      - name: hpa-example
        image: gcr.io/google_containers/hpa-example
        ports:
        - name: http-port
          containerPort: 80
        resources:
          requests: # the cpu resources requested
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: hpa-example
spec:
  ports:
  - port: 31001
    nodePort: 31001
    targetPort: http-port
    protocol: TCP
  selector:
    app: hpa-example
  type: NodePort
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler # horizontal scaling
metadata:
  name: hpa-example-autoscaler
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: hpa-example
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50 # target 50% cpu utilization per pod
$ kubectl create -f hpa-example.yaml
$ kubectl get hpa
$ kubectl run -it load-generator --image=busybox /bin/sh
# simulate load from inside busybox
while true; do wget -q -O- http://hpa-example.default.svc.cluster.local:31001; done
# in another terminal: $ kubectl get pod # watch whether the number of pods grows
A DaemonSet ensures a copy of a pod (a pod replica) runs on (1~n) nodes:
when a node is added to the cluster, a new pod starts on it automatically;
when a node is removed, its pods are cleaned up rather than rescheduled onto other nodes.
Reference:
https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
The manifest below creates a fluentd log agent on every node in the cluster:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: k8s.gcr.io/fluentd-elasticsearch:1.20
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
$ kubectl create -f https://k8s.io/examples/controllers/daemonset.yaml
# check its status (the manifest puts it in the kube-system namespace)
$ kubectl describe daemonset fluentd-elasticsearch -n kube-system
# list the pods and which node each landed on
$ kubectl get pods -n kube-system -o wide
# delete it
$ kubectl delete -f https://k8s.io/examples/controllers/daemonset.yaml
A kubernetes cluster is assembled from many components,
and these basic components run in the kube-system namespace.
The kubernetes proxy is responsible for routing traffic to the load-balanced services inside the cluster.
It looks like this:
proxy # so it sits one layer outside the LB
|
V
load balancer
In practice, though, every node has to run a proxy instance,
and many clusters nowadays use the DaemonSet API object for exactly this job.
If your cluster runs the kubernetes proxy as a DaemonSet, you can check it with:
$ kubectl get daemonsets --namespace=kube-system kube-proxy
NAME         DESIRED   CURRENT   READY   NODE-SELECTOR   AGE
kube-proxy   ...       (output omitted)
References:
https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
https://jimmysong.io/kubernetes-handbook/concepts/statefulset.html
Managing stateful applications on k8s is a fairly complex topic.
Deployments and ReplicaSets are a good fit for stateless services.
The Domain Name of a StatefulSet's Pods has this format:
statefulSetName-{0..N-1}.serviceName.namespace.svc.cluster.local
The Headless Service and the StatefulSet must live in the same namespace.
.cluster.local is the cluster domain.
A StatefulSet can use a Headless Service to control the domain of its Pods,
in the format $(service's name).$(namespace).svc.cluster.local.
Storage uses volumeClaimTemplates to get persistent storage.
Using a Headless Service keeps each Pod's PodName and HostName stable.
Note: a Headless Service is a Service without a Cluster IP.
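For the StatefulSet defined below (name web, serviceName nginx, 2 replicas in the default namespace), the Pods therefore resolve as web-0.nginx.default.svc.cluster.local and web-1.nginx.default.svc.cluster.local. A quick way to verify this, following the official tutorial (the dns-test pod name is arbitrary):

$ kubectl run -i --tty dns-test --image=busybox:1.28 --restart=Never --rm /bin/sh
/ # nslookup web-0.nginx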
apiVersion: v1
kind: Service # headless service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None # a headless service has no cluster ip
  selector:
    app: nginx
--- # the stateful set follows
apiVersion: apps/v1
kind: StatefulSet # note the kind
metadata:
  name: web # the StatefulSet's name
spec:
  serviceName: "nginx" # the name of the headless service above
  replicas: 2
  selector:
    matchLabels:
      app: nginx # the pods to manage, matched by label
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates: # the data in storage survives pod rescheduling
  # volumeClaimTemplates uses PersistentVolumes from a PersistentVolume Provisioner
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
$ kubectl create -f web.yaml
service/nginx created # the service is created
statefulset.apps/web created # the stateful set is created
$ kubectl get service nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx ClusterIP None <none> 80/TCP 12s
$ kubectl get statefulset web
NAME DESIRED CURRENT AGE
web 2 1 20s
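Scaling the StatefulSet is the same kubectl scale as before; a small sketch below. Unlike a Deployment, the Pods come up strictly in order (web-0, then web-1, then web-2):

$ kubectl scale statefulset web --replicas=3
$ kubectl get pods -w -l app=nginx # watch web-2 start only after web-1 is Running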