Today we finally wrap up the "vast and profound" k8s.
While studying this time, I also googled some impressive-looking tools:
some seem to cost money, some I couldn't understand at all, and some are built on one specific public cloud, so I didn't dig into them
(e.g. Terraform).
Personally I think k8s is a good thing and should stay hot for the next 5 years.
Salaries stay flat, yet job postings keep demanding the newest tech = =
Next year's interviews may not even ask about Docker any more.
My advice: get the fundamentals solid.
Since I'm just a beginner jotting quick notes, sorry for hurting everyone's eyes~
During the countdown I kept worrying I'd forget to post one day, heh~
Now that the articles are all written, time to enjoy what the pros wrote~
(I still haven't finished last year's)
By year's end what I most want to learn is CSS layout and .NET Core. How about you?
Besides the official Kubernetes docs, there is some Chinese-language material worth consulting.
Now let's go through the rest; tomorrow we return to introducing Ansible~
References:
I won't cover the installation in detail; the focus is on network setup.
Hopefully it helps those who find networking hard.
1. Wi-Fi config file (Wi-Fi connects to the outside world; the cluster's nodes talk to each other over Ethernet)
# Set the SSID and password, then reboot
# /boot/device-init.yaml
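Assuming a HypriotOS-style Raspberry Pi image (which is where /boot/device-init.yaml comes from), the relevant part of that file might look like the sketch below; the hostname, SSID, and password are placeholder values:

```yaml
# /boot/device-init.yaml -- hostname plus Wi-Fi credentials (example values)
hostname: k8s-m1
wifi:
  interfaces:
    wlan0:
      ssid: "MyHomeWifi"
      password: "changeme"
```

After editing, reboot so the settings take effect.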
2. The wired NIC eth0 gets a static IP
(the cluster's nodes connect to each other over Ethernet)
# /etc/network/interfaces.d/eth0
allow-hotplug eth0
iface eth0 inet static
    address 10.0.0.1
    netmask 255.255.255.0 # /24
    broadcast 10.0.0.255
    gateway 10.0.0.1
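As a quick sanity check that the netmask and broadcast above are consistent with a /24, Python's ipaddress module can recompute them; a small sketch mirroring the values in the config file:

```python
import ipaddress

# 10.0.0.0/24 should yield exactly the netmask and broadcast
# used in the eth0 stanza above.
net = ipaddress.ip_network("10.0.0.0/24")
print(net.netmask)             # 255.255.255.0
print(net.broadcast_address)   # 10.0.0.255

# The master's static address must fall inside the subnet.
print(ipaddress.ip_address("10.0.0.1") in net)  # True
```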
3. On the master node, install a DHCP server
to hand out IPs to the other nodes
$ apt-get install isc-dhcp-server
# Set the domain; for internal use you can pick anything
option domain-name "cluster.home";
# External DNS servers
option domain-name-servers 8.8.8.8, 168.95.1.1;
# The subnet used by the cluster's internal nodes
subnet 10.0.0.0 netmask 255.255.255.0 {
  range 10.0.0.2 10.0.0.10; # start at .2; .1 is the master's own static IP
  option subnet-mask 255.255.255.0;
  option broadcast-address 10.0.0.255;
  option routers 10.0.0.1;
}
default-lease-time 600;
max-lease-time 7200;
authoritative;
# sudo systemctl restart isc-dhcp-server
# The other nodes should now be able to get an IP from the DHCP server on the master
# Check /var/lib/dhcp/dhcpd.leases to confirm
# Rename the other nodes so each one is unique, again in /boot/device-init.yaml
# e.g. k8s-m1, k8s-n1, k8s-n2
4. Set up NAT
# Kernel forwarding must be on first, or the rules below do nothing
$ echo 1 > /proc/sys/net/ipv4/ip_forward
$ iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
$ iptables -A FORWARD -i wlan0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
$ iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
5. Optionally, also add the ip-to-hostname mappings to every node's /etc/hosts
10.0.0.1 k8s-m1
10.0.0.2 k8s-n1
10.0.0.3 k8s-n2
6. Set up SSH
$HOME/.ssh/id_rsa.pub # the master node's (k8s-m1) public key, a single-line string
Append it to ~/.ssh/authorized_keys for your user on each worker node (k8s-n1, k8s-n2)
7. I'll skip the installation of kubelet, kubeadm, kubectl, and kubernetes-cni
8. Build the cluster with kubeadm (the key step~)
$ kubeadm init --pod-network-cidr 10.244.0.0/16 \ # the address range to hand out to pods
    --api-advertise-address 10.0.0.1 # the address the API server advertises; the cluster's control window
# On success it prints a kubeadm join --token=xxxx command; copy it down
$ kubeadm join --token=<token> 10.0.0.1
9. Install a CNI (sets up the cluster network so pods can talk to each other)
$ curl -o kube-flannel.yaml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Replace every "amd64" with "arm" # the Pi's architecture; skip this if you're on a PC
# Replace every "vxlan" with "host-gw" # switch the routing mode to host-gw
# Save the file, then bring it up
$ kubectl apply -f kube-flannel.yaml
# This adds 2 objects
# ConfigMap: holds data and configuration
$ kubectl describe --namespace=kube-system configmaps/kube-flannel-cfg
# DaemonSet: keeps the Flannel service running on every node
$ kubectl describe --namespace=kube-system daemonsets/kube-flannel-ds
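The two manual substitutions above can also be scripted with sed; a sketch, demonstrated here on a tiny stand-in file rather than the real manifest:

```shell
# Stand-in for the downloaded kube-flannel.yml (two representative lines)
printf 'image: quay.io/coreos/flannel:v0.10.0-amd64\n"Type": "vxlan"\n' > kube-flannel.yaml

# arch for the Pi, and host-gw routing instead of vxlan, in one pass
sed -i 's/amd64/arm/g; s/vxlan/host-gw/g' kube-flannel.yaml

cat kube-flannel.yaml
```

The same one-liner works on the real manifest once it has been saved locally.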
# Open a proxy/tunnel
$ kubectl proxy
$ minikube dashboard
$ minikube dashboard --url
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
$ kubectl create -f addServiceAccount.yaml
$ kubectl -n kube-system get secret | grep admin-user
Name: admin-user-token-9t284
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name=admin-user
kubernetes.io/service-account.uid=8c9dac3b-d85b-11e8-850e-08002779c3bb
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1066 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTl0Mjg0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI4YzlkYWMzYi1kODViLTExZTgtODUwZS0wODAwMjc3OWMzYmIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.YV6gdyibPgdpaFRU42qGkH-G4up6b6ftal3RXD2Ntue2RsMfvcB9-ytPpvqHAR6guTCquzADj4r2o3d6bSj16yMk64zOlccqwl8IKwDJry7KQkffnkJ4G291xbAxfEwYrCXvtdDh7tVvv1QslYNlNhlC7c23xiNJGTfSivULPh_WorlUwDvHmLlTvnXS3R1fLWIkBqANJAT1aW7aTMzfJO0m92FC111KGccAJEYuHsJxORwvg0cxZv0CUTlktZdX0v5-HRuqdpQ3bxvHQJ9k1eTcdTdeef2B1I9rRpfxr8nmAuBzni2VCSqgRS8UPkBJZnFhLYaBdhLAdSB6kTJEFA
This token can be used to log in.
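The token is a JWT: three base64url segments joined by dots, and decoding the middle segment reveals the same service-account claims shown in the secret. A sketch in Python, run against a small made-up token rather than a real one:

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode the middle (payload) segment of a JWT without verifying it."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore the stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Build an illustrative token carrying the same claim names as above.
claims = {
    "iss": "kubernetes/serviceaccount",
    "kubernetes.io/serviceaccount/namespace": "kube-system",
    "kubernetes.io/serviceaccount/service-account.name": "admin-user",
}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = "eyJhbGciOiJSUzI1NiJ9." + body + ".fake-signature"

print(jwt_payload(token)["kubernetes.io/serviceaccount/namespace"])  # kube-system
```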
# If you're asked for a password: there may be an admin password somewhere (my minikube stores a ca and key, no password)
$ kubectl config view
# Docker CE on my macOS comes with minikube, so under macOS:
~/.kube/config # also holds minikube's ca and key; not sure whether those work?
# Then open a browser at
http://localhost:8001/ui
or
http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
The Dashboard UI has a single replica and is likewise managed through a Deployment, named kubernetes-dashboard
$ kubectl get deployments --namespace=kube-system kubernetes-dashboard
# The Service that load-balances the dashboard
$ kubectl get services --namespace=kube-system kubernetes-dashboard
There are many resource-monitoring tools.
The official docs list several too, under Tasks / Monitor, Log, and Debug:
https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/
Kubelet (quite simple; recommended)
https://kubernetes.io/docs/tasks/debug-application-cluster/monitor-node-health/
You can get surprisingly far on kubelet alone.
cAdvisor (works with plain Docker too)
https://github.com/google/cadvisor
Runs an agent in the cluster; the default port is 4194.
Official docs:
https://github.com/google/cadvisor/tree/master/deploy/kubernetes
Prometheus (works with plain Docker too)
https://prometheus.io/docs/prometheus/latest/getting_started/
https://github.com/prometheus/prometheus/
Reportedly not easy to install, but very powerful.
kubernetes-job-monitor
Lets administrators watch the jobs that are running
https://github.com/pietervogelaar/kubernetes-job-monitor
This one installs more easily.
# Put the master's admin.conf into a Secret
$ cat /etc/kubernetes/admin.conf | base64 | tr -d '\n'
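The tr -d '\n' matters because base64 wraps its output across lines, while the Secret's data field needs one unbroken line. A quick round-trip sketch on a stand-in file:

```shell
# Stand-in for admin.conf
printf 'apiVersion: v1' > admin.conf

# Encode to a single line, exactly as done for the real kubeconfig
encoded=$(cat admin.conf | base64 | tr -d '\n')
echo "$encoded"

# Decode it back to confirm nothing was lost
echo "$encoded" | base64 -d
```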
# Create secret.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: kubeconfig
type: Opaque
data:
  config: thebase64encodedlinehere # paste the base64 of admin.conf here
$ kubectl apply -f secret.yaml
# Deploy kubernetes-job-monitor
$ kubectl apply -f https://raw.githubusercontent.com/pietervogelaar/kubernetes-job-monitor/master/.kubernetes/kubernetes-job-monitor.yaml
(Container Cluster Monitoring and Performance Analysis)
Official install guide:
https://github.com/kubernetes/heapster/blob/master/docs/influxdb.md
# grafana-service.yaml
# influxdb-service.yaml
# If you're not installing it as an addon, comment out kubernetes.io/cluster-service: 'true'
$ kubectl create -f deploy/kube-config/influxdb/
$ kubectl create -f deploy/kube-config/rbac/heapster-rbac.yaml
$ minikube service monitoring-grafana --namespace=kube-system --url
The database used is InfluxDB.
Data flow: each node's pods -> cAdvisor
each node's kubelet --> Heapster Pod --> InfluxDB Pod --> Grafana Pod
https://github.com/kubernetes-incubator/metrics-server
Install:
$ kubectl create -f deploy/1.8+/
Use:
$ kubectl top node # shows CPU and memory usage per node
The Metrics API
API path: /apis/metrics.k8s.io/
This is an important topic: how do you renew certificates automatically? I haven't fully figured it out either.
If you have fiber at home with a fixed IP, you could register a free domain and then try Let's Encrypt.
Once you have many machines, installing and renewing certificates really eats up time,
so we need to manage SSL/TLS certificates properly.
On our own we can practice with the free Let's Encrypt.
Issuers (the CAs that sign certificates) <--(require)-- cert-manager --> yourdomain.com (install the cert) --> Kubernetes Secrets (signed keypair)
$ brew install kubernetes-helm
$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
or
$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
First prepare rbac-config.yml
Reference: https://github.com/helm/helm/blob/master/docs/rbac.md
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
$ kubectl create -f rbac-config.yml
serviceaccount "tiller" created
clusterrolebinding "tiller" created
$ helm init --service-account tiller
# --service-account # uses RBAC (role-based access control)
# https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions
$HELM_HOME has been configured at /home/xxx/.helm
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
# Check whether Tiller is running
$ kubectl get pods --namespace kube-system
# Upgrade Tiller
$ helm init --upgrade
# Reset Tiller
$ helm reset
(e.g. a web server with a site on it)
$ helm install --name my-nginx-ingress stable/nginx-ingress \
    --set controller.kind=DaemonSet \
    --set controller.service.type=NodePort \
    --set controller.hostNetwork=true
$ kubectl create -f xxx.yml # omitted
References:
Official docs:
https://cert-manager.readthedocs.io/en/latest/
$ helm install \
--name cert-manager \
--namespace kube-system \
stable/cert-manager
$ helm del --purge cert-manager
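Once installed, cert-manager is driven by Issuer and Certificate resources. A hedged sketch of an ACME (Let's Encrypt staging) Issuer; the apiVersion and fields varied across early cert-manager releases, and the email is a placeholder:

```yaml
apiVersion: certmanager.k8s.io/v1alpha1   # group/version used by 2018-era cert-manager
kind: Issuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Staging endpoint: practice here before pointing at production
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: you@example.com                # placeholder
    privateKeySecretRef:
      name: letsencrypt-staging           # where the ACME account key is stored
    http01: {}
```

Using the staging endpoint first avoids hitting Let's Encrypt's production rate limits while you experiment.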
Before Kubernetes v1.8 the go-to tool was kube-lego; from v1.8 on it's cert-manager.
Things change so fast; it takes huge amounts of time to update your knowledge, yet the salary never gets an upgrade.
And now the main event: automatically renewing Let's Encrypt certificates.
While searching I found a piece by Ken Chen that is excellent, so please read his article directly:
- 在kubernetes上使用cert-manager自動更新Let's Encrypt TLS憑證 (using cert-manager on Kubernetes to auto-renew Let's Encrypt TLS certificates)
https://medium.com/@kenchen_57904/在kubernetes上使用cert-manager自動更新lets-encrypt-tls憑證-834b65d43c96
Google also recently launched managed certificates, though they probably only work on its own GKE; FYI:
https://cloud.google.com/load-balancing/docs/ssl-certificates?fbclid=IwAR3knNBV1mFi5s6PU9wve7HsZpJCqI9UK2L3K4h4rvoTbizwz642lXX-Usg#managed-certs
The External DNS tool
helps create the DNS records you need on your external DNS server,
e.g. Amazon Route 53:
hand your domain name over to Route 53,
and Route 53 resolves the name to an IP,
so users can find your host via a DNS query.
http://www.runpc.com.tw/content/content.aspx?id=109820
https://aws.amazon.com/tw/getting-started/tutorials/get-a-domain/
It supports Google CloudDNS, Route53, AzureDNS, DigitalOcean...
For example, given these ingress rules:
host1.example.com => pod1
host2.example.com => pod1
External DNS ===> AWS Route53
External DNS pushes the mappings above to AWS Route 53,
and Amazon's fleet of DNS servers then routes users straight to the right pod.
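ExternalDNS discovers which records to create from Ingress hosts and from annotations. A hedged sketch of a Service carrying the hostname annotation; the hostname and names are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    # ExternalDNS reads this and creates the matching record in Route 53
    external-dns.alpha.kubernetes.io/hostname: host1.example.com
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
```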
You can also self-host a DNS server with CoreDNS, which suits in-company use and is purpose-built for Kubernetes.
Also, Kyle Bai wrote an article on this that I recommend:
以 ExternalDNS 自動同步 Kubernetes Ingress 與 Service DNS 資源紀錄 (auto-syncing Kubernetes Ingress and Service DNS records with ExternalDNS)
The CoreDNS project:
https://github.com/coredns/coredns
It's record-keeping, basically: useful for debugging, catching insiders, and billing.
It tells you what happened (who, what, when, where, how, and which actions were taken).
kube-apiserver sends records to the audit backends according to the Audit Policy.
Let's look at the 2 most important parts.
I suggest trying it with minikube (just not too old a version).
# Copy this file into ~/.minikube/addons/
$ cp audit-policy.yaml ~/.minikube/addons/
# Log all requests at the Metadata level
# (the simplest possible audit policy)
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
$ minikube stop # stop it first
$ minikube start --extra-config=apiserver.Authorization.Mode=RBAC --extra-config=apiserver.Audit.LogOptions.Path=/var/logs/audit.log --extra-config=apiserver.Audit.PolicyFile=/etc/kubernetes/addons/audit-policy.yaml
$ minikube ssh
$ cat /var/logs/audit.log # the path configured above; you'll see piles of JSON all crammed together
# A recommended shell tool for viewing JSON: jq
https://stedolan.github.io/jq/download/
Install on Ubuntu:
$ sudo apt-get install jq
Install on macOS:
$ brew install jq
# Usage
$ cat /var/logs/audit.log | jq . # pipe it through jq
- A nice tool for viewing JSON: jq (article in Chinese)
https://ithelp.ithome.com.tw/articles/10130071
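If jq isn't available, a few lines of Python do the same filtering. Each line of audit.log is one JSON event; the two events below are made up to stand in for the real file:

```python
import json

# Stand-ins for lines read from /var/logs/audit.log
lines = [
    '{"verb": "watch", "user": {"username": "system:kube-proxy"}}',
    '{"verb": "create", "user": {"username": "admin"}}',
]

events = [json.loads(line) for line in lines]

# e.g. keep only create events, the way `jq 'select(.verb == "create")'` would
creates = [e for e in events if e["verb"] == "create"]
for e in creates:
    print(e["user"]["username"], e["verb"])  # admin create
```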
If the above came up successfully,
there are many more policies in the official docs; add them under rules: as needed:
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]
  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]
  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"
  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]
  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.
  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
- Log backend: writes events to disk
--audit-log-path where to store the log
--audit-log-maxage maximum days to retain audit logs
--audit-log-maxbackup maximum number of audit log files to retain
--audit-log-maxsize maximum file size before rotation (in megabytes)
- Webhook backend: sends events to an external API
--audit-webhook-config-file path to its config file
--audit-webhook-initial-backoff how long to wait before retrying after the first failure
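The file handed to --audit-webhook-config-file uses kubeconfig format, pointing at whatever collector receives the events. A hedged sketch with a placeholder URL:

```yaml
# --audit-webhook-config-file expects kubeconfig format
apiVersion: v1
kind: Config
clusters:
- name: audit-collector
  cluster:
    server: https://audit.example.com/events   # placeholder collector URL
contexts:
- name: default
  context:
    cluster: audit-collector
    user: ""
current-context: default
```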
Hahaha, k8s really is vast and profound, and new tools keep popping up.
After days of nonstop googling,
I've come to feel the official docs are extremely complete; in practice you'll consult them all the time.
But to really get a grip on k8s, you have to put in a lot of hands-on practice.