The heavyweight storage chapters are behind us; today is a grab bag of other topics I don't fully understand myself either, so go easy on me~
PodPreset lets you inject (injecting) resources like secrets, ConfigMaps, volumes, volume mounts, and environment variables into Pods at creation time.
If you have to deploy a pile of applications,
you can define the data you want injected once in a YAML file,
and it only takes effect on matching Pods (via selector / matchLabels).
Watch for the keywords.
The YAML looks like this:
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: allow-database
spec:
  selector:
    matchLabels:        # the key line: matchLabels
      role: frontend    # applies to Pods with matching labels
  env:
    - name: DB_PORT
      value: "6379"
  volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
    - name: cache-volume
      emptyDir: {}
$ kubectl create -f https://k8s.io/examples/podpreset/preset.yaml
# and to delete it:
$ kubectl delete podpreset allow-database
podpreset "allow-database" deleted
apiVersion: v1
kind: Pod
metadata:
  name: website
  labels:
    app: website
    role: frontend    # this label is the key
spec:
  containers:
    - name: website
      image: nginx
      ports:
        - containerPort: 80
$ kubectl create -f https://k8s.io/examples/podpreset/pod.yaml
$ kubectl get pod website -o yaml
apiVersion: v1
kind: Pod
metadata:
  name: website
  labels:
    app: website
    role: frontend
  annotations:
    podpreset.admission.kubernetes.io/podpreset-allow-database: "resource version"
spec:
  containers:
    - name: website
      image: nginx
      volumeMounts:
        - mountPath: /cache
          name: cache-volume
      ports:
        - containerPort: 80
      env:
        - name: DB_PORT
          value: "6379"
  volumes:
    - name: cache-volume
      emptyDir: {}
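If you want to convince yourself the preset really was merged in, one quick check (my own sketch, not from the docs) is to read the environment inside the running container:

$ kubectl exec website -- env | grep DB_PORT    # should print DB_PORT=6379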
I couldn't make sense of the rest, so I stopped writing here.
Brave souls, head over and take up the challenge yourselves:
https://kubernetes.io/docs/tasks/inject-data-application/podpreset/
RBAC really matters!!
If you already have a working cluster (built with minikube or kubeadm, say),
be sure to circle back and fill in RBAC, even though I can't claim to fully get it either~
The authorization mode has to be specified when the API server starts:
--authorization-mode=RBAC
kops and kubeadm both default to RBAC.
With minikube, you can specify it at start time:
$ minikube start --extra-config=apiserver.Authorization.Mode=RBAC
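A quick way to check whether RBAC is actually on (my own habit, not from the docs): see whether the RBAC API group is being served.

$ kubectl api-versions | grep rbac
rbac.authorization.k8s.io/v1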
kind: Role
# kind: ClusterRole    # to apply across all namespaces, use ClusterRole instead
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default   # the grant is scoped to the default namespace only
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods", "secrets"]    # resources it may touch: pods, secrets
    # others include deployments
    verbs: ["get", "watch", "list"]   # allowed actions (roughly, "read")
    # others include create, update, patch, delete
kind: RoleBinding
# kind: ClusterRoleBinding    # to bind a ClusterRole, use ClusterRoleBinding instead
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: read-pods
subjects:    # who the role is granted to: the user bob
  - kind: User
    name: bob
    apiGroup: rbac.authorization.k8s.io
roleRef:     # the role to bind
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
$ kubectl create -f role.yaml
$ kubectl delete -f role.yaml     # if you want to practice again
$ kubectl config use-context bob  # switch context to bob
$ kubectl get pods -n kube-system
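By the way, bob's Role only covers the default namespace, so that last command should come back forbidden. If you'd rather test without switching contexts, kubectl auth can-i with impersonation works too (a sketch; it assumes your own account is allowed to impersonate users):

$ kubectl auth can-i list pods --namespace default --as bob        # yes
$ kubectl auth can-i list pods --namespace kube-system --as bob    # no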
You'll run into Helm a lot. Think apt, yum, or npm: Helm is the package manager for Kubernetes.
Helm is maintained under the CNCF, The Cloud Native Computing Foundation.
SIG-Apps is a Special Interest Group for deploying and operating apps in Kubernetes. They meet each week to demo and discuss tools and projects.
https://docs.helm.sh/using_helm/#installing-helm
$ brew install kubernetes-helm
$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
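A sanity check that the client landed (Tiller isn't installed yet at this point, so query only the client side):

$ helm version --client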
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin    # this role already exists, so no need to create it; just reference it via roleRef
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
$ kubectl create -f rbac-config.yaml    # create the service account "tiller"
serviceaccount "tiller" created
clusterrolebinding "tiller" created
$ helm init --service-account tiller
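helm init deploys Tiller as a Deployment named tiller-deploy in kube-system (you'll also see it in the output further below), so this should confirm it came up:

$ kubectl get deployment tiller-deploy --namespace kube-system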
$ kubectl create namespace tiller-world
namespace "tiller-world" created
$ kubectl create serviceaccount tiller --namespace tiller-world
serviceaccount "tiller" created
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-manager
  namespace: tiller-world    # may manage all resources in the "tiller-world" namespace
rules:
  - apiGroups: ["", "batch", "extensions", "apps"]
    resources: ["*"]
    verbs: ["*"]
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-binding
  namespace: tiller-world
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: tiller-world
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
# create the Role
$ kubectl create -f role-tiller.yaml
role "tiller-manager" created
# bind the Role
$ kubectl create -f rolebinding-tiller.yaml
rolebinding "tiller-binding" created
# with the service account and namespace in place, we can finally initialize
$ helm init --service-account tiller --tiller-namespace tiller-world
$HELM_HOME has been configured at /Users/awesome-user/.helm.
Tiller (the Helm server side component) has been installed into your Kubernetes Cluster.
Happy Helming!
# install nginx into the designated namespace
$ helm install nginx --tiller-namespace tiller-world --namespace tiller-world
NAME: wayfaring-yak
LAST DEPLOYED: Mon Aug 7 16:00:16 2017
NAMESPACE: tiller-world
STATUS: DEPLOYED
RESOURCES:
==> v1/Pod
NAME                   READY  STATUS             RESTARTS  AGE
wayfaring-yak-alpine   0/1    ContainerCreating  0         0s
$ helm init      # initialize: installs Tiller into the cluster
$ helm reset     # remove Tiller
$ helm install   # install a chart
$ helm search    # search for charts
$ helm list      # list installed releases
$ helm upgrade   # upgrade a release
$ helm rollback  # roll back to a previous revision
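Strung together, a typical round trip looks something like this (the release and chart names are just examples I made up):

$ helm search mysql                        # find a chart in the configured repos
$ helm install --name my-db stable/mysql   # install it as the release "my-db"
$ helm list                                # "my-db" shows up here
$ helm upgrade my-db stable/mysql          # upgrade the release in place
$ helm rollback my-db 1                    # roll back to revision 1
$ helm delete my-db                        # and remove it when done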
Charts are Helm's packaging format.
A chart describes Kubernetes resources and is itself a collection of files.
One chart deploys one app (for example, a database: the mysql chart). A template might look like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  drink: {{ .Values.favoriteDrink }}
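For {{ .Values.favoriteDrink }} above to render, the value has to come from the chart's values.yaml (or a --set override). A minimal sketch:

# values.yaml
favoriteDrink: coffee

# render the templates locally without installing anything:
$ helm install --dry-run --debug ./mychart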
# first, create a new namespace "myorg-system"
$ kubectl create namespace myorg-system
namespace "myorg-system" created
# install tiller into that other namespace
$ kubectl create serviceaccount tiller --namespace myorg-system
serviceaccount "tiller" created
# let tiller manage all resources in myorg-users
# create a Role
# role-tiller.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-manager
  namespace: myorg-users
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["*"]
    verbs: ["*"]
$ kubectl create -f role-tiller.yaml
role "tiller-manager" created
# do the role binding
# rolebinding-tiller.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-binding
  namespace: myorg-users
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: myorg-system
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
$ kubectl create -f rolebinding-tiller.yaml
rolebinding "tiller-binding" created
# the steps are almost identical
For the full procedure, see the official docs (including the complete certificate-generation commands):
https://docs.helm.sh/using_helm/#using-ssl-between-helm-and-tiller
In short: feed certificates to both the helm client and the tiller server, and you get an encrypted port.
If you get stuck... Google is your friend~
$ helm init --dry-run --debug --tiller-tls --tiller-tls-cert ./tiller.cert.pem --tiller-tls-key ./tiller.key.pem --tiller-tls-verify --tls-ca-cert ca.cert.pem
# --dry-run    # a test mode of sorts; shows extra information
# --dry-run and --debug can be omitted:
$ helm init --tiller-tls --tiller-tls-cert ./tiller.cert.pem --tiller-tls-key ./tiller.key.pem --tiller-tls-verify --tls-ca-cert ca.cert.pem
# check whether your tiller came up successfully
$ kubectl -n kube-system get deployment
NAME            DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
... other stuff
tiller-deploy   1        1        1           1          2m
$ kubectl get pods -n kube-system
# copy the certificates (abridged)
$ cp ca.cert.pem $(helm home)/ca.pem
$ cp helm.cert.pem $(helm home)/cert.pem
$ cp helm.key.pem $(helm home)/key.pem
# feed your helm the certificates
$ helm ls --tls --tls-ca-cert ca.cert.pem --tls-cert helm.cert.pem --tls-key helm.key.pem
# with TLS enabled
$ helm ls --tls
mychart/
  Chart.yaml
    apiVersion: v1
    appVersion: "1.0"
    description: A helm chart for k8s cluster
    version: 0.1.0
  values.yaml
    key: value
  templates/
    deployment.yaml
    service.yaml
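You don't have to hand-craft that layout; helm can scaffold, lint, and package it for you (standard helm subcommands, shown here against the mychart example above):

$ helm create mychart     # scaffolds Chart.yaml, values.yaml, templates/ ...
$ helm lint mychart       # sanity-check the chart
$ helm package mychart    # bundles it as mychart-0.1.0.tgz (version from Chart.yaml)
$ helm install ./mychart  # or install straight from the local directory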
# add a helm repository
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install bitnami/node
# you can also point straight at a repository
$ helm install --name my-release \
    --set repository=https://github.com/jbianquetti-nami/simple-node-app.git,replicas=2 \
    bitnami/node
$ helm install --name my-release bitnami/node    # naming the release is recommended
$ helm delete my-release                         # uninstall the chart
# install an ingress controller
$ helm install stable/nginx-ingress
# when deploying the node helm chart, specify the ingress
$ helm install --name my-release bitnami/node --set ingress.enabled=true,ingress.host=example.com,service.type=ClusterIP
Today's references:
a Chinese translation, quite good; this site is worth bookmarking
https://k8smeetup.github.io/docs/concepts/services-networking/ingress/
the official docs (the English content is more current)
https://kubernetes.io/docs/concepts/services-networking/ingress/
Ingress is fairly new, so older Kubernetes books don't cover it;
it also changes quickly, so keep an eye on the k8s official site and GCE.
Kubernetes 30天學習筆記-[Day 19] 在 Kubernetes 中實現負載平衡 - Ingress Controller
https://ithelp.ithome.com.tw/articles/10196261?sc=iThelpR
a concise introduction to ingress, plus a hands-on walkthrough on minikube:
https://github.com/kubernetes/ingress-nginx
https://kubernetes.github.io/ingress-nginx/deploy/#minikube
Wow, ingress is also extremely important,
so no slacking on this one either.
Since Kubernetes 1.1, ingress handles the cluster's inbound connections (incoming traffic).
Borrowing the official docs' illustration:
# without ingress:
 internet            # how do customers reach the service?
     |
------------         # how do we expose it? and what else can we do?
[ Services ]         # inside the cluster, everything interconnects by IP

# with ingress (you request one by POSTing an Ingress resource to the API server):
 internet
     |
[Ingress Controller] # a LoadBalancer, purpose-built for HTTP(S)-based applications
   [Ingress]         # routes requests to the right pod & service by RULE; SSL/TLS (via a secret)
   |     |           # sits on the Master Node
 --|-----|--
 [ Services ]
Ingress still seems to be in beta at the moment.
Outside of GCE/Google Kubernetes Engine, you have to run an ingress controller pod yourself.
Ingress and the ingress controller appear to be two different things.
The ingress controller can act as an HTTP(S)-based load balancer, so if you're using an
AWS LoadBalancer, you can hand all the HTTP(S)-based traffic straight to the ingress controller and cut costs.
You don't have to use ingress, though.
An ingress with no rules sends all traffic to a single service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: testsvc
    servicePort: 80
# create an ingress object
$ kubectl create -f ingress.yaml
# take a look at the ingress object
$ kubectl get ingress test-ingress
NAME           HOSTS   ADDRESS           PORTS   AGE
test-ingress   *       107.178.254.228   80      59s
# the Ingress object "test-ingress"
# has no RULE at all;
# there is one service, testsvc:80 (the ingress controller assigned IP 107.178.254.228),
# so all traffic to 107.178.254.228:80 gets routed to that service
Suppose the domain name is foo.bar.com (IP: 178.91.123.132),
and two services both listen on port 80. How do you configure ingress to route requests to the right service?
foo.bar.com -> 178.91.123.132 -> / foo    s1:80
                                 / bar    s2:80
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
    - host: foo.bar.com
      http:
        paths:              # the Ingress Controller routes to the right service by path
          - path: /foo      # http://foo.bar.com:80/foo
            backend:
              serviceName: s1
              servicePort: 80
          - path: /bar      # http://foo.bar.com:80/bar
            backend:
              serviceName: s2
              servicePort: 80
# the Ingress object "test" ends up like this:
$ kubectl describe ingress test
Name:             test
Namespace:        default
Address:          178.91.123.132    # the external IP
Default backend:  default-http-backend:80 (10.8.2.3:8080)
Rules:
  Host         Path  Backends
  ----         ----  --------
  foo.bar.com
               /foo   s1:80 (10.8.0.90:80)    # internal virtual IPs
               /bar   s2:80 (10.8.0.91:80)
Annotations:
  nginx.ingress.kubernetes.io/rewrite-target: /
Events:
  Type    Reason  Age  From                     Message
  ----    ------  ---  ----                     -------
  Normal  ADD     22s  loadbalancer-controller  default/test
This one is extremely practical too. The scenario: you have a single physical IP but a pile of websites.
foo.bar.com --|                  |-> foo.bar.com s1:80    # e.g. an auction site
              | 178.91.123.132   |
bar.foo.com --|                  |-> bar.foo.com s2:80    # e.g. an online mall
This form routes requests based on the Host header:
Host = uri-host [ ":" port ]    # https://tools.ietf.org/html/rfc7230#section-5.4
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
    - host: foo.bar.com
      http:
        paths:
          - backend:
              serviceName: s1
              servicePort: 80
    - host: bar.foo.com
      http:
        paths:
          - backend:
              serviceName: s2
              servicePort: 80
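To watch host-based routing in action, you can spoof the Host header with curl against the example IP above (assuming your ingress controller is actually reachable there):

$ curl -H "Host: foo.bar.com" http://178.91.123.132/    # lands on s1
$ curl -H "Host: bar.foo.com" http://178.91.123.132/    # lands on s2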
Use a certificate to serve the site over HTTPS. To improve your odds of success, use nginx.
At present ingress only supports a single TLS port, 443.
apiVersion: v1
data:                             # required for TLS:
  tls.crt: base64 encoded cert    # the certificate
  tls.key: base64 encoded key     # the private key
kind: Secret
metadata:
  name: testsecret
  namespace: default
type: Opaque
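Instead of base64-encoding by hand, kubectl can build the secret for you; note it then gets type kubernetes.io/tls rather than Opaque. A sketch with a throwaway self-signed cert (for testing only):

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout tls.key -out tls.crt -subj "/CN=foo.bar.com"
$ kubectl create secret tls testsecret --key tls.key --cert tls.crt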
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: no-rules-map
spec:
  tls:
    - secretName: testsecret    # hang the certificate off the Ingress
  backend:
    serviceName: s1
    servicePort: 80
This is only the beginning;
how to configure the web server from here is a whole topic of its own, so please consult the docs.
For the finale of the show... two ways to modify an existing ingress:
# method 1:
$ kubectl edit ingress test
# method 2:
$ kubectl replace -f modified.yaml    # the file you edited
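A round trip for method 2 might look like this (the file name is just an example):

$ kubectl get ingress test -o yaml > my-ingress.yaml    # dump the live object
# ...edit my-ingress.yaml...
$ kubectl replace -f my-ingress.yaml                    # push the change back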