
12th iThome Ironman Contest (鐵人賽)

DAY 16

Day 16 Kubernetes Cross-Site Synchronization Begins - Istio Service Mesh Multi-Cluster Installation and Basic Operations

Today's focus (TAGs): kubernetes, k8s, Istio, Envoy, Sidecar, Sidecar proxy, Micro Service, Service Mesh
Today we will set up Istio-based multi-cluster service connectivity for Kubernetes running on bare metal. For Service Mesh concepts, refer to last year's Ironman series; here the focus is on configuring cross-cluster service links. You will need two Kubernetes clusters first — they may sit on the same LAN, as long as they can reach each other. Because of how the earlier tests and deployments were validated, this article applies only to Istio 1.4; the official documentation covers this setup up to Istio 1.7, so explore the newer versions on your own if interested.

Istio configuration reference

https://istio.io/v1.4/docs/setup/install/multicluster/shared-gateways/

Equipment used

Network Switch

  • Quantity: 1
  • Model: D-Link 1210-28 (L2 Switch)

Cluster architecture

  • Primary cluster
    • Name: sdn-k8s-b5-1
    • CPU: 8 vCPU
    • RAM: 8 GiB
    • Kubernetes: 1.15.5
    • Subnet IP: 10.0.0.206
    • LoadBalancer IP: 10.0.0.46
  • Remote cluster (second cluster)
    • Name: sdn-k8s-b5-2
    • CPU: 8 vCPU
    • RAM: 8 GiB
    • Kubernetes: 1.15.5
    • Subnet IP: 10.0.0.208
    • LoadBalancer IP: 10.0.0.47

Adjusting the two Kubernetes clusters

Managing connections to both clusters

  1. Fetch the remote cluster's kubeconfig
  • Command to read the configuration file
cat $HOME/.kube/config
  • Output (for reference)
root@sdn-k8s-b5-2:~# cat $HOME/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EZ3lOVEEyTXpnd05Wb1hEVE13TURneU16QTJNemd3TlZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTWdLCnZNdlNZaVp5Umd5dXp5YlZGdzlvQzhFRzB4NThYaDdLT0c2VHRGMGwzSjBXcWJRL0lFUmhEaWhoVHlOYzcxMWYKSjhFRHJZRDJlMG9za1pQeTUrUGtpQ0JpZ1FPZ0lEWGo5Vkl2RG51TThFRW94TFI1Q0JzZWtoR292WCtBRnhZMQpla29zN3V4NW9UcFVlY1pDYThrNDYrWDRTZTFBWWc1VU0wVUE1Mm5PWmROajk3ODNzckw0eW1hbXdDZ0J6YnExCnorZWhpZWVHOHRENDZKRzNjTjA2algybTFZdldPNlV4UURwblF4SHVFZnMyaU5EeU9ROEpCd1I4cjRMUnI0WFcKWmUrMUYvd3BQZTBYc3FrVEpCY3BxK24ycXZ0YktIQW43Rk9PNzVWdVZLVnRzZGQ1V1dXTTlmSkpsQXp1cjJjTwpVZ3NuNG9uRXdQWnBteG1wT0FFQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFDcHFPb1ZNRUdRVXBzWnczVEJBMVc5K2VkZVoKSWZuVnltaEd6K0QrWkVtaVl4WGRDNW5xL1dUbVcxaWRQNEM0d04xc0FBMHkvYjR2N01RWEVzQXdoZXU4aU90cwoyUi8wM25vdUlHaWo5d2gvSzF0L0dSVVhlWnh2UjZZVWJ5TUpvQzhZcE1ObjRkdExySWdYZ29VampUSkZFWWhkCkpIZkNrbGc3LzNVVGE3OXJTOHBlM0x4aHA2ejJJNTdjVzUwWTU3SEZoN2JTd1Yzd1c0ekJvUkxQQXhVamo3TlEKUmlLY3JmTEYyK3hpZkpMSEpzaDdRNDZ6Q3dHMHFyd0hFSkxNdFhpU0h2VXRFUmI4bi9SVVFxMm5Nb2owZVk4TQpYN09sWklPNTBkbUtTUlpRVGpsVE1jS2RYRDhweVlRR0Z0NmpCMzZ4TDBWU1FMVGtLV2RyVWJrK1d6ND0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://10.0.0.208:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJRkgxTDlsWFFXWTR3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURBNE1qVXdOak00TURWYUZ3MHlNVEE0TWpVd05qTTRNRGRhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXdpQmlaNjl1Q0NrMDY0RysKRm1rdFNqZHFlRmFHaWtydW90eGRRdFVya2djeGVNTEVtSFVrU2ErZGRrTFp0eXFCcTJWK3ZuWVFDNUlBZnV3MgpKQ2JiVHdVdkk1c1pFUVY5NW82UDZ4WkxFTWtpRi9DT0lmQmhrYnNQZ1BuVEtKWFhYSXJ2aDdBS0tjS2x6TEV6Cm9DWWRQZndJTTlaODNrenRYR3hvajczelUvTjd6R0I5bU5GYmxYNkRtQU1BaGc3L0JzRVgwWTVRUlYvUm1xOHYKOW9DaGdLSHBrK2VjSUpNUW1RQUdNcG1oNkY5S3lrUmczUzRsR2czaWFIVWZHSlFRdStERHdFRWprRk14MS9udQpkOWx1dG9HM1BVeFZCcTlFeldSZ0RZWDByQzBvYkNmSnJReTl6S2YvVVVLUkdJbVlLU3FxU0JOTlAwMGZMVzhXClhDaFFsd0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFETjd3bmFCUXN3WGpMa2cvNFBMcEwzbzVrS2MwM1kxOTNoagpUYXV0N3dnNXE2Zlk0cHRLV1YyblM3U2VRUWtpd1kvbVA2RGZPU2dtd3FOWnhNbHY3NkM3MTZPM1FoWmhsOHhNCk9TVkY0TFZSVUtMUWVEc2Nid2s3WTBtbnBSd3BnK1M0REVPanNYMmRkYzNjelM4eXdTck1YYWZidHdyQlN0UnMKcFlWYm0xNmVVQ3BIa2hCZU1GRVJBd1pBb2VVOTJ1TjdseGNDYlJPWmdjTmQ0Um9TMVcwWjEzenNsYjVVZmppegp5MUx6K1ZHVUFCeXNVbnZ0TEgrSmJ1MTgxVEx3RUxkQjZSQlhRVTRtbEN3TWtQWkN6YjNYTlF6dTZyMzRCQUFaCmpiUkdxcHBvZXZGbVE5bDYzaFV6aXdWOWJwUHZmMlMxQytxajJiR2dBM1MyZTkxVUpyaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBd2lCaVo2OXVDQ2swNjRHK0Zta3RTamRxZUZhR2lrcnVvdHhkUXRVcmtnY3hlTUxFCm1IVWtTYStkZGtMWnR5cUJxMlYrdm5ZUUM1SUFmdXcySkNiYlR3VXZJNXNaRVFWOTVvNlA2eFpMRU1raUYvQ08KSWZCaGtic1BnUG5US0pYWFhJcnZoN0FLS2NLbHpMRXpvQ1lkUGZ3SU05Wjgza3p0WEd4b2o3M3pVL043ekdCOQptTkZibFg2RG1BTUFoZzcvQnNFWDBZNVFSVi9SbXE4djlvQ2hnS0hwaytlY0lKTVFtUUFHTXBtaDZGOUt5a1JnCjNTNGxHZzNpYUhVZkdKUVF1K0REd0VFamtGTXgxL251ZDlsdXRvRzNQVXhWQnE5RXpXUmdEWVgwckMwb2JDZkoKclF5OXpLZi9VVUtSR0ltWUtTcXFTQk5OUDAwZkxXOFdYQ2hRbHdJREFRQUJBb0lCQUZCeWlUVTh2eFdFdGhpTwp5TTZTd2FFSy9BVm9uaEs3WU05L0VPcjhXalVHNUJxT1pGaGwyeWJxTHcvdVBqa28xVm5KRXRBdEx4TU1hMFl0CjczWGw0R2FMMkhBaCt5NVJuMDRuY0Q3VkcwQ1dpWmx3S0FhcWpsU05ONnlVVzB4clpEZEdvR01Uc2ZLQ1pxRkEKSWd2Ukg3Y3JOZDc5bVB1cTE0YkFxa2cvU0pKV0VGWFZ2MTk1ZUtDdndPb3hqaUQ5cW0xSUJZeTNBM2Q5Sm45RQovbGo0OXBFYUlpSGMxaWhodkdtNHFBa1NtK0FuVGVWSGlXTjk5enYwVjdHcDg3VVdKQzQ5TWtxZjdIc01zN2Y2CmNyb0xObjNiRHk3cnZtWjJoc3lrQ3NBT1k1eG9pZ0NuOGdyQklqMDdBQTZ4d3JTTGtqQmVRRFF0NDZaa2VvWDIKUHlSdDdQa0NnWUVBOHlRMk1kWFhXOUtHMU9BWEV5c2svT1JuRS9uTzhNT2RXUkhsdSt5dDdNR0tnTTlndjJCOQorSm9UMEtYSUtRQ2laempOcVJnRHhCamo0TnZXU0NicklyNEFLakJOYmJQNmViUm82WkVxUXNxWkh1ODJZUVUyCklBVFQvbHU5NWNBUGxlUGxydUhZemZIbTRmZ3JDSWhLVTMyVENPU3ZYSThFUkNsbFZIcm9kL1VDZ1lFQXpHU1UKaUx4MGk1RHVURWlUMjNXaEZjZEdEV21neGsrc2M2OTNiREkvaGhqQkRrSndjbmh0V2Q2ZWhjbFloTWpBd3paNApnd3pwcDBIUHQ4em9zRW1uRzZEeGNHYVpKUWZwWkN6bGlRTExCNXBEMlBCcFAxaDV3MmxjOS9zeTloMTFoYk9nCnF6c1QvZ3Z0YWZ2cSs0ZDREWVQ5SWlJd3A3TmRuTURKTlJGOXF0c0NnWUJRbmpjaCt3ZDNPS3pnTkpVeUUrSWwKd0EyMWYrVHZ5OHlHVmZyWWZyZUVndi9MaWZkSVBWUkhjNzhTTllYU29wVTJxSXo0ZmkveGUxZERuV0RGZDdJTApTUGlCQkpjSHd0OVFMMU9CN2xJVzUxb3grWnNNUEZBZitiblk0czVxT1c1eGdxa0xmWE1IaGlmSjBTRmpxTjBNCkpkejAyKzZSUUJKb0QxbTcweXoxYVFLQmdRQ1c3ejl3cWhvMlpsUlRLTlZuSHJwUjV0SW9YWFJJZmRXUHFHZTgKRW04dWkyRWxNcEx2TlZjckltWlZ6Wlg4bUhNZ3RUelJLZHZ4azN2Yzh3aHlCakhOQ1ZEQi9FSGpRckJyTld4YgpmU0NKQUxaUm9WZFhXL0t2QjBPTUxJZzVqdytXS0V2aHBzTGd1OVlhaWRuQTNRMGVqcktQWGtnbnp5QzEvUGVZCnBNMzBPUUtCZ1FDeGxRbWZKY1BPZTZ
NNjhMWElhTzdWd3dCbkhaTmlBUzcwTWQzK1V2bWtGd2ZLNjNyVnVlUHcKSnNtbGpEV1VNSGFxcWdMTzNiUmxvdjJDTmR6MVdtUS9NdnhJTDZMdFl4dVNNT0d4ZG02VkdPNmQ0aVpVd1ZMQQpIMUZuLzA0YllpZit0TzlsNnU2UmJqcGFNY2pseThJUWpNQkZKTDRNMExyV1Y2MHdiVXJXL3c9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
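A note on those long fields: certificate-authority-data, client-certificate-data, and client-key-data are simply the base64 encoding of the corresponding PEM files, which is why they can be copied verbatim between kubeconfig files. A minimal round-trip sketch with a placeholder string (not a real certificate):

```shell
# The *-data fields in a kubeconfig are plain base64 of PEM content.
# Round-trip with a placeholder header line (not a real certificate):
PEM="-----BEGIN CERTIFICATE-----"
ENC=$(printf '%s' "$PEM" | base64 | tr -d '\n')
printf '%s' "$ENC" | base64 --decode
```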
  2. Merge the remote cluster's information into the primary cluster's kubeconfig
  • Paste in the items extracted from the remote cluster, and adjust the contexts so you can switch between clusters later.
    • cluster > certificate-authority-data
    • cluster > server
    • user > client-certificate-data
    • user > client-key-data
nano $HOME/.kube/config
  • The modified file
  • context > cluster maps to clusters > cluster > name
  • context > user maps to users > user > name
  • context > name is the name used later to switch contexts from the CLI
root@sdn-k8s-b5-1:~# cat $HOME/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EZ3lOVEEyTXpneE5Wb1hEVE13TURneU16QTJNemd4TlZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDRqCmNvc3YyQ01hK05SeTlBNS9hRUdzbzRta3RYTzNkejFpRlFjR29PRzYvbkwvb3dRNG92UjVnbDBRZHZvaWVhNisKVEIzeTY3bXlES3hqVHVwSlBhTXFQaUhZa3NVaWdheWVsZUFnTkJHaDBJYWJmRHBGUEZWWEFoQklJTlI0VGJMVApIeVV0M2RqM2ZOMTVOeWZCWlBYR3N4cjhBTWVscS9Mdmk1NjNtbmRDQ01vbXpNWHNCV2lvMVl0dllRM1lmc1ZICmFtUnJUb0R6UDZydkFQdEJ2YUpoakhOdUxZMFd1QlcxSWloTEVtNUdZZUN6RkFBZ05tSVFCVWxxeUJSbVl3MmUKOUZzelJoMkQwR1FwckZaZzgwNWd5cGJNcitkbXpXRkU5L1IxTkZxR2xFQVZQMW40OUNwM3B3eWFVZDIvWEM1dQo2anJyZjdXZHAvSmZhS1JFcm1NQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFFTUw5L3I4Wkx2NGxkWmtGVk9RcmZJZGsxUU0KZGl0MzRCdFkrL3hsV2Q3V3JCK1lHRnJSWHJtL1hGNXdxRGVwOG85bm9RaHRLKzZZY3UxeDBldWdLSE82WUJqQwpqRm5mTlJqVjB2dEVlMUlMTU15KzlYVDZqQk02QWYwN2wzZjF1U3J2YjdOTDE5TnZmd0Z6M0JacjJTNFNMa2tjClVrSUtId0FtV3dUWFAwbzEwZitwbHFTdWU4Mm8vakcybHFKSkJkdFo4eDdKNFo2d3VGZTdpZzZLWFBJOVpxclAKY3ROSUNlSXJuVC9uQTFPZkwrM0kyRnpDQTlia3NseE1aamw1VmdKWmVGSDhnakVwbWh5STE3VUZPRGVHN3F2ZAovcGdQSzJUcEw3M1g0WGtmaEphRTU1SzNXWkxqcWN2VlJiNzBXRHpodUFrcElnYk9xNmhlK1A1eExhUT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://10.0.0.206:6443
  name: cluster-206
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EZ3lOVEEyTXpnd05Wb1hEVE13TURneU16QTJNemd3TlZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTWdLCnZNdlNZaVp5Umd5dXp5YlZGdzlvQzhFRzB4NThYaDdLT0c2VHRGMGwzSjBXcWJRL0lFUmhEaWhoVHlOYzcxMWYKSjhFRHJZRDJlMG9za1pQeTUrUGtpQ0JpZ1FPZ0lEWGo5Vkl2RG51TThFRW94TFI1Q0JzZWtoR292WCtBRnhZMQpla29zN3V4NW9UcFVlY1pDYThrNDYrWDRTZTFBWWc1VU0wVUE1Mm5PWmROajk3ODNzckw0eW1hbXdDZ0J6YnExCnorZWhpZWVHOHRENDZKRzNjTjA2algybTFZdldPNlV4UURwblF4SHVFZnMyaU5EeU9ROEpCd1I4cjRMUnI0WFcKWmUrMUYvd3BQZTBYc3FrVEpCY3BxK24ycXZ0YktIQW43Rk9PNzVWdVZLVnRzZGQ1V1dXTTlmSkpsQXp1cjJjTwpVZ3NuNG9uRXdQWnBteG1wT0FFQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFDcHFPb1ZNRUdRVXBzWnczVEJBMVc5K2VkZVoKSWZuVnltaEd6K0QrWkVtaVl4WGRDNW5xL1dUbVcxaWRQNEM0d04xc0FBMHkvYjR2N01RWEVzQXdoZXU4aU90cwoyUi8wM25vdUlHaWo5d2gvSzF0L0dSVVhlWnh2UjZZVWJ5TUpvQzhZcE1ObjRkdExySWdYZ29VampUSkZFWWhkCkpIZkNrbGc3LzNVVGE3OXJTOHBlM0x4aHA2ejJJNTdjVzUwWTU3SEZoN2JTd1Yzd1c0ekJvUkxQQXhVamo3TlEKUmlLY3JmTEYyK3hpZkpMSEpzaDdRNDZ6Q3dHMHFyd0hFSkxNdFhpU0h2VXRFUmI4bi9SVVFxMm5Nb2owZVk4TQpYN09sWklPNTBkbUtTUlpRVGpsVE1jS2RYRDhweVlRR0Z0NmpCMzZ4TDBWU1FMVGtLV2RyVWJrK1d6ND0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://10.0.0.208:6443
  name: cluster-208
contexts:
- context:
    cluster: cluster-206
    user: user-206
  name: kubernetes-206
- context:
    cluster: cluster-208
    user: user-208
  name: kubernetes-208
current-context: kubernetes-206
kind: Config
preferences: {}
users:
- name: user-206
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJUzN1eWZla2tGckV3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURBNE1qVXdOak00TVRWYUZ3MHlNVEE0TWpVd05qTTRNVGRhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXE0a0dkakZ0Tk91WGNoVFoKWHFEUW9tYWJ4OXhUc0kwWW5ubTdkNnBoVkNXdVRwdVZZQ0M4RUxONXJRVVZZWjJ0OWlhRXpRVjZWTjhZektGOAo4YmxDTWIzYkNPN0YwR3Q5bU1jejZ0ZWw0amtUYUowMUxiczN0TzBac1FRcjBvOVJMa2ZrZHlkbkRJWFVFV29KCmxwdWhPYlErUkNFd3ZITzJyd0VwL21hb0RNbUt2eTJvTUMyd2tGenZXN0E5Rmh2Q3ovS0hNOVJQSjBDR0c1dUIKUjloZ3dGTm1Cc1pXeC80b3d5TG1QQ3loTUpuVUFLRGNDRFZtSU9oRmw4L3FUWUlmb0hZN25LVHVDVVlEWnJKcQpOU2JzUDBuakNIcVlKVjU2b21sTkFEbzh1L29NdHMvaStHeWZybUR3ejhHYWEwR3VNelVEYTRXbFMrSjdMUFhvCnQxRVQzUUlEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFLVjNreFc4Tk10Y2YvZHJ4aTB6aHhsckRINGlET1hkRGprbgpid09XUEtIaEwrRW8rT2hkV1hLSGlqMENQYjh3dUo0NVdwKzh1Z0tvbWQwd3pxbEJVVlp4YW5FWkE0aDFzOG9xCkdHRmJLUDVNdWFuU0l4dWx6OElVSWdkcEtxYVF6Q3pIZ1hQRmhoUTVQNndYbkQyMHJOeEluc0tlTmphYTI4ODYKRW1zbFRsS0V3RUpwOVVmWXhiZ2J1YjVlYjBERE9jR3BPalBtZjhGUGNyVkVpbTlmNWRmZFljWWU5MUwzNXl3UAo3VmhWSHRGUkdOaVM5Zm5Kd1F5ZEJncjFRMlVSREwzRytXTnNVTWRkSVNNb3J0TVp6dkpGYlJpNVVLUEZwbmtBCmEydmpZc242ZjRMc2pOREswZjU4NlhOd2hXWnRSWm5NUEN1S0cvTCtDb0x6TlB6SWo3UT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBcTRrR2RqRnROT3VYY2hUWlhxRFFvbWFieDl4VHNJMFlubm03ZDZwaFZDV3VUcHVWCllDQzhFTE41clFVVllaMnQ5aWFFelFWNlZOOFl6S0Y4OGJsQ01iM2JDTzdGMEd0OW1NY3o2dGVsNGprVGFKMDEKTGJzM3RPMFpzUVFyMG85UkxrZmtkeWRuRElYVUVXb0pscHVoT2JRK1JDRXd2SE8ycndFcC9tYW9ETW1LdnkybwpNQzJ3a0Z6dlc3QTlGaHZDei9LSE05UlBKMENHRzV1QlI5aGd3Rk5tQnNaV3gvNG93eUxtUEN5aE1KblVBS0RjCkNEVm1JT2hGbDgvcVRZSWZvSFk3bktUdUNVWURackpxTlNic1AwbmpDSHFZSlY1Nm9tbE5BRG84dS9vTXRzL2kKK0d5ZnJtRHd6OEdhYTBHdU16VURhNFdsUytKN0xQWG90MUVUM1FJREFRQUJBb0lCQVFDTlhnbElRU1hLVmxyaQp0eElKcmFra0hrSkdiV3MvZHBrU2lpcVl6WDhXOVZMNUQ3b0VsaFhJQWRIR2FRa2RBUEZNaXFRcHYxajVOei9kCjdUem1qaEppb2lBdzlXOXJmQnJ2WFVTSlI1NDdtV1JJZEQ5T2FCdlo3UW1lWEZ5dFZGWElPWkd0TFhqODFoSlgKSTdleE9xT2R4ZEVISHY5bVlFcnZZWnMxUVc4LzBVTysvcmhwb3RKYVBoNTRJOG5tUDVzYkNMUW1GL2lMVmVhVgoyeWtNNEo1NFpKL0dVTHc4M2J2QjN6dk1RMkpDVU1EdzQ0SHpTRlhqeVRFRXQ1dGYyb0RvM05TYll0UG4wMXd4CjFnNGhpbTVoMVBnYWRkT3l2VCtJYmUvUFNveE9uTjB3MCtrS1FCVGUxQTRUQWE0VDEzeHF6clRteFhsdVBMczkKL1pHUjYrWjFBb0dCQU1jWkttbjZPWExYaEFORHBrMkgyVFp1OE55R2cwb0N3R3kxL3BrN1pUKzFibkpvN1gycQp5YW02S0kyd1lPVjlNSDBVUEtkVDl1aURZZitSRFFOZmtCZmp4TVN1SU50VUZXYlk4YzgyQjdVNFZ6YWVLQW4wCm5uMWhDMVplT2Myc3JpTVZtSVo3Y0p3STdoaDZWMXMxalZqZ0FiNFFhQU9uaVdQYnpTMlJoeWFMQW9HQkFOeVAKUEYvVUN4MkZESUd2U2Q3UytZM2NCTkFrSXlqa3JDYXJZUDNMWnhIR2t5UnVpK0xBZnNVeDhaOGNCL1JOL1YzQwpoZHF6bEVVenN2d1p2dlFTR0tWempOeWZYN0xiWUlMNDQ1bHdhTXppSkZVYUZVblJCVjVxNHArMFNuRk5DQUx6CmVyUmk5NXhFZVNGbG9ZYkprc0ZvMFZ1bmdBd1IvTnVHclVNUlhPUTNBb0dCQUtCTnU5K3VUOWtPZTBVaGorSDIKMGtaSWx2Z0grZWQ2UmJLQjZtYzM4bktVUTBRdEJhTGNBeGo4UVRDcjVhaUEzcXltd1pzOE9KM0hRdjFCcmNlYwpodWtsUThYVUtiSk9oaGpUN2dZWGk2YzJvTW5pRjN6RWoyT0Y0bG44N2UrUzdIWmxLZlNGcVFxSkNpTjlSWjZ6CmhJWWRmbW1vemdhN094ekMyZldwcWJhM0FvR0FmRkdRS2tPTzhGaXFMLzdwbUZzNnBxYzVYMGkvT0xHTUIwL2EKSDdPaXFQWlF3ZHc5cE5Yem5wc0VJamJlbE9uUXdpUis1a01LYytjc0g4VXpTTWRhZFFlb2drS2k0bUdkQ0xYWgpOQWVVU3NlOHl1c2t6TEt3WUFQSE1WV2lFRExuTFNLb0t6ME5iRnQ4RzBMNXhNdWhtTHJJSnUxRzA0YmdDNnpoCnhFZnBJQWtDZ1lCWVFIK3dYNy9Bblh
YOEpoWFREQVlBTmVCNmE3WnFEWnpKb2dqcWdodGgvWG9lbGhkUHkzaFAKUUNZVERvaXRYQ25ZajllRVFFMHJRMS8ra0dwQzZCa3Y1ZWVJV25KbWkxaGx3VzJaNVQ5MTRFcWdtd0t4bCsvMApPc1dTeFhGV2RRazhpbHBVTE1CQXc1MmRhd3NEaStwL1F5anVweHNzenNBVHZ2eXRJT1VEUWc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
- name: user-208
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJRkgxTDlsWFFXWTR3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURBNE1qVXdOak00TURWYUZ3MHlNVEE0TWpVd05qTTRNRGRhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXdpQmlaNjl1Q0NrMDY0RysKRm1rdFNqZHFlRmFHaWtydW90eGRRdFVya2djeGVNTEVtSFVrU2ErZGRrTFp0eXFCcTJWK3ZuWVFDNUlBZnV3MgpKQ2JiVHdVdkk1c1pFUVY5NW82UDZ4WkxFTWtpRi9DT0lmQmhrYnNQZ1BuVEtKWFhYSXJ2aDdBS0tjS2x6TEV6Cm9DWWRQZndJTTlaODNrenRYR3hvajczelUvTjd6R0I5bU5GYmxYNkRtQU1BaGc3L0JzRVgwWTVRUlYvUm1xOHYKOW9DaGdLSHBrK2VjSUpNUW1RQUdNcG1oNkY5S3lrUmczUzRsR2czaWFIVWZHSlFRdStERHdFRWprRk14MS9udQpkOWx1dG9HM1BVeFZCcTlFeldSZ0RZWDByQzBvYkNmSnJReTl6S2YvVVVLUkdJbVlLU3FxU0JOTlAwMGZMVzhXClhDaFFsd0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFETjd3bmFCUXN3WGpMa2cvNFBMcEwzbzVrS2MwM1kxOTNoagpUYXV0N3dnNXE2Zlk0cHRLV1YyblM3U2VRUWtpd1kvbVA2RGZPU2dtd3FOWnhNbHY3NkM3MTZPM1FoWmhsOHhNCk9TVkY0TFZSVUtMUWVEc2Nid2s3WTBtbnBSd3BnK1M0REVPanNYMmRkYzNjelM4eXdTck1YYWZidHdyQlN0UnMKcFlWYm0xNmVVQ3BIa2hCZU1GRVJBd1pBb2VVOTJ1TjdseGNDYlJPWmdjTmQ0Um9TMVcwWjEzenNsYjVVZmppegp5MUx6K1ZHVUFCeXNVbnZ0TEgrSmJ1MTgxVEx3RUxkQjZSQlhRVTRtbEN3TWtQWkN6YjNYTlF6dTZyMzRCQUFaCmpiUkdxcHBvZXZGbVE5bDYzaFV6aXdWOWJwUHZmMlMxQytxajJiR2dBM1MyZTkxVUpyaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBd2lCaVo2OXVDQ2swNjRHK0Zta3RTamRxZUZhR2lrcnVvdHhkUXRVcmtnY3hlTUxFCm1IVWtTYStkZGtMWnR5cUJxMlYrdm5ZUUM1SUFmdXcySkNiYlR3VXZJNXNaRVFWOTVvNlA2eFpMRU1raUYvQ08KSWZCaGtic1BnUG5US0pYWFhJcnZoN0FLS2NLbHpMRXpvQ1lkUGZ3SU05Wjgza3p0WEd4b2o3M3pVL043ekdCOQptTkZibFg2RG1BTUFoZzcvQnNFWDBZNVFSVi9SbXE4djlvQ2hnS0hwaytlY0lKTVFtUUFHTXBtaDZGOUt5a1JnCjNTNGxHZzNpYUhVZkdKUVF1K0REd0VFamtGTXgxL251ZDlsdXRvRzNQVXhWQnE5RXpXUmdEWVgwckMwb2JDZkoKclF5OXpLZi9VVUtSR0ltWUtTcXFTQk5OUDAwZkxXOFdYQ2hRbHdJREFRQUJBb0lCQUZCeWlUVTh2eFdFdGhpTwp5TTZTd2FFSy9BVm9uaEs3WU05L0VPcjhXalVHNUJxT1pGaGwyeWJxTHcvdVBqa28xVm5KRXRBdEx4TU1hMFl0CjczWGw0R2FMMkhBaCt5NVJuMDRuY0Q3VkcwQ1dpWmx3S0FhcWpsU05ONnlVVzB4clpEZEdvR01Uc2ZLQ1pxRkEKSWd2Ukg3Y3JOZDc5bVB1cTE0YkFxa2cvU0pKV0VGWFZ2MTk1ZUtDdndPb3hqaUQ5cW0xSUJZeTNBM2Q5Sm45RQovbGo0OXBFYUlpSGMxaWhodkdtNHFBa1NtK0FuVGVWSGlXTjk5enYwVjdHcDg3VVdKQzQ5TWtxZjdIc01zN2Y2CmNyb0xObjNiRHk3cnZtWjJoc3lrQ3NBT1k1eG9pZ0NuOGdyQklqMDdBQTZ4d3JTTGtqQmVRRFF0NDZaa2VvWDIKUHlSdDdQa0NnWUVBOHlRMk1kWFhXOUtHMU9BWEV5c2svT1JuRS9uTzhNT2RXUkhsdSt5dDdNR0tnTTlndjJCOQorSm9UMEtYSUtRQ2laempOcVJnRHhCamo0TnZXU0NicklyNEFLakJOYmJQNmViUm82WkVxUXNxWkh1ODJZUVUyCklBVFQvbHU5NWNBUGxlUGxydUhZemZIbTRmZ3JDSWhLVTMyVENPU3ZYSThFUkNsbFZIcm9kL1VDZ1lFQXpHU1UKaUx4MGk1RHVURWlUMjNXaEZjZEdEV21neGsrc2M2OTNiREkvaGhqQkRrSndjbmh0V2Q2ZWhjbFloTWpBd3paNApnd3pwcDBIUHQ4em9zRW1uRzZEeGNHYVpKUWZwWkN6bGlRTExCNXBEMlBCcFAxaDV3MmxjOS9zeTloMTFoYk9nCnF6c1QvZ3Z0YWZ2cSs0ZDREWVQ5SWlJd3A3TmRuTURKTlJGOXF0c0NnWUJRbmpjaCt3ZDNPS3pnTkpVeUUrSWwKd0EyMWYrVHZ5OHlHVmZyWWZyZUVndi9MaWZkSVBWUkhjNzhTTllYU29wVTJxSXo0ZmkveGUxZERuV0RGZDdJTApTUGlCQkpjSHd0OVFMMU9CN2xJVzUxb3grWnNNUEZBZitiblk0czVxT1c1eGdxa0xmWE1IaGlmSjBTRmpxTjBNCkpkejAyKzZSUUJKb0QxbTcweXoxYVFLQmdRQ1c3ejl3cWhvMlpsUlRLTlZuSHJwUjV0SW9YWFJJZmRXUHFHZTgKRW04dWkyRWxNcEx2TlZjckltWlZ6Wlg4bUhNZ3RUelJLZHZ4azN2Yzh3aHlCakhOQ1ZEQi9FSGpRckJyTld4YgpmU0NKQUxaUm9WZFhXL0t2QjBPTUxJZzVqdytXS0V2aHBzTGd1OVlhaWRuQTNRMGVqcktQWGtnbnp5QzEvUGVZCnBNMzBPUUtCZ1FDeGxRbWZKY1BPZTZ
NNjhMWElhTzdWd3dCbkhaTmlBUzcwTWQzK1V2bWtGd2ZLNjNyVnVlUHcKSnNtbGpEV1VNSGFxcWdMTzNiUmxvdjJDTmR6MVdtUS9NdnhJTDZMdFl4dVNNT0d4ZG02VkdPNmQ0aVpVd1ZMQQpIMUZuLzA0YllpZit0TzlsNnU2UmJqcGFNY2pseThJUWpNQkZKTDRNMExyV1Y2MHdiVXJXL3c9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=

Testing context switching

  • Primary cluster context: kubernetes-206
  • Remote cluster context: kubernetes-208
kubectl config use-context <context-name>
  • Switch to the primary cluster
root@sdn-k8s-b5-1:~# kubectl config use-context kubernetes-206
Switched to context "kubernetes-206".
root@sdn-k8s-b5-1:~# kubectl get pod -A
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   coredns-5c98db65d4-2dkl4               1/1     Running   0          22m
kube-system   coredns-5c98db65d4-j2ntt               1/1     Running   0          22m
kube-system   etcd-sdn-k8s-b5-1                      1/1     Running   0          21m
kube-system   kube-apiserver-sdn-k8s-b5-1            1/1     Running   0          21m
kube-system   kube-controller-manager-sdn-k8s-b5-1   1/1     Running   0          21m
kube-system   kube-flannel-ds-amd64-rqv4n            1/1     Running   0          2m41s
kube-system   kube-proxy-tgvtf                       1/1     Running   0          22m
kube-system   kube-scheduler-sdn-k8s-b5-1            1/1     Running   0          21m
  • Switch to the remote cluster
root@sdn-k8s-b5-1:~# kubectl config use-context kubernetes-208
Switched to context "kubernetes-208".
root@sdn-k8s-b5-1:~# kubectl get pod -A
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   coredns-5c98db65d4-fgn68               1/1     Running   0          22m
kube-system   coredns-5c98db65d4-j8njg               1/1     Running   0          22m
kube-system   etcd-sdn-k8s-b5-2                      1/1     Running   0          21m
kube-system   kube-apiserver-sdn-k8s-b5-2            1/1     Running   0          21m
kube-system   kube-controller-manager-sdn-k8s-b5-2   1/1     Running   0          21m
kube-system   kube-flannel-ds-amd64-bpljv            1/1     Running   0          2m46s
kube-system   kube-proxy-2z7p7                       1/1     Running   0          22m
kube-system   kube-scheduler-sdn-k8s-b5-2            1/1     Running   0          21m

Working around the lack of an external LoadBalancer in on-premises Kubernetes

Installing MetalLB on Kubernetes

Reference article: https://k2r2bai.com/2019/09/27/ironman2020/day12/

kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.1/manifests/metallb.yaml

Verify that the MetalLB services are running

  • Primary cluster
root@sdn-k8s-b5-1:~/istio-1.4.6# kubectl -n metallb-system get pod
NAME                        READY   STATUS    RESTARTS   AGE
controller-55d74449-rnxgc   1/1     Running   0          31m
speaker-bsmnc               1/1     Running   0          31m
  • Remote cluster
root@sdn-k8s-b5-2:~# kubectl -n metallb-system get pod
NAME                        READY   STATUS    RESTARTS   AGE
controller-55d74449-l4vm5   1/1     Running   0          31m
speaker-b6n4d               1/1     Running   0          31m

Configure the IP range MetalLB advertises

  • Primary cluster
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      auto-assign: true
      addresses:
      - 10.0.0.46-10.0.0.46
EOF
  • Remote cluster
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      auto-assign: true
      addresses:
      - 10.0.0.47-10.0.0.47
EOF

Downloading, installing, and deploying Istio

Download and install Istio on the primary cluster

  • The Istio version must be pinned to 1.4.6 here
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.4.6 sh -
export PATH="$PATH:/root/istio-1.4.6/bin"

Configuring the Istio multi-cluster primary

  • --context=$CTX_CLUSTER1 may be left unset; the kubectl config use-context call below already switches to the right cluster
kubectl config use-context kubernetes-206
kubectl create --context=$CTX_CLUSTER1 ns istio-system
cd istio-1.4.6/
kubectl create --context=$CTX_CLUSTER1 secret generic cacerts -n istio-system --from-file=samples/certs/ca-cert.pem --from-file=samples/certs/ca-key.pem --from-file=samples/certs/root-cert.pem --from-file=samples/certs/cert-chain.pem
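The $CTX_CLUSTER1 and $CTX_CLUSTER2 variables used throughout the remaining commands are assumed to be exported once per shell, matching the context names created in the kubeconfig earlier:

```shell
# Assumed once-per-shell setup for the variables used in later commands.
# The names match the contexts defined in $HOME/.kube/config above.
export CTX_CLUSTER1=kubernetes-206   # primary cluster (10.0.0.206)
export CTX_CLUSTER2=kubernetes-208   # remote cluster (10.0.0.208)
```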

Istio multi-cluster settings (configured on the primary cluster)

nano install/kubernetes/operator/examples/multicluster/values-istio-multicluster-primary.yaml
  • The original configuration file
apiVersion: install.istio.io/v1alpha2
kind: IstioControlPlane
spec:
  values:
    security:
      selfSigned: false
    gateways:
      istio-ingressgateway:
        env:
          ISTIO_META_NETWORK: "network1"
    global:
      mtls:
        enabled: true
      controlPlaneSecurityEnabled: true
      proxy:
        accessLogFile: "/dev/stdout"
      network: network1
      meshExpansion:
        enabled: true
    pilot:
      meshNetworks:
        networks:
          network1:
            endpoints:
            - fromRegistry: Kubernetes
            gateways:
            - address: 0.0.0.0
              port: 443
          network2:
            endpoints:
            - fromRegistry: n2-k8s-config
            gateways:
            - address: 0.0.0.0
              port: 443
  • Modify the configuration file
    • Set pilot.meshNetworks.networks.network1.gateways.address to the ingress gateway IP of the primary cluster's svc/istio-ingressgateway — here, the LoadBalancer IP configured earlier (and likewise 10.0.0.47 for network2).
apiVersion: install.istio.io/v1alpha2
kind: IstioControlPlane
spec:
  values:
    security:
      selfSigned: false
    gateways:
      istio-ingressgateway:
        env:
          ISTIO_META_NETWORK: "network1"
    global:
      mtls:
        enabled: true
      controlPlaneSecurityEnabled: true
      proxy:
        accessLogFile: "/dev/stdout"
      network: network1
      meshExpansion:
        enabled: true
    pilot:
      meshNetworks:
        networks:
          network1:
            endpoints:
            - fromRegistry: Kubernetes
            gateways:
            - address: 10.0.0.46
              port: 443
          network2:
            endpoints:
            - fromRegistry: n2-k8s-config
            gateways:
            - address: 10.0.0.47
              port: 443

Apply the multi-cluster configuration file and deploy Istio on the primary cluster

  • Deployment command
istioctl manifest apply --context=$CTX_CLUSTER1 -f install/kubernetes/operator/examples/multicluster/values-istio-multicluster-primary.yaml
  • Command output
root@sdn-k8s-b5-1:~/istio-1.4.6# istioctl manifest apply --context=$CTX_CLUSTER1 \
>   -f install/kubernetes/operator/examples/multicluster/values-istio-multicluster-primary.yaml
- Applying manifest for component Base...
✔ Finished applying manifest for component Base.
- Applying manifest for component Prometheus...
- Applying manifest for component Citadel...
- Applying manifest for component Injector...
- Applying manifest for component Pilot...
- Applying manifest for component IngressGateway...
- Applying manifest for component Policy...
- Applying manifest for component Galley...
- Applying manifest for component Telemetry...
✔ Finished applying manifest for component Citadel.
✔ Finished applying manifest for component Prometheus.
✔ Finished applying manifest for component Injector.
✔ Finished applying manifest for component Galley.
✔ Finished applying manifest for component Policy.
✔ Finished applying manifest for component IngressGateway.
✔ Finished applying manifest for component Pilot.
✔ Finished applying manifest for component Telemetry.


✔ Installation complete
  • Check the Istio deployment
root@sdn-k8s-b5-1:~/istio-1.4.6# kubectl get pods --context=$CTX_CLUSTER1 -n istio-system
NAME                                      READY   STATUS    RESTARTS   AGE
istio-citadel-556c8d9795-zxsdt            1/1     Running   0          2m45s
istio-galley-7644d8967f-9k8k5             2/2     Running   0          2m45s
istio-ingressgateway-7f987d6cc-7ktvc      1/1     Running   0          2m45s
istio-pilot-977b47c9-lhjx6                2/2     Running   0          2m45s
istio-policy-55598c6497-5w4r5             2/2     Running   1          2m45s
istio-sidecar-injector-7c67976ffd-6zs8h   1/1     Running   0          2m45s
istio-telemetry-86cf7699d8-glgls          2/2     Running   1          2m45s
prometheus-685585888b-skw7m               1/1     Running   0          2m45s

Create the entry point (ingress gateway) on the primary cluster that accepts cross-cluster requests from the remote cluster

kubectl apply --context=$CTX_CLUSTER1 -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cluster-aware-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: tls
      protocol: TLS
    tls:
      mode: AUTO_PASSTHROUGH
    hosts:
    - "*.local"
EOF

Fetch the current ingress gateway information and update the Istio configuration

  • This step is for environments that assign LoadBalancer IPs dynamically; if the IPs were already pinned with MetalLB as above, no change is needed.
  • Command
kubectl get svc istio-ingressgateway -n istio-system
  • Output
root@sdn-k8s-b5-1:~/istio-1.4.6# kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                                                                                                                   AGE
istio-ingressgateway   LoadBalancer   10.106.213.36   10.0.0.46     15020:31743/TCP,80:30316/TCP,443:30045/TCP,15029:30012/TCP,15030:31126/TCP,15031:31282/TCP,15032:31425/TCP,15443:30038/TCP,15011:30178/TCP,8060:31345/TCP,853:30167/TCP   10m
  • What to change
    • Find the pre-filled placeholder IPs (each cluster has two entries to update)
    • Replace them with the LoadBalancer IP queried above
kubectl edit cm -n istio-system --context=$CTX_CLUSTER1 istio
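The kubectl edit step above amounts to a textual substitution: the placeholder gateway addresses become the LoadBalancer IPs just queried. A standalone sketch of that substitution on a sample fragment of the mesh config (the 0.0.0.0 placeholder and GW_IP value are assumptions for illustration):

```shell
# Sketch of the substitution performed during `kubectl edit`: replace the
# placeholder gateway address with the LoadBalancer IP queried above.
# Demonstrated on a sample fragment, not against a live cluster.
GW_IP=10.0.0.46
FRAGMENT='      gateways:
      - address: 0.0.0.0
        port: 443'
printf '%s\n' "$FRAGMENT" | sed "s/address: 0.0.0.0/address: ${GW_IP}/"
```

The same substitution could be scripted end-to-end by piping `kubectl get cm -o yaml` through sed into `kubectl apply -f -`, but interactive `kubectl edit` as above is simpler for a one-off change.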

Configuring the Istio multi-cluster remote

  • Export the primary cluster's gateway address into the LOCAL_GW_ADDR environment variable
export LOCAL_GW_ADDR=$(kubectl get --context=$CTX_CLUSTER1 svc --selector=app=istio-ingressgateway -n istio-system -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}')
echo ${LOCAL_GW_ADDR}
  • Set up the remote cluster's base environment
kubectl config use-context kubernetes-208
kubectl create --context=$CTX_CLUSTER2 ns istio-system
kubectl create --context=$CTX_CLUSTER2 secret generic cacerts -n istio-system --from-file=samples/certs/ca-cert.pem --from-file=samples/certs/ca-key.pem --from-file=samples/certs/root-cert.pem --from-file=samples/certs/cert-chain.pem
CLUSTER_NAME=$(kubectl --context=$CTX_CLUSTER2 config view --minify=true -o jsonpath='{.clusters[].name}')
istioctl manifest apply --context=$CTX_CLUSTER2 \
  --set profile=remote \
  --set values.global.mtls.enabled=true \
  --set values.gateways.enabled=true \
  --set values.security.selfSigned=false \
  --set values.global.controlPlaneSecurityEnabled=true \
  --set values.global.createRemoteSvcEndpoints=true \
  --set values.global.remotePilotCreateSvcEndpoint=true \
  --set values.global.remotePilotAddress=${LOCAL_GW_ADDR} \
  --set values.global.remotePolicyAddress=${LOCAL_GW_ADDR} \
  --set values.global.remoteTelemetryAddress=${LOCAL_GW_ADDR} \
  --set values.gateways.istio-ingressgateway.env.ISTIO_META_NETWORK="network2" \
  --set values.global.network="network2" \
  --set values.global.multiCluster.clusterName=${CLUSTER_NAME} \
  --set autoInjection.enabled=true
  • Command output
root@sdn-k8s-b5-1:~/istio-1.4.6# istioctl manifest apply --context=$CTX_CLUSTER2 \
>   --set profile=remote \
>   --set values.global.mtls.enabled=true \
>   --set values.gateways.enabled=true \
>   --set values.security.selfSigned=false \
>   --set values.global.controlPlaneSecurityEnabled=true \
>   --set values.global.createRemoteSvcEndpoints=true \
>   --set values.global.remotePilotCreateSvcEndpoint=true \
>   --set values.global.remotePilotAddress=${LOCAL_GW_ADDR} \
>   --set values.global.remotePolicyAddress=${LOCAL_GW_ADDR} \
>   --set values.global.remoteTelemetryAddress=${LOCAL_GW_ADDR} \
>   --set values.gateways.istio-ingressgateway.env.ISTIO_META_NETWORK="network2" \
>   --set values.global.network="network2" \
>   --set values.global.multiCluster.clusterName=${CLUSTER_NAME} \
>   --set autoInjection.enabled=true
- Applying manifest for component Base...

✔ Finished applying manifest for component Base.
- Applying manifest for component Citadel...
- Applying manifest for component IngressGateway...
- Applying manifest for component Injector...
- Pruning objects for disabled component Galley...
- Pruning objects for disabled component Policy...
- Pruning objects for disabled component Pilot...
- Pruning objects for disabled component Telemetry...
- Pruning objects for disabled component Prometheus...
✔ Finished pruning objects for disabled component Policy.
✔ Finished pruning objects for disabled component Telemetry.
✔ Finished pruning objects for disabled component Pilot.
✔ Finished pruning objects for disabled component Galley.
✔ Finished pruning objects for disabled component Prometheus.
✔ Finished applying manifest for component Citadel.
✔ Finished applying manifest for component IngressGateway.
✔ Finished applying manifest for component Injector.


✔ Installation complete
  • Check the deployment status
root@sdn-k8s-b5-1:~/istio-1.4.6# kubectl get pods --context=$CTX_CLUSTER2 -n istio-system -l istio!=ingressgateway
NAME                                      READY   STATUS    RESTARTS   AGE
istio-citadel-556c8d9795-lfw7c            1/1     Running   0          6m28s
istio-sidecar-injector-7c67976ffd-jt72z   1/1     Running   0          6m25s
  • Prepare the values needed for n2-k8s-config
CLUSTER_NAME=$(kubectl --context=$CTX_CLUSTER2 config view --minify=true -o jsonpath='{.clusters[].name}')
SERVER=$(kubectl --context=$CTX_CLUSTER2 config view --minify=true -o jsonpath='{.clusters[].cluster.server}')
SECRET_NAME=$(kubectl --context=$CTX_CLUSTER2 get sa istio-reader-service-account -n istio-system -o jsonpath='{.secrets[].name}')
CA_DATA=$(kubectl get --context=$CTX_CLUSTER2 secret ${SECRET_NAME} -n istio-system -o jsonpath="{.data['ca\.crt']}")
TOKEN=$(kubectl get --context=$CTX_CLUSTER2 secret ${SECRET_NAME} -n istio-system -o jsonpath="{.data['token']}" | base64 --decode)
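Before generating the file, it is worth confirming that none of these lookups came back empty — an empty TOKEN, for example, usually means the istio-reader-service-account secret does not exist yet. A small check, with placeholder values substituted so the snippet runs standalone:

```shell
# Warn about any empty variable from the lookups above. Placeholder
# defaults are used here only so the snippet runs outside a cluster.
CLUSTER_NAME=${CLUSTER_NAME:-cluster-208}
SERVER=${SERVER:-https://10.0.0.208:6443}
SECRET_NAME=${SECRET_NAME:-istio-reader-service-account-token}
CA_DATA=${CA_DATA:-placeholder}
TOKEN=${TOKEN:-placeholder}
for v in CLUSTER_NAME SERVER SECRET_NAME CA_DATA TOKEN; do
  eval "val=\${$v}"
  [ -n "$val" ] || echo "warning: $v is empty"
done
```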
  • Generate the n2-k8s-config file used by the secret
cat <<EOF > n2-k8s-config
apiVersion: v1
kind: Config
clusters:
  - cluster:
      certificate-authority-data: ${CA_DATA}
      server: ${SERVER}
    name: ${CLUSTER_NAME}
contexts:
  - context:
      cluster: ${CLUSTER_NAME}
      user: ${CLUSTER_NAME}
    name: ${CLUSTER_NAME}
current-context: ${CLUSTER_NAME}
users:
  - name: ${CLUSTER_NAME}
    user:
      token: ${TOKEN}
EOF
  • Link the primary cluster to the remote cluster
kubectl config use-context kubernetes-206
kubectl create --context=$CTX_CLUSTER1 secret generic n2-k8s-secret --from-file n2-k8s-config -n istio-system
kubectl label --context=$CTX_CLUSTER1 secret n2-k8s-secret istio/multiCluster=true -n istio-system
  • Check the remote cluster's ingress gateway after linking
kubectl config use-context kubernetes-208
kubectl get pods --context=$CTX_CLUSTER2 -n istio-system -l istio=ingressgateway
  • Command output
root@sdn-k8s-b5-1:~/istio-1.4.6# kubectl get pods --context=$CTX_CLUSTER2 -n istio-system -l istio=ingressgateway
NAME                                   READY   STATUS    RESTARTS   AGE
istio-ingressgateway-9fbcdd69c-b4j5r   1/1     Running   0          7m35s
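Right after registering the secret, the ingress gateway can take a while to come up, so a one-shot `kubectl get` may come back empty or not yet Running. A generic retry wrapper (a hypothetical helper of my own, not from the Istio docs) keeps re-running a readiness check:

```shell
# retry N CMD...: re-run CMD up to N times, one second apart, until it succeeds.
retry() {
  n=$1; shift
  for i in $(seq "$n"); do
    "$@" && return 0
    sleep 1
  done
  return 1
}

# e.g. wait up to 60s for the remote ingress gateway to be Running:
# retry 60 sh -c \
#   'kubectl get pods --context=$CTX_CLUSTER2 -n istio-system -l istio=ingressgateway | grep -q Running'
```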

Testing the cross-cluster environment

Deploying the service on the main cluster

  • Deploy the service
kubectl config use-context kubernetes-206
kubectl create --context=$CTX_CLUSTER1 ns sample
kubectl label --context=$CTX_CLUSTER1 namespace sample istio-injection=enabled
kubectl create --context=$CTX_CLUSTER1 -f samples/helloworld/helloworld.yaml -l app=helloworld -n sample
kubectl create --context=$CTX_CLUSTER1 -f samples/helloworld/helloworld.yaml -l version=v1 -n sample
  • Verify the deployment
root@sdn-k8s-b5-1:~/istio-1.4.6# kubectl get po --context=$CTX_CLUSTER1 -n sample
NAME                             READY   STATUS    RESTARTS   AGE
helloworld-v1-6757db4ff5-b4rwz   2/2     Running   0          101s
  • Deploy a Pod (sleep) to generate requests
kubectl apply --context=$CTX_CLUSTER1 -f samples/sleep/sleep.yaml -n sample
  • Output
root@sdn-k8s-b5-1:~/istio-1.4.6# kubectl get po --context=$CTX_CLUSTER1 -n sample -l app=sleep
NAME                     READY   STATUS    RESTARTS   AGE
sleep-6bdb595bcb-qbhxw   2/2     Running   0          12s

Deploying the service on the remote cluster

  • Deploy the service
kubectl config use-context kubernetes-208
kubectl create --context=$CTX_CLUSTER2 ns sample
kubectl label --context=$CTX_CLUSTER2 namespace sample istio-injection=enabled
kubectl create --context=$CTX_CLUSTER2 -f samples/helloworld/helloworld.yaml -l app=helloworld -n sample
kubectl create --context=$CTX_CLUSTER2 -f samples/helloworld/helloworld.yaml -l version=v2 -n sample
  • Verify the deployment (the v2 pod below still shows 1/2 READY because its sidecar has not finished starting; it should reach 2/2 shortly)
root@sdn-k8s-b5-1:~/istio-1.4.6# kubectl get pod --context=$CTX_CLUSTER2 -n sample
NAME                             READY   STATUS    RESTARTS   AGE
helloworld-v2-85bc988875-f5hcc   1/2     Running   0          99s
  • Deploy a Pod (sleep) to generate requests
kubectl apply --context=$CTX_CLUSTER2 -f samples/sleep/sleep.yaml -n sample
  • Output
root@sdn-k8s-b5-1:~/istio-1.4.6# kubectl get po --context=$CTX_CLUSTER2 -n sample -l app=sleep
NAME                     READY   STATUS    RESTARTS   AGE
sleep-6bdb595bcb-bqb29   2/2     Running   0          58s

Verifying cross-cluster communication

Main cluster

  • Switch context and call the service
kubectl config use-context kubernetes-206
kubectl exec --context=$CTX_CLUSTER1 -it -n sample -c sleep $(kubectl get pod --context=$CTX_CLUSTER1 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello
  • Service responses
root@sdn-k8s-b5-1:~/istio-1.4.6# kubectl config use-context kubernetes-206
Switched to context "kubernetes-206".
root@sdn-k8s-b5-1:~/istio-1.4.6# kubectl exec --context=$CTX_CLUSTER1 -it -n sample -c sleep $(kubectl get pod --context=$CTX_CLUSTER1 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello
Hello version: v1, instance: helloworld-v1-6757db4ff5-67jkq
root@sdn-k8s-b5-1:~/istio-1.4.6# kubectl exec --context=$CTX_CLUSTER1 -it -n sample -c sleep $(kubectl get pod --context=$CTX_CLUSTER1 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello
Hello version: v2, instance: helloworld-v2-85bc988875-cr4cj
root@sdn-k8s-b5-1:~/istio-1.4.6# kubectl exec --context=$CTX_CLUSTER1 -it -n sample -c sleep $(kubectl get pod --context=$CTX_CLUSTER1 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello
Hello version: v1, instance: helloworld-v1-6757db4ff5-67jkq
root@sdn-k8s-b5-1:~/istio-1.4.6# kubectl exec --context=$CTX_CLUSTER1 -it -n sample -c sleep $(kubectl get pod --context=$CTX_CLUSTER1 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello
Hello version: v1, instance: helloworld-v1-6757db4ff5-67jkq
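A handful of manual curls makes the v1/v2 alternation visible, but tallying many requests shows the split more convincingly. The helper below is a sketch of my own: wrap the long `kubectl exec ... curl` command in a shell function and pass its name in.

```shell
# count_versions CMD N: run CMD N times and tally the "version: vX"
# strings in its output -- a rough view of the cross-cluster traffic split.
count_versions() {
  cmd=$1; n=$2
  for i in $(seq "$n"); do $cmd; done | grep -o 'version: v[0-9]*' | sort | uniq -c
}

# Usage (call_hello is a hypothetical wrapper around the command above):
# call_hello() {
#   kubectl exec --context=$CTX_CLUSTER1 -n sample -c sleep \
#     $(kubectl get pod --context=$CTX_CLUSTER1 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') \
#     -- curl -s helloworld.sample:5000/hello
# }
# count_versions call_hello 20
```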

Remote cluster

  • Switch context and call the service
kubectl config use-context kubernetes-208
kubectl exec --context=$CTX_CLUSTER2 -it -n sample -c sleep $(kubectl get pod --context=$CTX_CLUSTER2 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello
  • Service responses
root@sdn-k8s-b5-1:~/istio-1.4.6# kubectl config use-context kubernetes-208
Switched to context "kubernetes-208".
root@sdn-k8s-b5-1:~/istio-1.4.6# kubectl exec --context=$CTX_CLUSTER1 -it -n sample -c sleep $(kubectl get pod --context=$CTX_CLUSTER1 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello
Hello version: v2, instance: helloworld-v2-85bc988875-cr4cj
root@sdn-k8s-b5-1:~/istio-1.4.6# kubectl exec --context=$CTX_CLUSTER1 -it -n sample -c sleep $(kubectl get pod --context=$CTX_CLUSTER1 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello
Hello version: v2, instance: helloworld-v2-85bc988875-cr4cj
root@sdn-k8s-b5-1:~/istio-1.4.6# kubectl exec --context=$CTX_CLUSTER1 -it -n sample -c sleep $(kubectl get pod --context=$CTX_CLUSTER1 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello
Hello version: v2, instance: helloworld-v2-85bc988875-cr4cj
root@sdn-k8s-b5-1:~/istio-1.4.6# kubectl exec --context=$CTX_CLUSTER1 -it -n sample -c sleep $(kubectl get pod --context=$CTX_CLUSTER1 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello
Hello version: v1, instance: helloworld-v1-6757db4ff5-67jkq
root@sdn-k8s-b5-1:~/istio-1.4.6# kubectl exec --context=$CTX_CLUSTER1 -it -n sample -c sleep $(kubectl get pod --context=$CTX_CLUSTER1 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello
Hello version: v2, instance: helloworld-v2-85bc988875-cr4cj

Notes

  1. Cross-cluster calls require the target service's Service (SVC) object to exist in the calling cluster as well; only then can traffic be routed through to the other cluster.
| Node \ Pods & SVC | Pods on node | Pod relationship | Services to create |
| --- | --- | --- | --- |
| Node_A | Pod_A, Pod_B | Pod_A => Pod_C | Pod_A, Pod_B, Pod_C |
| Node_B | Pod_C, Pod_D | Pod_D => Pod_F | Pod_C, Pod_D, Pod_F |
| Node_C | Pod_E, Pod_F | Pod_E => Pod_B | Pod_E, Pod_F, Pod_B |
  2. If the clusters still cannot reach each other, the most reliable fix is to reinstall everything from scratch.
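To make point 1 concrete, this is roughly the Service that must exist in both clusters (an illustrative fragment adapted from samples/helloworld/helloworld.yaml): even the cluster that only runs v2 needs it, so that helloworld.sample resolves locally before Istio routes the request across clusters.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  namespace: sample
  labels:
    app: helloworld
spec:
  ports:
  - port: 5000
    name: http
  selector:
    app: helloworld
```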
