Day 7 - Kubernetes Multus CNI Hands-On I

Introduction

Following the introduction to Kubernetes Multus CNI in the previous article, we now have a basic understanding of Multus, so this article walks through actually deploying Multus CNI.

Kubernetes Environment

The Kubernetes version used here is 1.15 (client v1.15.3, server v1.15.4); you can confirm your own version with kubectl version.

root@sdn-k8s-b3-8:/home/ubuntu# kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.4", GitCommit:"67d2fcf276fcd9cf743ad4be9a9ef5828adc082f", GitTreeState:"clean", BuildDate:"2019-09-18T14:41:55Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

We use flannel as the primary (default) CNI, so we first need to make sure it is running properly:

root@sdn-k8s-b3-8:/home/ubuntu# kubectl get pod -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-clhgz               1/1     Running   0          9h
coredns-5c98db65d4-n5rtm               1/1     Running   0          9h
etcd-sdn-k8s-b3-8                      1/1     Running   0          9h
kube-apiserver-sdn-k8s-b3-8            1/1     Running   0          9h
kube-controller-manager-sdn-k8s-b3-8   1/1     Running   0          9h
kube-flannel-ds-amd64-bljwn            1/1     Running   0          9h
kube-proxy-qlnk4                       1/1     Running   0          9h
kube-scheduler-sdn-k8s-b3-8            1/1     Running   0          9h

Installing Multus CNI

The Multus project generally recommends two installation methods:

  1. Manually place the Multus binary into /opt/cni/bin (a short sketch follows this list)
  2. Install and configure Multus CNI with a DaemonSet
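For completeness, a minimal sketch of method 1, run from a clone of the repo (the ./build script and bin/multus output path are assumptions based on the repo layout at the time of writing):

./build                       # assumed build script in the repo root; outputs bin/multus
cp bin/multus /opt/cni/bin/   # kubelet discovers CNI plugins in this directory

You would then still have to write the Multus configuration under /etc/cni/net.d by hand.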

This article uses the latter approach and installs and configures Multus CNI with a DaemonSet, which generates that configuration automatically.
First, clone multus-cni to a local directory and change into the multus-cni directory.

git clone https://github.com/intel/multus-cni
cd multus-cni/
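Optionally, check out a release tag instead of building from master, for reproducibility (the tag below is a hypothetical example; pick one from the list):

git tag --list 'v3.*'   # list available release tags
git checkout v3.3       # hypothetical tag; substitute one from the list above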

Apply the pre-written DaemonSet manifest shipped with the repo:

root@sdn-k8s-b3-8:/home/ubuntu/multus-cni# kubectl apply -f ./images/multus-daemonset.yml
customresourcedefinition.apiextensions.k8s.io/network-attachment-definitions.k8s.cni.cncf.io created
clusterrole.rbac.authorization.k8s.io/multus created
clusterrolebinding.rbac.authorization.k8s.io/multus created
serviceaccount/multus created
configmap/multus-cni-config created
daemonset.extensions/kube-multus-ds-amd64 created
daemonset.extensions/kube-multus-ds-ppc64le created

Verify the DaemonSets:

root@sdn-k8s-b3-8:/home/ubuntu/multus-cni# kubectl get daemonset -n kube-system
NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE 
kube-flannel-ds-amd64     1         1         1       1            1         
kube-flannel-ds-arm       0         0         0       0            0         
kube-flannel-ds-arm64     0         0         0       0            0         
kube-flannel-ds-ppc64le   0         0         0       0            0         
kube-flannel-ds-s390x     0         0         0       0            0         
kube-multus-ds-amd64      1         1         1       1            1         
kube-multus-ds-ppc64le    0         0         0       0            0         
kube-proxy                1         1         1       1            1         
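Besides installing the binary, the DaemonSet's entrypoint generates a Multus CNI configuration on each node that wraps the existing default CNI (flannel here). A quick sanity check on the node (00-multus.conf is the default file name and may vary across Multus versions):

cat /etc/cni/net.d/00-multus.conf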

Creating the NetworkAttachmentDefinition

Note that because the CNI plugin used here is macvlan, the name of the host's physical NIC must be set as the master in the NetworkAttachmentDefinition.

cat <<EOF > network-attach.yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf-1
spec:
  config: '{
            "cniVersion": "0.3.0",
            "type": "macvlan",
            "master": "enp0s25",
            "mode": "bridge",
            "ipam": {
                "type": "host-local",
                "ranges": [
                    [ {
                         "subnet": "10.10.0.0/16",
                         "rangeStart": "10.10.1.20",
                         "rangeEnd": "10.10.3.50",
                         "gateway": "10.10.0.254"
                    } ]
                ]
            }
        }'
EOF
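Here enp0s25 is the physical NIC of this particular host. If you are unsure which name to use, list the host's interfaces first and substitute accordingly:

ip -o link show      # list all interfaces on the host
ip -o -4 addr show   # show which interface carries the host's IPv4 address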

Apply network-attach.yaml:

root@sdn-k8s-b3-8:/home/ubuntu/multus-cni# kubectl apply -f network-attach.yaml
networkattachmentdefinition.k8s.cni.cncf.io/macvlan-conf-1 created
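We can inspect the resulting custom resource (recent Multus CRDs also register the short name net-attach-def, though this is version-dependent):

kubectl get network-attachment-definitions
kubectl describe network-attachment-definitions macvlan-conf-1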

Deploying Pods

cat <<EOF > pod-case-01.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-case-01
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf-1
spec:
  containers:
  - name: pod-case-01
    image: docker.io/centos/tools:latest
    command:
    - /sbin/init
EOF
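As an aside, the k8s.v1.cni.cncf.io/networks annotation accepts a comma-separated list, so one Pod can attach to several networks at once; each attachment appears as an additional interface (net1, net2, ...). For example, with a hypothetical second definition macvlan-conf-2:

    k8s.v1.cni.cncf.io/networks: macvlan-conf-1,macvlan-conf-2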




cat <<EOF > pod-case-02.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-case-02
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf-1
spec:
  containers:
  - name: pod-case-02
    image: docker.io/centos/tools:latest
    command:
    - /sbin/init
EOF
root@sdn-k8s-b3-8:/home/ubuntu/multus-cni# kubectl apply -f pod-case-01.yaml
pod/pod-case-01 created
root@sdn-k8s-b3-8:/home/ubuntu/multus-cni# kubectl apply -f pod-case-02.yaml
pod/pod-case-02 created
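Before inspecting the interfaces, confirm that both Pods have reached Running:

kubectl get pod -o wide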

We can see that a second interface, net1, was successfully created inside each Pod:

root@sdn-k8s-b3-8:/home/ubuntu/multus-cni# kubectl exec -it pod-case-01 -- ip -d address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 numtxqueues 1 numrxqueues 1
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 26:3a:7c:33:01:a1 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0
    veth numtxqueues 1 numrxqueues 1
    inet 10.244.0.5/24 scope global eth0
       valid_lft forever preferred_lft forever
4: net1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether 16:00:a7:7c:bb:ba brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0
    macvlan mode bridge numtxqueues 1 numrxqueues 1
    inet 10.10.1.20/16 scope global net1
       valid_lft forever preferred_lft forever
root@sdn-k8s-b3-8:/home/ubuntu/multus-cni# kubectl exec -it pod-case-02 -- ip -d address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 numtxqueues 1 numrxqueues 1
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether c2:00:69:ac:53:a0 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0
    veth numtxqueues 1 numrxqueues 1
    inet 10.244.0.8/24 scope global eth0
       valid_lft forever preferred_lft forever
4: net1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether 2e:0d:9c:11:4b:a7 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0
    macvlan mode bridge numtxqueues 1 numrxqueues 1
    inet 10.10.1.21/16 scope global net1
       valid_lft forever preferred_lft forever
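Multus also records every attached network and its assigned IP in the k8s.v1.cni.cncf.io/networks-status Pod annotation, which is handy for scripting (the backslashes below escape the dots for kubectl's jsonpath syntax):

kubectl get pod pod-case-01 -o jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/networks-status}'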

Verifying Pod connectivity

root@sdn-k8s-b3-8:/home/ubuntu/multus-cni# kubectl exec -it pod-case-02 -- ping 10.10.1.20
PING 10.10.1.20 (10.10.1.20) 56(84) bytes of data.
64 bytes from 10.10.1.20: icmp_seq=1 ttl=64 time=0.081 ms
64 bytes from 10.10.1.20: icmp_seq=2 ttl=64 time=0.058 ms
64 bytes from 10.10.1.20: icmp_seq=3 ttl=64 time=0.055 ms
^C
--- 10.10.1.20 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.055/0.064/0.081/0.014 ms
root@sdn-k8s-b3-8:/home/ubuntu/multus-cni# kubectl exec -it pod-case-02 -- ping 10.244.0.5
PING 10.244.0.5 (10.244.0.5) 56(84) bytes of data.
64 bytes from 10.244.0.5: icmp_seq=1 ttl=64 time=0.118 ms
64 bytes from 10.244.0.5: icmp_seq=2 ttl=64 time=0.082 ms
^C
--- 10.244.0.5 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.082/0.100/0.118/0.018 ms
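The first ping targets pod-case-01's macvlan address (net1) and the second its flannel address (eth0). To be certain traffic really leaves through the macvlan interface rather than the default route, bind ping to net1 explicitly:

kubectl exec -it pod-case-02 -- ping -I net1 10.10.1.20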

The next article will continue with more Multus examples...

References

https://01.org/zh/kubernetes/building-blocks/multus-cni?langredirect=1
https://github.com/intel/multus-cni

