Container Orchestration: Multi-Tenant Network Isolation in Kubernetes

An important feature of Kubernetes is that it connects pods (containers) across different nodes, regardless of physical-node boundaries. In some environments, however, such as a public cloud, pods belonging to different tenants should not be able to reach each other, so network isolation is required. Fortunately, Kubernetes provides NetworkPolicy, which supports network isolation at the Namespace level. This article walks through how to use NetworkPolicy.

Note that NetworkPolicy requires a network solution that actually enforces it; without one, a configured NetworkPolicy has no effect. Here we use Calico to provide the isolation.

Connectivity Test

Before using NetworkPolicy, let's first verify that pods can reach each other when no policy is in place. Our test environment looks like this:

Namespace:ns-calico1,ns-calico2

Deployment: ns-calico1/calico1-nginx

Pod: ns-calico2/calico2-busybox

Service: ns-calico1/calico1-nginx

First, create the Namespaces:

apiVersion: v1
kind: Namespace
metadata:
  name: ns-calico1
  labels:
    user: calico1
---
apiVersion: v1
kind: Namespace
metadata:
  name: ns-calico2

# kubectl create -f namespace.yaml
namespace "ns-calico1" created
namespace "ns-calico2" created
# kubectl get ns
NAME          STATUS    AGE
default       Active    9d
kube-public   Active    9d
kube-system   Active    9d
ns-calico1    Active    12s
ns-calico2    Active    8s

Next, create ns-calico1/calico1-nginx:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico1-nginx
  namespace: ns-calico1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        user: calico1
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: calico1-nginx
  namespace: ns-calico1
  labels:
    user: calico1
spec:
  selector:
    app: nginx
  ports:
  - port: 80

# kubectl create -f calico1-nginx.yaml
deployment "calico1-nginx" created
service "calico1-nginx" created
# kubectl get svc -n ns-calico1
NAME            CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
calico1-nginx   192.168.3.141   <none>        80/TCP    26s
# kubectl get deploy -n ns-calico1
NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
calico1-nginx   1         1         1            1           34s

Finally, create ns-calico2/calico2-busybox:

apiVersion: v1
kind: Pod
metadata:
  name: calico2-busybox
  namespace: ns-calico2
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - sleep
    - "3600"

# kubectl create -f calico2-busybox.yaml
pod "calico2-busybox" created
# kubectl get pod -n ns-calico2
NAME              READY     STATUS    RESTARTS   AGE
calico2-busybox   1/1       Running   0          40s

With the test services in place, let's exec into calico2-busybox and see whether it can reach calico1-nginx:

# kubectl exec -it calico2-busybox -n ns-calico2 -- wget --spider --timeout=1 calico1-nginx.ns-calico1
Connecting to calico1-nginx.ns-calico1 (192.168.3.141:80)

This shows that with no network isolation configured, pods in different Namespaces can communicate freely. Next, we use Calico to isolate them.
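As an aside, the address used above works because the cluster DNS resolves a Service by the `<service>.<namespace>` shorthand. A quick way to check the resolution from inside the busybox pod (assuming kube-dns or an equivalent cluster DNS add-on is running; the resolved IP will differ per cluster) is:

```shell
# kubectl exec -it calico2-busybox -n ns-calico2 -- nslookup calico1-nginx.ns-calico1
```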

Network Isolation

Prerequisites

To use Calico for network isolation in a Kubernetes cluster, the following conditions must be met:

  1. kube-apiserver must enable the extensions/v1beta1/networkpolicies runtime API, i.e. start it with: --runtime-config=extensions/v1beta1/networkpolicies=true
  2. kubelet must use the CNI network plugin, i.e. start it with: --network-plugin=cni
  3. kube-proxy must use the iptables proxy mode; this is the default, so no flag is needed
  4. kube-proxy must not run with --masquerade-all, which conflicts with Calico

Note: after Calico is configured, all pods previously running in the cluster must be restarted.
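The first two prerequisites amount to extra flags on the component command lines. A minimal sketch of what that looks like (the CNI directory paths and trailing `...` stand in for whatever other flags your deployment already uses; they are illustrative, not from the original setup):

```shell
kube-apiserver \
  --runtime-config=extensions/v1beta1/networkpolicies=true \
  ...

kubelet \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/cni/bin \
  ...
```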

Installing Calico

First we install the Calico network plugin. We deploy it directly into the Kubernetes cluster, which makes it easier to manage.

# Calico Version v2.1.4
# http://docs.projectcalico.org/v2.1/releases#v2.1.4
# This manifest includes the following component versions:
#   calico/node:v1.1.3
#   calico/cni:v1.7.0
#   calico/kube-policy-controller:v0.5.4

# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "https://10.1.2.154:2379,https://10.1.2.147:2379"

  # Configure the Calico backend to use.
  calico_backend: "bird"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
        "name": "k8s-pod-network",
        "type": "calico",
        "etcd_endpoints": "__ETCD_ENDPOINTS__",
        "etcd_key_file": "__ETCD_KEY_FILE__",
        "etcd_cert_file": "__ETCD_CERT_FILE__",
        "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
        "log_level": "info",
        "ipam": {
            "type": "calico-ipam"
        },
        "policy": {
            "type": "k8s",
            "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
            "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
        },
        "kubernetes": {
            "kubeconfig": "__KUBECONFIG_FILEPATH__"
        }
    }

  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"

---

# The following contains k8s Secrets for use with a TLS enabled etcd cluster.
# For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  # Populate the following files with etcd TLS configuration if desired, but leave blank if
  # not using TLS for etcd.
  # This self-hosted install expects three files with the following names. The values
  # should be base64 encoded strings of the entire contents of each file.
  etcd-key: base64 key.pem
  etcd-cert: base64 cert.pem
  etcd-ca: base64 ca.pem

---

# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key":"CriticalAddonsOnly", "operator":"Exists"}]
    spec:
      hostNetwork: true
      containers:
        # Runs calico/node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: quay.io/calico/node:v1.1.3
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Configure the IP Pool from which Pod IPs will be chosen.
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
            - name: CALICO_IPV4POOL_IPIP
              value: "always"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Auto-detect the BGP IP address.
            - name: IP
              value: ""
          securityContext:
            privileged: true
          # resources:
          #   requests:
          #     cpu: 250m
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /calico-secrets
              name: etcd-certs
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: quay.io/calico/cni:v1.7.0
          command: ["/install-cni.sh"]
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the etcd TLS secrets.
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets

---

# This manifest deploys the Calico policy controller on Kubernetes.
# See https://github.com/projectcalico/k8s-policy
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico-policy-controller
  namespace: kube-system
  labels:
    k8s-app: calico-policy
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ''
    scheduler.alpha.kubernetes.io/tolerations: |
      [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
       {"key":"CriticalAddonsOnly", "operator":"Exists"}]
spec:
  # The policy controller can only have a single active instance.
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-policy-controller
      namespace: kube-system
      labels:
        k8s-app: calico-policy
    spec:
      # The policy controller must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      containers:
        - name: calico-policy-controller
          image: quay.io/calico/kube-policy-controller:v0.5.4
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # The location of the Kubernetes API. Use the default Kubernetes
            # service for API access.
            - name: K8S_API
              value: "https://kubernetes.default:443"
            # Since we're running in the host namespace and might not have KubeDNS
            # access, configure the container's /etc/hosts to resolve
            # kubernetes.default to the correct service clusterIP.
            - name: CONFIGURE_ETC_HOSTS
              value: "true"
          volumeMounts:
            # Mount in the etcd TLS secrets.
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Mount in the etcd TLS secrets.
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets

# kubectl create -f calico.yaml
configmap "calico-config" created
secret "calico-etcd-secrets" created
daemonset "calico-node" created
deployment "calico-policy-controller" created
# kubectl get ds -n kube-system
NAME          DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE-SELECTOR   AGE
calico-node   1         1         1         1            1           <none>          52s
# kubectl get deploy -n kube-system
NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
calico-policy-controller   1         1         1            1           6m

With the Calico network in place, we can now configure NetworkPolicy.
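Before moving on, it is worth confirming that the Calico pods themselves are healthy. A quick sanity check might look like this (using the `k8s-app` labels from the manifest above; the output will vary by cluster):

```shell
# kubectl get pods -n kube-system -l k8s-app=calico-node
# kubectl get pods -n kube-system -l k8s-app=calico-policy
```

If any pod is not Running, `kubectl logs` on it is the usual first step for debugging.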

Configuring NetworkPolicy

First, modify the ns-calico1 Namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: ns-calico1
  labels:
    user: calico1
  annotations:
    net.beta.kubernetes.io/network-policy: |
      {
        "ingress": {
          "isolation": "DefaultDeny"
        }
      }

# kubectl apply -f ns-calico1.yaml
namespace "ns-calico1" configured

If we repeat the connectivity test now, the two pods can no longer communicate:

# kubectl exec -it calico2-busybox -n ns-calico2 -- wget --spider --timeout=1 calico1-nginx.ns-calico1
Connecting to calico1-nginx.ns-calico1 (192.168.3.71:80)
wget: download timed out

This is exactly the effect we want: pods in different Namespaces cannot communicate. Of course, this is only the simplest case. If a pod in ns-calico1 connects to a pod in ns-calico2 at this point, the traffic still goes through, because ns-calico2 has no network-policy annotation on its Namespace.
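To close that direction as well, ns-calico2 would need the same DefaultDeny annotation. A sketch (this is not part of the original test setup, and pods in ns-calico2 would then also need a NetworkPolicy before accepting any traffic):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ns-calico2
  annotations:
    net.beta.kubernetes.io/network-policy: |
      {
        "ingress": {
          "isolation": "DefaultDeny"
        }
      }
```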

Moreover, at this point ns-calico1 rejects traffic from every pod: the Namespace annotation only declares that all ingress is denied by default; it does not yet say when traffic from other pods should be accepted. Here, we specify that only pods with the user=calico1 label may connect.

apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: calico1-network-policy
  namespace: ns-calico1
spec:
  podSelector:
    matchLabels:
      user: calico1
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          user: calico1
    - podSelector:
        matchLabels:
          user: calico1
---
apiVersion: v1
kind: Pod
metadata:
  name: calico1-busybox
  namespace: ns-calico1
  labels:
    user: calico1
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - sleep
    - "3600"

# kubectl create -f calico1-network-policy.yaml
networkpolicy "calico1-network-policy" created
# kubectl create -f calico1-busybox.yaml
pod "calico1-busybox" created

Now, connecting from calico1-busybox to calico1-nginx succeeds:

# kubectl exec -it calico1-busybox -n ns-calico1 -- wget --spider --timeout=1 calico1-nginx.ns-calico1
Connecting to calico1-nginx.ns-calico1 (192.168.3.71:80)

With that, we have network isolation in Kubernetes. On top of NetworkPolicy you can implement public-cloud security-group-style policies. For more NetworkPolicy parameters, see the api-reference.
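Note that the DefaultDeny annotation used above belongs to the beta API of this era. In Kubernetes 1.7 and later, the stable networking.k8s.io/v1 API expresses the same intent without any Namespace annotation; a sketch of the equivalent policy (not from the original walkthrough):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: calico1-network-policy
  namespace: ns-calico1
spec:
  # In the v1 API, selecting these pods also isolates them:
  # any ingress not matched by a rule below is denied,
  # which replaces the DefaultDeny namespace annotation.
  podSelector:
    matchLabels:
      user: calico1
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          user: calico1
    - podSelector:
        matchLabels:
          user: calico1
```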

References:

  1. Network Policies

  2. Declaring Network Policy

  3. Using Calico for NetworkPolicy

  4. Calico for Kubernetes

