Creating a K8s Cluster with kubeadm

There are many ways to install and configure a Kubernetes cluster (master plus worker nodes) across many environments, as laid out in Picking the Right Solution. In this article, Gemfield demonstrates how to use kubeadm to create a K8s cluster on bare-metal Ubuntu.

Part 1: Install kubeadm on the Ubuntu host

1. Disable the swap partition

gemfield@sl:~$ sudo swapoff -a
# To disable swap permanently, open the following file and comment out the swap line
gemfield@sl:~$ sudo vi /etc/fstab

Otherwise the kubelet will fail with:

Nov 25 16:52:38 sl kubelet[2856]: error: failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on
Nov 25 16:52:38 sl systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 16:52:38 sl systemd[1]: kubelet.service: Unit entered failed state.
Nov 25 16:52:38 sl systemd[1]: kubelet.service: Failed with result 'exit-code'.
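
If you would rather not edit /etc/fstab by hand, a sed one-liner can comment out the swap entry. This is a minimal sketch that assumes the entry contains a whitespace-delimited "swap" field; it keeps a .bak backup so you can check the result:

gemfield@sl:~$ sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
# Verify: the Swap row should show 0 across the board
gemfield@sl:~$ free -h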

2. Install Docker

gemfield@sl:~$ sudo apt-get update
gemfield@sl:~$ sudo apt-get install -y docker.io

For more on Docker itself, see the earlier articles in the Gemfield column.
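
To confirm that Docker came up, and to see which cgroup driver it uses (this matters for the next step), inspect the daemon:

gemfield@sl:~$ sudo systemctl status docker --no-pager
gemfield@sl:~$ sudo docker info | grep -i "cgroup driver"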

3. Make sure kubelet uses the same cgroup driver as Docker

Note!!! On K8s 1.8.4 this step is unnecessary, and configuring it actually breaks things!!!

After writing the file below, restart the Docker service:

gemfield@sl:~$ cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
gemfield@sl:~$ sudo systemctl restart docker

Otherwise, when the drivers don't match, you will get an error like:

Nov 25 16:56:29 sl kubelet[4203]: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
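
To see which driver the kubelet side expects, you can grep the kubeadm systemd drop-in that the kubelet package installs. The path below is what kubeadm installs of this era typically use and is an assumption that may differ on your system:

gemfield@sl:~$ grep -rn cgroup-driver /etc/systemd/system/kubelet.service.d/

Compare the result with the "Cgroup Driver" line from docker info in step 2; the two must agree.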

4. Configure the apt source

gemfield@sl:~$ cat /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
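
One way to create that file without opening an editor is to pipe the line through tee:

gemfield@sl:~$ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list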

5. Configure an apt proxy to get through the firewall

Because the packages to download are all hosted under packages.cloud.google.com. You know why.

gemfield@sl:~$ cat /etc/apt/apt.conf
Acquire::http::Proxy "http://gemfield.org:7030";
Acquire::https::Proxy "http://gemfield.org:7030";
gemfield@sl:~$
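
Note that the file above routes all apt traffic through the proxy, including the regular Ubuntu mirrors. apt also accepts per-host proxy settings, so a sketch that scopes the proxy to just the Google repository (still using this article's example proxy address) would be:

Acquire::http::Proxy::packages.cloud.google.com "http://gemfield.org:7030";
Acquire::https::Proxy::packages.cloud.google.com "http://gemfield.org:7030";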

6. Install the public key for this source

# Configure the proxy first. You know why.
gemfield@edge:~$ export http_proxy=gemfield.org:7030
gemfield@edge:~$ export https_proxy=gemfield.org:7030
# Install the key
gemfield@sl:~$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

Otherwise the signature cannot be verified and apt update will fail like this:

Get:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8,942 B]
Err:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 3746C208A7317B0F
Reading package lists... Done
W: GPG error: https://packages.cloud.google.com/apt kubernetes-xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 3746C208A7317B0F
E: The repository 'http://apt.kubernetes.io kubernetes-xenial InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
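
Once apt-key add prints OK, you can confirm the key actually landed in apt's keyring; the Google Cloud signing key should appear in the listing:

gemfield@sl:~$ apt-key list | grep -i google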

7. Run apt update to refresh the sources

gemfield@sl:~$ sudo apt update
Get:2 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Hit:3 http://cn.archive.ubuntu.com/ubuntu xenial InRelease
Hit:4 http://cn.archive.ubuntu.com/ubuntu xenial-updates InRelease
Hit:5 http://cn.archive.ubuntu.com/ubuntu xenial-backports InRelease
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8,942 B]
Get:6 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [8,064 B]
Fetched 119 kB in 20s (5,789 B/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
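
Before installing, you can also check which version the new repository will give you (the same works for kubelet and kubectl):

gemfield@sl:~$ apt-cache policy kubeadm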

8. Install kubeadm, kubelet, and kubectl

gemfield@sl:~$ sudo apt install -y kubelet kubeadm kubectl
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  ebtables kubernetes-cni
The following NEW packages will be installed:
  ebtables kubeadm kubectl kubelet kubernetes-cni
0 upgraded, 5 newly installed, 0 to remove and 0 not upgraded.
Need to get 51.5 MB/51.6 MB of archives.
After this operation, 369 MB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 0.5.1-00 [5,560 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.8.4-00 [19.2 MB]
Get:3 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.8.4-00 [8,612 kB]
Get:4 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.8.4-00 [18.1 MB]
Fetched 51.5 MB in 6min 53s (124 kB/s)
Selecting previously unselected package ebtables.
(Reading database ... 201592 files and directories currently installed.)
Preparing to unpack .../ebtables_2.0.10.4-3.4ubuntu2_amd64.deb ...
Unpacking ebtables (2.0.10.4-3.4ubuntu2) ...
Selecting previously unselected package kubernetes-cni.
Preparing to unpack .../kubernetes-cni_0.5.1-00_amd64.deb ...
Unpacking kubernetes-cni (0.5.1-00) ...
Selecting previously unselected package kubelet.
Preparing to unpack .../kubelet_1.8.4-00_amd64.deb ...
Unpacking kubelet (1.8.4-00) ...
Selecting previously unselected package kubectl.
Preparing to unpack .../kubectl_1.8.4-00_amd64.deb ...
Unpacking kubectl (1.8.4-00) ...
Selecting previously unselected package kubeadm.
Preparing to unpack .../kubeadm_1.8.4-00_amd64.deb ...
Unpacking kubeadm (1.8.4-00) ...
Processing triggers for systemd (229-4ubuntu21) ...
Processing triggers for ureadahead (0.100.0-19) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up ebtables (2.0.10.4-3.4ubuntu2) ...
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
Setting up kubernetes-cni (0.5.1-00) ...
Setting up kubelet (1.8.4-00) ...
Setting up kubectl (1.8.4-00) ...
Setting up kubeadm (1.8.4-00) ...
Processing triggers for systemd (229-4ubuntu21) ...
Processing triggers for ureadahead (0.100.0-19) ...
gemfield@sl:~$
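
With the packages in place, a quick sanity check of the installed versions before moving on:

gemfield@sl:~$ kubeadm version
gemfield@sl:~$ kubectl version --client
gemfield@sl:~$ kubelet --version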

Part 2: Run kubeadm init to bootstrap the master

With everything installed, run kubeadm init as root on the master node. Note that this particular run ended in failure: the kubelet never became healthy, so kubeadm timed out waiting for the control plane.

root@sl:~# kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: Connection to "https://192.168.1.118:6443" uses proxy "http://ai.gemfield.org:7030". If that is not intended, adjust your proxy settings
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [sl kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.118]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[... the two kubelet-check lines above repeat several more times while kubeadm retries ...]

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by that:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
        - There is no internet connection; so the kubelet can't pull the following control plane images:
                - gcr.io/google_containers/kube-apiserver-amd64:v1.8.4
                - gcr.io/google_containers/kube-controller-manager-amd64:v1.8.4
                - gcr.io/google_containers/kube-scheduler-amd64:v1.8.4

You can troubleshoot this for example with the following commands if you're on a systemd-powered system:
        - systemctl status kubelet
        - journalctl -xeu kubelet
couldn't initialize a Kubernetes cluster
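
The preflight warning above points at the likely culprit: kubeadm itself talks through the proxy, but the Docker daemon, which is what actually pulls the gcr.io control plane images for the kubelet, knows nothing about it. A common fix, sketched here with this article's example proxy address and master IP (substitute your own), is to give dockerd its own proxy via a systemd drop-in, excluding local addresses, and then retry:

gemfield@sl:~$ sudo mkdir -p /etc/systemd/system/docker.service.d
gemfield@sl:~$ cat /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://gemfield.org:7030"
Environment="HTTPS_PROXY=http://gemfield.org:7030"
Environment="NO_PROXY=localhost,127.0.0.1,192.168.1.118"
gemfield@sl:~$ sudo systemctl daemon-reload
gemfield@sl:~$ sudo systemctl restart docker
# Clean up the failed attempt, then try again
gemfield@sl:~$ sudo kubeadm reset
gemfield@sl:~$ sudo kubeadm init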

(End)
