Quickly Deploying a Kubernetes Cluster with kubeadm

Installing kubeadm

Current environment

c1.cloud 10.10.10.1 (Debian 9) worker
c2.cloud 10.10.10.2 (Debian 9) master

Configure IPv4 forwarding

cat <<EOF >/etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
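
To confirm the module is loaded and the settings took effect, a quick check:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward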

Installing Docker

Recommended Docker versions: v1.12, v1.13, and 17.03.

apt-get update
apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    lsb-release \
    software-properties-common
curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg | apt-key add -
add-apt-repository \
   "deb https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
   $(lsb_release -cs) \
   stable"
apt-get update && apt-get install -y docker-ce=$(apt-cache madison docker-ce | grep 17.03 | head -1 | awk '{print $3}')
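
If you want apt not to upgrade Docker past the recommended version later on, you can optionally verify and pin the package:

# check the installed version, then hold it
docker version
apt-mark hold docker-ce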

Personally, I prefer:

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock rainbond/archiver gr-docker-utils

Installing kubeadm, kubelet, and kubectl

The version at the time of writing is Kubernetes v1.10.2.

apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
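
It is also worth holding these packages so an unattended apt upgrade does not move the cluster to a different version:

apt-mark hold kubelet kubeadm kubectl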

Configure the cgroup driver on the master node

The kubelet's cgroup driver must match Docker's. Check Docker's driver first:

root@c2:~# docker info | grep -i cgroup
Cgroup Driver: cgroupfs

Edit the kubelet configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and change the KUBELET_CGROUP_ARGS parameter to cgroupfs:

Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"

In addition, Kubernetes has required swap to be disabled since 1.8; with swap on, a kubelet running the default configuration will fail to start. We can lift this restriction via the kubelet startup flag --fail-swap-on=false:

Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"

After making these changes, reload the configuration and restart kubelet:

systemctl daemon-reload
systemctl restart kubelet

Installing the cluster

Initialization

Run the initialization on the master node:

kubeadm init --pod-network-cidr=192.168.0.0/16  --service-cidr=10.96.0.0/12 --apiserver-advertise-address=10.10.10.2 --ignore-preflight-errors=Swap

At its core this is simply kubeadm init. Because we chose Calico as the Pod network plugin, we need to pass --pod-network-cidr=192.168.0.0/16; --apiserver-advertise-address is the address the apiserver advertises, i.e. the master node's IP; and --ignore-preflight-errors=Swap suppresses the swap error.

The output:

root@c2:~# kubeadm init --pod-network-cidr=192.168.0.0/16  --service-cidr=10.96.0.0/12 --apiserver-advertise-address=10.10.10.2 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.10.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [c2.cloud kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.10.2]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [c2.cloud] and IPs [10.10.10.2]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 27.501480 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node c2.cloud as master by adding a label and a taint
[markmaster] Master c2.cloud tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 4em7ga.2hx94q0ds08odkkr
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.10.10.2:6443 --token 4em7ga.2hx94q0ds08odkkr --discovery-token-ca-cert-hash sha256:d8cc7008ea08c981368f7b4b174d8aabf426c3d958f860b77cac696a66e78d67

Configure kubectl access to the cluster:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
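
Since we are working as root here, an alternative is to point KUBECONFIG at the admin kubeconfig for the current shell:

export KUBECONFIG=/etc/kubernetes/admin.conf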

Check the cluster status:

root@c2:~# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
root@c2:~# kubectl get csr
NAME        AGE       REQUESTOR              CONDITION
csr-zdk2g   17m       system:node:c2.cloud   Approved,Issued
root@c2:~# kubectl get node
NAME       STATUS     ROLES     AGE       VERSION
c2.cloud   NotReady   master    17m       v1.10.2

Installing the Pod network

kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml

About a minute after this completes, kubectl get pods shows the state of the cluster components. If everything is Running, congratulations: your master node is up.

root@c2:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY     STATUS    RESTARTS   AGE
kube-system   calico-etcd-xlpnw                          1/1       Running   0          1m
kube-system   calico-kube-controllers-685755779f-v5c5g   1/1       Running   0          1m
kube-system   calico-node-5scbv                          2/2       Running   0          1m
kube-system   etcd-c2.cloud                              1/1       Running   0          21m
kube-system   kube-apiserver-c2.cloud                    1/1       Running   0          21m
kube-system   kube-controller-manager-c2.cloud           1/1       Running   0          22m
kube-system   kube-dns-86f4d74b45-kmqfw                  0/3       Pending   0          22m
kube-system   kube-proxy-xpr6n                           1/1       Running   0          22m
kube-system   kube-scheduler-c2.cloud                    1/1       Running   0          21m
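
If some pods are still coming up, you can watch them converge instead of re-running the command:

kubectl get pods --all-namespaces -w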

After kubeadm initialization, Pods are not scheduled onto the master node by default, so a worker node needs to be added. Alternatively, remove the master taint so the master itself accepts workloads:

# Allow the master node to take on workloads
kubectl taint nodes c2.cloud node-role.kubernetes.io/master-
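
To restore the default behavior later, the taint can be re-applied (assuming the same node name):

kubectl taint nodes c2.cloud node-role.kubernetes.io/master=:NoSchedule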

Testing the cluster DNS service

root@c2:~# kubectl run curl --image=radial/busyboxplus:curl -i --tty
If you don't see a command prompt, try pressing enter.
[ root@curl-775f9567b5-mhm4j:/ ]$ nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
[ root@curl-775f9567b5-mhm4j:/ ]$ nslookup spanda.io
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

# Clean up
kubectl get deploy
kubectl delete deploy curl
kubectl get pod 
kubectl delete pod xxx

Adding a worker node

Run the same setup steps as above on the worker node, then join it to the cluster:

kubeadm join 10.10.10.2:6443 --token 4em7ga.2hx94q0ds08odkkr --discovery-token-ca-cert-hash sha256:d8cc7008ea08c981368f7b4b174d8aabf426c3d958f860b77cac696a66e78d67 --ignore-preflight-errors=Swap
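
Note that the bootstrap token from kubeadm init expires after 24 hours by default. If it has expired by the time you add a node, a fresh join command can be generated on the master; a sketch:

# on the master: create a new token and print the full join command
kubeadm token create --print-join-command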

Then copy ~/.kube/config from the master node to the corresponding location on the worker node, and the kubectl command-line tool works there too.
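
A quick sketch of that copy, run from the worker (assuming root SSH access to the master):

mkdir -p $HOME/.kube
scp root@10.10.10.2:~/.kube/config $HOME/.kube/config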

root@c1:~# kubectl get node
NAME       STATUS    ROLES     AGE       VERSION
c1.cloud   Ready     <none>    2m        v1.10.2
c2.cloud   Ready     master    32m       v1.10.2

Installing Addons

Weave Scope

curl https://cloud.weave.works/k8s/scope.yaml?k8s-version=1.10.2\&k8s-service-type=NodePort -sL -o scope.yaml
kubectl create -f scope.yaml

kubectl get deployment weave-scope-app -n weave
kubectl get services  weave-scope-app -n weave
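
Since the manifest was fetched with k8s-service-type=NodePort, the app is exposed on a NodePort; one way to find the assigned port (assuming the service name above):

kubectl get svc weave-scope-app -n weave -o jsonpath='{.spec.ports[0].nodePort}'
# then open http://<any-node-ip>:<nodeport> in a browser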

# Alternatively
kubectl apply -f "https://cloud.weave.works/k8s/scope.yaml?k8s-version=$(kubectl version | base64 | tr -d '\n')"
kubectl port-forward -n weave "$(kubectl get -n weave pod --selector=weave-scope-component=app -o jsonpath='{.items..metadata.name}')" 4040

Once the port-forward is running, the UI is available at http://localhost:4040. For details, see Installing on Orchestrators - Kubernetes.