Quickly Deploying a Kubernetes Cluster with kubeadm
kubeadm is a tool that provides a best-practice fast path for creating Kubernetes clusters. It performs the necessary actions in a user-friendly way to bring up a minimal viable, secure cluster. You only need to install kubeadm, kubelet, and kubectl on the servers; the other core components are deployed quickly as containers.
Prerequisites
Letting iptables see bridged traffic
Bridged traffic must be visible to iptables; otherwise Pod traffic may not be routed correctly.
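A typical configuration for this step, following the upstream kubeadm prerequisites (the file name `k8s.conf` is only a convention):

```shell
# Load the br_netfilter module so bridged traffic traverses iptables
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
sudo modprobe br_netfilter

# Enable the bridge-nf-call sysctls and apply them
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
```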
Check required ports
Kubernetes master and node components listen on specific ports. Before deploying the cluster with kubeadm, open the following ports on each node:
- Master node
| Protocol | Direction | Port Range | Purpose | Used By |
|---|---|---|---|---|
| TCP | Inbound | 6443* | Kubernetes API server | All |
| TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd |
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 10251 | kube-scheduler | Self |
| TCP | Inbound | 10252 | kube-controller-manager | Self |
- Worker node
| Protocol | Direction | Port Range | Purpose | Used By |
|---|---|---|---|---|
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 30000-32767 | NodePort Services | All |
Installing runtime
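Since the next section uses yum, a CentOS host is assumed; one way to install Docker as the container runtime there (adjust the repo for your distribution) is:

```shell
# Add the official Docker CE repository and install the runtime
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io

# Start Docker now and on every boot
sudo systemctl enable --now docker
```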
Installing kubeadm, kubelet and kubectl
First configure the yum repository, then install kubeadm, kubelet, and kubectl, and enable the kubelet service to start on boot.
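A sketch of the repository setup and installation, using the upstream Kubernetes yum repo (swap in a local mirror if your network requires it; the version is pinned to v1.18.0 to match the init output later in this walkthrough):

```shell
# Configure the Kubernetes yum repository
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

# Install a matching set of components and enable kubelet on boot
sudo yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0 --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
```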
Disable swap:
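The usual two steps: turn swap off for the running system, and comment it out of fstab so it stays off after a reboot:

```shell
# Disable swap immediately
sudo swapoff -a
# Comment out any swap entries so the change survives reboots
sudo sed -ri 's/.*swap.*/#&/' /etc/fstab
```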
Deploying the Cluster
Configure the Master Node
Modify kubelet parameters
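The exact kubelet tweaks are environment-specific; as an assumption, a common adjustment at this step is to pass extra flags via /etc/sysconfig/kubelet, for example aligning the kubelet's cgroup driver with the container runtime:

```shell
# Assumed example: set extra kubelet flags (here, the cgroup driver)
cat <<EOF | sudo tee /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
EOF
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```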
Export the default configuration file and edit it
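kubeadm can dump its default init configuration for editing (the file name `kubeadm-init.yaml` is just an example):

```shell
# Export the default init configuration
kubeadm config print init-defaults > kubeadm-init.yaml
# Typical fields to edit: advertiseAddress, kubernetesVersion,
# imageRepository, and networking.podSubnet
vi kubeadm-init.yaml
```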
Initialize the Kubernetes master node
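Run kubeadm init with the edited configuration (the config file name is an example), uploading certificates and saving the output to a log:

```shell
# --config: the configuration file exported and edited above
sudo kubeadm init --config kubeadm-init.yaml --upload-certs | tee kubeadm-init.log
```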
The `--upload-certs` flag uploads the cluster certificates so they can be distributed automatically when additional nodes join later; piping through `tee kubeadm-init.log` saves the output to a log file.
During init, you will see output like the following:
W0601 11:33:16.858211 1719 strict.go:47] unknown configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"KubeProxyConfiguration"} for scheme definitions in "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/scheme/scheme.go:31" and "k8s.io/kubernetes/cmd/kubeadm/app/componentconfigs/scheme.go:28"
W0601 11:33:16.858535 1719 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=KubeProxyConfiguration
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.102]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.1.102 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.1.102 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0601 11:33:33.405533 1719 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0601 11:33:33.411476 1719 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 31.511863 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
ca23402e2e70c5613b2ee10507b6065a548bb715f992c335e6498f25d30c0f96
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.102:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:2d14d0998d3d2921771e6c6a81477b5124d87f920b7c4caeec8ebefe3c94fe5b
Key stages of the process:
- [preflight] Runs a series of pre-flight checks
- [kubelet-start] Generates the kubelet configuration file "/var/lib/kubelet/config.yaml"
- [certificates] Generates the various certificates
- [kubeconfig] Generates the kubeconfig files under /etc/kubernetes, which the components use to communicate with the API server
- [control-plane] Installs the master components from the YAML manifests under /etc/kubernetes/manifests
- [etcd] Installs the etcd service from /etc/kubernetes/manifests/etcd.yaml
- [kubelet] Configures the kubelet via a ConfigMap
- [patchnode] Records CNI information on the Node, stored as annotations
- [mark-control-plane] Labels the node with the master role and adds a NoSchedule taint, so by default Pods will not run on the master node
- [bootstrap-token] Generates the bootstrap token; record it, as it is needed later when adding nodes with kubeadm join
- [addons] Installs the CoreDNS and kube-proxy add-ons
Configure kubectl
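As instructed by the init output, copy the admin kubeconfig into the current user's home directory:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```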
Verify
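A quick check that the control plane is up. Note that the node will report NotReady until a network plugin is installed later:

```shell
# The master node should be listed; system Pods should be running
kubectl get nodes
kubectl get pods -n kube-system
```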
Configure Worker Nodes
Modify kubelet parameters
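As on the master, kubelet flags can be set via /etc/sysconfig/kubelet; the specific flags here are an assumption:

```shell
# Assumed example: align the kubelet cgroup driver with the runtime
cat <<EOF | sudo tee /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
EOF
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```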
Use the kubeadm join command to add Worker nodes to the cluster.
Join the k8s worker node
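Run the join command printed by kubeadm init on the master (token and hash taken from the init output above):

```shell
# Token and CA cert hash come from the kubeadm init output
sudo kubeadm join 192.168.1.102:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:2d14d0998d3d2921771e6c6a81477b5124d87f920b7c4caeec8ebefe3c94fe5b
```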
Verify
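On the master, confirm the new worker appears in the node list:

```shell
# The newly joined worker should show up here
kubectl get nodes
```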
Running a kubectl command on a worker node fails with:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
This happens because kubectl needs to run as kubernetes-admin. Copy the /etc/kubernetes/admin.conf file from the master node to the same path on the worker node, then set the corresponding environment variable.
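A sketch of this step, run on the worker node (the master IP is taken from the init output above):

```shell
# Copy the admin kubeconfig from the master node
scp root@192.168.1.102:/etc/kubernetes/admin.conf /etc/kubernetes/admin.conf
# Point kubectl at it via the KUBECONFIG environment variable
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile
source ~/.bash_profile
```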
Configure the Network Plugin
Here we install Calico as the network plugin.
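Calico is typically installed by applying its manifest; the URL below is the commonly used one from the Calico docs at the time of Kubernetes v1.18 (check the Calico release notes for the version matching your cluster, and make sure its pod CIDR matches the cluster configuration):

```shell
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
```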
Once Calico is up, all Nodes move to the Ready state:
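Checking node and Calico Pod status:

```shell
# All nodes should now report STATUS Ready
kubectl get nodes
# The calico-node Pods should be Running on every node
kubectl get pods -n kube-system -l k8s-app=calico-node
```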
Troubleshooting
When deploying with kubeadm, initialization may fail because the kubelet does not come up healthy. Troubleshoot as follows:
- Check whether the kubelet is running properly, as the error message suggests
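The kubelet's state and recent logs can be inspected with systemd tooling:

```shell
# Is the kubelet service active?
systemctl status kubelet
# Inspect recent kubelet logs for the actual error
journalctl -xeu kubelet | tail -n 30
```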
The error reports misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs". What does this mean?
In short, the kubelet's cgroup driver is systemd while Docker's is cgroupfs. Looking back at the kubeadm init output confirms this:
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
The fix is to change Docker's cgroup driver to systemd:
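Docker's cgroup driver is set via daemon.json; after restarting, `docker info` should report systemd:

```shell
# Switch Docker to the systemd cgroup driver
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

# Verify the change took effect
docker info | grep -i "cgroup driver"
```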
- Check that port 10248 (the kubelet healthz port) is open on the node
- Check that swap is disabled on the node