In the Scheduler and Scheduling Framework articles I described how kube-scheduler works in Kubernetes. As a key control-plane component, it watches for newly created Pods that have not yet been assigned a Node and binds each of them to a suitable Node. Because a Kubernetes cluster is dynamic, a decision that was optimal at scheduling time may no longer be the best fit once the cluster state changes, and resources drift out of balance; for example, the most balanced state might now require a Pod on one Node to move to a different Node. However, once a Pod has been bound to a Node it is never rescheduled automatically, and manually evicting Pods one by one is tedious. This is where the Descheduler comes in: it performs a second round of scheduling to rebalance the cluster.
As mentioned above, rebalancing the cluster involves two steps: find the Pods that should be rescheduled, then schedule them onto new Nodes. The second step is exactly what kube-scheduler already does, so the Descheduler's job is the first one: use its strategies to find the Pods that need to be rescheduled and evict them.
The Descheduler rebalances the cluster according to a set of rules and configurable policies. The project currently implements eight strategies: RemoveDuplicates, LowNodeUtilization, RemovePodsViolatingInterPodAntiAffinity, RemovePodsViolatingNodeAffinity, RemovePodsViolatingNodeTaints, RemovePodsViolatingTopologySpreadConstraint, RemovePodsHavingTooManyRestarts, and PodLifeTime. Each strategy can be enabled or disabled, and strategy-specific parameters can be configured as part of the policy; by default all strategies are enabled.
These eight strategies share a few common top-level settings:

- nodeSelector: only nodes whose labels match the selector are processed, so only Pods on those nodes can be evicted
- evictLocalStoragePods: allow Pods that use local storage to be evicted
- maxNoOfPodsToEvictPerNode: the maximum number of Pods that may be evicted per node
```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
nodeSelector: prod=dev
evictLocalStoragePods: true
maxNoOfPodsToEvictPerNode: 40
strategies:
  ...
```
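With the nodeSelector above, only nodes labeled prod=dev are processed. As a quick illustration (the node name here is just an example from this cluster), a node can be given that label with:

```bash
# label a node so it matches the policy's nodeSelector (node name is illustrative)
$ kubectl label nodes ydzs-node1 prod=dev
```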
The RemoveDuplicates strategy ensures that only one Pod belonging to the same ReplicaSet (RS), ReplicationController (RC), Deployment, or Job runs on a given node; any additional duplicates are evicted so that Pods are spread more evenly across the cluster. This situation can arise when a node fails, its Pods drift to other nodes, and several Pods from the same RS or RC end up on one node; once the failed node becomes Ready again, this strategy can be enabled to evict the duplicates. It currently has no parameters; to disable it, simply set it to false:
```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemoveDuplicates":
    enabled: false
```
The LowNodeUtilization strategy finds under-utilized nodes and evicts Pods from other nodes so that kube-scheduler can reschedule them onto the under-utilized ones. Its parameters are configured under the nodeResourceUtilizationThresholds field.
Whether a node is under-utilized is determined by the thresholds parameter, which can be expressed as a percentage of CPU, memory, and number of Pods. A node is considered under-utilized only if its usage is below all of the configured thresholds. Node utilization is currently computed from Pod resource requests.
There is a second threshold, targetThresholds, used to identify the nodes from which Pods may be evicted. Any node whose usage falls between thresholds and targetThresholds for every resource is considered appropriately utilized and is neither a source nor a target of eviction. targetThresholds can likewise be set for CPU, memory, and Pod count, and both thresholds can be tuned to match your cluster's needs.
Another parameter of LowNodeUtilization is numberOfNodes: the strategy only activates when the number of under-utilized nodes is greater than this value. This is useful in large clusters where a few nodes may be under-utilized frequently or for short periods. By default numberOfNodes is 0.
The parameters are:
| Name | Type |
|------|------|
| thresholds | map(string:int) |
| targetThresholds | map(string:int) |
| numberOfNodes | int |
| thresholdPriority | int (see priority filtering) |
| thresholdPriorityClassName | string (see priority filtering) |
An example configuration:
```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        thresholds:
          "cpu" : 20
          "memory": 20
          "pods": 20
        targetThresholds:
          "cpu" : 50
          "memory": 50
          "pods": 50
```
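The numberOfNodes field is not shown above. Based on my reading of the v1alpha1 API it sits alongside thresholds and targetThresholds under nodeResourceUtilizationThresholds; a sketch (verify against the Descheduler version you deploy):

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        thresholds:
          "cpu": 20
          "memory": 20
        targetThresholds:
          "cpu": 50
          "memory": 50
        # only act when more than 3 nodes are under-utilized
        numberOfNodes: 3
```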
The RemovePodsViolatingNodeTaints strategy ensures that Pods violating NoSchedule taints are removed from nodes. For example, a Pod named podA tolerates the taint key=value:NoSchedule and is therefore scheduled onto a node carrying that taint; if the node's taint is later updated or removed so that it is no longer satisfied by podA's tolerations, podA will be evicted. The strategy is configured as follows:
```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemovePodsViolatingNodeTaints":
    enabled: true
```
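To make the scenario concrete, here is a hedged sketch of what podA's toleration might look like; the Pod name, taint key/value, and image are all hypothetical:

```yaml
# podA tolerates key=value:NoSchedule, so it can run on a node tainted with it.
# If the node's taint key or value later changes, this toleration no longer
# matches and RemovePodsViolatingNodeTaints will evict the Pod.
apiVersion: v1
kind: Pod
metadata:
  name: poda
spec:
  containers:
  - name: app
    image: nginx:1.21   # placeholder image
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
```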
The RemovePodsViolatingNodeAffinity strategy ensures that Pods violating node affinity are removed from nodes. For example, a Pod named podA was scheduled onto node nodeA because it satisfied the requiredDuringSchedulingIgnoredDuringExecution node affinity rule at scheduling time; if nodeA later stops satisfying the rule and another node nodeB that does satisfy it is available, podA will be evicted from nodeA. An example policy configuration:
```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemovePodsViolatingNodeAffinity":
    enabled: true
    params:
      nodeAffinityType:
      - "requiredDuringSchedulingIgnoredDuringExecution"
```
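For reference, a hedged sketch of the kind of node affinity rule meant here; the label key/value, Pod name, and image are hypothetical:

```yaml
# podA requires nodes labeled disktype=ssd at scheduling time. If the node it
# runs on loses that label and another matching node is available,
# RemovePodsViolatingNodeAffinity evicts podA so it can be rescheduled.
apiVersion: v1
kind: Pod
metadata:
  name: poda
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: app
    image: nginx:1.21   # placeholder image
```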
The RemovePodsViolatingInterPodAntiAffinity strategy ensures that Pods violating inter-pod anti-affinity are removed from nodes. For example, if podA runs on a node, and podB and podC (running on the same node) have an anti-affinity rule that forbids them from running on the same node as podA, then podA will be evicted from that node so podB and podC can run normally. This can happen when the anti-affinity rule is created after podB and podC are already running on the node. The strategy currently has no parameters; to disable it, simply set it to false:
```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemovePodsViolatingInterPodAntiAffinity":
    enabled: false
```
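A hedged sketch of the anti-affinity rule in that scenario; the Pod names, labels, and image are hypothetical:

```yaml
# podB declares anti-affinity against Pods labeled app=poda on the same host.
# If podA (labeled app=poda) is already running on the node when this rule is
# created, RemovePodsViolatingInterPodAntiAffinity evicts podA.
apiVersion: v1
kind: Pod
metadata:
  name: podb
  labels:
    app: podb
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - poda
        topologyKey: kubernetes.io/hostname
  containers:
  - name: app
    image: nginx:1.21   # placeholder image
```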
The RemovePodsHavingTooManyRestarts strategy ensures that Pods that have restarted too many times on a node get rescheduled. For example, a Pod that repeatedly fails to mount its EBS/PD volume should be moved elsewhere. Its parameters are podRestartThreshold (the restart count above which a Pod is evicted) and includingInitContainers (whether init container restarts count toward that total), as in the example:
```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemovePodsHavingTooManyRestarts":
    enabled: true
    params:
      podsHavingTooManyRestarts:
        podRestartThreshold: 100
        includingInitContainers: true
```
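To get a feel for which Pods this strategy would target, you can sort Pods by restart count with kubectl; note that this quick check only looks at each Pod's first container:

```bash
# list Pods across all namespaces ordered by the first container's restart count
$ kubectl get pods -A --sort-by='.status.containerStatuses[0].restartCount'
```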
The PodLifeTime strategy evicts Pods that have been alive for longer than maxPodLifeTimeSeconds. You can also use podStatusPhases so that only Pods in certain phases are evicted. An example:
```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "PodLifeTime":
    enabled: true
    params:
      podLifeTime:
        maxPodLifeTimeSeconds: 86400
        podStatusPhases:
        - "Pending"
```
According to the README in the Descheduler GitHub repository, the Descheduler can be run inside the Kubernetes cluster as a Job or CronJob, so it can run repeatedly without manual intervention. The Descheduler Pod runs in the kube-system namespace as a critical pod, which prevents it from being evicted by itself or by the kubelet.
First, define the RBAC resources shown below: (rbac.yaml)
```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: descheduler-cluster-role
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "watch", "list"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list", "delete"]
- apiGroups: [""]
  resources: ["pods/eviction"]
  verbs: ["create"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: descheduler-sa
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: descheduler-cluster-role-binding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: descheduler-cluster-role
subjects:
- name: descheduler-sa
  kind: ServiceAccount
  namespace: kube-system
```
Next, define the Descheduler's rebalancing policy in a ConfigMap: (configmap.yaml)
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: descheduler-policy-configmap
  namespace: kube-system
data:
  policy.yaml: |
    apiVersion: "descheduler/v1alpha1"
    kind: "DeschedulerPolicy"
    strategies:
      "RemoveDuplicates":
        enabled: true
      "RemovePodsViolatingInterPodAntiAffinity":
        enabled: true
      "LowNodeUtilization":
        enabled: true
        params:
          nodeResourceUtilizationThresholds:
            thresholds:
              "cpu" : 20
              "memory": 20
              "pods": 20
            targetThresholds:
              "cpu" : 50
              "memory": 50
              "pods": 50
```
Finally, run the Descheduler as a Job or CronJob; here we use a Job as an example: (job.yaml)
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: descheduler-job
  namespace: kube-system
spec:
  parallelism: 1
  completions: 1
  template:
    metadata:
      name: descheduler-pod
    spec:
      priorityClassName: system-cluster-critical
      containers:
      - name: descheduler
        image: us.gcr.io/k8s-artifacts-prod/descheduler/descheduler:v0.10.0
        volumeMounts:
        - mountPath: /policy-dir
          name: policy-volume
        command:
        - "/bin/descheduler"
        args:
        - "--policy-config-file"
        - "/policy-dir/policy.yaml"
        - "--v"
        - "3"
      restartPolicy: "Never"
      serviceAccountName: descheduler-sa
      volumes:
      - name: policy-volume
        configMap:
          name: descheduler-policy-configmap
```
Make sure the system-cluster-critical PriorityClass exists in your cluster; otherwise remove the priorityClassName field:
```bash
$ kubectl get priorityclass
NAME                      VALUE        GLOBAL-DEFAULT   AGE
system-cluster-critical   2000000000   false            128d
system-node-critical      2000001000   false            128d
```
Now simply create the three resource objects above:
```bash
$ kubectl create -f rbac.yaml
$ kubectl create -f configmap.yaml
$ kubectl create -f job.yaml
```
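As an aside, if you want the Descheduler to run on a schedule rather than just once, the same Pod template can be wrapped in a CronJob; a sketch (the CronJob name and schedule are hypothetical, and batch/v1beta1 matches the cluster versions this post targets):

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: descheduler-cronjob
  namespace: kube-system
spec:
  schedule: "*/10 * * * *"    # run the descheduler every 10 minutes
  jobTemplate:
    spec:
      template:
        metadata:
          name: descheduler-pod
        spec:
          priorityClassName: system-cluster-critical
          serviceAccountName: descheduler-sa
          restartPolicy: "Never"
          containers:
          - name: descheduler
            image: us.gcr.io/k8s-artifacts-prod/descheduler/descheduler:v0.10.0
            volumeMounts:
            - mountPath: /policy-dir
              name: policy-volume
            command:
            - "/bin/descheduler"
            args:
            - "--policy-config-file"
            - "/policy-dir/policy.yaml"
            - "--v"
            - "3"
          volumes:
          - name: policy-volume
            configMap:
              name: descheduler-policy-configmap
```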
Once the Job has run successfully, inspect its logs:
```bash
$ kubectl get pods -n kube-system -l job-name=descheduler-job
NAME READY STATUS RESTARTS AGE
descheduler-job-zmf6c 0/1 Completed 0 4m54s
$ kubectl logs -f descheduler-job-zmf6c -n kube-system
I0316 04:07:37.226628 1 reflector.go:153] Starting reflector *v1.Node (1h0m0s) from pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108
I0316 04:07:37.226916 1 reflector.go:188] Listing and watching *v1.Node from pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108
I0316 04:07:37.326830 1 duplicates.go:50] Processing node: "ydzs-master"
I0316 04:07:37.521882 1 duplicates.go:50] Processing node: "ydzs-node1"
I0316 04:07:37.559308 1 duplicates.go:50] Processing node: "ydzs-node2"
I0316 04:07:37.608759 1 duplicates.go:50] Processing node: "ydzs-node3"
I0316 04:07:37.643679 1 duplicates.go:50] Processing node: "ydzs-node4"
I0316 04:07:37.841509 1 duplicates.go:50] Processing node: "ydzs-node5"
I0316 04:07:37.888281 1 duplicates.go:50] Processing node: "ydzs-node6"
I0316 04:07:38.392268 1 lownodeutilization.go:147] Node "ydzs-master" is appropriately utilized with usage: api.ResourceThresholds{"cpu":42.5, "memory":7.589289022953152, "pods":7.2727272727272725}
I0316 04:07:38.392390 1 lownodeutilization.go:149] allPods:8, nonRemovablePods:8, bePods:0, bPods:0, gPods:0
I0316 04:07:38.392541 1 lownodeutilization.go:141] Node "ydzs-node1" is under utilized with usage: api.ResourceThresholds{"cpu":20, "memory":7.770754643481218, "pods":17.272727272727273}
I0316 04:07:38.392579 1 lownodeutilization.go:149] allPods:19, nonRemovablePods:16, bePods:0, bPods:2, gPods:1
I0316 04:07:38.392684 1 lownodeutilization.go:141] Node "ydzs-node2" is under utilized with usage: api.ResourceThresholds{"cpu":13.75, "memory":6.294311261219786, "pods":14.545454545454545}
I0316 04:07:38.392740 1 lownodeutilization.go:149] allPods:16, nonRemovablePods:12, bePods:1, bPods:2, gPods:1
I0316 04:07:38.392822 1 lownodeutilization.go:141] Node "ydzs-node3" is under utilized with usage: api.ResourceThresholds{"cpu":17.5, "memory":10.905163145899156, "pods":14.545454545454545}
I0316 04:07:38.392877 1 lownodeutilization.go:149] allPods:16, nonRemovablePods:13, bePods:1, bPods:1, gPods:1
I0316 04:07:38.392959 1 lownodeutilization.go:141] Node "ydzs-node4" is under utilized with usage: api.ResourceThresholds{"cpu":15, "memory":5.180600069310763, "pods":13.636363636363637}
I0316 04:07:38.393033 1 lownodeutilization.go:149] allPods:15, nonRemovablePods:14, bePods:0, bPods:0, gPods:1
I0316 04:07:38.393166 1 lownodeutilization.go:141] Node "ydzs-node5" is under utilized with usage: api.ResourceThresholds{"cpu":7.5, "memory":3.484434378300685, "pods":20}
I0316 04:07:38.393221 1 lownodeutilization.go:149] allPods:22, nonRemovablePods:12, bePods:10, bPods:0, gPods:0
I0316 04:07:38.393326 1 lownodeutilization.go:147] Node "ydzs-node6" is appropriately utilized with usage: api.ResourceThresholds{"cpu":10.9375, "memory":8.780774633317726, "pods":21.818181818181817}
I0316 04:07:38.393381 1 lownodeutilization.go:149] allPods:24, nonRemovablePods:14, bePods:7, bPods:2, gPods:1
I0316 04:07:38.393412 1 lownodeutilization.go:65] Criteria for a node under utilization: CPU: 20, Mem: 20, Pods: 20
I0316 04:07:38.393437 1 lownodeutilization.go:72] Total number of underutilized nodes: 5
I0316 04:07:38.393455 1 lownodeutilization.go:85] all nodes are under target utilization, nothing to do here
I0316 04:07:38.393497 1 pod_antiaffinity.go:44] Processing node: "ydzs-master"
I0316 04:07:38.454204 1 pod_antiaffinity.go:44] Processing node: "ydzs-node1"
I0316 04:07:38.659807 1 pod_antiaffinity.go:44] Processing node: "ydzs-node2"
I0316 04:07:38.865860 1 pod_antiaffinity.go:44] Processing node: "ydzs-node3"
I0316 04:07:39.093155 1 pod_antiaffinity.go:44] Processing node: "ydzs-node4"
I0316 04:07:39.570699 1 pod_antiaffinity.go:44] Processing node: "ydzs-node5"
I0316 04:07:39.974866 1 pod_antiaffinity.go:44] Processing node: "ydzs-node6"
```
The logs above show that this cluster is currently in a fairly balanced state, so no Pods were evicted for rescheduling. When node resource usage becomes severely unbalanced, the Descheduler is worth trying to rebalance the cluster.