Kubernetes Study Notes 05
  aNy5PUCye2R8 · November 2, 2023

1. Pods

A Pod can be thought of as a "logical host" for a particular application or service: it contains one or more application containers that are relatively tightly coupled and cooperate to carry out the work of that logical host.

1.1 Pods in Kubernetes

A Pod is the smallest unit of work in Kubernetes. A Pod represents a running process on the cluster, and the containers in a Pod are scheduled by the Master as a single unit onto one Node.


A Pod provides two kinds of shared resources for its containers: networking and storage.

Networking

Each Pod is assigned its own IP address. The containers in a Pod share a network namespace, including the IP address and network ports, so containers within a Pod can communicate with each other over localhost.
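As a sketch of this sharing (the Pod and container names below are invented for illustration), a busybox sidecar can reach an nginx container in the same Pod over localhost:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: localhost-demo          # illustrative name
spec:
  containers:
  - name: web
    image: nginx
  - name: probe
    image: busybox
    # Both containers share the Pod's network namespace,
    # so nginx is reachable at localhost:80 from this container.
    command: ['sh', '-c', 'sleep 5; wget -qO- http://localhost:80; sleep 3600']
```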

Storage

A Pod can specify a set of shared Volumes. All containers in the Pod can access these shared Volumes, which allows the containers to share data.

Volumes are also used to persist data in a Pod, so that data is not lost when one of the containers needs to be restarted. When Kubernetes mounts a Volume into a Pod, it is essentially mounted into each container of the Pod.
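A minimal sketch of shared storage (names are illustrative): an emptyDir Volume mounted into two containers, so a file written by one is visible to the other:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo     # illustrative name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}               # scratch space that lives as long as the Pod
  containers:
  - name: writer
    image: busybox
    command: ['sh', '-c', 'echo hello > /data/msg; sleep 3600']
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    # reads the file the writer container created in the shared Volume
    command: ['sh', '-c', 'sleep 5; cat /data/msg; sleep 3600']
    volumeMounts:
    - name: shared-data
      mountPath: /data
```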

1.2 Viewing Pods

kubectl get pods

Options for viewing Pods:

  • (no option): list Pods in the current namespace
  • --namespace kube-system: list Pods in the kube-system namespace
  • -A: list Pods in all namespaces

# No Pods exist in the default namespace
[root@master ~]# kubectl get pods 
No resources found in default namespace.

# List the Pods in the kube-system namespace
[root@master ~]# kubectl get pods --namespace kube-system 
NAME                                       READY   STATUS        RESTARTS   AGE
calico-kube-controllers-7cc8dd57d9-gc5pj   1/1     Running       0          26m
calico-kube-controllers-7cc8dd57d9-qzc2v   1/1     Terminating   0          98m
calico-node-74wx2                          1/1     Running       1          9d
calico-node-759t9                          1/1     Running       0          94m
calico-node-8qjzg                          1/1     Running       1          9d
calico-node-jg4ld                          1/1     Running       1          9d
coredns-545d6fc579-cv67b                   1/1     Running       0          26m
coredns-545d6fc579-mt5v5                   1/1     Running       1          9d
coredns-545d6fc579-sw8bz                   1/1     Terminating   0          98m
etcd-master                                1/1     Running       3          9d
kube-apiserver-master                      1/1     Running       4          9d
kube-controller-manager-master             1/1     Running       3          9d
kube-proxy-5kdvd                           1/1     Running       1          9d
kube-proxy-5mhh9                           1/1     Running       0          94m
kube-proxy-c54vr                           1/1     Running       3          9d
kube-proxy-mzrqv                           1/1     Running       1          9d
kube-scheduler-master                      1/1     Running       3          9d
metrics-server-bcfb98c76-5v794             1/1     Running       0          41m

1.3 Creating a Pod

kubectl run <name> --image <image>

Options for creating a Pod:

  • --labels: label=value
  • --env: variable=value
  • --port: port number
  • --image-pull-policy: image pull policy

# Create a Pod named pod1 using the nginx image
[root@master ~]# kubectl run pod1 --image=nginx
pod/pod1 created

# Check the Pod's status
[root@master ~]# kubectl get pod --namespace default -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          40s   10.244.104.3   node2   <none>           <none>
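Putting the options from the table above together (the environment-variable name and value here are invented for illustration):

```shell
kubectl run pod1 --image=nginx \
  --labels run=pod1 \
  --env DEPLOY_ENV=dev \
  --port 80 \
  --image-pull-policy IfNotPresent
```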

1.4 Generating a Pod manifest

In a YAML manifest, child keys are indented two spaces relative to their parent, and each item in a list starts with '-'.
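For example, in a generated Pod manifest, containers is a child of spec and is indented two spaces, and each container entry in the list starts with '-':

```yaml
spec:
  containers:        # child key: indented two spaces under its parent
  - image: nginx     # list item: starts with '-'
    name: pod1
```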

[root@master ~]#  kubectl run pod1 --image=nginx --dry-run=client -o yaml > pod1.yaml
[root@master ~]# cat pod1.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: nginx
    name: pod1
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Have the Pod run a command and then sleep for 10 seconds:

[root@master ~]# kubectl run pod2 --image=nginx --image-pull-policy IfNotPresent --dry-run=client -o yaml -- sh -c 'echo hello ; sleep 10' > pod2.yaml 
[root@master ~]# cat pod2.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod2
  name: pod2
spec:
  containers:
  - args:
    - sh
    - -c
    - echo hello ; sleep 10
    image: nginx
    imagePullPolicy: IfNotPresent
    name: pod2
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

1.5 Creating a Pod from a manifest file

kubectl apply -f <file>

[root@master ~]# kubectl apply -f pod2.yaml 
pod/pod2 created

1.6 Viewing Pod status

kubectl get pods

Options for viewing Pod status:

  • --show-labels: show labels
  • -l run=pod1: show the status of Pods matching a label
  • -o wide: show detailed information

Show labels:

[root@master ~]# kubectl get pods --show-labels 
NAME   READY   STATUS             RESTARTS   AGE    LABELS
pod1   1/1     Running            0          47m    run=pod1
pod2   0/1     CrashLoopBackOff   3          101s   run=pod2

Check the status of specific Pods:

[root@master ~]# kubectl get pods -l run=pod1
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          50m
[root@master ~]# kubectl get pods -l run=pod2
NAME   READY   STATUS             RESTARTS   AGE
pod2   0/1     CrashLoopBackOff   5          4m19s

Show detailed Pod information:

[root@master ~]# kubectl get pod -o wide 
NAME   READY   STATUS             RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
pod1   1/1     Running            0          53m   10.244.104.3   node2   <none>           <none>
pod2   0/1     CrashLoopBackOff   6          8m    10.244.104.5   node2   <none>           <none>

1.7 Deleting a Pod

kubectl delete pod <pod name>

# Delete pod2
[root@master ~]# kubectl delete pod pod2 
pod "pod2" deleted

2. Basic Pod Operations

2.1 Running commands inside a Pod

Command format: kubectl exec <pod name> -- <command>

# Run ls inside pod1
[root@master ~]# kubectl exec pod1 -- ls /usr/share/nginx/html
50x.html
index.html

2.2 Copying files

Host file --> container: kubectl cp <file> <pod name>:<destination path>

# Copy /etc/hosts into the Pod
[root@master ~]# kubectl cp /etc/hosts pod1:/usr/share/nginx/html
[root@master ~]# kubectl exec pod1 -- ls /usr/share/nginx/html
50x.html
hosts
index.html

Container file --> host: kubectl cp <pod name>:<source path> <destination>

[root@master ~]# kubectl cp pod1:/usr/share/nginx/html ./html/
tar: Removing leading `/' from member names
[root@master ~]# ls html/
50x.html  hosts  index.html

2.3 Getting a shell

This works much like docker exec:

[root@master ~]# kubectl exec -it pod1 -- bash
root@pod1:/#

2.4 Viewing detailed Pod information

kubectl describe pod pod1

[root@master ~]# kubectl describe pod pod1 
Name:         pod1
Namespace:    default
Priority:     0
Node:         node2/192.168.0.102
Start Time:   Sun, 25 Jun 2023 14:01:40 +0800
Labels:       run=pod1
Annotations:  cni.projectcalico.org/containerID: d1f22fde386dbb724ad27aeb0a819c72343e7e70079699b832fd864eea886c6b
              cni.projectcalico.org/podIP: 10.244.104.3/32
              cni.projectcalico.org/podIPs: 10.244.104.3/32
Status:       Running
IP:           10.244.104.3
IPs:
  IP:  10.244.104.3
Containers:
  pod1:
    Container ID:   docker://e2c5e304d86e348fa5511826749a32325fc70e09d695247bc34a31deb10728d2
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sun, 25 Jun 2023 14:01:47 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pbv9z (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-pbv9z:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>

2.5 Running multiple containers in one Pod

2.5.1 Manifest file

# Modify the generated Pod manifest to run two nginx containers
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod2
  name: pod2
spec:
  containers:
  - name: c1
    image: nginx
    # pull the image only if it is not already present locally
    imagePullPolicy: IfNotPresent
    command: ['sh','-c','echo hello; sleep 10']
  - name: c2
    image: nginx
    imagePullPolicy: IfNotPresent  
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

The command field above is equivalent to the following args form (this is the form kubectl generates):

- args:
  - sh
  - -c
  - echo hello; sleep 10
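command and args can also be combined; command overrides the image's ENTRYPOINT and args overrides its CMD, so the same shell invocation can be split across the two fields:

```yaml
command: ['sh', '-c']              # replaces the image ENTRYPOINT
args: ['echo hello; sleep 10']     # replaces the image CMD
```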

2.5.2 Creating the Pod

[root@master ~]# kubectl apply -f pod2.yaml        
pod/pod2 created

2.5.3 Entering a specific container

[root@master ~]# kubectl exec -it pod2 -c c2 -- bash 
root@pod2:/#

2.6 Viewing Pod logs

Usage: kubectl logs <pod name>

# For a single-container Pod, use kubectl logs <pod name>
[root@master ~]# kubectl logs pod1 
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/06/25 06:01:47 [notice] 1#1: using the "epoll" event method
2023/06/25 06:01:47 [notice] 1#1: nginx/1.21.5
2023/06/25 06:01:47 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) 
2023/06/25 06:01:47 [notice] 1#1: OS: Linux 5.14.0-284.11.1.el9_2.x86_64
2023/06/25 06:01:47 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1073741816:1073741816
2023/06/25 06:01:47 [notice] 1#1: start worker processes
2023/06/25 06:01:47 [notice] 1#1: start worker process 31
2023/06/25 06:01:47 [notice] 1#1: start worker process 32
2023/06/25 06:01:47 [notice] 1#1: start worker process 33
2023/06/25 06:01:47 [notice] 1#1: start worker process 34

# For a multi-container Pod, use kubectl logs <pod name> -c <container name>
[root@master ~]# kubectl logs pod2 -c c1 
hello
[root@master ~]# kubectl logs pod2 -c c2
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/06/25 10:32:14 [notice] 1#1: using the "epoll" event method
2023/06/25 10:32:14 [notice] 1#1: nginx/1.21.5
2023/06/25 10:32:14 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) 
2023/06/25 10:32:14 [notice] 1#1: OS: Linux 5.14.0-284.11.1.el9_2.x86_64
2023/06/25 10:32:14 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1073741816:1073741816
2023/06/25 10:32:14 [notice] 1#1: start worker processes
2023/06/25 10:32:14 [notice] 1#1: start worker process 31
2023/06/25 10:32:14 [notice] 1#1: start worker process 32
2023/06/25 10:32:14 [notice] 1#1: start worker process 33
2023/06/25 10:32:14 [notice] 1#1: start worker process 34

2.7 Placing a Pod on a specific node

By adding a label to a node and using the nodeSelector field, you can make a Pod run on a particular node.

# Show labels on all nodes
kubectl get nodes --show-labels 

# Show labels on a specific node
kubectl get nodes node1 --show-labels 

# Show detailed node information
kubectl describe nodes node1

Set a node label:

[root@master ~]# kubectl label node node1 diskxx=ssdxx
node/node1 labeled

# Verify
[root@master ~]# kubectl get node node1 --show-labels 
NAME    STATUS   ROLES    AGE   VERSION   LABELS
node1   Ready    <none>   10d   v1.21.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,diskxx=ssdxx,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux

Remove a label from a node:

[root@master ~]# kubectl label node node1 diskxx-
node/node1 labeled

Edit the manifest file:

[root@master ~]# cat pod3.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod3
  name: pod3
spec:
  nodeSelector:
    diskxx: ssdxx
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod3
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Create the Pod:

[root@master ~]# kubectl apply -f pod3.yaml 
pod/pod3 created

Verify that the Pod was placed on node1:

[root@master ~]# kubectl describe pod pod3 | grep -i node
Node:         node1/192.168.0.101
Node-Selectors:              diskxx=ssdxx
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
  Normal  Scheduled  84s   default-scheduler  Successfully assigned default/pod3 to node1

3. Annotations

Metadata can be attached to Kubernetes objects using either Labels or Annotations. Labels can be used to select objects and to find sets of objects that satisfy certain conditions. Annotations, by contrast, are not used to identify or select objects. The metadata in an Annotation can be small or large, structured or unstructured, and may contain characters that are not allowed in labels.

3.1 Annotation format

"annotations": {
  "key1" : "value1" ,
  "key2" : "value2"
}

Some examples of information that can be recorded in Annotations:

  • Build and release information, such as timestamps, release IDs, Git branches, PR numbers, image hashes, and registry addresses
  • Pointers to logging, monitoring, analytics, or audit repositories
  • Tooling information: name, version, build info, and so on
  • Provenance information from users, tools, or systems, such as URLs of related objects from other ecosystem components
  • Phone or pager numbers of persons responsible, or directory entries pointing to such information

Note: Annotations are not interpreted by Kubernetes itself; their main purpose is to make information easy for people to read and look up.
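Annotations can also be set declaratively in a manifest's metadata; the keys and values below are invented examples:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-annotated                        # illustrative name
  annotations:
    imageregistry: "https://hub.docker.com/"
    git-branch: "main"
spec:
  containers:
  - image: nginx
    name: pod-annotated
```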

3.2 Viewing a Pod's Annotations

[root@master ~]# kubectl describe pod pod3 
Name:         pod3
Namespace:    default
Priority:     0
Node:         node1/192.168.0.101
Start Time:   Sun, 25 Jun 2023 20:45:39 +0800
Labels:       run=pod3
Annotations:  cni.projectcalico.org/containerID: 45b93c4d6787cadaca31f3a977aab8a72835a946106234996921c5209469957e
              cni.projectcalico.org/podIP: 10.244.166.130/32
              cni.projectcalico.org/podIPs: 10.244.166.130/32
Status:       Running
...

3.3 Adding an Annotation

[root@master ~]# kubectl annotate nodes node1 name=node1
node/node1 annotated

3.4 Deleting an Annotation

[root@master ~]# kubectl annotate nodes node1 name-
node/node1 annotated

3.5 Viewing a node's Annotations

[root@master ~]# kubectl describe nodes node1 | grep Annotations -A 5
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    name: node1
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 192.168.0.101/24
                    projectcalico.org/IPv4IPIPTunnelAddr: 10.244.166.128
                    volumes.kubernetes.io/controller-managed-attach-detach: true

4. Node cordon and drain

cordon: marks a Node as unschedulable without affecting the Pods already running on it; commonly used during cluster upgrades and maintenance

drain: evicts the Pods from a node that needs maintenance so that they run on other nodes, and cordons the node

4.1 Creating a Deployment

Generate a Deployment manifest:

kubectl create deployment nginx --image=nginx --dry-run=client -o yaml > d1.yaml

Edit the YAML file and change the replica count to 9 (replicas: 9):

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 9
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}

Apply the Deployment manifest; nine Pods are now running, three on each node:

[root@master ~]# kubectl apply -f d1.yaml            
[root@master ~]# kubectl get pods -o wide 
NAME                     READY   STATUS    RESTARTS   AGE   IP               NODE   
nginx-6799fc88d8-5vtl7   1/1     Running   0          70s   10.244.135.4     node3  
nginx-6799fc88d8-cmwj8   1/1     Running   0          70s   10.244.166.138   node1  
nginx-6799fc88d8-ctzqm   1/1     Running   0          70s   10.244.166.139   node1  
nginx-6799fc88d8-dwdh6   1/1     Running   0          70s   10.244.104.4     node2  
nginx-6799fc88d8-msjlp   1/1     Running   0          70s   10.244.135.5     node3  
nginx-6799fc88d8-qwzgq   1/1     Running   0          70s   10.244.166.140   node1  
nginx-6799fc88d8-rqsn5   1/1     Running   0          70s   10.244.135.3     node3  
nginx-6799fc88d8-vnv2w   1/1     Running   0          70s   10.244.104.5     node2  
nginx-6799fc88d8-xmzkq   1/1     Running   0          70s   10.244.104.6     node2

4.2 Cordoning a node

Cordon node2; the node becomes unschedulable, while its existing Pods keep running:

[root@master ~]# kubectl cordon node2 
node/node2 cordoned

Check node status:

[root@master ~]# kubectl get nodes 
NAME     STATUS                     ROLES                  AGE     VERSION
master   Ready                      control-plane,master   3d16h   v1.21.1
node1    Ready                      <none>                 3d16h   v1.21.1
node2    Ready,SchedulingDisabled   <none>                 3d16h   v1.21.1
node3    Ready                      <none>                 3d16h   v1.21.1

4.3 Scaling out the replicas

Set the replica count to 12:

kubectl scale deployment nginx --replicas=12

Check Pod status:

# node2 no longer accepts new Pods
[root@master ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP               NODE 
nginx-6799fc88d8-5vtl7   1/1     Running   0          30m   10.244.135.4     node3
nginx-6799fc88d8-cmwj8   1/1     Running   0          30m   10.244.166.138   node1
nginx-6799fc88d8-ctzqm   1/1     Running   0          30m   10.244.166.139   node1
nginx-6799fc88d8-dwdh6   1/1     Running   0          30m   10.244.104.4     node2
nginx-6799fc88d8-jsfxq   1/1     Running   0          65s   10.244.135.6     node3
nginx-6799fc88d8-msjlp   1/1     Running   0          30m   10.244.135.5     node3
nginx-6799fc88d8-mt25q   1/1     Running   0          65s   10.244.135.7     node3
nginx-6799fc88d8-np2pr   1/1     Running   0          65s   10.244.166.141   node1
nginx-6799fc88d8-qwzgq   1/1     Running   0          30m   10.244.166.140   node1
nginx-6799fc88d8-rqsn5   1/1     Running   0          30m   10.244.135.3     node3
nginx-6799fc88d8-vnv2w   1/1     Running   0          30m   10.244.104.5     node2
nginx-6799fc88d8-xmzkq   1/1     Running   0          30m   10.244.104.6     node2

Delete the 3 Pods remaining on node2:

[root@master ~]# kubectl get pods -o wide | grep node2 
nginx-6799fc88d8-dwdh6   1/1     Running   0          31m     10.244.104.4     node2   <none>           <none>
nginx-6799fc88d8-vnv2w   1/1     Running   0          31m     10.244.104.5     node2   <none>           <none>
nginx-6799fc88d8-xmzkq   1/1     Running   0          31m     10.244.104.6     node2   <none>           <none>
[root@master ~]# kubectl delete pod nginx-6799fc88d8-dwdh6 nginx-6799fc88d8-vnv2w nginx-6799fc88d8-xmzkq
pod "nginx-6799fc88d8-dwdh6" deleted
pod "nginx-6799fc88d8-vnv2w" deleted
pod "nginx-6799fc88d8-xmzkq" deleted

The replacement Pods are now scheduled onto node1 and node3:

[root@master ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP               NODE  
nginx-6799fc88d8-5vtl7   1/1     Running   0          32m     10.244.135.4     node3 
nginx-6799fc88d8-6dtvp   1/1     Running   0          59s     10.244.166.143   node1 
nginx-6799fc88d8-9bxxw   1/1     Running   0          59s     10.244.135.8     node3 
nginx-6799fc88d8-cmwj8   1/1     Running   0          32m     10.244.166.138   node1 
nginx-6799fc88d8-ctzqm   1/1     Running   0          32m     10.244.166.139   node1 
nginx-6799fc88d8-hnm62   1/1     Running   0          59s     10.244.166.142   node1 
nginx-6799fc88d8-jsfxq   1/1     Running   0          3m35s   10.244.135.6     node3 
nginx-6799fc88d8-msjlp   1/1     Running   0          32m     10.244.135.5     node3 
nginx-6799fc88d8-mt25q   1/1     Running   0          3m35s   10.244.135.7     node3 
nginx-6799fc88d8-np2pr   1/1     Running   0          3m35s   10.244.166.141   node1 
nginx-6799fc88d8-qwzgq   1/1     Running   0          32m     10.244.166.140   node1 
nginx-6799fc88d8-rqsn5   1/1     Running   0          32m     10.244.135.3     node3

4.4 Removing the cordon

[root@master ~]# kubectl uncordon node2 
node/node2 uncordoned

4.5 Draining a node

Draining evicts the Pods from a node that needs maintenance so that they run on other nodes, and cordons the node:

# Drain node1
kubectl drain node1 --ignore-daemonsets --delete-emptydir-data

Check node status:

[root@master ~]# kubectl get nodes 
NAME     STATUS                     ROLES                  AGE     VERSION
master   Ready                      control-plane,master   3d17h   v1.21.1
node1    Ready,SchedulingDisabled   <none>                 3d16h   v1.21.1
node2    Ready                      <none>                 3d16h   v1.21.1
node3    Ready                      <none>                 3d16h   v1.21.1

Check the Pods:

[root@master ~]# kubectl get pods -o wide 
NAME                     READY   STATUS    RESTARTS   AGE     IP              NODE 
nginx-6799fc88d8-5vtl7   1/1     Running   0          41m     10.244.135.4    node3
nginx-6799fc88d8-88wpm   1/1     Running   0          98s     10.244.104.12   node2
nginx-6799fc88d8-89kr6   1/1     Running   0          98s     10.244.104.7    node2
nginx-6799fc88d8-9bxxw   1/1     Running   0          9m36s   10.244.135.8    node3
nginx-6799fc88d8-jsfxq   1/1     Running   0          12m     10.244.135.6    node3
nginx-6799fc88d8-kxqc6   1/1     Running   0          98s     10.244.104.13   node2
nginx-6799fc88d8-msjlp   1/1     Running   0          41m     10.244.135.5    node3
nginx-6799fc88d8-mt25q   1/1     Running   0          12m     10.244.135.7    node3
nginx-6799fc88d8-n876z   1/1     Running   0          98s     10.244.104.11   node2
nginx-6799fc88d8-nf726   1/1     Running   0          98s     10.244.104.10   node2
nginx-6799fc88d8-rqsn5   1/1     Running   0          41m     10.244.135.3    node3
nginx-6799fc88d8-zsd72   1/1     Running   0          98s     10.244.104.9    node2

When maintenance is finished, make the node schedulable again:

[root@master ~]# kubectl uncordon node1 
node/node1 uncordoned
[root@master ~]# kubectl get nodes 
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   3d17h   v1.21.1
node1    Ready    <none>                 3d16h   v1.21.1
node2    Ready    <none>                 3d16h   v1.21.1
node3    Ready    <none>                 3d16h   v1.21.1

5. Node Taints and Pod Tolerations

Taints and tolerations are used together to keep Pods off of unsuitable Nodes:

  • Taints are applied to Nodes
  • Tolerations are applied to Pods (and are optional)
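A toleration can match a taint exactly (operator: Equal) or by key alone (operator: Exists). A sketch of both forms, using the keyxx/valuexx taint set later in this section:

```yaml
tolerations:
# matches only a taint with exactly this key, value, and effect
- key: keyxx
  operator: Equal
  value: valuexx
  effect: NoSchedule
# matches any taint with key keyxx, regardless of its value
- key: keyxx
  operator: Exists
  effect: NoSchedule
```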

5.1 Viewing a node's taints

[root@master ~]# kubectl describe nodes node1 | grep -Ei 'roles|taints'
Roles:              <none>
Taints:             <none>

5.2 Tainting a node

The most common taint effect is NoSchedule (the other effects are PreferNoSchedule and NoExecute):

[root@master ~]# kubectl taint nodes node1 keyxx=valuexx:NoSchedule
node/node1 tainted
[root@master ~]# kubectl describe nodes node1 | grep -Ei 'roles|taints'
Roles:              <none>
Taints:             keyxx=valuexx:NoSchedule

5.3 Creating the Pods

[root@master ~]# kubectl apply -f d1.yaml 
deployment.apps/nginx created

5.4 Checking Pod placement

Because node1 has been tainted, the scheduler does not place any Pods on it:

[root@master ~]# kubectl get pods -o wide 
NAME                     READY   STATUS              RESTARTS   AGE   IP       NODE 
nginx-6799fc88d8-5jtgh   0/1     ContainerCreating   0          7s    <none>   node3
nginx-6799fc88d8-62drm   0/1     ContainerCreating   0          7s    <none>   node2
nginx-6799fc88d8-mtlxt   0/1     ContainerCreating   0          7s    <none>   node3
nginx-6799fc88d8-p8md8   0/1     ContainerCreating   0          7s    <none>   node2
nginx-6799fc88d8-snn46   0/1     ContainerCreating   0          7s    <none>   node2
nginx-6799fc88d8-xxs56   0/1     ContainerCreating   0          7s    <none>   node2
nginx-6799fc88d8-z6jms   0/1     ContainerCreating   0          7s    <none>   node3
nginx-6799fc88d8-zbgts   0/1     ContainerCreating   0          7s    <none>   node3
nginx-6799fc88d8-zz67w   0/1     ContainerCreating   0          7s    <none>   node3

5.5 Deleting the Deployment

[root@master ~]# kubectl delete deployments.apps nginx 
deployment.apps "nginx" deleted

5.6 Running a Pod on a tainted node

# Generate a manifest
kubectl run podtaint --image=nginx --dry-run=client -o yaml > podtaint.yaml

# Add a label to node1
kubectl label node node1 diskxx=ssdxx

Configure the manifest as follows:

[root@master ~]# cat podtaint.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podtaint
  name: podtaint
spec:
  # node label selector
  nodeSelector:
    diskxx: ssdxx
  # toleration matching the taint's key, value, and effect (NoSchedule)
  tolerations:
  - key: keyxx
    operator: Equal
    value: valuexx
    effect: NoSchedule
  containers:
  - image: nginx
    name: podtaint
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

5.7 Checking Pod status

The Pod is now running on node1 as expected:

[root@master ~]# kubectl get pod -o wide 
NAME       READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
podtaint   1/1     Running   0          19s   10.244.166.145   node1   <none>           <none>


Last edited on November 8, 2023