10. ReplicaSet Manual Blue-Green Deployment, Rolling Release, and Rollback; Deployment Automatic Rolling Release, Rollback, and Canary Release
Kubernetes Controllers
Types of Kubernetes controllers
  ◼ Controllers built into the Controller Manager, such as the Service Controller and the Deployment Controller
      ◆basic, core controllers
      ◆packaged and run inside kube-controller-manager
  ◼ Dedicated controllers from add-ons or third-party applications, such as the Controller of the ingress-nginx Ingress add-on, or the Controller of the Project Calico network plugin
      ◆advanced controllers, which usually rely on the basic controllers to accomplish their work
      ◆they run as Pods hosted on Kubernetes, and those Pods are in turn typically managed by the built-in controllers


Controllers whose core purpose is orchestrating applications that run as Pods are collectively called workload controllers
  ◼ stateless application orchestration: ReplicaSet, Deployment
  ◼ stateful application orchestration: StatefulSet, or a dedicated third-party Operator
  ◼ system-level applications: DaemonSet
  ◼ batch applications: Job and CronJob
Application Orchestration
Orchestrate an application by defining a workload controller resource object
  ◼ Each built-in workload controller has a corresponding resource type, and the controller manages only objects of its corresponding type
      ◆for example, the Deployment controller corresponds to the Deployment resource type
      ◆these resource types are standard Kubernetes API resources and follow the common resource specification format
  ◼ To orchestrate an application, first determine its intended running model, such as the number of instances and termination conditions, and pick the matching workload controller type for that model
  ◼ Then write a resource manifest following that controller's resource-type specification and submit it to the API Server
  ◼ Once the controller observes the newly created resource on the API Server, it runs its reconciliation logic to make the object's actual state (Status) match the desired state (Spec)


Note: a controller object is only responsible for ensuring that the API Server holds the right number of Pod object definitions matching its label selector; making each Pod's
Status agree with its Spec is the job of the kubelet on the corresponding node
Introduction to the Deployment Controller
The basic controller for orchestrating stateless applications is ReplicaSet; its resource type defines how a stateless application is orchestrated through three key components
   ◼ replicas: the desired number of Pod replicas
   ◼ selector: the label selector
   ◼ podTemplate: the Pod template


Deployment is a higher-level controller built on top of the ReplicaSet controller
   ◼ it relies on ReplicaSet to carry out the basic orchestration of stateless applications
   ◼ sitting one layer above ReplicaSet, it adds more powerful application-orchestration features such as rolling updates and rollback
   ◼ it is an orchestration tool for ReplicaSet resources
      ◆Deployment orchestrates ReplicaSets
      ◆ReplicaSet orchestrates Pods (the ownership chain can be inspected as sketched below)
   ◼ in practice, though, you should define a Deployment resource directly to orchestrate the Pod application; the ReplicaSet never needs to be declared explicitly
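A minimal sketch for inspecting that ownership chain, assuming a Deployment-managed Pod labeled app=demoapp already exists in the cluster:

# the Pod is owned by a ReplicaSet, which is in turn owned by the Deployment
kubectl get pods -l app=demoapp -o jsonpath='{.items[0].metadata.ownerReferences[0].kind}'   # prints: ReplicaSet
kubectl get rs -l app=demoapp -o jsonpath='{.items[0].metadata.ownerReferences[0].kind}'     # prints: Deployment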


ReplicaSet Resource Specification
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-example
spec:
  replicas: 2          # number of replicas
  selector:            # label selector
    matchLabels:  
      app: demoapp
      release: stable
  template:            # Pod template
    metadata:
      labels:
        app: demoapp
        release: stable
    spec:
      containers:
      - name: demoapp
        image: ikubernetes/demoapp:v1.0

Each Pod's name uses the ReplicaSet name as its prefix, with a randomly generated suffix
Deployment Resource Specification
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-example
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demoapp
      release: stable
  template:
    metadata:
      labels:
        app: demoapp
        release: stable
    spec:
      containers:
      - name: demoapp
        image: ikubernetes/demoapp:v1.0

The ReplicaSet object's name uses the Deployment name as its prefix and the Pod template's hash as its suffix
Each Pod's name uses the Deployment name as its prefix, the Pod template's hash in the middle, and a randomly generated suffix
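For example (the names here are purely illustrative), a Deployment named deploy-example might produce a ReplicaSet named deploy-example-5d7f9c6b8 and Pods named deploy-example-5d7f9c6b8-xk2lp and deploy-example-5d7f9c6b8-q9wz4.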
Example: ReplicaSet (manually simulating a blue-green deployment)

First delete all existing Services and Deployments in the cluster

Initial version: v1.0

Define the v1.0 version
[root@K8s-master01 chapter8]#cat replicaset-demo.yaml 
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-demo
spec:
  minReadySeconds: 3
  replicas: 2                # two replicas
  selector:                  # this label selector is richer than a Service's
    matchLabels:             # kubectl explain rs.spec.selector (matchExpressions and matchLabels)
      app: demoapp
      release: stable
      version: v1.0
  template:                  # the template always defines a Pod resource
    metadata:
      labels:                # the labels must satisfy the selector
        app: demoapp
        release: stable
        version: v1.0
    spec:
      containers:
      - name: demoapp
        image: ikubernetes/demoapp:v1.0
        ports:
        - name: http
          containerPort: 80
        livenessProbe:
          httpGet:
            path: '/livez'
            port: 80
          initialDelaySeconds: 15
        readinessProbe:
          httpGet:
            path: '/readyz'
            port: 80
          initialDelaySeconds: 15
Create the v1.0 ReplicaSet
[root@K8s-master01 chapter8]#kubectl apply -f replicaset-demo.yaml 
replicaset.apps/replicaset-demo created
[root@K8s-master01 chapter8]#kubectl get rs
NAME                 DESIRED   CURRENT   READY   AGE
demoapp-55c5f88dcb   3         3         3       21h
replicaset-demo      2         2         0       16s

[root@K8s-master01 chapter8]#kubectl get pods
NAME                       READY   STATUS      RESTARTS       AGE
replicaset-demo-mfpll      1/1     Running     0              101s
replicaset-demo-mxlh6      1/1     Running     1 (40s ago)    101s
Changing the replicas value only makes the application scale out or in; it never updates the application version.
To roll out a new application version, you must change the contents of the template. Scaling itself can also be done imperatively, as sketched below.
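A minimal sketch of imperative scaling (note that the next kubectl apply of the manifest reverts the count):

kubectl scale rs replicaset-demo --replicas=3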

Create the Service
[root@K8s-master01 chapter8]#cat service-blue-green.yaml 
---
apiVersion: v1
kind: Service
metadata:
  name: demoapp-svc
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: demoapp
    version: v1.0
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
[root@K8s-master01 chapter8]#kubectl apply -f service-blue-green.yaml 
service/demoapp-svc created
[root@K8s-master01 chapter8]#kubectl get svc
NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
demoapp-svc   ClusterIP   10.103.99.11   <none>        80/TCP    10s
kubernetes    ClusterIP   10.96.0.1      <none>        443/TCP   4m6s

demoapp-svc becomes usable once its Pods pass the readiness probe
[root@K8s-master01 chapter8]#kubectl get ep
NAME          ENDPOINTS                                         AGE
demoapp-svc   10.244.4.64:80,10.244.5.51:80                     2m1s
kubernetes    10.0.0.100:6443,10.0.0.101:6443,10.0.0.102:6443   5m57s

Launch a client Pod to issue requests against the Service continuously
[root@K8s-master01 chapter8]#kubectl run client-$RANDOM --image=ikubernetes/admin-box:v1.2 --restart=Never -it --rm --command -- /bin/bash

Access the Service continuously
root@client-20739 /# while true; do curl demoapp-svc.default; sleep .5; done
iKubernetes demoapp v1.0 !! ClientIP: 10.244.3.68, ServerName: replicaset-demo-mfpll, ServerIP: 10.244.5.51!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.3.68, ServerName: replicaset-demo-mfpll, ServerIP: 10.244.5.51!

Now perform the update and watch whether the service is interrupted

Upgrading the version

Define the v1.1 version
[root@K8s-master01 chapter8]#cat replicaset-demo-v1.1.yaml 
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-demo-v1.1   # must not share a name with the v1.0 ReplicaSet
spec:
  minReadySeconds: 3
  replicas: 2
  selector:
    matchLabels:
      app: demoapp
      release: stable
      version: v1.1
  template:
    metadata:
      labels:
        app: demoapp
        release: stable
        version: v1.1
    spec:
      containers:
      - name: demoapp
        image: ikubernetes/demoapp:v1.1
        ports:
        - name: http
          containerPort: 80
        livenessProbe:
          httpGet:
            path: '/livez'
            port: 80
          initialDelaySeconds: 5
        readinessProbe:
          httpGet:
            path: '/readyz'
            port: 80
          initialDelaySeconds: 15
Create the upgraded v1.1 ReplicaSet
[root@K8s-master01 chapter8]#kubectl apply -f replicaset-demo-v1.1.yaml 
replicaset.apps/replicaset-demo-v1.1 created

[root@K8s-master01 chapter8]#kubectl get pods
NAME                         READY   STATUS      RESTARTS        AGE
replicaset-demo-v1.1-mjfk8   1/1     Running     0               2m42s
replicaset-demo-v1.1-qm8zv   1/1     Running     0               2m42s
[root@K8s-master01 chapter8]#kubectl get svc -o wide
NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE    SELECTOR
demoapp-svc   ClusterIP   10.103.99.11   <none>        80/TCP    137m   app=demoapp,version=v1.0
kubernetes    ClusterIP   10.96.0.1      <none>        443/TCP   141m   <none>
Checking the traffic shows responses are still from v1.0, and the Service's selector shown above still targets v1.0, so the v1.1 Pods receive no traffic from this backend even once they are ready.
v1.1 is healthy, but requests still reach the old v1.0 version; to switch over, change the selector in the Service's YAML file
[root@K8s-master01 chapter8]#cat service-blue-green.yaml 
---
apiVersion: v1
kind: Service
metadata:
  name: demoapp-svc
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: demoapp
    version: v1.1
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
Apply it
[root@K8s-master01 chapter8]#kubectl apply -f service-blue-green.yaml 
service/demoapp-svc configured

Check the traffic (it switches from v1.0 to v1.1)
iKubernetes demoapp v1.0 !! ClientIP: 10.244.3.68, ServerName: replicaset-demo-mxlh6, ServerIP: 10.244.4.64!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.3.68, ServerName: replicaset-demo-mfpll, ServerIP: 10.244.5.51!
iKubernetes demoapp v1.1 !! ClientIP: 10.244.3.68, ServerName: replicaset-demo-v1.1-qm8zv, ServerIP: 10.244.3.69!
iKubernetes demoapp v1.1 !! ClientIP: 10.244.3.68, ServerName: replicaset-demo-v1.1-mjfk8, ServerIP: 10.244.5.52!
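Editing and re-applying the YAML file is not the only way to flip the selector; a minimal sketch of the same cutover using kubectl patch:

# point the Service's selector at the v1.1 Pods in place
kubectl patch svc demoapp-svc -p '{"spec":{"selector":{"app":"demoapp","version":"v1.1"}}}'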
If the new version turns out to be faulty and you need to roll back to v1.0, just change the version in the Service's YAML file back to v1.0 and apply it again
[root@K8s-master01 chapter8]#cat service-blue-green.yaml 
---
apiVersion: v1
kind: Service
metadata:
  name: demoapp-svc
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: demoapp
    version: v1.0
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
[root@K8s-master01 chapter8]#kubectl apply -f service-blue-green.yaml 
service/demoapp-svc configured
The requests switch back to v1.0
iKubernetes demoapp v1.1 !! ClientIP: 10.244.3.68, ServerName: replicaset-demo-v1.1-mjfk8, ServerIP: 10.244.5.52!
iKubernetes demoapp v1.1 !! ClientIP: 10.244.3.68, ServerName: replicaset-demo-v1.1-mjfk8, ServerIP: 10.244.5.52!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.3.68, ServerName: replicaset-demo-mfpll, ServerIP: 10.244.5.51!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.3.68, ServerName: replicaset-demo-mxlh6, ServerIP: 10.244.4.64!
Manual Rolling Release with a ReplicaSet
Initial state: set the v1.0 ReplicaSet's replica count to 5 and the v1.1 ReplicaSet's to 0, and delete the previously used Service

Adjust the v1.0 version
[root@K8s-master01 chapter8]#cat replicaset-demo.yaml 
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-demo
spec:
  minReadySeconds: 3
  replicas: 5
  selector:
    matchLabels:
      app: demoapp
      release: stable
      version: v1.0
  template:
    metadata:
      labels:
        app: demoapp
        release: stable
        version: v1.0
    spec:
      containers:
      - name: demoapp
        image: ikubernetes/demoapp:v1.0
        ports:
        - name: http
          containerPort: 80
        livenessProbe:
          httpGet:
            path: '/livez'
            port: 80
          initialDelaySeconds: 5
        readinessProbe:
          httpGet:
            path: '/readyz'
            port: 80
          initialDelaySeconds: 15
Apply the v1.0 version
[root@K8s-master01 chapter8]#kubectl apply -f replicaset-demo.yaml 
replicaset.apps/replicaset-demo configured


Adjust the v1.1 version
[root@K8s-master01 chapter8]#cat replicaset-demo-v1.1.yaml 
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-demo-v1.1
spec:
  minReadySeconds: 3
  replicas: 0
  selector:
    matchLabels:
      app: demoapp
      release: stable
      version: v1.1
  template:
    metadata:
      labels:
        app: demoapp
        release: stable
        version: v1.1
    spec:
      containers:
      - name: demoapp
        image: ikubernetes/demoapp:v1.1
        ports:
        - name: http
          containerPort: 80
        livenessProbe:
          httpGet:
            path: '/livez'
            port: 80
          initialDelaySeconds: 5
        readinessProbe:
          httpGet:
            path: '/readyz'
            port: 80
          initialDelaySeconds: 15
Apply the v1.1 version
[root@K8s-master01 chapter8]#kubectl apply -f replicaset-demo-v1.1.yaml 
replicaset.apps/replicaset-demo-v1.1 configured


Delete the Service
[root@K8s-master01 chapter8]#kubectl delete svc demoapp-svc
service "demoapp-svc" deleted

Copy the old Service's YAML file and delete the version label from the copy; this means Pods of both the old and the new version satisfy the Service's selector
[root@K8s-master01 chapter8]#cp service-blue-green.yaml service-rollout.yaml
[root@K8s-master01 chapter8]#vim service-rollout.yaml 
[root@K8s-master01 chapter8]#cat service-rollout.yaml 
---
apiVersion: v1
kind: Service
metadata:
  name: demoapp-svc
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: demoapp
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
Create the new Service from the new YAML file
[root@K8s-master01 chapter8]#kubectl apply -f service-rollout.yaml 
service/demoapp-svc created

Have the client access the Service continuously again (every response is from the old version, since no new-version Pods exist yet)
root@client-20739 /# while true; do curl demoapp-svc.default; sleep .5; done
iKubernetes demoapp v1.0 !! ClientIP: 10.244.3.68, ServerName: replicaset-demo-5fglx, ServerIP: 10.244.5.53!
Now edit the v1.1 YAML file and set the replica count to 1
[root@K8s-master01 chapter8]#cat replicaset-demo-v1.1.yaml 
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-demo-v1.1
spec:
  minReadySeconds: 3
  replicas: 1
  selector:
    matchLabels:
      app: demoapp
      release: stable
      version: v1.1
  template:
    metadata:
      labels:
        app: demoapp
        release: stable
        version: v1.1
    spec:
      containers:
      - name: demoapp
        image: ikubernetes/demoapp:v1.1
        ports:
        - name: http
          containerPort: 80
        livenessProbe:
          httpGet:
            path: '/livez'
            port: 80
          initialDelaySeconds: 5
        readinessProbe:
          httpGet:
            path: '/readyz'
            port: 80
          initialDelaySeconds: 15
Apply the v1.1 version
[root@K8s-master01 chapter8]#kubectl apply -f replicaset-demo-v1.1.yaml 
replicaset.apps/replicaset-demo-v1.1 configured

Once the v1.1 Pod becomes ready, the Service starts directing part of the traffic to it
iKubernetes demoapp v1.0 !! ClientIP: 10.244.3.68, ServerName: replicaset-demo-5fglx, ServerIP: 10.244.5.53!
iKubernetes demoapp v1.1 !! ClientIP: 10.244.3.68, ServerName: replicaset-demo-v1.1-gcgw6, ServerIP: 10.244.5.54!

After a while, if v1.1 shows no problems, proceed with the rollout and scale v1.0 down: set its replica count to 4.
Step by step, scale v1.1 up and v1.0 down until v1.1 reaches 5 replicas and v1.0 reaches 0, e.g. as sketched below.
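Instead of editing the two YAML files at every step, the whole hand-rolled sequence can be driven with kubectl scale; a minimal sketch (waiting for the new Pods to become ready between steps is left to the operator):

kubectl scale rs replicaset-demo-v1.1 --replicas=2 && kubectl scale rs replicaset-demo --replicas=3
kubectl scale rs replicaset-demo-v1.1 --replicas=4 && kubectl scale rs replicaset-demo --replicas=1
kubectl scale rs replicaset-demo-v1.1 --replicas=5 && kubectl scale rs replicaset-demo --replicas=0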
Delete the v1.0 ReplicaSet, the v1.1 ReplicaSet, and the Service above, to prepare for the Deployment experiment
[root@K8s-master01 chapter8]#kubectl delete -f replicaset-demo.yaml -f replicaset-demo-v1.1.yaml -f service-rollout.yaml 
replicaset.apps "replicaset-demo" deleted
replicaset.apps "replicaset-demo-v1.1" deleted
service "demoapp-svc" deleted
Deployment Resource Specification
[root@K8s-master01 chapter8]#kubectl explain deployment.spec
replicas: the replica count
selector: the label selector
template: the Pod template
revisionHistoryLimit: how many old revisions to retain, 10 by default
strategy: defines the update strategy
               Recreate: recreate (kill all old Pods first, then create new ones)
               RollingUpdate: rolling update, e.g. replacing a few Pods at a time; this is the default strategy
Whether Pods are removed first and then added, or added first and then removed, is governed by the following fields:
maxSurge: the number of extra Pods allowed during the update; by default the Pod count may exceed the desired count by up to 25%
maxUnavailable: the number of missing Pods allowed during the update; by default the Pod count may fall below the desired count by up to 25%

While old and new are being swapped, both versions exist at the same time, so part of the client traffic goes to the new version and part to the old one.
This is the inherent limitation of rolling updates: backward compatibility between versions may be a problem. If it is, use a blue-green deployment; if the old and new versions are compatible, use a rolling update. A worked example of the defaults follows.
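For instance, with replicas: 4 and the 25% defaults, maxSurge and maxUnavailable each work out to 1 Pod, so a rolling update may briefly run 5 Pods in total while always keeping at least 3 of them available.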
Example: Deployment
[root@K8s-master01 chapter8]#cat deployment-demo.yaml 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-demo
spec:
  replicas: 4
  selector:
    matchLabels:
      app: demoapp
      release: stable
  template:
    metadata:
      labels:
        app: demoapp
        release: stable
    spec:
      containers:
      - name: demoapp
        image: ikubernetes/demoapp:v1.0
        ports:
        - containerPort: 80
          name: http
        livenessProbe:
          httpGet:
            path: '/livez'
            port: 80
          initialDelaySeconds: 5
        readinessProbe:
          httpGet:
            path: '/readyz'
            port: 80
          initialDelaySeconds: 15
Create the Deployment
[root@K8s-master01 chapter8]#kubectl apply -f deployment-demo.yaml 
deployment.apps/deployment-demo created

Define the Service YAML file
[root@K8s-master01 chapter8]#cat service-rollout.yaml 
---
apiVersion: v1
kind: Service
metadata:
  name: demoapp-svc
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: demoapp
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
Create the Service
[root@K8s-master01 chapter8]#kubectl apply -f service-rollout.yaml 
service/demoapp-svc created

Have the client continuously request the Service; the responses come from v1.0
root@client-20739 /# while true; do curl demoapp-svc.default; sleep .5; done
iKubernetes demoapp v1.0 !! ClientIP: 10.244.3.68, ServerName: deployment-demo-584777f66f-jqhwc, ServerIP: 10.244.5.55!
To have the Deployment update automatically, just modify the image version in the Deployment's own YAML file (no separate v1.1 YAML file is needed)
[root@K8s-master01 chapter8]#cat deployment-demo.yaml 
# VERSION: demoapp version
# Maintainer: MageEdu <mage@magedu.com>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-demo
spec:
  replicas: 4
  selector:
    matchLabels:
      app: demoapp
      release: stable
  template:
    metadata:
      labels:
        app: demoapp
        release: stable
    spec:
      containers:
      - name: demoapp
        image: ikubernetes/demoapp:v1.1
        ports:
        - containerPort: 80
          name: http
        livenessProbe:
          httpGet:
            path: '/livez'
            port: 80
          initialDelaySeconds: 5
        readinessProbe:
          httpGet:
            path: '/readyz'
            port: 80
          initialDelaySeconds: 15
[root@K8s-master01 chapter8]#kubectl apply -f deployment-demo.yaml 
deployment.apps/deployment-demo configured

Once all the new Pods are ready, check the traffic: responses from v1.0 gradually dwindle to none, while responses from v1.1 keep growing.
[root@K8s-master01 chapter8]#kubectl get rs -w
NAME                         DESIRED   CURRENT   READY   AGE
deployment-demo-584777f66f   0         0         0       7m38s
deployment-demo-8dcb44cb     4         4         4       2m50s

Pod transitions while updating from v1.0 to v1.1:
1. v1.0 goes from 4 Pods to 3, while v1.1 adds 2 Pods, for 5 Pods in total
2. v1.0 goes from 3 Pods to 1, while v1.1 grows to 4 Pods, for 5 Pods in total
3. v1.0 drops to 0 Pods, leaving only the 4 v1.1 Pods; the version upgrade is complete
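These counts match the default strategy: with replicas: 4, maxSurge and maxUnavailable both default to 25%, i.e. 1 Pod each, so the rollout never runs more than 5 Pods in total and never lets the available count drop below 3.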
If a problem is found in the new version, you can roll back
View the rollout history
[root@K8s-master01 chapter8]#kubectl rollout history deployment deployment-demo
deployment.apps/deployment-demo 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>   # the highest revision number is the current one
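To see what a given revision contained (its Pod template), the history command accepts a revision number; a minimal sketch:

kubectl rollout history deployment deployment-demo --revision=2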

To update, change the version in the Deployment's YAML file back to v1.0 and apply it again; once the Pods are ready, traffic automatically reaches v1.0

Alternatively, perform a rollback, which likewise returns the Deployment to v1.0
kubectl rollout undo deployment deployment-demo
Simulating a Canary Release
Define the YAML file
[root@K8s-master01 chapter8]#cat deployment-demo.yaml 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-demo
spec:
  replicas: 4
  selector:
    matchLabels:
      app: demoapp
      release: stable
  strategy:                  # defines the Pod strategy during rolling updates
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # during the rolling update, allow at most 1 Pod above the count declared by the replicas field
      maxUnavailable: 0      # during the rolling update, allow 0 Pods below the count declared by the replicas field
  template:
    metadata:
      labels:
        app: demoapp
        release: stable
    spec:
      containers:
      - name: demoapp
        image: ikubernetes/demoapp:v1.0
        ports:
        - containerPort: 80
          name: http
        livenessProbe:
          httpGet:
            path: '/livez'
            port: 80
          initialDelaySeconds: 5
        readinessProbe:
          httpGet:
            path: '/readyz'
            port: 80
          initialDelaySeconds: 15
Create the Deployment
[root@K8s-master01 chapter8]#kubectl apply -f deployment-demo.yaml 
deployment.apps/deployment-demo configured
View the strategy defined on the Deployment
[root@K8s-master01 chapter8]#kubectl get deployment -o yaml
strategy:
      rollingUpdate:
        maxSurge: 1
        maxUnavailable: 0
View the details (the strategy is visible in the kubectl describe output as well)
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  0 max unavailable, 1 max surge
Start an update, and have it pause itself immediately after the first update step
[root@K8s-master01 chapter8]#kubectl rollout --help
Available Commands:
  history       view rollout history
  pause         mark the provided resource as paused
  restart       restart a resource
  resume        resume a paused resource
  status        show the status of the rollout
  undo          undo a previous rollout
  
Change the image of the demoapp container in the deployment-demo Deployment's Pods, and pause the rollout right after one update step

View the Deployment's summary information
[root@K8s-master01 chapter8]#kubectl get deployment -o wide
NAME              READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES                     SELECTOR
deployment-demo   4/4     4            4           3h26m   demoapp      ikubernetes/demoapp:v1.0   app=demoapp,release=stable

Update the image and pause:
[root@K8s-master01 chapter8]#kubectl set image deployment/deployment-demo demoapp=ikubernetes/demoapp:v1.1 -n default && kubectl rollout pause deployment/deployment-demo
deployment.apps/deployment-demo image updated
deployment.apps/deployment-demo paused

List the Pods: there are now 5
[root@K8s-master01 chapter8]#kubectl get pods
NAME                               READY   STATUS      RESTARTS      AGE
deployment-demo-584777f66f-6d29s   1/1     Running     0             28m
deployment-demo-584777f66f-kfb7b   1/1     Running     1 (27m ago)   28m
deployment-demo-584777f66f-ll4gh   1/1     Running     0             28m
deployment-demo-584777f66f-n9ck2   1/1     Running     1 (27m ago)   28m
deployment-demo-8dcb44cb-fjt2t     1/1     Running     0             42s

The Service now has 5 endpoints
[root@K8s-master01 chapter8]#kubectl describe svc demoapp-svc
Name:              demoapp-svc
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=demoapp
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.101.199.252
IPs:               10.101.199.252
Port:              http  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.3.73:80,10.244.4.68:80,10.244.4.69:80 + 2 more...
Session Affinity:  None
Events:            <none>

Create a client to access the Service; both the new and the old version respond
[root@K8s-master01 chapter8]#kubectl run client-$RANDOM --image=ikubernetes/admin-box:v1.2 --restart=Never --rm -it --command -- /bin/bash
If you don't see a command prompt, try pressing enter.
root@client-7438 /# curl demoapp-svc.default      # the .default suffix can be omitted within the same namespace
iKubernetes demoapp v1.0 !! ClientIP: 10.244.3.74, ServerName: deployment-demo-584777f66f-kfb7b, ServerIP: 10.244.4.68!
root@client-7438 /# curl demoapp-svc.default
iKubernetes demoapp v1.1 !! ClientIP: 10.244.3.74, ServerName: deployment-demo-8dcb44cb-fjt2t, ServerIP: 10.244.4.69!

Keep issuing requests: the canary is allotted about 20% of the traffic
root@client-7438 /# while true; do curl demoapp-svc.default; sleep 1; done
iKubernetes demoapp v1.0 !! ClientIP: 10.244.3.74, ServerName: deployment-demo-584777f66f-6d29s, ServerIP: 10.244.5.59!
iKubernetes demoapp v1.1 !! ClientIP: 10.244.3.74, ServerName: deployment-demo-8dcb44cb-fjt2t, ServerIP: 10.244.4.69!
Giving the canary this large a share of traffic can itself be a risk; a traffic gateway should steer only a small portion of traffic to it.
A Service cannot do that here: its traffic split is determined solely by the Pod ratio and the load-balancing scheduling algorithm.
When the Service's distribution logic cannot meet this need, combine it with an Ingress and an Ingress Controller (a layer-7 load balancer) for more precise control, sending only 1%, and generally at most 5%, of the traffic to the new-version Pods. The components below can support this model (a sketch follows the list):

Ingress: 
   Ingress Controller (layer-7 load balancer)
        Ingress-Nginx
        HAProxy
        Traefik
        Envoy --> Gloo, Contour, Higress, ...
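A minimal sketch of weight-based canary routing with ingress-nginx; the annotations are ingress-nginx's real canary annotations, while the host and the Service demoapp-svc-canary (which would select only the new-version Pods) are hypothetical:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demoapp-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "5"   # send 5% of the traffic to the canary backend
spec:
  ingressClassName: nginx
  rules:
  - host: demoapp.example.com        # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demoapp-svc-canary # hypothetical Service selecting only v1.1 Pods
            port:
              number: 80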
With an Ingress in place, the Service is responsible only for endpoint discovery, and load balancing is handled by the Ingress; the advanced traffic-management capabilities then come from the Ingress Controller.
If this release proves faulty and must be rolled back, kubectl rollout undo takes care of it: the new-version Pods are deleted, and the traffic distribution returns entirely to the old version
View the revision history
[root@K8s-master01 chapter8]#kubectl rollout history deployment deployment-demo
deployment.apps/deployment-demo 
REVISION  CHANGE-CAUSE
3         <none>
4         <none>
The current revision is 4; a rollback returns it to revision 3, but in the history that state is then renumbered 5. Rolling back repeatedly simply alternates between the states of what are now revisions 3 and 4.

To roll back to a specific revision, see kubectl rollout undo --help: --to-revision=0 (the default) means the previous revision.
For example, to roll back to revision 2: kubectl rollout undo deployment deployment-demo --to-revision=2
If no rollback is needed, resume and finish the rolling release
[root@K8s-master01 chapter8]#kubectl rollout resume deployment/deployment-demo
deployment.apps/deployment-demo resumed

Watch the rolling update progress
[root@K8s-master01 chapter8]#kubectl rollout status deployment/deployment-demo
Waiting for deployment "deployment-demo" rollout to finish: 2 out of 4 new replicas have been updated...
Waiting for deployment "deployment-demo" rollout to finish: 3 out of 4 new replicas have been updated...
Summary
Release mechanisms with high-availability characteristics:
Blue-Green:
    no compatibility problems between old and new versions arise
    once the new version is ready, all traffic is switched over to it at once
    taking the old version offline should be delayed for a while
Canary:
    a rolling update that pauses after the first batch of new-version applications is deployed;
    that first batch of the new version is the so-called canary
    requires relatively few additional resources
    compatibility problems between old and new versions can arise
Application orchestration: 
    the controller pattern and the declarative API
Workload controllers: 
        stateless application orchestration: ReplicaSet and Deployment
            controller 
            declarative API (resource type)
            resource objects 
            
            the declared desired state:
            spec:
                replicas: the desired number of Pod replicas; Pods have no fixed binding to particular nodes
                selector 
                    matchLabels
                    matchExpressions
                template: <object> 
                    metadata: 
                        labels: 
                    spec: 

                strategy:
                    type: RollingUpdate
                    rollingUpdate:
                        maxSurge: the number or percentage of Pods allowed above the replicas count during a rolling update; defaults to 25%
                        maxUnavailable: the number or percentage of Pods allowed below the replicas count during a rolling update; defaults to 25%
replicas: changes trigger scale-out or scale-in operations
selector: changes are likely to shrink or grow the set of previously matched Pods; adding a selector label de-matches existing Pods (the controller then creates replacements from the template), while removing one widens the match
template: only changes to the template trigger a rolling update, which can be watched as sketched below
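A minimal sketch for observing that behavior (start it before changing the template): the Deployment creates a brand-new ReplicaSet for the updated template while scaling the old one down to 0:

kubectl get rs -l app=demoapp -w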

A Deployment is meant exclusively for orchestrating stateless applications: it assumes that for any such application, deleting a Pod and adding one freshly created from the template is all that is ever needed, with no extra handling.

Orchestrating a stateful application such as MySQL with a Deployment would mean the application either runs as a single instance or as instances with no relationship to one another, which is only suitable for testing.

Building a MySQL primary/replica cluster with a Deployment is next to impossible; other controller resource types are used for that.
