[Kubernetes] K8s Cluster Installation and Deployment Tutorial - (1) Preparation

Environment:

Topology:

1 Master + 2 Nodes

NO.   Hostname   IP Address       Spec          Role
1     M001       192.168.11.120   1C 1GB 40GB   Master (management) node
2     N001       192.168.11.121   1C 1GB 40GB   Worker node
3     N002       192.168.11.122   1C 1GB 40GB   Worker node

OS:

CentOS Stream 9, minimal install

Kernel version: 5.14.0-171.el9.x86_64 (you can also upgrade the kernel to the latest version yourself)

Update the system to the latest packages:

yum update -y

Note that the kernel version number changes after the update; I had not paid much attention to this before.

After the update, my kernel version was upgraded to 5.14.0-325.el9.x86_64.

Preparation

1. Install the operating system as described above and set the hostname, IP address, and hosts entries.

Set the hostname

# On the Master
hostnamectl set-hostname M001
# On Node 1
hostnamectl set-hostname N001
# On Node 2
hostnamectl set-hostname N002

Set the IP address:

# Text-based UI
>> nmtui
...
>>

or use nmcli directly (CentOS Stream 9 uses NetworkManager; the legacy network-scripts files and "systemctl restart network" are no longer available):

>> nmcli connection modify <connection-name> ipv4.method manual ipv4.addresses 192.168.11.120/24 ipv4.gateway <gateway> ipv4.dns <dns>
>> nmcli connection up <connection-name>
>>

or change it directly in the graphical interface.

Modify the local hosts file

Do this on every server:

>> sudo vi /etc/hosts
192.168.11.120 m001
192.168.11.121 n001
192.168.11.122 n002
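As an optional sanity check (not part of the original steps), you can confirm from any node that each hosts entry resolves and the other machines are reachable:

# optional check: ping each node once by hostname
for h in m001 n001 n002; do ping -c 1 -W 1 $h >/dev/null && echo "$h reachable" || echo "$h NOT reachable"; done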

2. Check that the following prerequisites are met

  • All machines in the cluster can reach one another over the network (either a public or a private network is fine).
  • Time is synchronized on all nodes (consistent time is what matters; you do not strictly have to run chronyd, but enabling it is recommended):
# Start the chronyd service
>> sudo systemctl start chronyd
>> sudo systemctl enable chronyd
>> date
...
>>
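If you want to confirm that time synchronization is actually working (optional check), chrony ships a client you can query:

# optional check: show chrony's sync sources and current tracking status
chronyc sources
chronyc tracking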
  • No two nodes may share the same hostname, MAC address, or product_uuid.
    To check the product_uuid:
sudo cat /sys/class/dmi/id/product_uuid
  • Open the required ports on each machine (according to your own design).
    The Kubernetes components can only talk to each other when the necessary ports are open. You can use a tool such as netcat to check whether a port is reachable, for example:
nc 127.0.0.1 6443

6443 is the default kube-apiserver port. Since we will turn off the firewall later anyway, this port check is not a critical step.

The Pod network add-on you use (covered later) may also require certain ports to be open. Since each Pod network add-on is different, refer to its documentation for the required ports.

The kubelet listens on port 10250 by default.

  • Disable swap. Swap must be disabled for the kubelet to work properly.
    Disable it temporarily:
>> sudo swapoff -a

Disable swap permanently:

>> sudo sed -ri 's/.*swap.*/#&/' /etc/fstab

or:

# Permanently disable the swap partition
>> sudo vi /etc/fstab 
# Comment out the line below
# /dev/mapper/cs-swap swap
# A reboot is required for this to take effect
>>
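A quick way to confirm swap is really off (optional check); both commands should report no active swap:

# optional check
swapon --show
free -h | grep -i swap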
  • Let iptables see bridged traffic
    Make sure the br_netfilter module is loaded. You can check with lsmod | grep br_netfilter. To load it explicitly, run sudo modprobe br_netfilter.
    To make sure it is also loaded automatically after a reboot:
>> cat /etc/modules-load.d/k8s.conf
br_netfilter
>>

For iptables on your Linux nodes to correctly see bridged traffic, net.bridge.bridge-nf-call-iptables must be set to 1 in your sysctl configuration.

Enable bridge filtering and IP forwarding on all nodes:

>> sudo cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
>> sudo sysctl --system    # or: sysctl -p /etc/sysctl.d/k8s.conf
>>
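To verify that the module is loaded and the kernel parameters took effect (optional check):

# optional check: module loaded and all three parameters reported as 1
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward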
  • Disable the firewall and SELinux
    Stop and disable the firewall and flush all iptables rules:
systemctl stop firewalld && systemctl disable firewalld && iptables -F

Disable SELinux:

sed -i 's/enforcing/disabled/' /etc/selinux/config && setenforce 0
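You can confirm the SELinux state afterwards (optional check); getenforce should print Permissive now and Disabled after the next reboot:

# optional check: current enforcement mode and configured mode
getenforce
grep ^SELINUX= /etc/selinux/config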
  • Load the ip_vs kernel modules to enable IPVS for kube-proxy (run this on all three hosts!)
    These modules must be loaded only if kube-proxy runs in ipvs mode [recommended]; we will use iptables mode [the default].
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack    # on kernels >= 4.19 (including the 5.14 kernel of CentOS Stream 9), nf_conntrack_ipv4 has been merged into nf_conntrack

Make sure they are loaded automatically at boot:

>> cat /etc/modules-load.d/ip_vs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
>>

Note: if the prerequisites above are not met, kube-proxy will fall back to iptables mode even when its configuration enables ipvs.

To make it easier to inspect the IPVS proxy rules, it is best to also install the management tool ipvsadm. (Optional)

yum install -y ipset ipvsadm
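To confirm the modules are actually loaded (optional check):

# optional check: the ip_vs modules and nf_conntrack should all appear
lsmod | grep -E 'ip_vs|nf_conntrack'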

Note: IPVS (IP Virtual Server) is a Linux kernel module that implements high-performance load balancing. It works at layer 3 (IP) or layer 4 (transport) and distributes traffic across a group of backend servers to improve service availability and throughput.

IPVS can spread requests across backends using different load-balancing algorithms (round robin, source-address hashing, least connections, and so on) and excludes failed nodes via health checks. It also supports session persistence, so requests from the same client can always be routed to the same backend.

Using IPVS requires configuring network rules and a virtual IP address, plus the pool of backend servers. You can configure and manage it with the ipvsadm command-line tool, or combine IPVS with open-source software such as Keepalived or HAProxy to build more complex load-balancing setups.

Note that IPVS requires a kernel with IPVS support and correctly configured rules and system parameters. It is a Linux-specific feature and is not available on other operating systems.

3a. Install docker-ce on all nodes (one of two options; choose either this or 3b)

The Kubernetes community started working on removing dockershim back in July 2020. This means Kubernetes no longer treats Docker as the default underlying container tool; Docker is handled like any other container runtime, with no special built-in support.

Run this on all hosts (M001/N001/N002).

1. Install the Docker prerequisites

yum install -y yum-utils

2. Configure the Docker repository mirror

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

3. Download and install Docker

yum install docker-ce docker-ce-cli containerd.io

4. Enable Docker at boot and start it

systemctl enable docker && systemctl start docker

5. Configure a Docker registry mirror

>> cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://kn0t2bca.mirror.aliyuncs.com"]
}
EOF
>>

6. Restart the Docker service

systemctl restart docker

Note in particular that Docker's cgroup driver must be configured; that is what this line in daemon.json does:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
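After restarting Docker you can verify that the systemd cgroup driver is in effect (optional check):

# optional check: should report "Cgroup Driver: systemd"
docker info | grep -i 'cgroup driver'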

3b. Install containerd on all nodes (one of two options; choose either this or 3a; this is what I used)

Run this on all hosts (M001/N001/N002).

As noted in 3a, the Kubernetes community began removing dockershim in July 2020, so Docker no longer gets special built-in support. Docker itself just calls containerd under the hood, and containerd has implemented CRI natively since version 1.1, so Docker no longer needs to provide its own CRI layer. Now that Kubernetes no longer ships out-of-the-box Docker support, the cleanest approach is to use containerd directly as the container runtime, and it has already been proven in production.

1. Load the kernel modules

All hosts need these modules loaded, and loaded automatically at boot:

>> cat /etc/modules-load.d/containerd.conf
overlay
br_netfilter
>>

Run the following commands to load them now:

modprobe overlay
modprobe br_netfilter
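Optionally confirm both modules are loaded:

# optional check
lsmod | grep -E 'overlay|br_netfilter'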

2. Deploy containerd

Download the docker-ce.repo repository configuration file:

yum install wget -y   # wget is not installed by default on a minimal CentOS Stream 9 install
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

Run this on all hosts (M001/N001/N002).

[root@M001 ~]# yum info containerd.io
Last metadata expiration check: 0:00:34 ago on Thu 15 Jun 2023 05:29:24 PM CST.
Available Packages
Name         : containerd.io
Version      : 1.6.21     # version number
Release      : 3.1.el9
Architecture : x86_64
Size         : 33 M
Source       : containerd.io-1.6.21-3.1.el9.src.rpm
Repository   : docker-ce-stable
Summary      : An industry-standard container runtime
URL          : https://containerd.io
License      : ASL 2.0
Description  : containerd is an industry-standard container runtime with an emphasis on
             : simplicity, robustness and portability. It is available as a daemon for Linux
             : and Windows, which can manage the complete container lifecycle of its host
             : system: image transfer and storage, container execution and supervision,
             : low-level storage and network attachments, etc.

[root@M001 ~]#

[root@M001 ~]# yum install -y containerd.io    # provided by the docker-ce.repo repository
...
Complete!
[root@M001 ~]#

3. Configure containerd

[root@M001 ~]# cd /etc/containerd/
[root@M001 containerd]# ll
total 4
-rw-r--r--. 1 root root 886 May  6 04:17 config.toml
[root@M001 containerd]#
[root@M001 containerd]# mv config.toml config.toml.orig
[root@M001 containerd]#
[root@M001 containerd]# containerd config default > config.toml
[root@M001 containerd]# ll
total 12
-rw-r--r--. 1 root root 6925 Jun 15 17:32 config.toml
-rw-r--r--. 1 root root  886 May  6 04:17 config.toml.orig
[root@M001 containerd]#

For reference, here is the default config.toml that was generated:

[root@M001 containerd]# cat -n config.toml
     1  disabled_plugins = []
     2  imports = []
     3  oom_score = 0
     4  plugin_dir = ""
     5  required_plugins = []
     6  root = "/var/lib/containerd"
     7  state = "/run/containerd"
     8  temp = ""
     9  version = 2
    10
    11  [cgroup]
    12    path = ""
    13
    14  [debug]
    15    address = ""
    16    format = ""
    17    gid = 0
    18    level = ""
    19    uid = 0
    20
    21  [grpc]
    22    address = "/run/containerd/containerd.sock"
    23    gid = 0
    24    max_recv_message_size = 16777216
    25    max_send_message_size = 16777216
    26    tcp_address = ""
    27    tcp_tls_ca = ""
    28    tcp_tls_cert = ""
    29    tcp_tls_key = ""
    30    uid = 0
    31
    32  [metrics]
    33    address = ""
    34    grpc_histogram = false
    35
    36  [plugins]
    37
    38    [plugins."io.containerd.gc.v1.scheduler"]
    39      deletion_threshold = 0
    40      mutation_threshold = 100
    41      pause_threshold = 0.02
    42      schedule_delay = "0s"
    43      startup_delay = "100ms"
    44
    45    [plugins."io.containerd.grpc.v1.cri"]
    46      device_ownership_from_security_context = false
    47      disable_apparmor = false
    48      disable_cgroup = false
    49      disable_hugetlb_controller = true
    50      disable_proc_mount = false
    51      disable_tcp_service = true
    52      enable_selinux = false
    53      enable_tls_streaming = false
    54      enable_unprivileged_icmp = false
    55      enable_unprivileged_ports = false
    56      ignore_image_defined_volumes = false
    57      max_concurrent_downloads = 3
    58      max_container_log_line_size = 16384
    59      netns_mounts_under_state_dir = false
    60      restrict_oom_score_adj = false
    61      sandbox_image = "registry.k8s.io/pause:3.6"
    62      selinux_category_range = 1024
    63      stats_collect_period = 10
    64      stream_idle_timeout = "4h0m0s"
    65      stream_server_address = "127.0.0.1"
    66      stream_server_port = "0"
    67      systemd_cgroup = false
    68      tolerate_missing_hugetlb_controller = true
    69      unset_seccomp_profile = ""
    70
    71      [plugins."io.containerd.grpc.v1.cri".cni]
    72        bin_dir = "/opt/cni/bin"
    73        conf_dir = "/etc/cni/net.d"
    74        conf_template = ""
    75        ip_pref = ""
    76        max_conf_num = 1
    77
    78      [plugins."io.containerd.grpc.v1.cri".containerd]
    79        default_runtime_name = "runc"
    80        disable_snapshot_annotations = true
    81        discard_unpacked_layers = false
    82        ignore_rdt_not_enabled_errors = false
    83        no_pivot = false
    84        snapshotter = "overlayfs"
    85
    86        [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
    87          base_runtime_spec = ""
    88          cni_conf_dir = ""
    89          cni_max_conf_num = 0
    90          container_annotations = []
    91          pod_annotations = []
    92          privileged_without_host_devices = false
    93          runtime_engine = ""
    94          runtime_path = ""
    95          runtime_root = ""
    96          runtime_type = ""
    97
    98          [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]
    99
   100        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
   101
   102          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
   103            base_runtime_spec = ""
   104            cni_conf_dir = ""
   105            cni_max_conf_num = 0
   106            container_annotations = []
   107            pod_annotations = []
   108            privileged_without_host_devices = false
   109            runtime_engine = ""
   110            runtime_path = ""
   111            runtime_root = ""
   112            runtime_type = "io.containerd.runc.v2"
   113
   114            [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
   115              BinaryName = ""
   116              CriuImagePath = ""
   117              CriuPath = ""
   118              CriuWorkPath = ""
   119              IoGid = 0
   120              IoUid = 0
   121              NoNewKeyring = false
   122              NoPivotRoot = false
   123              Root = ""
   124              ShimCgroup = ""
   125              SystemdCgroup = false
   126
   127        [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
   128          base_runtime_spec = ""
   129          cni_conf_dir = ""
   130          cni_max_conf_num = 0
   131          container_annotations = []
   132          pod_annotations = []
   133          privileged_without_host_devices = false
   134          runtime_engine = ""
   135          runtime_path = ""
   136          runtime_root = ""
   137          runtime_type = ""
   138
   139          [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]
   140
   141      [plugins."io.containerd.grpc.v1.cri".image_decryption]
   142        key_model = "node"
   143
   144      [plugins."io.containerd.grpc.v1.cri".registry]
   145        config_path = ""
   146
   147        [plugins."io.containerd.grpc.v1.cri".registry.auths]
   148
   149        [plugins."io.containerd.grpc.v1.cri".registry.configs]
   150
   151        [plugins."io.containerd.grpc.v1.cri".registry.headers]
   152
   153        [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
   154
   155      [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
   156        tls_cert_file = ""
   157        tls_key_file = ""
   158
   159    [plugins."io.containerd.internal.v1.opt"]
   160      path = "/opt/containerd"
   161
   162    [plugins."io.containerd.internal.v1.restart"]
   163      interval = "10s"
   164
   165    [plugins."io.containerd.internal.v1.tracing"]
   166      sampling_ratio = 1.0
   167      service_name = "containerd"
   168
   169    [plugins."io.containerd.metadata.v1.bolt"]
   170      content_sharing_policy = "shared"
   171
   172    [plugins."io.containerd.monitor.v1.cgroups"]
   173      no_prometheus = false
   174
   175    [plugins."io.containerd.runtime.v1.linux"]
   176      no_shim = false
   177      runtime = "runc"
   178      runtime_root = ""
   179      shim = "containerd-shim"
   180      shim_debug = false
   181
   182    [plugins."io.containerd.runtime.v2.task"]
   183      platforms = ["linux/amd64"]
   184      sched_core = false
   185
   186    [plugins."io.containerd.service.v1.diff-service"]
   187      default = ["walking"]
   188
   189    [plugins."io.containerd.service.v1.tasks-service"]
   190      rdt_config_file = ""
   191
   192    [plugins."io.containerd.snapshotter.v1.aufs"]
   193      root_path = ""
   194
   195    [plugins."io.containerd.snapshotter.v1.devmapper"]
   196      async_remove = false
   197      base_image_size = ""
   198      discard_blocks = false
   199      fs_options = ""
   200      fs_type = ""
   201      pool_name = ""
   202      root_path = ""
   203
   204    [plugins."io.containerd.snapshotter.v1.native"]
   205      root_path = ""
   206
   207    [plugins."io.containerd.snapshotter.v1.overlayfs"]
   208      root_path = ""
   209      upperdir_label = false
   210
   211    [plugins."io.containerd.snapshotter.v1.zfs"]
   212      root_path = ""
   213
   214    [plugins."io.containerd.tracing.processor.v1.otlp"]
   215      endpoint = ""
   216      insecure = false
   217      protocol = ""
   218
   219  [proxy_plugins]
   220
   221  [stream_processors]
   222
   223    [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
   224      accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
   225      args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
   226      env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
   227      path = "ctd-decoder"
   228      returns = "application/vnd.oci.image.layer.v1.tar"
   229
   230    [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
   231      accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
   232      args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
   233      env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
   234      path = "ctd-decoder"
   235      returns = "application/vnd.oci.image.layer.v1.tar+gzip"
   236
   237  [timeouts]
   238    "io.containerd.timeout.bolt.open" = "0s"
   239    "io.containerd.timeout.shim.cleanup" = "5s"
   240    "io.containerd.timeout.shim.load" = "5s"
   241    "io.containerd.timeout.shim.shutdown" = "3s"
   242    "io.containerd.timeout.task.state" = "2s"
   243
   244  [ttrpc]
   245    address = ""
   246    gid = 0
   247    uid = 0
[root@M001 containerd]#

The reason for this step: according to the Container runtimes documentation, on Linux distributions that use systemd as the init system, using systemd as the container cgroup driver keeps nodes more stable under resource pressure. So we configure containerd's cgroup driver to systemd on every node.

3.1 Edit the config file /etc/containerd/config.toml generated above (around line 125):

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true    # changed from false to true
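If you would rather make this change non-interactively, a sed one-liner like the one below should work (a sketch, assuming the default config generated above, where "SystemdCgroup = false" appears exactly once):

# sketch: flip SystemdCgroup from false to true in place
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml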


3.2 Then edit /etc/containerd/config.toml again (around line 61):

[plugins."io.containerd.grpc.v1.cri"]
  ...
  # sandbox_image = "registry.k8s.io/pause:3.6"
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"    # important: this must match the pause image version listed later by kubeadm config images list --config=kubeadm-init.yml, otherwise kubeadm init will fail


How do we determine the right version here?

Because we installed containerd.io 1.6.21, the sandbox_image should be a version compatible with that release, preferably as new as possible, and it should match the pause image version listed by kubeadm config images list --config=kubeadm-init.yml. (That is the best practice.)

This is a bit circular, so for now let's only change the registry location (for faster downloads) and keep the default version; later we will adjust the version number to whatever kubeadm requires.
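If you only want to swap the registry host while keeping the default pause version, a sed edit along these lines should do it (a sketch, based on the default sandbox_image value registry.k8s.io/pause:3.6 shown above):

# sketch: keep pause:3.6 but pull it from the Aliyun mirror of the google_containers images
sudo sed -i 's#registry.k8s.io/pause#registry.aliyuncs.com/google_containers/pause#' /etc/containerd/config.toml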

3.3 Add a mirror source to speed up image downloads (around line 153):

[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://hub-mirror.c.163.com"]


Here is the complete modified configuration file as well:

[root@M001 containerd]# cat -n config.toml
     1  disabled_plugins = []
     2  imports = []
     3  oom_score = 0
     4  plugin_dir = ""
     5  required_plugins = []
     6  root = "/var/lib/containerd"
     7  state = "/run/containerd"
     8  temp = ""
     9  version = 2
    10
    11  [cgroup]
    12    path = ""
    13
    14  [debug]
    15    address = ""
    16    format = ""
    17    gid = 0
    18    level = ""
    19    uid = 0
    20
    21  [grpc]
    22    address = "/run/containerd/containerd.sock"
    23    gid = 0
    24    max_recv_message_size = 16777216
    25    max_send_message_size = 16777216
    26    tcp_address = ""
    27    tcp_tls_ca = ""
    28    tcp_tls_cert = ""
    29    tcp_tls_key = ""
    30    uid = 0
    31
    32  [metrics]
    33    address = ""
    34    grpc_histogram = false
    35
    36  [plugins]
    37
    38    [plugins."io.containerd.gc.v1.scheduler"]
    39      deletion_threshold = 0
    40      mutation_threshold = 100
    41      pause_threshold = 0.02
    42      schedule_delay = "0s"
    43      startup_delay = "100ms"
    44
    45    [plugins."io.containerd.grpc.v1.cri"]
    46      device_ownership_from_security_context = false
    47      disable_apparmor = false
    48      disable_cgroup = false
    49      disable_hugetlb_controller = true
    50      disable_proc_mount = false
    51      disable_tcp_service = true
    52      enable_selinux = false
    53      enable_tls_streaming = false
    54      enable_unprivileged_icmp = false
    55      enable_unprivileged_ports = false
    56      ignore_image_defined_volumes = false
    57      max_concurrent_downloads = 3
    58      max_container_log_line_size = 16384
    59      netns_mounts_under_state_dir = false
    60      restrict_oom_score_adj = false
    61      sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"    # changed: registry switched to an address reachable from within China
    62      selinux_category_range = 1024
    63      stats_collect_period = 10
    64      stream_idle_timeout = "4h0m0s"
    65      stream_server_address = "127.0.0.1"
    66      stream_server_port = "0"
    67      systemd_cgroup = false
    68      tolerate_missing_hugetlb_controller = true
    69      unset_seccomp_profile = ""
    70
    71      [plugins."io.containerd.grpc.v1.cri".cni]
    72        bin_dir = "/opt/cni/bin"
    73        conf_dir = "/etc/cni/net.d"
    74        conf_template = ""
    75        ip_pref = ""
    76        max_conf_num = 1
    77
    78      [plugins."io.containerd.grpc.v1.cri".containerd]
    79        default_runtime_name = "runc"
    80        disable_snapshot_annotations = true
    81        discard_unpacked_layers = false
    82        ignore_rdt_not_enabled_errors = false
    83        no_pivot = false
    84        snapshotter = "overlayfs"
    85
    86        [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
    87          base_runtime_spec = ""
    88          cni_conf_dir = ""
    89          cni_max_conf_num = 0
    90          container_annotations = []
    91          pod_annotations = []
    92          privileged_without_host_devices = false
    93          runtime_engine = ""
    94          runtime_path = ""
    95          runtime_root = ""
    96          runtime_type = ""
    97
    98          [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]
    99
   100        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
   101
   102          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
   103            base_runtime_spec = ""
   104            cni_conf_dir = ""
   105            cni_max_conf_num = 0
   106            container_annotations = []
   107            pod_annotations = []
   108            privileged_without_host_devices = false
   109            runtime_engine = ""
   110            runtime_path = ""
   111            runtime_root = ""
   112            runtime_type = "io.containerd.runc.v2"
   113
   114            [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
   115              BinaryName = ""
   116              CriuImagePath = ""
   117              CriuPath = ""
   118              CriuWorkPath = ""
   119              IoGid = 0
   120              IoUid = 0
   121              NoNewKeyring = false
   122              NoPivotRoot = false
   123              Root = ""
   124              ShimCgroup = ""
   125              SystemdCgroup = true     # changed: on distributions that use systemd as the init system, using systemd as the cgroup driver keeps nodes more stable under resource pressure
   126
   127        [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
   128          base_runtime_spec = ""
   129          cni_conf_dir = ""
   130          cni_max_conf_num = 0
   131          container_annotations = []
   132          pod_annotations = []
   133          privileged_without_host_devices = false
   134          runtime_engine = ""
   135          runtime_path = ""
   136          runtime_root = ""
   137          runtime_type = ""
   138
   139          [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]
   140
   141      [plugins."io.containerd.grpc.v1.cri".image_decryption]
   142        key_model = "node"
   143
   144      [plugins."io.containerd.grpc.v1.cri".registry]
   145        config_path = ""
   146
   147        [plugins."io.containerd.grpc.v1.cri".registry.auths]
   148
   149        [plugins."io.containerd.grpc.v1.cri".registry.configs]
   150
   151        [plugins."io.containerd.grpc.v1.cri".registry.headers]
   152
   153        [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
   154          [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]   # added: registry mirror for faster pulls
   155            endpoint = ["https://hub-mirror.c.163.com"]      # added: registry mirror for faster pulls
   156
   157      [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
   158        tls_cert_file = ""
   159        tls_key_file = ""
   160
   161    [plugins."io.containerd.internal.v1.opt"]
   162      path = "/opt/containerd"
   163
   164    [plugins."io.containerd.internal.v1.restart"]
   165      interval = "10s"
   166
   167    [plugins."io.containerd.internal.v1.tracing"]
   168      sampling_ratio = 1.0
   169      service_name = "containerd"
   170
   171    [plugins."io.containerd.metadata.v1.bolt"]
   172      content_sharing_policy = "shared"
   173
   174    [plugins."io.containerd.monitor.v1.cgroups"]
   175      no_prometheus = false
   176
   177    [plugins."io.containerd.runtime.v1.linux"]
   178      no_shim = false
   179      runtime = "runc"
   180      runtime_root = ""
   181      shim = "containerd-shim"
   182      shim_debug = false
   183
   184    [plugins."io.containerd.runtime.v2.task"]
   185      platforms = ["linux/amd64"]
   186      sched_core = false
   187
   188    [plugins."io.containerd.service.v1.diff-service"]
   189      default = ["walking"]
   190
   191    [plugins."io.containerd.service.v1.tasks-service"]
   192      rdt_config_file = ""
   193
   194    [plugins."io.containerd.snapshotter.v1.aufs"]
   195      root_path = ""
   196
   197    [plugins."io.containerd.snapshotter.v1.devmapper"]
   198      async_remove = false
   199      base_image_size = ""
   200      discard_blocks = false
   201      fs_options = ""
   202      fs_type = ""
   203      pool_name = ""
   204      root_path = ""
   205
   206    [plugins."io.containerd.snapshotter.v1.native"]
   207      root_path = ""
   208
   209    [plugins."io.containerd.snapshotter.v1.overlayfs"]
   210      root_path = ""
   211      upperdir_label = false
   212
   213    [plugins."io.containerd.snapshotter.v1.zfs"]
   214      root_path = ""
   215
   216    [plugins."io.containerd.tracing.processor.v1.otlp"]
   217      endpoint = ""
   218      insecure = false
   219      protocol = ""
   220
   221  [proxy_plugins]
   222
   223  [stream_processors]
   224
   225    [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
   226      accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
   227      args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
   228      env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
   229      path = "ctd-decoder"
   230      returns = "application/vnd.oci.image.layer.v1.tar"
   231
   232    [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
   233      accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
   234      args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
   235      env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
   236      path = "ctd-decoder"
   237      returns = "application/vnd.oci.image.layer.v1.tar+gzip"
   238
   239  [timeouts]
   240    "io.containerd.timeout.bolt.open" = "0s"
   241    "io.containerd.timeout.shim.cleanup" = "5s"
   242    "io.containerd.timeout.shim.load" = "5s"
   243    "io.containerd.timeout.shim.shutdown" = "3s"
   244    "io.containerd.timeout.task.state" = "2s"
   245
   246  [ttrpc]
   247    address = ""
   248    gid = 0
   249    uid = 0
[root@M001 containerd]#

Enable containerd at boot and start it:

systemctl enable containerd
systemctl start containerd
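Optionally, confirm the service is running and that the edits above made it into the live configuration:

# optional check: service state plus the two key settings
systemctl is-active containerd
containerd config dump | grep -E 'SystemdCgroup|sandbox_image'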

Check the installation details:

[root@M001 ~]# runc -v
runc version 1.1.7
commit: v1.1.7-0-g860f061
spec: 1.0.2-dev
go: go1.19.9
libseccomp: 2.5.2
[root@M001 ~]# ctr version
Client:
  Version:  1.6.21
  Revision: 3dce8eb055cbb6872793272b4f20ed16117344f8
  Go version: go1.19.9

Server:
  Version:  1.6.21
  Revision: 3dce8eb055cbb6872793272b4f20ed16117344f8
  UUID: b0ff4ca5-67dd-4241-a5d4-1b933163b04e
[root@M001 ~]#


That wraps up the environment overview and the basic preparation for this deployment; the next article will cover the actual installation process.

Everything above was recorded from my actual test environment. You may run into problems I did not hit; search the internet actively and they can all be resolved.

This article was prepared in a hurry, so it may contain typos, unclear wording, or even mistakes. If you find anything wrong, please leave a comment and I will do my best to correct it.

If you like this article, please like, comment, and share. Thanks!
