Have You Learned K8S Yet? Part 1: Introduction and Installation
mGwyuvHyMgKE · November 2, 2023

1. Background

Out of a passion for the technology and a desire to build up my skills, I recently started learning K8S; this series is my study notes from that process. The K8S version used throughout is v1.23.17 and the operating system is Ubuntu 22.04. One point worth stressing: all videos and written tutorials lag behind reality. K8S moves fast, so the commands and resource URLs used during deployment change frequently, which is why following an old manual often fails; the best reference is always GitHub. Without further ado, let's get started.

2. K8S Deployment Methods

There are two ways to deploy K8S: from binaries, or with kubeadm, the tool recommended by the K8S project. For ease of getting started, this article uses kubeadm.

3. Pre-deployment Preparation

  1. This deployment uses 4 nodes: 1 master node and 3 worker nodes (the hosts file below also reserves entries for two additional masters). The topology is shown below:

(Figure: cluster topology diagram)


Name              CIDR
Node network      172.16.100.0/24
Service network   10.96.0.0/12
Pod network       10.244.0.0/16

  2. Set the system hostname and configure mutual name resolution in the hosts file (using master1 as an example)
root@master1:~# hostnamectl  set-hostname  master1 

root@master1:~# cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 ark
# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.16.100.10 nfs
172.16.100.1 master1 master1.ark.com kubeapi.ark.com kubeapi
172.16.100.2 master2 master2.ark.com kubeapi.ark.com kubeapi
172.16.100.3 master3 master3.ark.com kubeapi.ark.com kubeapi
172.16.100.11 node1 node1.ark.com
172.16.100.12 node2 node2.ark.com
172.16.100.13 node3 node3.ark.com

# After editing the hosts file, sync it to all other nodes as well
  3. Install dependency packages and disable the swap partition
root@master1# apt install -y chrony ipvsadm

root@master1:~# swapoff -a
root@master1:~# cat /etc/fstab

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda3 during curtin installation
/dev/disk/by-uuid/630f8573-e79a-4ddb-ba93-2ff6ec3e92c5 / xfs defaults 0 1
# /boot was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/61397535-69b8-4b6b-b159-e47be69596ff /boot xfs defaults 0 1
#/swap.img      none    swap    sw      0       0
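Note that `swapoff -a` only disables swap until the next reboot; the swap entry in /etc/fstab must stay commented out (as shown above) for the change to persist. A small sketch for doing this on every node; the helper name is mine, not a standard tool:

```shell
# Comment out active swap entries in an fstab-style file so swap
# stays off after reboot; already-commented lines are left alone.
disable_swap_in_fstab() {
  # $1: path to the fstab file (normally /etc/fstab), edited in place
  sed -ri 's@^([^#].*[[:space:]]swap[[:space:]])@#\1@' "$1"
}

# Usage on each node (after `swapoff -a`):
# disable_swap_in_fstab /etc/fstab
```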
  4. Make the master node the time server and have the worker nodes sync time from it
# Edit the chrony configuration (on Ubuntu the file is /etc/chrony/chrony.conf)
pool ntp1.aliyun.com       iburst
pool ntp2.aliyun.com       iburst
pool ntp3.aliyun.com       iburst
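The pool lines above only point a machine at Aliyun's public NTP servers. To actually have the worker nodes sync from the master as described, the master must also serve time on the cluster subnet. A minimal sketch, assuming Ubuntu's config path of /etc/chrony/chrony.conf:

```
# /etc/chrony/chrony.conf on master1: sync upstream, serve the node subnet
pool ntp1.aliyun.com iburst
allow 172.16.100.0/24
local stratum 10          # keep serving time even if upstream is unreachable

# /etc/chrony/chrony.conf on each node: sync from master1 only
server master1 iburst
```

Restart chrony on each machine afterwards (`systemctl restart chrony`) and check sync state with `chronyc sources`.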
  5. Install Docker and configure a registry mirror
root@master1# apt  install docker.io

# Configure a registry mirror in daemon.json
root@master1# cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://9lqbw5u6.mirror.aliyuncs.com"]
}
EOF

# Restart the docker service and enable it at boot
root@master1# systemctl daemon-reload && systemctl restart docker && systemctl enable docker
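A malformed daemon.json prevents Docker from starting at all, so it is worth validating the file before restarting the service. A small sketch; the helper name is mine, not part of Docker:

```shell
# Write a registry-mirrors config and validate it before use;
# python3 -m json.tool exits non-zero on malformed JSON.
write_daemon_json() {
  # $1: target file, $2: mirror URL
  printf '{\n  "registry-mirrors": ["%s"]\n}\n' "$2" > "$1"
  python3 -m json.tool "$1" > /dev/null
}

# Usage:
# write_daemon_json /etc/docker/daemon.json https://9lqbw5u6.mirror.aliyuncs.com
# systemctl restart docker
```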

4. Installing the K8S Cluster

  1. Add the apt repository
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
  2. Install the packages required for a kubeadm-based deployment
apt-cache show kubeadm       # list the available versions and pick one
apt-get install -y kubelet=xxx kubeadm=xxx kubectl=xxx    # e.g. 1.23.17-00 to match this series
  3. Initialize the master node
kubeadm init --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.23.17 \
--control-plane-endpoint=kubeapi.ark.com --apiserver-advertise-address=172.16.100.1 --pod-network-cidr=10.244.0.0/16 | tee kubeadm-init.log
  4. Join the worker nodes to the cluster
# On the master, set up kubectl access (as instructed by the init output):
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# On each worker node, run the join command printed in kubeadm-init.log:
kubeadm join kubeapi.ark.com:6443 --token gu3f7v.01bablqkrttwu8h0 \
        --discovery-token-ca-cert-hash sha256:000e9732212fdd0e188409b4ca2c6f3c202c430e61952736319ef25580a591dd
  5. Deploy the flannel network plugin
# For Kubernetes v1.17+; flannel's default Pod CIDR matches the 10.244.0.0/16 used above
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
  6. At this point the installation is complete; verify with `kubectl get nodes` (all nodes should report Ready once flannel is up).
  7. Enable command completion for kubeadm and kubectl for convenience
kubeadm completion bash >> $HOME/.bash_profile
kubectl completion bash >> $HOME/.bash_profile
source $HOME/.bash_profile
  8. Because the K8S images are hosted on Google's registry, which is often unreachable from mainland China, you can pre-pull them in advance (the command below lists the required images), or, as in this article, point kubeadm init at Alibaba's mirror.
root@master1:~# kubeadm config images list --kubernetes-version=1.23.17
registry.k8s.io/kube-apiserver:v1.23.17
registry.k8s.io/kube-controller-manager:v1.23.17
registry.k8s.io/kube-scheduler:v1.23.17
registry.k8s.io/kube-proxy:v1.23.17
registry.k8s.io/pause:3.6
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.8.6
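If you do pre-pull, the upstream image names can be rewritten to Alibaba's mirror. A sketch (the helper name is mine; coredns gets special handling because the mirror flattens the coredns/coredns path):

```shell
# Map a registry.k8s.io image name to the Aliyun google_containers mirror.
mirror_image_name() {
  echo "$1" | sed -e 's#^registry\.k8s\.io/coredns/#registry.aliyuncs.com/google_containers/#' \
                  -e 's#^registry\.k8s\.io/#registry.aliyuncs.com/google_containers/#'
}

# Pre-pull on each node:
# kubeadm config images list --kubernetes-version=1.23.17 \
#   | while read -r img; do docker pull "$(mirror_image_name "$img")"; done
```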

5. Problems Encountered During Installation

1. kubectl errors on a node

Running a kubectl command prints "The connection to the server localhost:8080 was refused - did you specify the right host or port?"

This happens when kubectl cannot find a config file in the user's .kube directory. Run the following steps as that user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are root:

export KUBECONFIG=/etc/kubernetes/admin.conf

2. The kubelet service fails to start

After the kubelet package is installed but before the node has joined a K8S cluster, it is normal for kubelet.service to fail to start.

Checking the logs shows that kubelet's config file is missing.

Once the node joins the cluster, kubelet's config file is generated automatically and the service comes back up.

3. kubeadm init fails

If, with all other prerequisites satisfied, kubeadm reports the error below, a likely cause is that your kubeadm version is 1.24 or later:

root@master3:~# kubeadm init --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.27.1 --control-plane-endpoint=kubeapi.ark.com --apiserver-advertise-address=172.16.100.3 --pod-network-cidr=10.244.0.0/16 | tee kubeadm-init.log
[init] Using Kubernetes version: v1.27.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0506 15:15:08.206928    4453 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.1, falling back to the nearest etcd version (3.5.7-0)
W0506 15:16:51.477784    4453 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubeapi.ark.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master3] and IPs [10.96.0.1 172.16.100.3]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master3] and IPs [172.16.100.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master3] and IPs [172.16.100.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
W0506 15:18:49.925038    4453 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.1, falling back to the nearest etcd version (3.5.7-0)
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

Kubernetes deprecated its built-in Docker integration (dockershim) in v1.20 and removed it entirely in v1.24; since then the kubelet talks to the container runtime only through the CRI. Docker itself does not implement the CRI, so if you keep Docker as the underlying runtime you must additionally install the cri-dockerd package.
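A hedged sketch of that fix: install a cri-dockerd release and point kubeadm at its socket. The version and .deb filename below are assumptions; check the Mirantis/cri-dockerd releases page on GitHub for the current ones.

```shell
# Install cri-dockerd from a GitHub release (version and package name
# are assumptions; verify against the releases page before running).
CRI_DOCKERD_VERSION="0.3.1"
DEB="cri-dockerd_${CRI_DOCKERD_VERSION}.3-0.ubuntu-jammy_amd64.deb"
URL="https://github.com/Mirantis/cri-dockerd/releases/download/v${CRI_DOCKERD_VERSION}/${DEB}"
# wget "$URL" && dpkg -i "$DEB"
# systemctl enable --now cri-docker.service
#
# Then tell kubeadm which CRI socket to use:
# kubeadm init ... --cri-socket unix:///run/cri-dockerd.sock
# kubeadm join ... --cri-socket unix:///run/cri-dockerd.sock
```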


4. Adding a new node

By default, the bootstrap token created at cluster initialization is valid for 24 hours; if a new worker node needs to join the K8S cluster later, you must generate a new token.

The steps to join a new node later are as follows:

  • Generate a new join command on the master node
root@master1:~# kubeadm token create --print-join-command
kubeadm join kubeapi.ark.com:6443 --token 83l11u.s6umxkfjip9g6y85 --discovery-token-ca-cert-hash sha256:000e9732212fdd0e188409b4ca2c6f3c202c430e61952736319ef25580a591dd
  • Run the command on the node to join the cluster
root@node4:~# kubeadm join kubeapi.ark.com:6443 --token 83l11u.s6umxkfjip9g6y85 --discovery-token-ca-cert-hash sha256:000e9732212fdd0e188409b4ca2c6f3c202c430e61952736319ef25580a591dd

Last edited November 8, 2023