Setting Up a K8s Cluster and Deploying the Dashboard (Web UI)
  HataGTb8c2ak · November 28, 2023

Environment Preparation

Note: the deployment below uses one master and two worker nodes and is intended for a test environment.


  • Install docker before installing K8s; see the link below.
  • docker installation link
  • Alternatively, install docker with the following commands.
#Install docker on all three nodes and configure a registry mirror

yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6
systemctl enable docker --now
docker info     //verify docker started correctly
  • Configure the docker registry mirror
The configuration below also adds the systemd cgroup driver, a core setting for production docker environments.

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://1qy9fgqs.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

docker info     //check again to confirm the registry mirror configured above now appears
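Before restarting docker, it can help to confirm the file is well-formed JSON, since docker refuses to start on a malformed daemon.json. A minimal sketch (it writes a sample copy to /tmp for illustration; on a real node, point the validation at /etc/docker/daemon.json):

```shell
# Write a sample daemon.json (illustrative copy of the config above)
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://1qy9fgqs.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
# json.tool exits non-zero on malformed JSON, so this only prints on success
python3 -m json.tool /tmp/daemon.json >/dev/null && echo "daemon.json is valid JSON"
```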


  • Prepare three Linux-compatible hosts
  • 2 GB of RAM or more per machine (less may leave applications short of memory)
  • 2 or more CPUs per machine
  • Network connectivity between all three machines
  • Disable the firewall and SELinux
  • Disable swap, so that the kubelet works correctly.
#Run on all three nodes

[root@k8s-master ~]# systemctl stop firewalld
[root@k8s-master ~]# setenforce 0
[root@k8s-master ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config   //permanently disable SELinux
[root@k8s-master ~]# swapoff -a
[root@k8s-master ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab   //disable the swap partition
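The sed expression above comments out every fstab line that mentions swap. A quick self-contained illustration of its effect on a sample line (hypothetical device name, not your real /etc/fstab):

```shell
# Sample fstab swap entry (hypothetical device name)
line='/dev/mapper/centos-swap swap swap defaults 0 0'
# In sed, `#&` replaces the matched line with itself prefixed by '#'
echo "$line" | sed -r 's/.*swap.*/#&/'
# → #/dev/mapper/centos-swap swap swap defaults 0 0
```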

#Allow iptables to inspect bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system    //apply the settings so the rules take effect
  • To tell the three servers apart, set a hostname on each:
  • hostnamectl set-hostname k8s-master
  • hostnamectl set-hostname k8s-node1
  • hostnamectl set-hostname k8s-node2

k8s-master is the control-plane (master) node; k8s-node1 and k8s-node2 are worker nodes.

Common Commands

kubectl get nodes             //list all nodes in the cluster
kubectl apply -f xxxx         //create resources in the cluster from a config file
kubectl get pods -A           //list applications deployed in the cluster (similar to docker ps for containers)

Creating the Cluster

Create the cluster with kubeadm.

1. Install kubelet, kubeadm, and kubectl
#Configure the yum repository on all three nodes

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

#Install kubelet, kubeadm, and kubectl
yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes

#Enable and start kubelet
systemctl enable kubelet --now

Note: `systemctl status kubelet` may report the service as failed; this is expected. Until the node is initialized or joined to a cluster, kubelet crash-loops, so it spends far more time stopped than running (repeated status checks will occasionally catch it active). Ignore this for now.


2. Bootstrap the cluster with kubeadm
#Create the image-pull script on all three nodes

sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
kube-controller-manager:v1.20.9
kube-scheduler:v1.20.9
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
)
for imageName in "${images[@]}" ; do
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/"$imageName"
done
EOF

#Run the script on all three nodes
chmod +x ./images.sh && ./images.sh
docker images    //confirm all seven images downloaded successfully
(If a pull fails, it is most likely a network issue; just rerun the script.)


#Initialize the control plane: make the master node the control-plane node
#Run the following on all three nodes, replacing the IP addresses with your own nodes' addresses

[root@k8s-master ~]# echo "192.168.20.130 k8s-master" >> /etc/hosts
[root@k8s-master ~]# echo "192.168.20.131 k8s-node1" >> /etc/hosts
[root@k8s-master ~]# echo "192.168.20.132 k8s-node2" >> /etc/hosts
[root@k8s-master ~]# ping k8s-master   //ping the hostnames from all three nodes to verify connectivity


#Initialize the control plane (run on the master node only)

kubeadm init \
--apiserver-advertise-address=192.168.20.130 \
--control-plane-endpoint=k8s-master \
--image-repository=registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version=v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=193.168.0.0/16

###If you hit the following error:
[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
run the command below, then rerun the init command:
sudo sysctl net.ipv4.ip_forward=1

#The IP address above must be the master node's address
#Make sure the k8s-master hostname is correct
#kubernetes-version should match the image versions downloaded above
#service-cidr defines the service network range
#pod-network-cidr must not overlap with service-cidr or with the host IP range (no network ranges may overlap)
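The non-overlap requirement can be checked mechanically before running kubeadm init. A small sketch using python3's ipaddress module (the CIDR values are the ones used in this guide; the host network 192.168.20.0/24 is an assumption based on the node IPs above — substitute your own):

```shell
# Exit status is 0 only when all three ranges are pairwise disjoint
python3 - <<'EOF'
import ipaddress
svc  = ipaddress.ip_network("10.96.0.0/16")      # --service-cidr
pod  = ipaddress.ip_network("193.168.0.0/16")    # --pod-network-cidr
host = ipaddress.ip_network("192.168.20.0/24")   # host network (example)
assert not svc.overlaps(pod)
assert not svc.overlaps(host)
assert not pod.overlaps(host)
print("no overlap - safe to use")
EOF
```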

Initialization succeeded when the output ends with: "Your Kubernetes control-plane has initialized successfully!"


#After initialization, run the follow-up commands printed in the output
#Note: the join command printed at init time is valid for 24 hours
#After it expires, run this on the master node to generate a new join token:
[root@k8s-master ~]# kubeadm token create --print-join-command   //run the printed join command on the worker nodes to join them to the cluster


Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join aabbcc:6443 --token ixnnvt.ju2amm8zucfqd2ow \
    --discovery-token-ca-cert-hash sha256:a97e544c2f9a7d843cee248d3ab350b71fea5ad1a033eb4281c19dd698f99822 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join aabbcc:6443 --token ixnnvt.ju2amm8zucfqd2ow \
    --discovery-token-ca-cert-hash sha256:a97e544c2f9a7d843cee248d3ab350b71fea5ad1a033eb4281c19dd698f99822
#To start using the cluster, first run these commands as instructed:
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master ~]# kubectl get nodes       //verify the master node's status
NAME         STATUS     ROLES                  AGE    VERSION
k8s-master   NotReady   control-plane,master   3m8s   v1.20.9
Deploy the network plugin
[root@k8s-master ~]# curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O    //download the calico manifest

In calico.yaml, uncomment the CALICO_IPV4POOL_CIDR name and value lines, and change the default 192.168.0.0/16 to the 193.168.0.0/16 range passed to --pod-network-cidr above.

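Instead of editing by hand, the two lines can be patched with sed. A sketch against a sample snippet written to /tmp (the comment layout matches the v3.20 manifest; on a real node, run the same substitutions against calico.yaml):

```shell
# Sample of the commented-out block as it appears in calico.yaml
cat > /tmp/calico-cidr.yaml <<'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF
# Uncomment both lines and point the pool at the --pod-network-cidr value
sed -i -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
       -e 's|#   value: "192.168.0.0/16"|  value: "193.168.0.0/16"|' /tmp/calico-cidr.yaml
cat /tmp/calico-cidr.yaml
```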

[root@k8s-master ~]# kubectl apply -f calico.yaml     //create the network plugin resources from the calico manifest
[root@k8s-master ~]# kubectl get pods -A     //verify the state of the cluster's resources
[root@k8s-master ~]# kubectl get nodes       //the master node should now be Ready; next, configure the worker nodes


### On node1 and node2, run the worker join command printed during initialization to add both nodes as workers

[root@k8s-node1 ~]# kubeadm join aabbcc:6443 --token ixnnvt.ju2amm8zucfqd2ow \
    --discovery-token-ca-cert-hash sha256:a97e544c2f9a7d843cee248d3ab350b71fea5ad1a033eb4281c19dd698f99822

### Then verify again on the master node
[root@k8s-master ~]# kubectl get nodes     
[root@k8s-master ~]# kubectl get pods -A



Deploying the Dashboard (Web UI)

1. Deploy
#Run on the master node
[root@localhost ~]# vi dashboard.yml   //create the file and paste in the manifest below
[root@localhost ~]# kubectl apply -f dashboard.yml   //apply the file and wait for the services to be created


# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.3.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
2. Expose the access port
[root@k8s-master ~]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
//change type: ClusterIP to type: NodePort


[root@k8s-master ~]# kubectl get svc -A |grep kubernetes-dashboard  //check the mapped NodePort


Access: https://<any-cluster-node-IP>:<NodePort>, for example https://192.168.20.130:32759

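The mapped port can also be pulled out of the `kubectl get svc` output with standard text tools. A self-contained sketch against a sample output line (the ClusterIP and the port 32759 are just the values from this walkthrough; yours will differ):

```shell
# Sample `kubectl get svc` line for the dashboard service
svc_line='kubernetes-dashboard   kubernetes-dashboard   NodePort   10.96.115.64   <none>   443:32759/TCP   5m'
# The NodePort is the number between "443:" and "/TCP"
port=$(echo "$svc_line" | sed -n 's|.*443:\([0-9]*\)/TCP.*|\1|p')
echo "https://192.168.20.130:${port}"
# → https://192.168.20.130:32759
```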

3. Create an access account
#Prepare a yaml file for the access account

vi dash.yaml  //on the master node, create the file and add the following

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
  
[root@k8s-master ~]# kubectl apply -f dash.yaml   //apply the file with kubectl
4. Log in with a token
### Get an access token for logging in as the account created above

kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
//generate the token, copy it, and paste it into the login page to sign in
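The go-template above simply base64-decodes the `token` field of the service account's secret. The equivalent manual decode, shown here on a dummy value rather than a real token:

```shell
# Decode a base64-encoded token field (sample value for illustration only)
echo 'ZXhhbXBsZS10b2tlbg==' | base64 -d
# → example-token
```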


5. Done: verify in the UI
