cka-moni
  bAB2KcLKpirZ · November 2, 2023


1.

Task weight: 1%

You have access to multiple clusters from your main terminal through kubectl contexts. Write all those context names into /opt/course/1/contexts.

Next write a command to display the current context into /opt/course/1/context_default_kubectl.sh; the command should use kubectl.

Finally write a second command doing the same thing into /opt/course/1/context_default_no_kubectl.sh, but without the use of kubectl.
 

# -o name prints only the context names, without headers

kubectl config get-contexts  -o name > /opt/course/1/contexts

echo "kubectl config current-context" > /opt/course/1/context_default_kubectl.sh

echo 'grep current-context ~/.kube/config | sed "s/current-context: //"' > /opt/course/1/context_default_no_kubectl.sh


2.


Task weight: 3%

Use context: kubectl config use-context k8s-c1-H

Create a single Pod of image httpd:2.4.41-alpine in Namespace default. The Pod should be named pod1 and the container should be named pod1-container. This Pod should only be scheduled on a master node; do not add new labels to any nodes.
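A minimal sketch of a manifest for this; the master taint/label key may still be node-role.kubernetes.io/master on older clusters, so check with kubectl describe node cluster1-master1 | grep Taint first:

# pod1.yaml -- apply with: kubectl apply -f pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: pod1-container
    image: httpd:2.4.41-alpine
  tolerations:                                   # tolerate the master taint
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
  nodeSelector:                                  # select a master via its existing label
    node-role.kubernetes.io/control-plane: ""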

3.

Task weight: 1%

Use context: kubectl config use-context k8s-c1-H

There are two Pods named o3db-* in Namespace project-c13. C13 management asked you to scale the Pods down to one replica to save resources.
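One possible approach, assuming the Pods are owned by a controller; check whether it is a Deployment or a StatefulSet before scaling:

kubectl -n project-c13 get deploy,statefulset | grep o3db
# scale whichever controller owns the Pods, e.g. for a StatefulSet named o3db:
kubectl -n project-c13 scale statefulset o3db --replicas 1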


4.


Task weight: 4%

Use context: kubectl config use-context k8s-c1-H

Do the following in Namespace default. Create a single Pod named ready-if-service-ready of image nginx:1.16.1-alpine. Configure a LivenessProbe which simply runs true. Also configure a ReadinessProbe which checks if the url http://service-am-i-ready:80 is reachable; you can use wget -T2 -O- http://service-am-i-ready:80 for this. Start the Pod and confirm it isn't ready because of the ReadinessProbe.

Create a second Pod named am-i-ready of image nginx:1.16.1-alpine with label id: cross-server-ready. The already existing Service service-am-i-ready should now have that second Pod as endpoint.

Now the first Pod should be in ready state, confirm that.
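A sketch of the first Pod; the container name is an assumption since the task doesn't specify one:

# ready-if-service-ready.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ready-if-service-ready
spec:
  containers:
  - name: ready-if-service-ready        # container name assumed
    image: nginx:1.16.1-alpine
    livenessProbe:
      exec:
        command: ["true"]
    readinessProbe:
      exec:
        command: ["sh", "-c", "wget -T2 -O- http://service-am-i-ready:80"]

The second Pod only needs the right label, which kubectl run can set directly:

kubectl run am-i-ready --image=nginx:1.16.1-alpine --labels="id=cross-server-ready"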

5.


Task weight: 1%

Use context: kubectl config use-context k8s-c1-H

There are various Pods in all namespaces. Write a command into /opt/course/5/find_pods.sh which lists all Pods sorted by their AGE (metadata.creationTimestamp).

Write a second command into /opt/course/5/find_pods_uid.sh which lists all Pods sorted by field metadata.uid. Use kubectl sorting for both commands.
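Both commands can rely on kubectl's built-in sorting:

echo 'kubectl get pod -A --sort-by=.metadata.creationTimestamp' > /opt/course/5/find_pods.sh
echo 'kubectl get pod -A --sort-by=.metadata.uid' > /opt/course/5/find_pods_uid.sh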

6.


Task weight: 8%

Use context: kubectl config use-context k8s-c1-H

Create a new PersistentVolume named safari-pv. It should have a capacity of 2Gi, accessMode ReadWriteOnce, hostPath /Volumes/Data and no storageClassName defined.

Next create a new PersistentVolumeClaim in Namespace project-tiger named safari-pvc. It should request 2Gi storage, accessMode ReadWriteOnce and should not define a storageClassName. The PVC should be bound to the PV correctly.

Finally create a new Deployment safari in Namespace project-tiger which mounts that volume at /tmp/safari-data. The Pods of that Deployment should be of image httpd:2.4.41-alpine.
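A sketch of all three objects; the Deployment's selector labels and container name are assumptions:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: safari-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /Volumes/Data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: safari-pvc
  namespace: project-tiger
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: safari
  namespace: project-tiger
spec:
  replicas: 1
  selector:
    matchLabels:
      app: safari          # label value assumed
  template:
    metadata:
      labels:
        app: safari
    spec:
      containers:
      - name: httpd        # container name assumed
        image: httpd:2.4.41-alpine
        volumeMounts:
        - name: data
          mountPath: /tmp/safari-data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: safari-pvc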

7.


Task weight: 1%

Use context: kubectl config use-context k8s-c1-H

The metrics-server has been installed in the cluster. Your colleague would like to know the kubectl commands to:

  1. show Nodes resource usage
  2. show Pods and their containers resource usage

Please write the commands into /opt/course/7/node.sh and /opt/course/7/pod.sh.
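kubectl top covers both cases:

echo 'kubectl top node' > /opt/course/7/node.sh
echo 'kubectl top pod --containers=true' > /opt/course/7/pod.sh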

8.


Task weight: 2%

Use context: kubectl config use-context k8s-c1-H

Ssh into the master node with ssh cluster1-master1. Check how the master components kubelet, kube-apiserver, kube-scheduler, kube-controller-manager and etcd are started/installed on the master node. Also find out the name of the DNS application and how it's started/installed on the master node.

Write your findings into file /opt/course/8/master-components.txt. The file should be structured like:

# /opt/course/8/master-components.txt

kubelet: [TYPE]

kube-apiserver: [TYPE]

kube-scheduler: [TYPE]

kube-controller-manager: [TYPE]

etcd: [TYPE]

dns: [TYPE] [NAME]

Choices of [TYPE] are: not-installed, process, static-pod, pod
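A sketch of the investigation; on a standard kubeadm setup the kubelet is an OS process and the other components are static Pods:

ssh cluster1-master1
ps aux | grep kubelet                    # kubelet runs as a process under systemd
ls /etc/kubernetes/manifests/            # static-pod manifests: apiserver, scheduler, controller-manager, etcd
kubectl -n kube-system get pod -o wide   # Pods suffixed -cluster1-master1 are static pods; coredns runs as normal Pods of a Deployment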

9.


Task weight: 5%

Use context: kubectl config use-context k8s-c2-AC

Ssh into the master node with ssh cluster2-master1. Temporarily stop the kube-scheduler, this means in a way that you can start it again afterwards.

Create a single Pod named manual-schedule of image httpd:2.4-alpine, confirm it's created but not scheduled on any node.

Now you're the scheduler and have all its power, manually schedule that Pod on node cluster2-master1. Make sure it's running.

Start the kube-scheduler again and confirm it's running correctly by creating a second Pod named manual-schedule2 of image httpd:2.4-alpine and check if it's running on cluster2-worker1.
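A sketch, relying on kube-scheduler being a static Pod on kubeadm clusters:

ssh cluster2-master1
mv /etc/kubernetes/manifests/kube-scheduler.yaml /etc/kubernetes/     # temporarily stop the scheduler
kubectl run manual-schedule --image=httpd:2.4-alpine                  # stays Pending without a scheduler
kubectl get pod manual-schedule -o yaml > pod.yaml                    # add nodeName: cluster2-master1 under spec
kubectl replace --force -f pod.yaml                                   # recreate the Pod with the node set manually
mv /etc/kubernetes/kube-scheduler.yaml /etc/kubernetes/manifests/     # start the scheduler again
kubectl run manual-schedule2 --image=httpd:2.4-alpine
kubectl get pod -o wide                                               # manual-schedule2 should land on cluster2-worker1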

10.


Task weight: 6%

Use context: kubectl config use-context k8s-c1-H

Create a new ServiceAccount processor in Namespace project-hamster. Create a Role and RoleBinding, both named processor as well. These should allow the new SA to only create Secrets and ConfigMaps in that Namespace.
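All three objects can be created imperatively:

kubectl -n project-hamster create serviceaccount processor
kubectl -n project-hamster create role processor --verb=create --resource=secret --resource=configmap
kubectl -n project-hamster create rolebinding processor --role=processor --serviceaccount=project-hamster:processor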

11.


Task weight: 4%

Use context: kubectl config use-context k8s-c1-H

Use Namespace project-tiger for the following. Create a DaemonSet named ds-important with image httpd:2.4-alpine and labels id=ds-important and uuid=18426a0b-5f59-4e10-923f-c0e078e82462. The Pods it creates should request 10 millicore cpu and 10 mebibyte memory. The Pods of that DaemonSet should run on all nodes, master and worker.
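A sketch; the container name is assumed, and on older clusters the master taint key is node-role.kubernetes.io/master instead of control-plane:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-important
  namespace: project-tiger
  labels:
    id: ds-important
    uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
spec:
  selector:
    matchLabels:
      id: ds-important
  template:
    metadata:
      labels:
        id: ds-important
        uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
    spec:
      containers:
      - name: ds-important       # container name assumed
        image: httpd:2.4-alpine
        resources:
          requests:
            cpu: 10m
            memory: 10Mi
      tolerations:               # let the Pods run on masters too
      - effect: NoSchedule
        key: node-role.kubernetes.io/control-plane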

12.


Task weight: 6%

Use context: kubectl config use-context k8s-c1-H

Use Namespace project-tiger for the following. Create a Deployment named deploy-important with label id=very-important (the Pods should also have this label) and 3 replicas. It should contain two containers, the first named container1 with image nginx:1.17.6-alpine and the second one named container2 with image kubernetes/pause.

There should be only ever one Pod of that Deployment running on one worker node. We have two worker nodes: cluster1-worker1 and cluster1-worker2. Because the Deployment has three replicas the result should be that on both nodes one Pod is running. The third Pod won't be scheduled, unless a new worker node will be added.

In a way we kind of simulate the behaviour of a DaemonSet here, but using a Deployment and a fixed number of replicas.
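One way to enforce this is a required podAntiAffinity on the Pods' own label (a topologySpreadConstraint with maxSkew 1 would be an alternative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-important
  namespace: project-tiger
  labels:
    id: very-important
spec:
  replicas: 3
  selector:
    matchLabels:
      id: very-important
  template:
    metadata:
      labels:
        id: very-important
    spec:
      containers:
      - name: container1
        image: nginx:1.17.6-alpine
      - name: container2
        image: kubernetes/pause
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                id: very-important
            topologyKey: kubernetes.io/hostname   # at most one such Pod per node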

13.

Task weight: 4%

Use context: kubectl config use-context k8s-c1-H

Create a Pod named multi-container-playground in Namespace default with three containers, named c1, c2 and c3. There should be a volume attached to that Pod and mounted into every container, but the volume shouldn't be persisted or shared with other Pods.

Container c1 should be of image nginx:1.17.6-alpine and have the name of the node where its Pod is running available as environment variable MY_NODE_NAME.

Container c2 should be of image busybox:1.31.1 and write the output of the date command every second in the shared volume into file date.log. You can use while true; do date >> /your/vol/path/date.log; sleep 1; done for this.

Container c3 should be of image busybox:1.31.1 and constantly send the content of file date.log from the shared volume to stdout. You can use tail -f /your/vol/path/date.log for this.

Check the logs of container c3 to confirm correct setup.
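A sketch using an emptyDir volume (Pod-local, not persisted) mounted at an assumed path /vol:

apiVersion: v1
kind: Pod
metadata:
  name: multi-container-playground
spec:
  volumes:
  - name: vol
    emptyDir: {}
  containers:
  - name: c1
    image: nginx:1.17.6-alpine
    env:
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    volumeMounts:
    - name: vol
      mountPath: /vol
  - name: c2
    image: busybox:1.31.1
    command: ["sh", "-c", "while true; do date >> /vol/date.log; sleep 1; done"]
    volumeMounts:
    - name: vol
      mountPath: /vol
  - name: c3
    image: busybox:1.31.1
    command: ["sh", "-c", "tail -f /vol/date.log"]
    volumeMounts:
    - name: vol
      mountPath: /vol

Verify with kubectl logs multi-container-playground -c c3.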


14.


Task weight: 2%

Use context: kubectl config use-context k8s-c1-H

You're asked to find out the following information about the cluster k8s-c1-H:

  1. How many master nodes are available?
  2. How many worker nodes are available?
  3. What is the Service CIDR?
  4. Which Networking (or CNI Plugin) is configured and where is its config file?
  5. Which suffix will static pods have that run on cluster1-worker1?

Write your answers into file /opt/course/14/cluster-info, structured like this:

# /opt/course/14/cluster-info

1: [ANSWER]

2: [ANSWER]

3: [ANSWER]

4: [ANSWER]

5: [ANSWER]
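Useful starting points; on a kubeadm cluster the Service CIDR is an apiserver flag and the CNI config lives under /etc/cni/net.d:

kubectl get node                                           # count masters/workers via the ROLES column
ssh cluster1-master1 grep service-cluster-ip-range /etc/kubernetes/manifests/kube-apiserver.yaml
ssh cluster1-worker1 'ls /etc/cni/net.d/ && head /etc/cni/net.d/*'
# static Pods get the node name as suffix, e.g. -cluster1-worker1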

15.

Task weight: 3%

Use context: kubectl config use-context k8s-c2-AC

Write a command into /opt/course/15/cluster_events.sh which shows the latest events in the whole cluster, ordered by time (metadata.creationTimestamp). Use kubectl for it.

Now kill the kube-proxy Pod running on node cluster2-worker1 and write the events this caused into /opt/course/15/pod_kill.log.

Finally kill the containerd container of the kube-proxy Pod on node cluster2-worker1 and write the events into /opt/course/15/container_kill.log.

Do you notice differences in the events both actions caused?
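A sketch; the kube-proxy Pod name and container ID are placeholders to be looked up:

echo 'kubectl get events -A --sort-by=.metadata.creationTimestamp' > /opt/course/15/cluster_events.sh
kubectl -n kube-system get pod -o wide | grep proxy                # find the Pod running on cluster2-worker1
kubectl -n kube-system delete pod kube-proxy-<id>                  # then save the new events into pod_kill.log
ssh cluster2-worker1 'crictl ps | grep kube-proxy'                 # find the container
ssh cluster2-worker1 'crictl stop <container-id> && crictl rm <container-id>'   # then save events into container_kill.log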


16.


Task weight: 2%

Use context: kubectl config use-context k8s-c1-H

Create a new Namespace called cka-master.

Write the names of all namespaced Kubernetes resources (like Pod, Secret, ConfigMap...) into /opt/course/16/resources.txt.

Find the project-* Namespace with the highest number of Roles defined in it and write its name and amount of Roles into /opt/course/16/crowded-namespace.txt.
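A sketch; the loop just prints the Role count per project-* Namespace so the largest can be recorded by hand:

kubectl create namespace cka-master
kubectl api-resources --namespaced -o name > /opt/course/16/resources.txt
for ns in $(kubectl get ns -o name | cut -d/ -f2 | grep '^project-'); do
  echo "$ns: $(kubectl -n "$ns" get role --no-headers 2>/dev/null | wc -l)"
done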

17.

Task weight: 3%

Use context: kubectl config use-context k8s-c1-H

In Namespace project-tiger create a Pod named tigers-reunite of image httpd:2.4.41-alpine with labels pod=container and container=pod. Find out on which node the Pod is scheduled. Ssh into that node and find the containerd container belonging to that Pod.

Using command crictl:

Write the ID of the container and the info.runtimeType into /opt/course/17/pod-container.txt

Write the logs of the container into /opt/course/17/pod-container.log
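A sketch; the node and container ID are placeholders that come out of the earlier steps:

kubectl -n project-tiger run tigers-reunite --image=httpd:2.4.41-alpine --labels="pod=container,container=pod"
kubectl -n project-tiger get pod tigers-reunite -o wide            # note the NODE column
ssh <node> 'crictl ps | grep tigers-reunite'                       # first column is the container ID
ssh <node> 'crictl inspect <container-id> | grep runtimeType'      # goes into pod-container.txt together with the ID
ssh <node> 'crictl logs <container-id>' &> /opt/course/17/pod-container.log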


18.


Task weight: 8%

Use context: kubectl config use-context k8s-c3-CCC

There seems to be an issue with the kubelet not running on cluster3-worker1. Fix it and confirm that the cluster has node cluster3-worker1 available in Ready state afterwards. You should be able to schedule a Pod on cluster3-worker1 afterwards.

Write the reason of the issue into /opt/course/18/reason.txt.
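A sketch of the debugging flow; the actual root cause has to come out of the logs (a misconfigured kubelet binary path in the systemd unit is one common setup for this kind of scenario):

ssh cluster3-worker1
systemctl status kubelet                     # likely inactive/failed
journalctl -u kubelet -n 50                  # read the real error
systemctl cat kubelet                        # inspect the unit and its drop-ins for the misconfiguration
systemctl daemon-reload && systemctl restart kubelet
# back on the main terminal, record whatever you found:
echo '<root cause you found>' > /opt/course/18/reason.txt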

19.

Task weight: 3%

NOTE: This task can only be solved if questions 18 or 20 have been successfully implemented and the k8s-c3-CCC cluster has a functioning worker node

Use context: kubectl config use-context k8s-c3-CCC

Do the following in a new Namespace secret. Create a Pod named secret-pod of image busybox:1.31.1 which should keep running for some time.

There is an existing Secret located at /opt/course/19/secret1.yaml; create it in the Namespace secret and mount it read-only into the Pod at /tmp/secret1.

Create a new Secret in Namespace secret called secret2 which should contain user=user1 and pass=1234. These entries should be available inside the Pod's container as environment variables APP_USER and APP_PASS.

Confirm everything is working.
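A sketch; it assumes the Secret in the given file is named secret1 (adjust its namespace field to secret if the file sets a different one):

kubectl create namespace secret
kubectl -n secret apply -f /opt/course/19/secret1.yaml
kubectl -n secret create secret generic secret2 --from-literal=user=user1 --from-literal=pass=1234

# secret-pod.yaml (container name assumed)
apiVersion: v1
kind: Pod
metadata:
  name: secret-pod
  namespace: secret
spec:
  containers:
  - name: secret-pod
    image: busybox:1.31.1
    command: ["sh", "-c", "sleep 1d"]
    env:
    - name: APP_USER
      valueFrom:
        secretKeyRef: {name: secret2, key: user}
    - name: APP_PASS
      valueFrom:
        secretKeyRef: {name: secret2, key: pass}
    volumeMounts:
    - name: secret1
      mountPath: /tmp/secret1
      readOnly: true
  volumes:
  - name: secret1
    secret:
      secretName: secret1

Verify with kubectl -n secret exec secret-pod -- env | grep APP_ and kubectl -n secret exec secret-pod -- ls /tmp/secret1.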


20.


Task weight: 10%

Use context: kubectl config use-context k8s-c3-CCC

Your coworker said node cluster3-worker2 is running an older Kubernetes version and is not even part of the cluster. Update Kubernetes on that node to the exact version that's running on cluster3-master1. Then add this node to the cluster. Use kubeadm for this.
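A sketch, assuming a Debian-based node; <version> stands for the exact version reported by kubeadm version on cluster3-master1:

ssh cluster3-master1 'kubeadm version && kubeadm token create --print-join-command'
ssh cluster3-worker2
apt-get update
apt-get install -y kubeadm=<version> kubelet=<version> kubectl=<version>
# paste the join command printed on the master:
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash <hash>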

21.

Task weight: 2%

Use context: kubectl config use-context k8s-c3-CCC

Create a Static Pod named my-static-pod in Namespace default on cluster3-master1. It should be of image nginx:1.16-alpine and have resource requests for 10m CPU and 20Mi memory.

Then create a NodePort Service named static-pod-service which exposes that static Pod on port 80 and check if it has Endpoints and if it's reachable through the cluster3-master1 internal IP address. You can connect to the internal node IPs from your main terminal.
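A sketch; the manifest is generated first and then edited to add the resource requests:

ssh cluster3-master1
kubectl run my-static-pod --image=nginx:1.16-alpine -o yaml --dry-run=client > /etc/kubernetes/manifests/my-static-pod.yaml
# edit the file and add under the container:
#   resources:
#     requests:
#       cpu: 10m
#       memory: 20Mi
kubectl get pod my-static-pod-cluster3-master1                 # static Pod appears with the node-name suffix
kubectl expose pod my-static-pod-cluster3-master1 --name static-pod-service --type=NodePort --port 80
kubectl get svc,ep static-pod-service                          # then curl <node-internal-ip>:<nodePort> from the main terminal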


22.


Task weight: 2%

Use context: kubectl config use-context k8s-c2-AC

Check how long the kube-apiserver server certificate is valid on cluster2-master1. Do this with openssl or cfssl. Write the expiration date into /opt/course/22/expiration.

Also run the correct kubeadm command to list the expiration dates and confirm both methods show the same date.

Write the correct kubeadm command that would renew the apiserver server certificate into /opt/course/22/kubeadm-renew-certs.sh.
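A sketch, assuming the default kubeadm certificate path:

ssh cluster2-master1
openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt   # copy this date into /opt/course/22/expiration
kubeadm certs check-expiration | grep apiserver                      # should show the same date
echo 'kubeadm certs renew apiserver' > /opt/course/22/kubeadm-renew-certs.sh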

23.

Task weight: 2%

Use context: kubectl config use-context k8s-c2-AC

Node cluster2-worker1 has been added to the cluster using kubeadm and TLS bootstrapping.

Find the "Issuer" and "Extended Key Usage" values of the cluster2-worker1:

  1. kubelet client certificate, the one used for outgoing connections to the kube-apiserver.
  2. kubelet server certificate, the one used for incoming connections from the kube-apiserver.

Write the information into file /opt/course/23/certificate-info.txt.

Compare the "Issuer" and "Extended Key Usage" fields of both certificates and make sense of these.


24.


Task weight: 9%

Use context: kubectl config use-context k8s-c1-H

There was a security incident where an intruder was able to access the whole cluster from a single hacked backend Pod.

To prevent this create a NetworkPolicy called np-backend in Namespace project-snake. It should allow the backend-* Pods only to:

  • connect to db1-* Pods on port 1111
  • connect to db2-* Pods on port 2222

Use the app label of Pods in your policy.

After implementation, connections from backend-* Pods to vault-* Pods on port 3333 should for example no longer work.
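A sketch; the app label values (backend, db1, db2) are assumptions, so verify them with kubectl -n project-snake get pod --show-labels first:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-backend
  namespace: project-snake
spec:
  podSelector:
    matchLabels:
      app: backend           # label value assumed
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: db1
    ports:
    - protocol: TCP
      port: 1111
  - to:
    - podSelector:
        matchLabels:
          app: db2
    ports:
    - protocol: TCP
      port: 2222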

25.

Task weight: 8%

Use context: kubectl config use-context k8s-c3-CCC

Make a backup of etcd running on cluster3-master1 and save it on the master node at /tmp/etcd-backup.db.

Then create a Pod of your kind in the cluster.

Finally restore the backup, confirm the cluster is still working and that the created Pod is no longer with us.
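A sketch; the TLS file paths assume kubeadm defaults and can be confirmed in /etc/kubernetes/manifests/etcd.yaml:

ssh cluster3-master1
ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
kubectl run test-pod --image=nginx:1.16-alpine               # the Pod that should vanish after the restore
ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup.db --data-dir /var/lib/etcd-backup
# point the etcd static Pod at the restored data: edit /etc/kubernetes/manifests/etcd.yaml
# and change the etcd-data hostPath volume to /var/lib/etcd-backup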



Extra Question 1

Use context: kubectl config use-context k8s-c1-H

Check all available Pods in the Namespace project-c13 and find the names of those that would probably be terminated first if the Nodes run out of resources (cpu or memory) to schedule all Pods. Write the Pod names into /opt/course/e1/pods-not-stable.txt.
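Pods without resource requests get QoS class BestEffort and are evicted first; a sketch to spot them:

kubectl -n project-c13 get pod -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.qosClass}{"\n"}{end}'
# write the names of the BestEffort Pods into /opt/course/e1/pods-not-stable.txt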

Extra Question 2

Use context: kubectl config use-context k8s-c1-H

There is an existing ServiceAccount secret-reader in Namespace project-hamster. Create a Pod of image curlimages/curl:7.65.3 named tmp-api-contact which uses this ServiceAccount. Make sure the container keeps running.

Exec into the Pod and use curl to access the Kubernetes Api of that cluster manually, listing all available secrets. You can ignore insecure https connection. Write the command(s) for this into file /opt/course/e4/list-secrets.sh.
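A sketch; --overrides injects the ServiceAccount, since newer kubectl versions have no dedicated flag for it:

kubectl -n project-hamster run tmp-api-contact --image=curlimages/curl:7.65.3 \
  --overrides='{"apiVersion":"v1","spec":{"serviceAccountName":"secret-reader"}}' \
  --command -- sh -c 'sleep 1d'
kubectl -n project-hamster exec -it tmp-api-contact -- sh
# inside the container:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -k https://kubernetes.default/api/v1/secrets -H "Authorization: Bearer ${TOKEN}"
# put those two lines into /opt/course/e4/list-secrets.sh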



Preview Question 1

Use context: kubectl config use-context k8s-c2-AC

The cluster admin asked you to find out the following information about etcd running on cluster2-master1:

  • Server private key location
  • Server certificate expiration date
  • Is client certificate authentication enabled

Write this information into /opt/course/p1/etcd-info.txt

Finally you're asked to save an etcd snapshot at /etc/etcd-snapshot.db on cluster2-master1 and display its status.
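The etcd static-Pod manifest answers the first three points; a sketch (same TLS flags as in question 25):

ssh cluster2-master1
grep -E 'key-file|cert-file|client-cert-auth' /etc/kubernetes/manifests/etcd.yaml
openssl x509 -noout -enddate -in /etc/kubernetes/pki/etcd/server.crt
ETCDCTL_API=3 etcdctl snapshot save /etc/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
ETCDCTL_API=3 etcdctl snapshot status /etc/etcd-snapshot.db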

Preview Question 2

Use context: kubectl config use-context k8s-c1-H

You're asked to confirm that kube-proxy is running correctly on all nodes. For this perform the following in Namespace project-hamster:

Create a new Pod named p2-pod with two containers, one of image nginx:1.21.3-alpine and one of image busybox:1.31. Make sure the busybox container keeps running for some time.

Create a new Service named p2-service which exposes that Pod internally in the cluster on port 3000->80.

Find the kube-proxy container on all nodes cluster1-master1, cluster1-worker1 and cluster1-worker2 and make sure that it's using iptables. Use command crictl for this.

Write the iptables rules of all nodes belonging to the created Service p2-service into file /opt/course/p2/iptables.txt.

Finally delete the Service and confirm that the iptables rules are gone from all nodes.
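A sketch; the container names in the Pod are assumptions:

# p2-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: p2-pod
  namespace: project-hamster
spec:
  containers:
  - name: c1
    image: nginx:1.21.3-alpine
  - name: c2
    image: busybox:1.31
    command: ["sh", "-c", "sleep 1d"]

kubectl -n project-hamster expose pod p2-pod --name p2-service --port 3000 --target-port 80
# on each of the three nodes:
crictl ps | grep kube-proxy
crictl logs <kube-proxy-container-id>        # should mention the iptables proxier
iptables-save | grep p2-service              # append each node's output to /opt/course/p2/iptables.txt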



Preview Question 3

Use context: kubectl config use-context k8s-c2-AC

Create a Pod named check-ip in Namespace default using image httpd:2.4.41-alpine. Expose it on port 80 as a ClusterIP Service named check-ip-service. Remember/output the IP of that Service.

Change the Service CIDR to 11.96.0.0/12 for the cluster.

Then create a second Service named check-ip-service2 pointing to the same Pod to check if your settings did take effect. Finally check if the IP of the first Service has changed.
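A sketch; the CIDR change happens in the static-Pod manifests on cluster2-master1:

kubectl run check-ip --image=httpd:2.4.41-alpine
kubectl expose pod check-ip --name check-ip-service --port 80
kubectl get svc check-ip-service                             # remember this ClusterIP
# on cluster2-master1 set --service-cluster-ip-range=11.96.0.0/12 in:
#   /etc/kubernetes/manifests/kube-apiserver.yaml
#   /etc/kubernetes/manifests/kube-controller-manager.yaml
kubectl expose pod check-ip --name check-ip-service2 --port 80
kubectl get svc                                              # check-ip-service2 gets an IP from the new range; the old Service keeps its IP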
