
Implementing Kubernetes Cluster Monitoring with EFK

Monitoring is an essential task in a Kubernetes cluster: it lets us see the health of the cluster in real time and detect and resolve problems quickly. EFK (Elasticsearch, Fluentd, and Kibana) is an open-source stack for collecting, storing, and visualizing logs, and it can be used to monitor a Kubernetes cluster through its logs.

Installing EFK

Before starting, the EFK components need to be installed in the Kubernetes cluster. The example manifests below can be applied directly with kubectl:

# elasticsearch.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
spec:
  serviceName: elasticsearch
  replicas: 1    # discovery.type=single-node (set below) only supports a single Elasticsearch node
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.9.0
        resources:
          limits:
            cpu: 500m
            memory: 1Gi
          requests:
            cpu: 100m
            memory: 512Mi
        env:
        - name: discovery.type
          value: single-node
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"    # keep the default 1g JVM heap from exceeding the 1Gi memory limit
        ports:
        - containerPort: 9200
          name: http
        - containerPort: 9300
          name: transport
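
The StatefulSet above sets serviceName: elasticsearch, and both the Fluentd output and Kibana below reach Elasticsearch through a Service of that name, but the listing itself does not define one. A minimal headless Service, sketched here as an addition to the original manifests, fills that gap:

# elasticsearch-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  clusterIP: None          # headless Service backing the StatefulSet and providing cluster DNS
  selector:
    app: elasticsearch
  ports:
  - port: 9200
    name: http
  - port: 9300
    name: transport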

# fluentd.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
data:
  fluent.conf: |
    <source>
      @type forward
      port 24224
    </source>
    <match fluent.**>
      @type null
    </match>
    <match **>
      @type elasticsearch
      host elasticsearch.default.svc.cluster.local
      port 9200
      logstash_format true
      logstash_prefix kubernetes
      flush_interval 1s
    </match>
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.11.2-debian-1.0
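        # note: the stock fluent/fluentd image does not bundle fluent-plugin-elasticsearch,
        # which the <match **> @type elasticsearch output above requires; use an image that
        # already includes the plugin (for example one of the fluentd-kubernetes-daemonset
        # elasticsearch variants) or install it in a custom image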
        resources:
          limits:
            cpu: 200m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: fluentd-config
          mountPath: /fluentd/etc/fluent.conf
          subPath: fluent.conf
      volumes:
      - name: fluentd-config
        configMap:
          name: fluentd-config
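
The Fluentd configuration exposes a forward input on port 24224, so anything pushing logs to it needs a stable in-cluster address. A small ClusterIP Service in front of the DaemonSet, sketched below as an addition to the original manifests, provides that:

# fluentd-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: fluentd
spec:
  selector:
    app: fluentd
  ports:
  - port: 24224
    targetPort: 24224
    protocol: TCP
    name: forward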

# kibana.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
spec:
  selector:
    app: kibana
  ports:
    - port: 5601
      targetPort: 5601
      protocol: TCP
      name: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.9.0
        resources:
          limits:
            cpu: 200m
            memory: 1Gi        # Kibana 7.x will not run reliably inside a 200Mi limit
          requests:
            cpu: 100m
            memory: 512Mi
        env:
        - name: ELASTICSEARCH_HOSTS    # Kibana 7.x reads elasticsearch.hosts; ELASTICSEARCH_URL is no longer recognized
          value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
          name: http

Save the content above to a single file, for example efk.yaml, and run the following command to install EFK:

$ kubectl apply -f efk.yaml

After the installation finishes, the following commands confirm the deployment status of the EFK components:

$ kubectl get pods -l app=elasticsearch
$ kubectl get pods -l app=fluentd
$ kubectl get pods -l app=kibana
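
Once all pods are Running, the Kibana UI can be reached from a workstation by port-forwarding the kibana Service defined above (a quick check; an Ingress or LoadBalancer would normally be used instead):

$ kubectl port-forward svc/kibana 5601:5601

Kibana is then available at http://localhost:5601, where an index pattern such as kubernetes-* (matching the logstash_prefix configured in Fluentd) can be created to browse the collected logs.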

Collecting and Storing Logs

With EFK installed, the applications in the Kubernetes cluster need to be configured to send their logs to Fluentd. The following Deployment can serve as a starting point (the note after the manifest explains how its logs actually reach Fluentd):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: busybox    # placeholder image; substitute the real application
        args: ["sh", "-c", "while true; do date; sleep 5; done"]    # emits a log line every 5 seconds
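
Note that the Fluentd deployed above listens on a forward input rather than tailing container log files, so the stdout of this Deployment will not reach Elasticsearch on its own: logs have to be pushed to the fluentd Service, for example from a Fluentd sidecar in the same pod or from a fluent-logger client library inside the application. The following sidecar fluent.conf is a minimal sketch; the log file path /var/log/app/app.log and the shared volume it implies are assumptions for illustration, not part of the original example:

# sidecar-fluent.conf (sketch)
<source>
  @type tail
  path /var/log/app/app.log            # assumed application log file on a volume shared with the sidecar
  pos_file /var/log/app/app.log.pos
  tag myapp
  <parse>
    @type none
  </parse>
</source>
<match myapp>
  @type forward                        # core forward output plugin, pointing at the fluentd Service sketched earlier
  <server>
    host fluentd.default.svc.cluster.local
    port 24224
  </server>
</match>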
