Deploying an ActiveMQ 5.18.2 cluster on k8s (the same approach works on physical machines)

Official master/slave configuration guide: https://activemq.apache.org/masterslave

The official docs show two supported master/slave approaches: shared-file-system master/slave (which supports the high-performance journal) and JDBC master/slave (which requires standing up a database, so we pass on it).


The docs also note that LevelDB persistence has been removed from production use, so do not follow the old ZooKeeper + LevelDB guides (they only apply to versions below 5.16; on 5.18 that configuration fails outright with a syntax error). ZooKeeper + KahaDB replication is still experimental and the docs give no deployment instructions for it. That leaves exactly one option: a shared-file-system master/slave setup.


Three ConfigMaps are used in total: the two defined inline below, plus one created from a file (the edited activemq.xml). Experienced k8s hands can read the manifest straight through; if this is new to you, start with the Kubernetes docs first.
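The StatefulSet below mounts that third ConfigMap under the name activemq.xml. A minimal sketch of how it might be created from the edited file (the name and namespace must match what the volume references):

kubectl create configmap activemq.xml \
  --from-file=activemq.xml=./activemq.xml \
  -n unis-cluster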

Two parts of activemq.xml are changed. In the <broker> element, changing brokerName is optional for the k8s setup: no hostname resolution is needed, since brokers are reached through the pod name. On physical machines, each broker must be given a distinct brokerName.
In the persistenceAdapter element, the kahaDB path must point at one shared directory. Create a PV by hand and mount it into every pod so all replicas share it; on physical machines, export a directory over NFS and have all three nodes mount that one share. (KahaDB would normally create its directory under the broker's data directory; we bypass that and point it somewhere else: mkdir /opt/kanadb and share that directory out, as sketched after the snippet below.)
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="localhost"
        dataDirectory="${activemq.data}"
        useJmx="true"
        advisorySupport="false"
        persistent="true"
        deleteAllMessagesOnStartup="false"
        useShutdownHook="false"
        schedulerSupport="true">
 <persistenceAdapter>
      <kahaDB directory="/opt/kanadb"/>
 </persistenceAdapter>
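For a physical-machine deployment the shared directory comes from NFS instead of a PV. A minimal sketch, assuming the NFS server is 172.16.2.29 exporting /nfsdata (the same server and path the PV below uses):

# on the NFS server
mkdir -p /nfsdata
echo '/nfsdata *(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra

# on each of the three broker nodes
mkdir -p /opt/kanadb
mount -t nfs 172.16.2.29:/nfsdata /opt/kanadb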
[root@master]#cat activemq5.18-cluster-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
   name: pv001
   labels:
     name: pv001
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle    # the usual choice is Retain (manual cleanup); with Retain the data survives even after the PV is deleted
  storageClassName: nfs
  nfs:
    path: /nfsdata
    server: 172.16.2.29
    #readOnly: false  # not needed; false is the default
[root@master]#cat activemq5.18-cluster-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc001
  namespace: unis-cluster
spec:
  accessModes:
    - ReadWriteMany      
  resources:
    requests:
      storage: 2Gi
  storageClassName: nfs
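With both manifests written, create the PV and PVC and confirm the claim binds before moving on:

kubectl apply -f activemq5.18-cluster-pv.yaml
kubectl apply -f activemq5.18-cluster-pvc.yaml
kubectl get pv pv001                      # STATUS should become Bound
kubectl get pvc pvc001 -n unis-cluster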
[root@master]#cat activemq5.18-cluster.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: activemq-configmap
  namespace: unis-cluster
data:
  jetty-realm.properties: |
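    # format: username: password [,rolename ...]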
    admin: unistj, admin
    user: 123456, user
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: activemq-configmap2
  namespace: unis-cluster
data:  
  credentials.properties: |
    activemq.username=unis
    activemq.password=unisaaa
    guest.password=password
---
apiVersion: v1
kind: Service
metadata:
  name: activemq-hs
  namespace: unis-cluster
  labels:
    app: activemq
spec:
  ports:
  - port: 8161
    name: server
  clusterIP: None #headless service
  selector:
    app: activemq
---
apiVersion: v1
kind: Service
metadata:
  name: activemq-cs
  namespace: unis-cluster
  labels:
    app: activemq
spec:
  type: NodePort
  ports:
  - port: 8161
    name: command
    targetPort: 8161
  - port: 61616
    name: server
    targetPort: 61616
  selector:
    app: activemq
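# Note: through the headless service each pod gets a stable DNS name of the form
# <pod>.activemq-hs.unis-cluster.svc.cluster.local. In-cluster clients would
# typically point at all three brokers with the failover transport, so they follow
# whichever broker currently holds the KahaDB lock, e.g.:
#   failover:(tcp://activemq-0.activemq-hs.unis-cluster.svc.cluster.local:61616,tcp://activemq-1.activemq-hs.unis-cluster.svc.cluster.local:61616,tcp://activemq-2.activemq-hs.unis-cluster.svc.cluster.local:61616)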
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: activemq-cs
  annotations:
    kubernetes.io/ingress.class: nginx   # deprecated on newer clusters; spec.ingressClassName is the replacement
spec:
  rules:
  - host: rancher.unistj.cn
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: activemq-cs
            port:
              number: 8161
---
apiVersion: policy/v1   # policy/v1beta1 was removed in Kubernetes 1.25
kind: PodDisruptionBudget
metadata:
  name: activemq-pdb
  namespace: unis-cluster
spec:
  selector:
    matchLabels:
      app: activemq
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: activemq
  namespace: unis-cluster
spec:
  selector:
    matchLabels:
      app: activemq
  serviceName: activemq-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: activemq
    spec:
#      affinity:
#        podAntiAffinity:
#          requiredDuringSchedulingIgnoredDuringExecution:
#            - labelSelector:
#                matchExpressions:
#                  - key: "app"
#                    operator: In
#                    values:
#                    - activemq
#              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-activemq
        imagePullPolicy: Always
        image: "lewinc/activemq:latest"
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        ports:
        - containerPort: 8161
          name: command
        - containerPort: 61616
          name: server
#        command:      # the commented commands below use relative paths because workingDir was set in the StatefulSet; volumes cannot inherit it, so volume mounts need absolute paths
#        - /bin/bash
#        - -c
#        - sed -i "s/brokerName=\"localhost\"/brokerName=\"$(hostname -f)\"/g" conf/activemq.xml; bin/activemq console;
#        - awk '{gsub(/brokerName=\"localhost\"/, "brokerName=\"'$(hostname -f)'\"");print > "conf/activemq.xml"}' conf/activemq.xml; bin/activemq console;
        volumeMounts:
        - name: activemq-jettyrealm
          mountPath: /opt/apache-activemq-5.18.2/conf/jetty-realm.properties        # must be the absolute path inside the container
          subPath: jetty-realm.properties
        - name: activemq-credentials
          mountPath: /opt/apache-activemq-5.18.2/conf/credentials.properties
          subPath: credentials.properties
        - name: activemq-xml
          mountPath: /opt/apache-activemq-5.18.2/conf/activemq.xml
          subPath: activemq.xml
        - name: activemqdata    # mount the KahaDB store at one shared directory for all brokers (path is customizable); this is what the ActiveMQ cluster requires
          subPath: kanadb
          mountPath: "/opt/kanadb"   # must match the <kahaDB directory="..."/> set in activemq.xml above
      volumes:
      - name: activemqdata
        persistentVolumeClaim:
          claimName: pvc001
      - name: activemq-jettyrealm
        configMap:
          name: activemq-configmap
          items:
            - key: jetty-realm.properties  # key must match the name defined in the ConfigMap
              path: jetty-realm.properties # with the full path already given under volumeMounts, nothing extra is needed here
      - name: activemq-credentials
        configMap:
          name: activemq-configmap2
          items:
            - key: credentials.properties
              path: credentials.properties 
      - name: activemq-xml
        configMap:
          name: activemq.xml
          defaultMode: 0777   # note: setting permissions here does not take effect
          items:
            - key: activemq.xml
              path: activemq.xml   
      securityContext: # run the container as root
        runAsUser: 0
        fsGroup: 0
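Once everything is applied, the three brokers start in order; the first to grab the file lock on the shared KahaDB directory becomes master and the other two block on the lock as slaves. A rough sketch of deploying and verifying (the exact lock-wait log wording varies by version, so grep loosely):

kubectl apply -f activemq5.18-cluster.yaml
kubectl -n unis-cluster get pods -l app=activemq -w      # wait for activemq-0/1/2 to be Running
kubectl -n unis-cluster logs activemq-0 | grep -iE 'lock|slave'
kubectl -n unis-cluster logs activemq-1 | grep -iE 'lock|slave'   # slaves report waiting for the lock

For a quick smoke test, the client tools bundled with ActiveMQ 5.x can send messages through the failover URI. This sketch assumes the image keeps the distribution at /opt/apache-activemq-5.18.2 (as the mounts above do); the user/password from credentials.properties are only enforced if activemq.xml enables an authentication plugin:

kubectl -n unis-cluster exec activemq-0 -- /opt/apache-activemq-5.18.2/bin/activemq producer \
  --brokerUrl 'failover:(tcp://activemq-0.activemq-hs:61616,tcp://activemq-1.activemq-hs:61616,tcp://activemq-2.activemq-hs:61616)' \
  --user unis --password unisaaa --destination queue://TEST --messageCount 10

Deleting the current master pod (kubectl -n unis-cluster delete pod activemq-0) should see one of the remaining brokers acquire the lock and take over within a few seconds.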

