ActiveMQ Artemis Kubernetes multi broker setup


I am trying to set up a multi-broker ActiveMQ Artemis deployment in a Kubernetes environment. I am able to run single-pod deployments with persistence enabled successfully, using the Artemis Docker image built from the official repo.

But if I try a multi-pod deployment with the same persistent volume attached (a shared PV), the pods get deployed, yet only the first one starts successfully; the others crash because the first Artemis container holds a file lock on the data directory. So I am unable to bring up multiple pods with shared storage.
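One workaround I am considering is giving every broker its own volume instead of a shared PV: a StatefulSet with volumeClaimTemplates creates a dedicated PVC per pod, so there is no contended file lock. A minimal sketch of what I mean (names, image, and sizes are placeholders):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: artemis
spec:
  serviceName: artemis-headless      # headless Service; gives each pod a stable DNS name
  replicas: 2
  selector:
    matchLabels:
      app: custom-artemis-service
  template:
    metadata:
      labels:
        app: custom-artemis-service
    spec:
      containers:
        - name: artemis
          image: artemis:local               # placeholder: image built from the official repo
          ports:
            - containerPort: 61618           # matches the netty acceptor in broker.xml
          volumeMounts:
            - name: data
              mountPath: /var/lib/artemis/data   # adjust to the image's data directory
  volumeClaimTemplates:                      # one PVC per pod instead of a shared PV
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi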

I also tried JGroups with broadcast and discovery groups to form a cluster, so that each broker has its own storage and the brokers communicate with each other internally, but I was not able to configure it successfully.

Has anyone been able to deploy a multi-broker Artemis setup in Kubernetes successfully? There is no issue if each pod has its own storage, but the brokers need to be highly available and communicate as a cluster so that no messages are lost.

It would be really helpful if anyone could share resources or steps on how to achieve this.

Edit

<?xml version='1.0'?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:xi="http://www.w3.org/2001/XInclude"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

   <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">

      <name>${name}</name>

${jdbc}
      <persistence-enabled>${persistence-enabled}</persistence-enabled>
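      <!-- Note for Kubernetes (my assumption): other brokers connect back using the
           address this connector advertises, so it must be reachable from the other
           pods - e.g. the pod's DNS name from a headless Service - not localhost. -->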

      <connectors>
         <connector name="netty-connector">tcp://${ipv4addr:localhost}:61618</connector>
      </connectors>

      <broadcast-groups>
         <broadcast-group name="cluster-broadcast-group">
            <broadcast-period>5000</broadcast-period>
            <jgroups-file>jgroups.xml</jgroups-file>
            <jgroups-channel>active_broadcast_channel</jgroups-channel>
            <connector-ref>netty-connector</connector-ref>
         </broadcast-group>
      </broadcast-groups>

      <discovery-groups>
         <discovery-group name="cluster-discovery-group">
            <jgroups-file>jgroups.xml</jgroups-file>
            <jgroups-channel>active_broadcast_channel</jgroups-channel>
            <refresh-timeout>10000</refresh-timeout>
         </discovery-group>
      </discovery-groups>

      <cluster-connections>
         <cluster-connection name="artemis-cluster">
            <connector-ref>netty-connector</connector-ref>
            <retry-interval>500</retry-interval>
            <use-duplicate-detection>true</use-duplicate-detection>
            <message-load-balancing>STRICT</message-load-balancing>
            <!-- <address>jms</address> -->
            <max-hops>1</max-hops>
            <discovery-group-ref discovery-group-name="cluster-discovery-group"/>
            <!-- <forward-when-no-consumers>true</forward-when-no-consumers> -->
         </cluster-connection>
      </cluster-connections>

      <!-- this could be ASYNCIO, MAPPED, NIO
           ASYNCIO: Linux Libaio
           MAPPED: mmap files
           NIO: Plain Java Files
       -->
      <journal-type>${journal.settings}</journal-type>

      <paging-directory>${data.dir}/paging</paging-directory>

      <bindings-directory>${data.dir}/bindings</bindings-directory>

      <journal-directory>${data.dir}/journal</journal-directory>

      <large-messages-directory>${data.dir}/large-messages</large-messages-directory>

      ${journal-retention}

      <journal-datasync>${fsync}</journal-datasync>

      <journal-min-files>2</journal-min-files>

      <journal-pool-files>10</journal-pool-files>

      <journal-device-block-size>${device-block-size}</journal-device-block-size>

      <journal-file-size>10M</journal-file-size>
      ${journal-buffer.settings}${ping-config.settings}${connector-config.settings}

      <!-- how often we are looking for how many bytes are being used on the disk in ms -->
      <disk-scan-period>5000</disk-scan-period>

      <!-- once the disk hits this limit the system will block, or close the connection in certain protocols
           that won't support flow control. -->
      <max-disk-usage>90</max-disk-usage>

      <!-- should the broker detect dead locks and other issues -->
      <critical-analyzer>true</critical-analyzer>

      <critical-analyzer-timeout>120000</critical-analyzer-timeout>

      <critical-analyzer-check-period>60000</critical-analyzer-check-period>

      <critical-analyzer-policy>HALT</critical-analyzer-policy>

      ${page-sync.settings}

      ${global-max-section}
      <acceptors>

         <acceptor name="netty-acceptor">tcp://0.0.0.0:61618</acceptor>

         <!-- useEpoll means: it will use Netty epoll if you are on a system (Linux) that supports it -->
         <!-- amqpCredits: The number of credits sent to AMQP producers -->
         <!-- amqpLowCredits: The server will send the # credits specified at amqpCredits at this low mark -->
         <!-- amqpDuplicateDetection: If you are not using duplicate detection, set this to false
                                      as duplicate detection requires applicationProperties to be parsed on the server. -->
         <!-- amqpMinLargeMessageSize: Determines how many bytes are considered large, so we start using files to hold their data.
                                       default: 102400, -1 would mean to disable large message control -->

         <!-- Note: If an acceptor needs to be compatible with HornetQ and/or Artemis 1.x clients add
                    "anycastPrefix=jms.queue.;multicastPrefix=jms.topic." to the acceptor url.
                    See https://issues.apache.org/jira/browse/ARTEMIS-1644 for more information. -->


         <!-- Acceptor for every supported protocol -->
         <acceptor name="artemis">tcp://${host}:${default.port}?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true;supportAdvisory=${support-advisory};suppressInternalManagementObjects=${suppress-internal-management-objects}</acceptor>
${amqp-acceptor}${stomp-acceptor}${hornetq-acceptor}${mqtt-acceptor}
      </acceptors>

${cluster-security.settings}${cluster.settings}${replicated.settings}${shared-store.settings}
      <security-settings>
         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="${role}"/>
            <permission type="deleteNonDurableQueue" roles="${role}"/>
            <permission type="createDurableQueue" roles="${role}"/>
            <permission type="deleteDurableQueue" roles="${role}"/>
            <permission type="createAddress" roles="${role}"/>
            <permission type="deleteAddress" roles="${role}"/>
            <permission type="consume" roles="${role}"/>
            <permission type="browse" roles="${role}"/>
            <permission type="send" roles="${role}"/>
            <!-- we need this otherwise ./artemis data imp wouldn't work -->
            <permission type="manage" roles="${role}"/>
         </security-setting>
      </security-settings>

      <address-settings>
         <!-- if you define auto-create on certain queues, management has to be auto-create -->
         <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>${full-policy}</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>

         <!-- default for catch-all -->
         <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>${full-policy}</address-full-policy>
            <auto-create-queues>${auto-create}</auto-create-queues>
            <auto-create-addresses>${auto-create}</auto-create-addresses>
            <auto-create-jms-queues>${auto-create}</auto-create-jms-queues>
            <auto-create-jms-topics>${auto-create}</auto-create-jms-topics>
            <auto-delete-queues>${auto-delete}</auto-delete-queues>
            <auto-delete-addresses>${auto-delete}</auto-delete-addresses>
         </address-setting>
      </address-settings>

      <addresses>
         <address name="DLQ">
            <anycast>
               <queue name="DLQ" />
            </anycast>
         </address>
         <address name="ExpiryQueue">
            <anycast>
               <queue name="ExpiryQueue" />
            </anycast>
         </address>${address-queue.settings}
      </addresses>

      <broker-plugins>
         <broker-plugin class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
            <property key="LOG_ALL_EVENTS" value="true"/>
            <property key="LOG_CONNECTION_EVENTS" value="true"/>
            <property key="LOG_SESSION_EVENTS" value="true"/>
            <property key="LOG_CONSUMER_EVENTS" value="true"/>
            <property key="LOG_DELIVERING_EVENTS" value="true"/>
            <property key="LOG_SENDING_EVENTS" value="true"/>
            <property key="LOG_INTERNAL_EVENTS" value="true"/>
         </broker-plugin>
      </broker-plugins>


   </core>
</configuration>

This is my broker.xml configuration.
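While debugging, I also sketched an alternative that avoids JGroups entirely: a static cluster connection using the stable DNS names a headless Service gives StatefulSet pods. Roughly (service/pod names are placeholders, and each broker would list the others while referencing its own connector):

<connectors>
   <!-- placeholder DNS names from a headless Service named "artemis-headless" -->
   <connector name="artemis-0">tcp://artemis-0.artemis-headless.default.svc.cluster.local:61618</connector>
   <connector name="artemis-1">tcp://artemis-1.artemis-headless.default.svc.cluster.local:61618</connector>
</connectors>

<cluster-connections>
   <cluster-connection name="artemis-cluster">
      <!-- on pod artemis-0; pod artemis-1 would reference artemis-1 here -->
      <connector-ref>artemis-0</connector-ref>
      <retry-interval>500</retry-interval>
      <message-load-balancing>ON_DEMAND</message-load-balancing>
      <max-hops>1</max-hops>
      <static-connectors>
         <connector-ref>artemis-1</connector-ref>
      </static-connectors>
   </cluster-connection>
</cluster-connections>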

<config xmlns="urn:org:jgroups"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.0.xsd">

  <TCP
    enable_diagnostics="true"
    bind_addr="match-interface:eth0,lo"
    bind_port="7800"
    recv_buf_size="20000000"
    send_buf_size="640000"
    max_bundle_size="64000"
    max_bundle_timeout="30"
    sock_conn_timeout="300"

    thread_pool.enabled="true"
    thread_pool.min_threads="1"
    thread_pool.max_threads="10"
    thread_pool.keep_alive_time="5000"
    thread_pool.queue_enabled="false"
    thread_pool.queue_max_size="100"
    thread_pool.rejection_policy="run"

    oob_thread_pool.enabled="true"
    oob_thread_pool.min_threads="1"
    oob_thread_pool.max_threads="8"
    oob_thread_pool.keep_alive_time="5000"
    oob_thread_pool.queue_enabled="true"
    oob_thread_pool.queue_max_size="100"
    oob_thread_pool.rejection_policy="run"
  />

  <!-- <TRACE/> -->

  <org.jgroups.protocols.kubernetes.KUBE_PING
    namespace="${KUBERNETES_NAMESPACE:default}"
    labels="${KUBERNETES_LABELS:app=custom-artemis-service}"
  />
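  <!-- KUBE_PING discovers members by querying the Kubernetes API for pods matching
       the namespace/labels above; it needs the jgroups-kubernetes jar on the classpath
       and a service account with permission to list pods in that namespace. -->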

  <MERGE3 min_interval="10000" max_interval="30000"/>
  <FD_SOCK/>
  <FD timeout="10000" max_tries="5" />
  <VERIFY_SUSPECT timeout="1500" />
  <BARRIER />
  <pbcast.NAKACK use_mcast_xmit="false" retransmit_timeout="300,600,1200,2400,4800" discard_delivered_msgs="true"/>
  <UNICAST3
    xmit_table_num_rows="100"
    xmit_table_msgs_per_row="1000"
    xmit_table_max_compaction_time="30000"
  />
  <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="400000"/>
  <pbcast.GMS print_local_addr="true" join_timeout="3000" view_bundling="true"/>
  <FC max_credits="2000000" min_threshold="0.10"/>
  <FRAG2 frag_size="60000" />
  <pbcast.STATE_TRANSFER/>
  <pbcast.FLUSH timeout="0"/>

</config>

This is the jgroups.xml I used.
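I also came across JGroups' DNS_PING as a possible alternative to KUBE_PING; it resolves cluster members from a headless Service's DNS records, so no extra jars or API permissions are needed. Something like this stanza in place of KUBE_PING (the service name is a placeholder, and it requires a JGroups version that ships DNS_PING, i.e. 4.x or later):

<dns.DNS_PING
  dns_query="artemis-headless.default.svc.cluster.local"
  dns_record_type="A" />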

I used this config for the multi-pod setup in Kubernetes, and I added the relevant KUBE_PING jars to the lib folder. Although two pods came up, the Artemis web console behaved inconsistently when I tried to access it: after logging in, the user sometimes lands on a page asking to add connections, and sometimes is redirected back to the login page even after a successful login. The user never gets the UI that normally appears with a single broker, and I do not see any error logs either. Can anyone recommend the broker.xml changes needed for a Kubernetes deployment?
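For what it's worth, I suspect part of the console trouble is that both pods sit behind one regular Service, so successive HTTP requests can land on different pods and break the console's login session. To rule the brokers themselves out, I plan to port-forward to a single pod and check its console directly (the pod name is a placeholder):

kubectl port-forward pod/artemis-0 8161:8161
# then open http://localhost:8161/console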

1 Answer

Domenico Francesco Bruscino

ArtemisCloud.io provides an operator for deploying an ActiveMQ Artemis multi-broker setup on Kubernetes, see https://artemiscloud.io/blog/using_operator/ and https://artemiscloud.io/documentation/operator/deploying-brokers-operator.html
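With the operator installed, a multi-broker deployment reduces to a small custom resource along these lines (field names follow the ArtemisCloud documentation; the name and size are just examples):

apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: ex-aao
spec:
  deploymentPlan:
    size: 2                    # number of broker pods
    persistenceEnabled: true   # operator provisions one PVC per broker
    messageMigration: true     # drain messages when a broker is scaled down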