One of our applications (application1) uses ActiveMQ endpoints with the following configuration:

activemq.broker.url = failover:(tcp://master:61616,tcp://slave:61616)?randomize=false

application1 produces events. A second application (application2) points its consumer at the same endpoints:

activemq.broker.url = failover:(tcp://master:61616,tcp://slave:61616)?randomize=false

application2 consumes each event, processes it, and puts the result onto a different queue on the same broker (broker1). This setup is a multi-tenant service.

Now we want to migrate to Amazon MQ, which is a separate environment with its own underlying KahaDB. How can we achieve the migration without losing any events?

If we point application1 at the new broker2 endpoints:

activemq.broker.url = failover:(tcp://master2:61616,tcp://slave2:61616)?randomize=false

it will start producing messages to broker2, which is Amazon MQ.

Similarly, we can reconfigure application2, which consumes events and relays them back to a broker for application1 to consume, with:

activemq.broker.url = failover:(tcp://master:61616,tcp://slave:61616,tcp://master2:61616,tcp://slave2:61616)?randomize=true

With this setting, application2 can still produce events ready to be consumed by application1.

The problem: with randomize=true, application2 may publish an event back to broker1, the old endpoints (failover:(tcp://master:61616,tcp://slave:61616)), where application1 is no longer listening, so that event will never be consumed.

How can I migrate to Amazon MQ without losing any events? Is this the right approach, or what else can be done here?

1 Answer

anshul Gupta

We fixed it by writing a separate consumer for each broker, one for ActiveMQ and one for Amazon MQ, and registering both of them during application startup. That way, events left on the old broker are still drained while new traffic flows through Amazon MQ.
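The registration pattern described above can be sketched roughly as follows. This is a minimal, hypothetical illustration: the class names (ConsumerRegistry, BrokerConsumer) and the start() stub are invented for this sketch, and a real implementation would create an actual JMS connection per broker URL instead of just flipping a flag.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the fix: one consumer per broker, both registered at startup,
// so events on the old broker keep getting drained during the migration.
public class ConsumerRegistry {

    // Minimal stand-in for a message listener bound to a single broker URL.
    static class BrokerConsumer {
        final String name;
        final String brokerUrl;
        boolean running;

        BrokerConsumer(String name, String brokerUrl) {
            this.name = name;
            this.brokerUrl = brokerUrl;
        }

        void start() {
            // A real consumer would build a ConnectionFactory for brokerUrl
            // here and begin receiving messages; this sketch only records
            // that the consumer was started.
            running = true;
        }
    }

    private final List<BrokerConsumer> consumers = new ArrayList<>();

    void register(BrokerConsumer consumer) {
        consumers.add(consumer);
    }

    // Invoked once during application startup; starts every registered
    // consumer and reports how many are now running.
    int startAll() {
        for (BrokerConsumer consumer : consumers) {
            consumer.start();
        }
        return consumers.size();
    }

    public static void main(String[] args) {
        ConsumerRegistry registry = new ConsumerRegistry();
        // Old ActiveMQ endpoints, kept only to drain remaining events.
        registry.register(new BrokerConsumer("activemq",
            "failover:(tcp://master:61616,tcp://slave:61616)?randomize=false"));
        // New Amazon MQ endpoints, carrying all new traffic.
        registry.register(new BrokerConsumer("amazonmq",
            "failover:(tcp://master2:61616,tcp://slave2:61616)?randomize=false"));
        System.out.println("started " + registry.startAll() + " consumers");
    }
}
```

Once the old broker has been fully drained, the ActiveMQ consumer can simply be unregistered and the old endpoints decommissioned.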