Large messages with Amazon SQS


I have an application that uses JMS to send files that are a few megabytes in size. Would it be possible to use Amazon SQS as the JMS provider for this application, as described here?

The problem is that the maximum size of an SQS message is 256 KB. One way around this is to split each file into multiple messages of up to 256 KB each. But if I do that, would having multiple producers send files at the same time break the architecture, since messages from different producers could become interleaved on the queue?
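For what it's worth, the interleaving concern itself is solvable by framing every chunk with a file ID and a sequence number, so a consumer can reassemble files even when chunks from different producers arrive mixed together. A minimal sketch, assuming a JSON envelope (the field names `file_id`, `seq`, `total`, and `body` are illustrative, not part of any AWS API):

```python
import json

CHUNK_SIZE = 256 * 1024  # SQS maximum message size is 256 KB

def split_into_messages(file_id, data, chunk_size=CHUNK_SIZE):
    """Frame a byte payload as an ordered list of SQS-sized message bodies."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return [
        json.dumps({
            "file_id": file_id,               # groups chunks of the same file
            "seq": i,                         # position within the file
            "total": len(chunks),             # lets the consumer detect completion
            "body": chunk.decode("latin-1"),  # illustrative encoding only
        })
        for i, chunk in enumerate(chunks)
    ]

def reassemble(message_bodies):
    """Group message bodies by file_id and rebuild each fully-received payload."""
    files = {}
    for raw in message_bodies:
        msg = json.loads(raw)
        files.setdefault(msg["file_id"], {})[msg["seq"]] = msg
    complete = {}
    for file_id, parts in files.items():
        total = next(iter(parts.values()))["total"]
        if len(parts) == total:  # only rebuild once every chunk has arrived
            complete[file_id] = b"".join(
                parts[i]["body"].encode("latin-1") for i in range(total)
            )
    return complete
```

Even if two producers' chunks interleave on the queue, the `file_id` keeps them apart. In practice you would base64-encode binary chunks and leave headroom for the JSON overhead when choosing the chunk size; as the answers below argue, though, storing the payload in S3 is usually the simpler route.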


There are 2 answers

Yakaas On

In this scenario you cannot send the original message through SQS directly; you will have to send a new, smaller message that carries a reference to the original payload. The reference can point to an S3 object or to a custom location, either on-premises or within AWS. The S3 option probably involves the least work and is the most cost-efficient to build and run.
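A sketch of this reference-message (claim-check) pattern: the producer uploads the payload to S3 and sends SQS only a small pointer, which the consumer later resolves. The field names below are illustrative, and the boto3 calls are shown as comments so the framing logic stands on its own:

```python
import json

def build_reference_message(bucket, key, size_bytes):
    """Producer side: build the small SQS body that points at the payload in S3."""
    # With boto3 this would be preceded by the actual upload, e.g.:
    #   boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=payload)
    return json.dumps({"s3_bucket": bucket, "s3_key": key, "size": size_bytes})

def parse_reference_message(body):
    """Consumer side: recover the S3 location from the SQS message body."""
    msg = json.loads(body)
    # The consumer would then fetch the real payload, e.g.:
    #   boto3.client("s3").get_object(Bucket=msg["s3_bucket"], Key=msg["s3_key"])
    return msg["s3_bucket"], msg["s3_key"]
```

Note that the Amazon SQS Extended Client Library for Java implements this same pattern out of the box, transparently storing payloads larger than 256 KB in S3, which may be relevant since the question is about a JMS application.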

If you go with the S3 option, an AWS Lambda function can be used to drop the reference message into SQS.
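A hypothetical Lambda handler for this might look like the following. It parses the standard S3 event notification structure and builds the SQS message body per object; the actual `send_message` call is left as a comment, and `QUEUE_URL` would come from the function's configuration:

```python
import json

def handler(event, context=None):
    """Turn S3 ObjectCreated event records into SQS reference-message bodies."""
    bodies = []
    for record in event.get("Records", []):
        s3_info = record["s3"]  # standard S3 event notification structure
        body = json.dumps({
            "s3_bucket": s3_info["bucket"]["name"],
            "s3_key": s3_info["object"]["key"],
        })
        # With boto3, the message would be sent here:
        #   boto3.client("sqs").send_message(QueueUrl=QUEUE_URL, MessageBody=body)
        bodies.append(body)
    return bodies
```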

On a side note, the original message considered here seems to be self-contained. It may be a good idea to revisit the contents of the message; you may find ways to trim it and send only references around, which will result in a smaller payload.

Naveen Vijay On

If everything is in the same region, the latency and data-transfer cost are minimal. Storing each item in S3 and sending only the object reference through SQS lets your solution handle payloads of any size, and takes the burden of scaling with the number and size of items off your hands.

While I said the data-transfer costs are minimal, you will still incur storage costs in S3; you can use S3 lifecycle rules to delete the objects once they are no longer needed.
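For example, a lifecycle configuration like this (the `sqs-payloads/` prefix is illustrative) would expire transferred payloads automatically after seven days, as a safety net in case the consumer fails to delete an object after processing it:

```json
{
  "Rules": [
    {
      "ID": "expire-sqs-payloads",
      "Filter": { "Prefix": "sqs-payloads/" },
      "Status": "Enabled",
      "Expiration": { "Days": 7 }
    }
  ]
}
```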

@D.Luffy mentioned an important and exciting option with Lambda: keep adding items to S3, enable S3 event notifications, have those notifications land in the queue, and then process each queue item (for example, transfer it to another EC2 instance), making the whole solution fire-and-forget.

Please do not hesitate to use S3 alongside SQS.