I am working on a Java EE application that requires a large amount of in-memory, server-side, application-level data (i.e. it is not user-level data). By application-level data I mean data that is the same for all users, e.g. master data. Until now we have been using EHCache for 15-20 concurrent users on a single Windows server with 16 GB of RAM, and we set the heap size to 8 GB for our application.
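Roughly, our current setup is something like the following sketch against the Ehcache 2.x programmatic API (the cache name "masterData" and the small wrapper class are placeholders, not our real code):

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;
import net.sf.ehcache.config.CacheConfiguration;
import net.sf.ehcache.config.Configuration;

public class MasterDataCache {

    private final Cache cache;

    public MasterDataCache() {
        // Everything lives on the 8 GB heap; "masterData" is just an illustrative name.
        Configuration config = new Configuration()
                .cache(new CacheConfiguration("masterData", 0)  // 0 = no limit on on-heap entries
                        .eternal(true));                        // application-level data never expires
        this.cache = new CacheManager(config).getCache("masterData");
    }

    public void put(String key, Object value) {
        cache.put(new Element(key, value));
    }

    public Object get(String key) {
        Element e = cache.get(key);
        return e == null ? null : e.getObjectValue();
    }
}
```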
Now we need to redesign the application so that it can support more than 500 concurrent users, which will increase the in-memory data requirement even further.
I would like your views on how to make the application scalable enough in such a scenario.
As per my understanding, the following solutions could help:
Implement load balancing so that the load is divided, but since this is application-level data, the in-memory data on each server will still be high. It will help only to some extent.
Implement this as a stateless operation rather than keeping the data in a cache (see the sketch after this list), but this will have a performance impact. I have read that statelessness is key to scalability. I would like to avoid this as it would require too much work.
Use BigMemory from Terracotta in combination with EHCache. As I understand it, it keeps data on disk in a special manner so that data access speed is still good. Please note this is not a free product; do we have any free option like this?
Opt for a cloud-based memory architecture? I am not very familiar with this.
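To illustrate what I mean by the stateless option: every request would go straight to the database instead of holding the data in server memory. A minimal sketch (the class MasterDataDao, the table and the query are only illustrative):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import javax.sql.DataSource;

public class MasterDataDao {

    private final DataSource dataSource;

    public MasterDataDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // No server-side state is kept, so scaling out just means adding nodes behind
    // the load balancer, but every call pays the cost of a database round trip.
    public List<String> findProductCodes() throws SQLException {
        List<String> codes = new ArrayList<>();
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT code FROM product");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                codes.add(rs.getString("code"));
            }
        }
        return codes;
    }
}
```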
Any suggestions will be highly appreciated.
You can use BigMemory Go, which enables you to use up to 32 GB of memory (RAM) per server. I think you slightly misunderstood what BigMemory is: it doesn't store data on disk (it can, if you want the process to be restartable); it keeps the data in off-heap memory, so it is always accessed directly from RAM.
See http://terracotta.org/products/bigmemorygo for more details. But given you're below that 32GB limit, it'd be all free for you...
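A rough sketch of what that looks like with the Ehcache 2.x programmatic API (cache name and sizes are placeholders; off-heap storage generally also requires the BigMemory Go jars and license key on the classpath and -XX:MaxDirectMemorySize set at least as large as the off-heap pool):

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;
import net.sf.ehcache.config.CacheConfiguration;
import net.sf.ehcache.config.Configuration;
import net.sf.ehcache.config.MemoryUnit;

public class OffHeapMasterDataCache {

    private final Cache cache;

    public OffHeapMasterDataCache() {
        Configuration config = new Configuration()
                .cache(new CacheConfiguration("masterData", 10000)         // small hot set on heap
                        .eternal(true)                                      // master data never expires
                        .overflowToOffHeap(true)                            // enable the off-heap (BigMemory) tier
                        .maxBytesLocalOffHeap(20, MemoryUnit.GIGABYTES));   // bulk of the data, outside GC pauses
        this.cache = new CacheManager(config).getCache("masterData");
    }

    // Values stored off heap must be Serializable.
    public void put(String key, java.io.Serializable value) {
        cache.put(new Element(key, value));
    }

    public Object get(String key) {
        Element e = cache.get(key);
        return e == null ? null : e.getObjectValue();
    }
}
```

The idea is that only a small hot set stays on the Java heap, so you can keep the heap modest and GC pauses short while the bulk of the master data sits in the off-heap pool.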