How does KODO JDO's distributed cache perform?


Does anyone have experience with KODO JDO's distributed cache mechanism? I would like to know:

1) What is the latency of distributed cache updates? That is, if two users are hitting two separate caches (i.e., on two different JVMs), working with the same data, and one of them makes an update, when will the other user, going through the other cache, see the change?

2) How much data is transferred between JVMs? When an update is made in one cache, does it simply tell the other caches to drop the affected objects by sending the primary keys of the objects to flush? (My concern is the network traffic/overhead of managing the distributed cache.)

3) When you have external feeds updating your database throughout the day (i.e., changes not coming in through your application), how easy is it to externally invoke a cache flush? (A rough sketch of the kind of hook I have in mind follows below.)

Our application runs in a WebLogic cluster of 12 JVMs, and we are considering enabling the distributed cache to improve the performance of pulling large object graphs from our database, which are currently not cached. Before doing so, I would like to hear some real-world experience with #1, #2, and #3. Thanks.
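For reference, here is a rough sketch of the externally triggered flush I have in mind for #3. It uses the standard JDO 2.0 DataStoreCache API purely for illustration; Kodo 3.x predates JDO 2.0 and exposes its own data-cache interface, so treat the class and method names below as assumptions about the shape of the call rather than the exact Kodo API.

    import java.util.Collection;

    import javax.jdo.PersistenceManagerFactory;
    import javax.jdo.datastore.DataStoreCache;

    public class ExternalCacheFlush {

        // Evict specific objects (by JDO object id) from the second-level
        // data cache after an external feed has changed the rows behind them.
        public static void evictStaleObjects(PersistenceManagerFactory pmf,
                                             Collection<Object> objectIds) {
            DataStoreCache cache = pmf.getDataStoreCache();
            for (Object oid : objectIds) {
                cache.evict(oid);
            }
        }

        // The blunt option: drop everything from the data cache.
        public static void evictEverything(PersistenceManagerFactory pmf) {
            pmf.getDataStoreCache().evictAll();
        }
    }

Whatever the exact API, such a hook would presumably have to be exposed to the feed process somehow (a servlet, an MBean, or a small admin job), and each JVM in the cluster would need the eviction applied unless the eviction itself is broadcast the way commits are.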

1 Answer

This is a partial answer, but I believe it is still helpful. From http://docs.oracle.com/cd/E13189_01/kodo/docs303/ref_guide_cache.html:

When used in conjunction with a kodo.event.RemoteCommitProvider, commit information is communicated to other JVMs via JMS or TCP, and remote caches are invalidated based on this information.

It is not stated whether this commit notification is included as part of the original transaction (one would hope), nor what the lag time or overhead of the operation is, nor how well it scales (e.g., how it performs if you are coordinating 15 JVMs and have multiple users updating the same data).
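For what it's worth, the caching guide linked above configures the data cache and the remote commit provider together. Below is a sketch of the two settings expressed as Java Properties to be merged into the existing kodo.properties / PMF bootstrap; the property names come from that guide, while the plugin parameters (cache size, node addresses, JMS topic name) are illustrative assumptions to verify against your Kodo version.

    import java.util.Properties;

    // Cache-related settings one might merge into an existing kodo.properties
    // (or pass to whatever PMF bootstrap the application already uses).
    public class DataCacheSettings {

        public static Properties cacheProperties() {
            Properties props = new Properties();

            // Enable the second-level data cache in each JVM.
            props.setProperty("kodo.DataCache", "true(CacheSize=5000)");

            // Broadcast commit information to the other cluster members over
            // TCP so their caches can invalidate the affected objects.
            props.setProperty("kodo.RemoteCommitProvider",
                    "tcp(Addresses=node1:5636;node2:5636)");

            // Alternative: let a JMS topic carry the commit events instead.
            // props.setProperty("kodo.RemoteCommitProvider",
            //         "jms(Topic=topic/KodoCommitProviderTopic)");

            return props;
        }
    }

The quoted passage says remote caches are invalidated based on the communicated commit information, which at least suggests the answer to #2 is closer to "drop these objects" messages than to shipping whole object graphs, though the guide does not spell that out.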