I am having trouble with a Redis cache set up to store serialized Java objects (average size ~30 KB). We recently changed the implementation so that none of the cached objects have an expiration (TTL == -1).
I then changed redis.conf as follows:
maxmemory-policy allkeys-lru (was volatile-ttl)
maxmemory-samples 7 (was the default of 3)
maxmemory 1gb (was 300mb)
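For reference, the same settings can also be applied to the running instance with CONFIG SET (all three should be dynamically settable, and CONFIG SET maxmemory should accept size suffixes like 1gb):
redis-cli config set maxmemory-policy allkeys-lru
redis-cli config set maxmemory-samples 7
redis-cli config set maxmemory 1gb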
We have the following 'save' rules in place:
save 900 1
save 300 10
save 60 10000
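To double-check what the running instance actually ended up with, the effective values can be read back with CONFIG GET:
redis-cli config get maxmemory
redis-cli config get maxmemory-policy
redis-cli config get save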
The issue is that once the cache has accumulated somewhere between 1,000 and 8,000 keys, the whole cache is flushed back to 0 keys and starts filling again.
I cannot find the source of this. I tried running
redis-cli monitor | grep "DEL"
but it shows no large number of deletes being issued. I also tried
redis-cli monitor | grep flush
but this produces no output at all over several minutes of watching. I tried restarting the Redis service after increasing the maxmemory setting (although I shouldn't have to), but that made no difference in behaviour either.
Has anyone seen anything like this before? NOTE: we are on Redis 2.8; if this was fixed in a later version, I am willing to upgrade. Please let me know if you need more details to narrow the issue down.
thanks!
The issue ended up being user error. In our client code, the program was calling redis.del(keyname) and then redis.flushDB(); the programmer had added the flushDB() call in the mistaken belief that it was needed to push the changes to the database, not realizing it wipes every key in the database.
I would have found the issue sooner if I had refined my grep slightly:
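Something like the following, matching case-insensitively so the FLUSHDB command is not missed, would have caught it right away:
redis-cli monitor | grep -iE "del|flush"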