Redis (via GCP Memorystore) is out of memory although maxmemory-policy is allkeys-lru

In a GCP App Engine application using Memorystore for Redis (6.x) with maxmemory-policy set to allkeys-lru, we get out-of-memory errors.

Furthermore, every key we write to Redis has a TTL of 2 hours. In spite of this, the memory usage graph grows steadily (with occasional troughs) until it hits the maximum memory. When it does, the LRU policy does not seem to reclaim any memory, and every call to Redis fails with an "out of memory" error.
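For context, the keys are written in the usual cache-aside way, roughly like the sketch below (redis-py; the host, key naming and database loader are placeholders, not our actual code):

    import redis

    TTL_SECONDS = 2 * 60 * 60  # every key expires after 2 hours

    # Connect to the Memorystore instance (host/port are placeholders).
    r = redis.Redis(host="10.0.0.3", port=6379)

    def load_from_database(record_id: str) -> bytes:
        # Stand-in for the real database query.
        return f"row-for-{record_id}".encode()

    def get_record(record_id: str) -> bytes:
        """Cache-aside read: try Redis first, fall back to the database."""
        key = f"record:{record_id}"
        cached = r.get(key)
        if cached is not None:
            return cached
        value = load_from_database(record_id)
        r.set(key, value, ex=TTL_SECONDS)  # write with a 2-hour TTL
        return value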

It seems to me that reclaiming memory with an LRU policy is the core feature of a cache server, and I cannot imagine Redis has an actual bug there. So are there other configuration options we should look into? Of all the maxmemory-policy options, allkeys-lru seems the most appropriate for a database access cache.

1 Answer

Answered by patb:

The explanation is that GCP Memorystore allocates a fixed memory size, say 1 GB, and by default it also sets Redis's maxmemory to that same amount. But Redis enforces the limit by briefly going past it. Quoting from the documentation (https://redis.io/docs/reference/eviction/): "We continuously cross the boundaries of the memory limit, by going over it, and then by evicting keys to return back under the limits". GCP gives no leeway, so crossing the limit is not an option, and the write fails with an out-of-memory error instead.
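You can watch this cross-then-evict behaviour through the INFO counters while writing. The sketch below (redis-py; host, key names and payload size are assumptions) prints used_memory against maxmemory together with evicted_keys, which only starts climbing once eviction has room to work; on an instance where maxmemory equals the full capacity, the loop instead ends in the "out of memory" error described above:

    import os
    import redis

    # Placeholder host: point this at an instance you are allowed to fill up.
    r = redis.Redis(host=os.environ.get("REDIS_HOST", "localhost"), port=6379)

    payload = b"x" * 1024 * 1024  # 1 MiB values to fill memory quickly

    for i in range(10_000):
        r.set(f"filler:{i}", payload, ex=7200)
        if i % 500 == 0:
            mem = r.info("memory")
            stats = r.info("stats")
            print(
                f"used={mem['used_memory_human']} "
                f"max={mem['maxmemory_human']} "
                f"evicted_keys={stats['evicted_keys']}"
            )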

The solution is to always set maxmemory-percent=95 in your Memorystore configuration, which triggers the LRU eviction at 95% of the maximum instead of 100%. The appropriate value may well be something else, but 95% seems fine as a first guess.
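Applied from a deployment script, the change looks roughly like the sketch below. The instance name and region are placeholders, and the parameter name is the one given above, so verify it against the Redis configurations your Memorystore version actually accepts (gcloud's --update-redis-config flag is the documented way to change them):

    import subprocess

    # Placeholders for the actual instance and region.
    INSTANCE = "my-cache"
    REGION = "europe-west1"

    # Lower the eviction threshold below the instance capacity so the LRU
    # policy has headroom to evict before Memorystore's hard limit is hit.
    subprocess.run(
        [
            "gcloud", "redis", "instances", "update", INSTANCE,
            "--region", REGION,
            "--update-redis-config", "maxmemory-percent=95",
        ],
        check=True,
    )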

This should really be a default setting when a project creates a Memorystore for Redis instance.