I have a self-hosted MongoDB deployment on an AWS EKS cluster running Kubernetes 1.24.
Every time I put some workload on the cluster, the MongoDB shards eat most of the node's RAM. I'm running on t3.medium instances, and each shard uses ~2GB. Since there are multiple shards on each node, memory fills up and the node becomes unavailable.
I've tried limiting the WiredTiger cache size to 0.25GB, but it doesn't seem to work.
I've also tried manually clearing the cache with db.collection.getPlanCache().clear(), but it does nothing.
db.collection.getPlanCache().list() returns an empty array.
I've also tried checking the storage engine, but both db.serverStatus().wiredTiger and db.serverStatus().storageEngine are undefined in mongosh.
I'm using the bitnami mongodb-sharded chart, with the current values:
mongodb-sharded:
  shards: 8
  shardsvr:
    persistence:
      resourcePolicy: "keep"
      enabled: true
      size: 100Gi
  configsvr:
    replicaCount: 2
  mongos:
    replicaCount: 2
    configCM: mongos-configmap
The mongos ConfigMap is this one:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongos-configmap
data:
  mongo.conf: |
    storage:
      wiredTiger:
        engineConfig:
          cacheSizeGB: 0.25
      inMemory:
        engineConfig:
          inMemorySizeGB: 0.25
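For reference, db.serverCmdLineOpts() reports the config file and the parsed options a process was actually started with, so running it from mongosh against the mongos shows whether this file is being picked up at all. The output shape below is illustrative:

// Run from mongosh while connected to the mongos
db.serverCmdLineOpts()
// Returns { argv: [ ... ], parsed: { ... }, ok: 1 };
// if parsed.config is missing (or points elsewhere),
// the file above was never loaded.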
Solved the various issues:
- The ConfigMap key was mongo.conf instead of mongos.conf. This meant it was creating a different, unused config file.
- The mongos are not the ones with the storage engine: that's on the mongod (the shards). So the config should go in shardsvr.dataNode.configCM or shardsvr.dataNode.mongodbExtraFlags.
In my case this is how I set up the values.yaml.
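A minimal sketch of that setup, assuming the mongodbExtraFlags route (--wiredTigerCacheSizeGB is the command-line equivalent of the cacheSizeGB setting above), with the rest of the values carried over:

mongodb-sharded:
  shards: 8
  shardsvr:
    dataNode:
      # Sketch: pass the cache limit straight to each shard's mongod.
      # This is the flag form of storage.wiredTiger.engineConfig.cacheSizeGB.
      mongodbExtraFlags:
        - "--wiredTigerCacheSizeGB=0.25"
    persistence:
      resourcePolicy: "keep"
      enabled: true
      size: 100Gi
  configsvr:
    replicaCount: 2
  mongos:
    replicaCount: 2

The shardsvr.dataNode.configCM route works the same way, just with the storage section in a ConfigMap mounted for the shards instead of for the mongos.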
Another note: the reason db.serverStatus().storageEngine and db.serverStatus().wiredTiger were undefined was that I was running mongosh from MongoDB Compass, which actually connects to the mongos (which does not have a storage engine). If instead you shell into one of the shards and run mongosh (in my case it's at /opt/bitnami/mongodb/bin/), the commands work properly.
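Concretely, something along these lines (the pod name is an assumption based on the chart's shard<N>-data naming; adjust for your release):

# Shell into a shard data pod and start mongosh on the mongod itself
kubectl exec -it my-release-mongodb-sharded-shard0-data-0 -- /opt/bitnami/mongodb/bin/mongosh

// Inside mongosh, on a shard these now resolve:
db.serverStatus().storageEngine.name
// --> "wiredTiger"
db.serverStatus().wiredTiger.cache["maximum bytes configured"]
// --> 268435456 once the 0.25GB limit is applied (0.25 * 1024^3 bytes)

This also explains the original ~2GB per shard: by default WiredTiger sizes its cache at 50% of (RAM - 1GB), i.e. 1.5GB on a 4GB t3.medium, plus process overhead.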