In the configuration file of our secondary database we set an oplog size of 99 GB (oplogSizeMB: 99000), but after only a few hours we already see more than 120 GB on disk (one file of 74 GB and a second of 65 GB):
-rw-rw-r-- 1 flugpool fp 16384 Dec 19 10:00 4-4352434068697173865.wt
-rw-rw-r-- 1 flugpool fp 16384 Dec 19 10:00 2-4352434068697173865.wt
-rw-rw-r-- 1 flugpool fp 16384 Dec 19 10:00 0-4352434068697173865.wt
-rw-rw-r-- 1 flugpool fp 69907304448 Dec 19 16:24 10-4352434068697173865.wt
-rw-rw-r-- 1 flugpool fp 36864 Dec 19 17:13 8-4352434068697173865.wt
-rw-rw-r-- 1 flugpool fp 78893129728 Dec 19 17:14 12-4352434068697173865.wt
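To make the discrepancy concrete, here is a minimal sketch that compares the configured cap with the two large WiredTiger files from the listing above (byte counts copied from the `ls -l` output; it assumes oplogSizeMB is interpreted as mebibytes):

```python
# Configured oplog cap from the config file (oplogSizeMB: 99000).
configured_bytes = 99_000 * 1024**2

# The two large .wt files from the ls -l listing above.
observed_bytes = 69_907_304_448 + 78_893_129_728

print(f"configured cap:   {configured_bytes / 1024**3:.1f} GiB")  # 96.7 GiB
print(f"observed on disk: {observed_bytes / 1024**3:.1f} GiB")    # 138.6 GiB
```

So the on-disk usage of these two files alone exceeds the configured cap by roughly 40 GiB.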
Is this a bug in version 3.4.0? We did not encounter this problem with MongoDB 3.2.1. Or do we misunderstand something about the behaviour of the oplog?
We are running MongoDB on SUSE Linux Enterprise Server 11 (x86_64). Our mongod configuration file:
systemLog:
  destination: file
  path: "[...]/mongodb.log"
  logAppend: true
  timeStampFormat: ctime
  quiet: true
processManagement:
  pidFilePath: "[...]/mongodb.pid"
  fork: true
operationProfiling:
  slowOpThresholdMs: 10000
net:
  port: 27017
  http:
    enabled: false
    RESTInterfaceEnabled: false
storage:
  dbPath: "[...]/data/db"
  journal:
    enabled: false
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 70
      directoryForIndexes: true
    indexConfig:
      prefixCompression: true
replication:
  oplogSizeMB: 99000
  replSetName: "ANGPOOL_REPLSET"