We have a MongoDB collection, let's call it Accounts. Each account holds an array of devices.
// devicesSchema must be declared before accountSchema references it
const devicesSchema = new mongoose.Schema(
  {
    deviceId: String,
    platform: String,
    ...
  },
  { _id: false, toJSON: { virtuals: true }, toObject: { virtuals: true } }
);

const accountSchema = new mongoose.Schema(
  {
    ...
    devices: [devicesSchema],
    ...
  }
);
We have a webhook that receives account events that update this array of devices. For example, in a happy path situation, a webhook event executes this code:
const existingDevices = account.devices;
// do stuff to the existingDevices array
const newDevices = existingDevices.filter(...).map(...).concat(...);
await Account.findOneAndUpdate(
  { _id: account._id },
  { devices: newDevices }
);
This works fine in isolation. However, when two webhooks for the same account arrive concurrently, both read the same devices array, and whichever write completes last silently overwrites the other: a classic lost-update race condition, where only one of the two updates survives.
Is there a MongoDB way of solving this? I am familiar with other approaches, such as passing the events from the webhook into a queue that processes them sequentially, or webhook gateways that deliver events to our endpoint one at a time. But I was specifically interested in whether there is an efficient way to do this natively in MongoDB, for example, some locking mechanism.
MongoDB does provide optimistic concurrency control. A similar issue is discussed here: https://www.mongodb.com/community/forums/t/does-mongodb-transactions-implement-optimistic-concurrency-locking-by-default/234943

In Mongoose specifically, you can enable the `optimisticConcurrency: true` schema option, which uses the document's version key (`__v`) to reject writes based on a stale read. Be aware that this does not eliminate conflicts; if contention is very high, updates will still collide and you will need to retry them. Let me know if it works for you :)
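To make the pattern concrete, here is a minimal sketch of a version-checked update with a retry loop. It assumes the default Mongoose `__v` version key; the helper name `updateDevicesWithRetry` and the `applyWebhook` callback are illustrative, not part of any API:

// Optimistic-concurrency sketch: the filter matches only if nobody else
// bumped __v between our read and our write; $inc bumps the version so a
// concurrent writer's stale filter will miss and its write will be rejected.
async function updateDevicesWithRetry(Account, accountId, applyWebhook, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const account = await Account.findById(accountId);
    const newDevices = applyWebhook(account.devices);
    const updated = await Account.findOneAndUpdate(
      { _id: account._id, __v: account.__v },          // stale-read guard
      { $set: { devices: newDevices }, $inc: { __v: 1 } }
    );
    if (updated) return updated;  // our write won the race
    // Another webhook won; loop re-reads the fresh document and retries.
  }
  throw new Error('devices update failed after retries');
}

Under low contention the first attempt almost always succeeds, so this costs one extra filter condition per write; under heavy contention you still end up serializing through retries, which is why a queue can be the better fit there.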