Let's say I have a model `Document`:

```
Document
  document_id -> int
  client_id   -> int
```
I want `document_id` to auto-increment per `client_id` without a race condition. I have a unique constraint on `(client_id, document_id)`.
```
client_id  document_id
1          1
1          2
2          1
1          3
```
Approach 1: Right now I'm setting the document id in a `before_create` callback, but this leads to a race condition: it assigns the same value when multiple documents arrive for the same client at the same time.

```ruby
before_create :set_identifier

def set_identifier
  self.document_id = client.documents.maximum(:document_id).next
end
```
This raises the following error when two documents are created at the same time:

```
ActiveRecord::RecordNotUnique
PG::UniqueViolation: ERROR: duplicate key value violates unique constraint
```
Approach 2: I tried using a DB trigger in PostgreSQL. It works fine, but the assigned value isn't reflected on the in-memory object; I need to force a reload to see it.
DB Function:
```sql
CREATE OR REPLACE FUNCTION set_document_identifier()
RETURNS TRIGGER AS $$
DECLARE
  max_identifier INTEGER;
BEGIN
  SELECT MAX(document_id) INTO max_identifier
  FROM documents
  WHERE client_id = NEW.client_id;

  IF max_identifier IS NULL THEN
    max_identifier := 0;
  END IF;

  NEW.document_id := max_identifier + 1;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```
DB Trigger:
```sql
CREATE TRIGGER before_insert_trigger_documents
BEFORE INSERT ON documents
FOR EACH ROW
EXECUTE FUNCTION set_document_identifier();
```
Rails:
```ruby
doc = Document.create!(client_id: 1)
doc.document_id        # => nil
doc.reload.document_id # => 1
```
Is there an out-of-the-box solution that Rails provides, or a workaround on the trigger side, so that the value is set on the object before the transaction commits?
You obviously can't have unique incrementing values per `client_id` and also have multiple users generating their own next value concurrently; that's just how the universe works. The simplest way to deal with this is (as you discovered) to have a single point where the value is assigned: in the database.
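One way to see why serializing the assignment works: if "read the current max, then insert max + 1" runs under a per-client lock, no two writers for the same client can ever read the same max. Below is a minimal in-process sketch of that idea, where a Ruby `Mutex` per client stands in for what you would get in Postgres from `pg_advisory_xact_lock(client_id)` or `SELECT ... FOR UPDATE` on the client row inside the insert transaction (the `TABLES` store and method names are illustrative, not Rails API).

```ruby
# Per-client state: a lock plus the ids already issued for that client.
# The outer guard only protects lazy creation of the per-client entry.
TABLES       = {}
TABLES_GUARD = Mutex.new

def client_state(client_id)
  TABLES_GUARD.synchronize { TABLES[client_id] ||= { lock: Mutex.new, ids: [] } }
end

# Serialize "read max, insert max + 1" per client: concurrent callers for
# the same client queue up on the lock instead of racing.
def create_document_locked(client_id)
  state = client_state(client_id)
  state[:lock].synchronize do
    next_id = (state[:ids].max || 0) + 1
    state[:ids] << next_id
    next_id
  end
end
```

In Rails terms the equivalent would be opening a transaction, taking the advisory or row lock, computing the max, and inserting, all before commit, so the computed value is known to your Ruby object without a reload.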
You should also either (a) lock appropriately to prevent concurrent inserts, or (b) expect duplicate-key errors and handle them.
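Option (b) can be sketched without any locking around the read: compute the candidate id optimistically, let the unique index reject the loser of a race, and retry. The in-memory `DocumentStore` below stands in for the `documents` table with its `(client_id, document_id)` unique index; in Rails the `rescue` would catch `ActiveRecord::RecordNotUnique` instead of the illustrative `DuplicateKeyError` used here.

```ruby
class DuplicateKeyError < StandardError; end

# Stand-in for the documents table: only the insert itself is atomic,
# just like a real unique index, so the read-then-insert gap can race.
class DocumentStore
  def initialize
    @rows  = []        # [client_id, document_id] pairs
    @mutex = Mutex.new
  end

  def max_document_id(client_id)
    @mutex.synchronize do
      @rows.select { |c, _| c == client_id }.map { |_, d| d }.max || 0
    end
  end

  def insert(client_id, document_id)
    @mutex.synchronize do
      raise DuplicateKeyError if @rows.include?([client_id, document_id])
      @rows << [client_id, document_id]
    end
  end
end

# Optimistic create: try max + 1, retry a bounded number of times when a
# concurrent writer claimed the same id first.
def create_document(store, client_id, attempts: 10)
  attempts.times do
    candidate = store.max_document_id(client_id) + 1
    begin
      store.insert(client_id, candidate)
      return candidate
    rescue DuplicateKeyError
      # Another writer won the race for this id; recompute and retry.
    end
  end
  raise DuplicateKeyError, "gave up after #{attempts} attempts"
end
```

The same shape works with your existing `before_create` callback: keep it as-is, wrap the `create!` call in a retry loop on `ActiveRecord::RecordNotUnique`, and the unique index does the arbitration for you.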