In section 6.3.3.3 of the High Performance Java Persistence book, it's written that the Lost Update phenomenon is possible under MySQL's Repeatable Read isolation level.
Assuming the following (the isolation level is REPEATABLE READ; a minimal setup to reproduce this is sketched after the timeline):
tx1                                      | tx2
-----------------------------------------+------------------------------------------
START TRANSACTION;                       |
SELECT * FROM test WHERE id = 1;         |
(say, DB_TRX_ID = 7 at this moment)      |
                                         |
                                         | START TRANSACTION;
                                         | SELECT * FROM test WHERE id = 1;
                                         | UPDATE test SET name = "x" WHERE id = 1;
                                         | COMMIT; (say, this makes DB_TRX_ID = 10)
                                         |
UPDATE test SET name = "y" WHERE id = 1; |
COMMIT;                                  |
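For completeness, a minimal setup to reproduce this timeline (the schema is just a sketch; the exact column types don't matter):

-- minimal schema matching the statements above
CREATE TABLE test (
    id   BIGINT PRIMARY KEY,
    name VARCHAR(100)
);

INSERT INTO test (id, name) VALUES (1, 'initial');

-- run in both sessions before starting the transactions
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;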
Question:
Upon tx1's commit, will MVCC detect that the row version (DB_TRX_ID) is no longer 7 (it is now 10) and perform a rollback? Or will the commit succeed, causing a Lost Update?
I'm the author of the book that's mentioned in the question.
According to the SQL standard, Repeatable Read should prevent Dirty Reads and Non-repeatable (Fuzzy) Reads.
The standard says nothing about Lost Updates because it was designed at a time when 2PL (Two-Phase Locking) was the de facto Concurrency Control mechanism.
If you use 2PL, then the Repeatable Read isolation level will, indeed, prevent a Lost Update.
However, while MVCC can provide Repeatable Reads via multiple versions of a tuple, in order to prevent Lost Updates the engine also needs the transaction scheduler to track modifications to the tuples that a given transaction has read. Apparently, InnoDB does not work like that.
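To make that tracking concrete, here is a sketch of doing it yourself at the application level with optimistic locking; the version column is an assumption I'm adding on top of the table from the question:

-- hypothetical version column, not part of the original table
ALTER TABLE test ADD COLUMN version INT NOT NULL DEFAULT 0;

-- tx1 reads the row together with its current version (say it reads version = 5)
SELECT id, name, version FROM test WHERE id = 1;

-- tx1 writes back only if nobody modified the row in the meantime
UPDATE test
SET name = 'y', version = version + 1
WHERE id = 1 AND version = 5;

-- if this returns 0, another transaction (tx2) changed the row after tx1 read it,
-- so the application has to retry instead of silently overwriting tx2's change
SELECT ROW_COUNT();

This is the same idea that JPA implements with the @Version annotation: the read carries the version, and the write has to prove the row has not changed since.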
In Repeatable Read, MVCC does not use any Pessimistic Locking for reads. The only locks taken are the gap and next-key locks on the clustered index, but those don't prevent Lost Updates.
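You can, however, opt into pessimistic locking explicitly under Repeatable Read with SELECT ... FOR UPDATE; a sketch of the question's timeline where both transactions do a locking read (both sides have to opt in for this to work):

-- session 1 (tx1)
START TRANSACTION;
-- FOR UPDATE takes an exclusive record lock instead of a non-locking consistent read
SELECT * FROM test WHERE id = 1 FOR UPDATE;

-- session 2 (tx2), while tx1 is still open:
-- the locking read below blocks until tx1 commits, so tx2 can no longer slip its
-- write in between tx1's read and tx1's write
--   SELECT * FROM test WHERE id = 1 FOR UPDATE;
--   UPDATE test SET name = 'x' WHERE id = 1;
--   COMMIT;

-- session 1 (tx1)
UPDATE test SET name = 'y' WHERE id = 1;
COMMIT;
-- only now does tx2's locking read return, and it sees tx1's committed value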
MySQL uses Pessimistic Locking for Serializable only, which provides a 2PL Concurrency Control model, even when using the MVCC-based InnoDB storage engine.
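For completeness, this is roughly what I'd expect to happen if you replay the question's timeline under Serializable, where InnoDB silently turns plain SELECTs into shared locking reads:

-- run in both sessions
SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- tx1
START TRANSACTION;
SELECT * FROM test WHERE id = 1;           -- now a locking read: shared lock on the row

-- tx2
START TRANSACTION;
SELECT * FROM test WHERE id = 1;           -- also takes a shared lock (compatible with tx1's)
UPDATE test SET name = 'x' WHERE id = 1;   -- blocks: needs an exclusive lock, tx1 holds a shared one

-- tx1
UPDATE test SET name = 'y' WHERE id = 1;   -- blocks on tx2's shared lock -> deadlock
-- InnoDB detects the deadlock and rolls one of the transactions back (ER_LOCK_DEADLOCK),
-- so instead of a silent Lost Update you get an error and can retry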