Since I'm having very long-lasting lock issues on my table records, would it be correct to use the "Read Uncommitted" transaction isolation level for all SELECT queries issued by Entity Framework 6 in my ASP.NET MVC web app with a SQL Server database?
What would be the dangers, limitations and considerations that I should take into account?
The typical danger is the potential for "dirty reads", where changes that have not yet been committed to the database can be "seen" by a query, including ghost records from transactions that later roll back. This can mean that rows come back in search results before they have been fully committed. If an EF system is the "gate keeper" for the data written to the database, this may not be that big of an issue, since ghosts would only appear for the duration of a `SaveChanges` call; however, if you are using transaction scopes etc. and saving changes within the scope, ghosts / dirty data could be present for the entire duration of the transaction scope.
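If you do go this route, prefer scoping READ UNCOMMITTED to individual queries rather than changing the isolation level globally. A minimal sketch using EF6's `Database.BeginTransaction(IsolationLevel)` overload follows; the `AppDbContext` and `Order` types are hypothetical stand-ins for your own model (a `TransactionScope` with `IsolationLevel.ReadUncommitted` would work similarly):

```csharp
using System.Data;           // IsolationLevel
using System.Data.Entity;    // EF6 DbContext
using System.Linq;

// Hypothetical entity and context, standing in for your own model.
public class Order
{
    public int Id { get; set; }
    public string Status { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public static class DirtyReadExample
{
    public static int CountPendingOrders()
    {
        using (var context = new AppDbContext())
        // Scope READ UNCOMMITTED to this one transaction instead of
        // changing the isolation level for the whole application.
        using (var tx = context.Database.BeginTransaction(IsolationLevel.ReadUncommitted))
        {
            int count = context.Orders.Count(o => o.Status == "Pending");
            tx.Commit(); // read-only work, but end the transaction cleanly
            return count;
        }
    }
}
```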
When it comes to heavy read operations like searches and reports, you can opt to run these against a separate DbContext instance set up for dirty reads, but you should ensure that the context's `SaveChanges` method is overridden to throw an exception, to guard against it accidentally being used to write. For example, along these lines:
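A sketch of such a guard, assuming the `Order` entity from the previous example (the context name here is hypothetical):

```csharp
using System;
using System.Data.Entity;
using System.Threading;
using System.Threading.Tasks;

// Read-only context for searches/reports against dirty-read data.
public class ReadOnlyAppDbContext : DbContext
{
    public DbSet<Order> Orders { get; set; }

    // Guard against this context accidentally being used to write.
    public override int SaveChanges()
    {
        throw new InvalidOperationException(
            "ReadOnlyAppDbContext is intended for reads only.");
    }

    // In EF6 the parameterless SaveChangesAsync delegates to this
    // overload, so one override covers both async entry points.
    public override Task<int> SaveChangesAsync(CancellationToken cancellationToken)
    {
        throw new InvalidOperationException(
            "ReadOnlyAppDbContext is intended for reads only.");
    }
}
```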
Ideally, the entity declarations used for dirty reads should be declared completely separately, to differentiate an entity that was read in accordance with locking rules from one that might be dirty (and therefore unreliable). For reports on larger systems I generally aim to run against separate, replicated read-only DB instances.

I would recommend first looking at leveraging projection for all large read-only operations like searches, so that the executed queries return only the data that is necessary, and all the data that is necessary. This avoids inefficient queries that eager-load data that isn't needed, or that trip lazy loads (for instance during serialization), again pulling data that isn't needed. It helps ensure that tables which don't need to be touched don't get touched (risking lock issues), and that queries run as fast as possible, shrinking the window in which lock issues can occur. This is typically enough to keep a typical-usage system responsive and clear of lock issues in search and read/write scenarios. It also helps to refine searching to leverage indexes as much as possible, and to avoid overly open-ended querying by users, which can result in table scans (slow, and prone to tripping lock issues).
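As a sketch of that projection approach, with hypothetical entities and a search-result DTO (redeclared here so the example stands alone):

```csharp
using System.Data.Entity;
using System.Linq;

// Hypothetical entities and DTO, for illustration only.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
    public virtual Customer Customer { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public class OrderSearchResult
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

public static class OrderSearch
{
    public static IQueryable<OrderSearchResult> Search(AppDbContext context, string namePrefix)
    {
        // The Select determines the SQL column list: only the projected
        // columns are read from the joined tables, no entities are
        // tracked, and nothing can lazy-load later during serialization.
        return context.Orders
            .Where(o => o.Customer.Name.StartsWith(namePrefix))
            .Select(o => new OrderSearchResult
            {
                Id = o.Id,
                CustomerName = o.Customer.Name,
                Total = o.Total
            });
    }
}
```

Because the result stays an `IQueryable`, ordering and paging (`OrderBy` / `Skip` / `Take`) can be composed on top before the query ever hits the database, keeping the result set, and the lock window, small.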