I need a Lock object, similar to multiprocessing.Manager().Lock(), which may only be released by the process that actually acquired it.
My manual implementation would be something like the following:
from threading import Lock
from typing import Optional


class KeyLock:
    def __init__(self):
        # A threading.Lock is sufficient here because the KeyLock instance
        # lives inside the manager's server process (see below).
        self._lock = Lock()
        self._key: Optional[str] = None

    def acquire(self, key: str, blocking: bool = True, timeout: float = 10.0) -> bool:
        # Remember which key acquired the lock so only it may release it.
        if self._lock.acquire(blocking=blocking, timeout=timeout):
            self._key = key
            return True
        return False

    def release(self, key, raise_error: bool = False) -> bool:
        if self._key == key:
            self._lock.release()
            return True
        if raise_error:
            raise RuntimeError(
                'KeyLock.release called with a non-matching key!'
            )
        return False

    def locked(self):
        return self._lock.locked()
To create an instance of this lock and use it from multiple processes I would use a custom manager class:
from multiprocessing.managers import BaseManager

class KeyLockManager(BaseManager):
    pass

KeyLockManager.register('KeyLock', KeyLock)

manager = KeyLockManager()
manager.start()
lock = manager.KeyLock()
From different processes I can then do:
import os

lock.acquire(os.getpid())
# use shared resource
...
lock.release(os.getpid())
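For context, the proxy returned by the manager can be handed to child processes like any other argument. Below is a rough sketch of that pattern, reusing the KeyLock and KeyLockManager defined above; the worker function and the process count are my own illustration, not part of the original code.

import os
from multiprocessing import Process

def worker(lock) -> None:
    # Each worker uses its own pid as the key, as in the snippet above.
    if lock.acquire(os.getpid()):
        try:
            ...  # use the shared resource
        finally:
            lock.release(os.getpid())

if __name__ == '__main__':
    manager = KeyLockManager()
    manager.start()
    lock = manager.KeyLock()
    workers = [Process(target=worker, args=(lock,)) for _ in range(4)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()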
This works as expected, but it seems to be a pretty big effort for a relatively simple task. So I wonder whether there is an easier way to do that?
There is multiprocessing.RLock, which by definition can only be released by the process that acquired it.
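For illustration, here is a minimal sketch of that behaviour; the try_to_release helper and the printed message are my own, not part of the multiprocessing API:

from multiprocessing import Process, RLock

def try_to_release(lock) -> None:
    # This process never acquired the lock, so releasing it fails.
    try:
        lock.release()
    except Exception as exc:
        print(f'child could not release the lock: {exc!r}')

if __name__ == '__main__':
    lock = RLock()
    lock.acquire()                     # owned by the parent process
    p = Process(target=try_to_release, args=(lock,))
    p.start()
    p.join()
    lock.release()                     # only the owner can release it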
Or you might consider something like the following, where the Lock instance is encapsulated and is meant to be used only as a context manager, which makes it impossible to release it without having acquired it, short of violating the encapsulation. One could, of course, add extra protection to the class against attempts to break the encapsulation and get at the Lock instance itself. Then again, in your implementation the encapsulation can also be violated, because one can get the pid of other processes. So we assume that all users abide by the rules.
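A minimal sketch of what such a context-manager-only wrapper could look like; the class name ContextLock and the worker function are my own illustration:

from multiprocessing import Lock, Process

class ContextLock:
    """A lock that can only be used as a context manager, so the process
    that acquires it is necessarily the one that releases it."""

    def __init__(self):
        self._lock = Lock()

    def __enter__(self):
        self._lock.acquire()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Released by the same process that entered the block.
        self._lock.release()
        return False  # do not suppress exceptions

def worker(lock: ContextLock) -> None:
    with lock:
        ...  # use the shared resource; released automatically on exit

if __name__ == '__main__':
    lock = ContextLock()
    workers = [Process(target=worker, args=(lock,)) for _ in range(4)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()

Because acquiring and releasing happen only inside __enter__ and __exit__, a process cannot release a lock it does not hold without reaching into the private _lock attribute.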