I'm aware that Python threads can only execute bytecode one at a time, so why would the threading library provide locks? I'm assuming race conditions can't occur if only one thread is executing at a time.
The library provides locks, conditions, and semaphores. Is the only purpose of this to synchronize execution?
Update:
I performed a small experiment:
```python
from threading import Thread
from multiprocessing import Process

num = 0

def f():
    global num
    num += 1

def thread(func):
    # return Process(target=func)
    return Thread(target=func)

if __name__ == '__main__':
    t_list = []
    for i in xrange(1, 100000):
        t = thread(f)
        t.start()
        t_list.append(t)

    for t in t_list:
        t.join()

    print num
```
Basically this starts 99,999 threads (note xrange(1, 100000)), each of which increments num by 1. The result printed was 99993.
a) How can the result not be 99999 if the GIL synchronizes execution and prevents race conditions? b) Is it even possible to start ~100k OS threads?
Update 2, after seeing answers:
If the GIL doesn't actually make even a simple operation like an increment atomic, what's the purpose of having it there? It doesn't help with nasty concurrency issues, so why was it put in place? I've heard it matters for C extensions; can someone exemplify this?
The GIL synchronizes bytecode operations: only one bytecode can execute at a time. But if an operation requires more than one bytecode, the interpreter can switch threads between those bytecodes. If you need the operation to be atomic, you need synchronization above and beyond the GIL.
For example, incrementing an integer is not a single bytecode:
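As a sketch (the exact opcode names and offsets vary across CPython versions), disassembling f shows that the increment expands into a sequence of separate bytecode operations:

```python
import dis

num = 0

def f():
    global num
    num += 1

# Print f's bytecode. The increment expands to roughly:
#   LOAD_GLOBAL num, LOAD_CONST 1, an add opcode, STORE_GLOBAL num
# (exact opcodes and offsets depend on the CPython version)
dis.dis(f)
```

A thread switch can happen between any two of those instructions.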
It takes four bytecode operations to implement num += 1: load the global, load the constant 1, perform the in-place add, and store the result back to the global. The GIL will not ensure that num is incremented atomically. Your experiment demonstrates the problem: you lost updates because threads switched between the LOAD_GLOBAL and the STORE_GLOBAL.

The purpose of the GIL is to ensure that the reference counts on Python objects are incremented and decremented atomically. It isn't meant to help you synchronize your own data structures.