_Because synchrony is harmony_

It was a magical ‘aha’ moment for me when I learned about multithreading for the first time. The fact that I can ask my computer to do actions in a parallel manner delighted me (although it should be noted that things don’t truly happen in parallel on a single-core computer, and more importantly, Python threads don’t execute bytecode in parallel at all because of CPython’s Global Interpreter Lock). Multithreading opens new dimensions for computing, but with power comes responsibility.

There are obvious troubles one can imagine with multithreading — many threads trying to access the same piece of data can lead to problems like inconsistent data or garbled output (like having `HWeolrldo` in place of `Hello World` on your console). Such problems arise when we don’t tell the computer how to manage threads in an organized manner.

But how can we ‘tell’ the computer to keep the threads of our program in synchrony? We do so by using _synchronization primitives._ These are simple software mechanisms that ensure your threads run in harmony with each other.

This post presents some of the most popular synchronization primitives in Python, defined in its standard `threading` module. Most of the blocking methods (i.e., the methods which block execution of a particular thread until some condition is met) of these primitives accept an optional timeout, but I haven’t covered it here for simplicity. Likewise, I’ve included only the principal functionality of each object, again for the sake of simplicity. This post assumes you have a basic knowledge of implementing multithreading in Python.

We’ll be learning about `Locks`, `RLocks`, `Semaphores`, `Events`, `Conditions` and `Barriers`. Of course, you can construct your own custom synchronization primitives by subclassing these classes.
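Before diving into the primitives, here’s a small sketch (my own, not from the book exercise) of the kind of lost update an unsynchronized counter can suffer. The `time.sleep(0)` is deliberately placed between the read and the write to encourage a thread switch mid-update; on most runs, some increments get lost:

```python
import threading
import time

counter = 0

def unsafe_increment(n):
    """Increment 'counter' n times, with a deliberately widened race window."""
    global counter
    for _ in range(n):
        current = counter       # read the shared value
        time.sleep(0)           # yield, so another thread may update 'counter' now
        counter = current + 1   # write back, possibly clobbering that update

threads = [threading.Thread(target=unsafe_increment, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With two threads of 10,000 increments each, the "correct" answer is 20,000,
# but lost updates usually leave the counter short of that.
print("counter =", counter)
```

With a lock around the read-modify-write (as in the next section), the final value is always exactly 20,000.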
We’ll start with `Lock`s as they are the simplest primitives, and gradually move on to primitives with more and more sophistication.

### Locks

`Lock`s are perhaps the simplest synchronization primitives in Python. A `Lock` has only two states — locked and unlocked (surprise). It is created in the unlocked state and has two principal methods — `acquire()` and `release()`. The `acquire()` method locks the `Lock` and returns `True`; if the `Lock` is already locked, the call blocks until some other thread calls `release()`, and only then acquires it. The `release()` method should only be called in the locked state; it sets the state to unlocked and returns immediately. Calling `release()` in the unlocked state raises a `RuntimeError`.

Here’s the code which uses a `Lock` primitive for securely accessing a shared variable:

```python
# lock_tut.py

from threading import Lock, Thread

lock = Lock()
g = 0

def add_one():
    """
    Just used for demonstration. It's bad to use the 'global'
    statement in general.
    """
    global g
    lock.acquire()
    g += 1
    lock.release()

def add_two():
    global g
    lock.acquire()
    g += 2
    lock.release()

threads = []
for func in [add_one, add_two]:
    threads.append(Thread(target=func))
    threads[-1].start()

for thread in threads:
    """
    Waits for threads to complete before moving on with the main
    script.
    """
    thread.join()

print(g)
```

This simply gives an output of 3, but now we are sure that the two functions are not changing the value of the global variable `g` simultaneously, although they run on two different threads. Thus, `Lock`s can be used to avoid inconsistent output by allowing only one thread to modify data at a time.

### RLocks

The standard `Lock` doesn’t know which thread is currently holding the lock.
If the lock is held, any thread that attempts to acquire it will block — even if that very thread is already holding it. In such cases, `RLock` (re-entrant lock) is used: the owning thread may acquire it again without blocking, provided it calls `release()` once for every `acquire()`. You can extend the following snippet with output statements to demonstrate how `RLock`s prevent this self-deadlock.

```python
# rlock_tut.py

import threading

num = 0
lock = threading.Lock()

lock.acquire()
num += 1
lock.acquire()  # This will block forever: the lock is already held.
num += 2
lock.release()


# With RLock, that problem doesn't happen.
lock = threading.RLock()

lock.acquire()
num += 3
lock.acquire()  # This won't block.
num += 4
lock.release()
lock.release()  # You need to call release once for each call to acquire.
```

One good use case for `RLock`s is recursion, where a parent call of a function would otherwise block its own nested call. Thus, the main use for `RLock`s is nested access to shared resources.

### Semaphores

Semaphores are simply advanced counters. An `acquire()` call blocks only once the semaphore’s counter has been exhausted — that is, after as many threads have `acquire()`d it as its initial value allows. The counter decreases with every `acquire()` call and increases with every `release()` call. With a `BoundedSemaphore`, a `ValueError` is raised if `release()` tries to increment the counter beyond its assigned maximum value (the number of threads that can `acquire()` the semaphore before blocking occurs). The following code demonstrates the use of semaphores in a simple producer-consumer problem.

```python
# semaphores_tut.py

import random, time
from threading import BoundedSemaphore, Thread

max_items = 5

"""
Consider 'container' as a container, of course, with a capacity of 5
items. The counter would default to 1 if 'max_items' were not passed.
"""
container = BoundedSemaphore(max_items)

def producer(nloops):
    for i in range(nloops):
        time.sleep(random.randrange(2, 5))
        print(time.ctime(), end=": ")
        try:
            container.release()
            print("Produced an item.")
        except ValueError:
            print("Full, skipping.")

def consumer(nloops):
    for i in range(nloops):
        time.sleep(random.randrange(2, 5))
        print(time.ctime(), end=": ")
        """
        In the following if statement we disable the default
        blocking behaviour by passing False for the blocking flag.
        """
        if container.acquire(False):
            print("Consumed an item.")
        else:
            print("Empty, skipping.")

threads = []
nloops = random.randrange(3, 6)
print("Starting with %s items." % max_items)
threads.append(Thread(target=producer, args=(nloops,)))
threads.append(Thread(target=consumer, args=(random.randrange(nloops, nloops + max_items + 2),)))

for thread in threads:  # Starts all the threads.
    thread.start()
for thread in threads:  # Waits for threads to complete before moving on with the main script.
    thread.join()
print("All done.")
```

![semaphores_tut.py in action](https://hackernoon.com/hn-images/1*-BjV8tcNk4TzprX-5NJnhg.gif)

The `threading` module also provides the simpler `Semaphore` class. A `Semaphore` keeps an unbounded counter which allows you to call `release()` any number of times. However, to avoid programming errors, it’s usually the correct choice to use `BoundedSemaphore`, which raises an error if a `release()` call tries to increment the counter beyond its maximum size.

Semaphores are typically used for limiting a resource, like limiting a server to handle only 10 clients at a time. In such a case, multiple thread connections compete for a limited resource (in our example, the server).

### Events

The `Event` synchronization primitive acts as a simple communicator between threads. It is based on an internal flag which threads can `set()` or `clear()`.
Other threads can `wait()` for the internal flag to be `set()`. The `wait()` method blocks until the flag becomes true. The following snippet demonstrates how `Event`s can be used to trigger actions.

```python
# event_tut.py

import random, time
from threading import Event, Thread

event = Event()

def waiter(event, nloops):
    for i in range(nloops):
        print("%s. Waiting for the flag to be set." % (i + 1))
        event.wait()  # Blocks until the flag becomes true.
        print("Wait complete at:", time.ctime())
        event.clear()  # Resets the flag.
        print()

def setter(event, nloops):
    for i in range(nloops):
        time.sleep(random.randrange(2, 5))  # Sleeps for some time.
        event.set()

threads = []
nloops = random.randrange(3, 6)

threads.append(Thread(target=waiter, args=(event, nloops)))
threads[-1].start()
threads.append(Thread(target=setter, args=(event, nloops)))
threads[-1].start()

for thread in threads:
    thread.join()

print("All done.")
```

![Execution of event_tut.py](https://hackernoon.com/hn-images/1*tv4dRrJZTwJYw_B9zIubcA.gif)

### Conditions

A `Condition` object is simply a more advanced version of the `Event` object. It too acts as a communicator between threads and can be used to `notify()` other threads about a change in the state of the program. For example, it can be used to signal the availability of a resource for consumption. Other threads must `acquire()` the condition (and thus its related lock) before `wait()`ing for the condition to be satisfied. A thread should also `release()` the `Condition` once it has completed the related actions, so that other threads can acquire it for their own purposes.
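One caveat with `wait()`: if another thread calls `notify()` before anyone is actually waiting, that notification is simply lost, and a later `wait()` can block forever. `Condition.wait_for()` sidesteps this by checking a predicate over shared state before sleeping and each time it wakes. Here’s a minimal sketch (my own example; the `items` list is just illustrative shared state):

```python
import threading

items = []      # shared state, guarded by the condition's lock
consumed = []   # records what the consumer received
condition = threading.Condition()

def consumer():
    # 'with condition' acquires the underlying lock and releases it on exit.
    with condition:
        # wait_for() returns immediately if the predicate is already true,
        # so a notify() that happened earlier is not a problem.
        condition.wait_for(lambda: len(items) > 0)
        consumed.append(items.pop())

def producer():
    with condition:
        items.append(42)
        condition.notify()  # wake one thread waiting on this condition

t = threading.Thread(target=consumer)
t.start()
producer()
t.join()
print("consumed:", consumed)  # consumed: [42]
```

Whichever thread wins the race for the lock, the consumer ends up with the item, because the predicate is re-checked rather than relying on catching the notification itself.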
The following code demonstrates another simple producer-consumer problem, solved with the help of a `Condition` object.

```python
# condition_tut.py

import random, time
from threading import Condition, Thread

"""
'condition' variable will be used to represent the availability of a
produced item.
"""
condition = Condition()

box = []

def producer(box, nitems):
    for i in range(nitems):
        time.sleep(random.randrange(2, 5))  # Sleeps for some time.
        condition.acquire()
        num = random.randint(1, 10)
        box.append(num)  # Puts an item into box for consumption.
        condition.notify()  # Notifies the consumer about the availability.
        print("Produced:", num)
        condition.release()

def consumer(box, nitems):
    for i in range(nitems):
        condition.acquire()
        condition.wait()  # Blocks until an item is available for consumption.
        print("%s: Acquired: %s" % (time.ctime(), box.pop()))
        condition.release()

threads = []

"""
'nloops' is the number of times an item will be produced and
consumed.
"""
nloops = random.randrange(3, 6)
for func in [producer, consumer]:
    threads.append(Thread(target=func, args=(box, nloops)))
    threads[-1].start()  # Starts the thread.

for thread in threads:
    """
    Waits for the threads to complete before moving on
    with the main script.
    """
    thread.join()
print("All done.")
```

![Output of condition_tut.py](https://hackernoon.com/hn-images/1*tTYcI9yP6XrnZcFSRGA_vw.gif)

There can be other uses of `Condition`s. I think they would be useful when you’re developing a streaming API that notifies a waiting client once a piece of data is available.

### Barriers

A barrier is a simple synchronization primitive that different threads can use to wait for each other. Each thread tries to pass the barrier by calling its `wait()` method, which blocks until all of the threads have made that call. As soon as that happens, the threads are released simultaneously.
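Two smaller `Barrier` features are worth knowing (my own sketch, not part of the original exercise): `wait()` returns a distinct integer in `range(parties)`, which makes it easy to elect exactly one thread for follow-up work, and the constructor accepts an `action` callable that one of the threads runs just before all of them are released:

```python
from threading import Barrier, Thread

results = []

# 'action' is run by exactly one thread, just before all of them are released.
barrier = Barrier(3, action=lambda: results.append("all arrived"))

def worker():
    index = barrier.wait()  # returns a unique int in range(3)
    if index == 0:          # elect a single thread for the follow-up work
        results.append("elected thread done")

threads = [Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # ['all arrived', 'elected thread done']
```

Because the `action` runs before any `wait()` call returns, its entry is always first in `results`, followed by the entry from the single elected thread.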
The following snippet demonstrates the use of `Barrier`s.

```python
# barrier_tut.py

from random import randrange
from threading import Barrier, Thread
from time import ctime, sleep

num = 4
# 4 threads will need to pass this barrier to get released.
b = Barrier(num)
names = ["Harsh", "Lokesh", "George", "Iqbal"]

def player():
    name = names.pop()
    sleep(randrange(2, 5))
    print("%s reached the barrier at: %s" % (name, ctime()))
    b.wait()

threads = []
print("Race starts now…")

for i in range(num):
    threads.append(Thread(target=player))
    threads[-1].start()

"""
The following loop waits for the threads to complete before moving on
with the main script.
"""
for thread in threads:
    thread.join()
print()
print("Race over!")
```

![Output of barrier_tut.py](https://hackernoon.com/hn-images/1*CYnUEjVV8Ztq1dwwQ9EnHA.gif)

Barriers can find many uses; one of them is synchronizing a server and a client, since the server has to wait for the client after initializing itself.

With that, we have reached the end of our discussion on synchronization primitives in Python. I wrote this post as a [solution to an exercise](https://github.com/schedutron/CPAP/blob/master/Chap4/sync_prim.md) in the book “Core Python Applications Programming” by Wesley Chun. If you liked this post, consider having a look at [my other works from this book](https://github.com/schedutron/CPAP) on GitHub and starring the repository 🙂.
The gists for the code mentioned in this article are also available on my profile.

Sources: [effbot.org](http://effbot.org/zone/thread-synchronization.htm), [bogotobogo.com](http://www.bogotobogo.com/python/Multithread/), [Python Docs](https://docs.python.org/3/library/threading.html)

I’m new to blogging, so constructive criticism is not only welcome, but very much wanted!

[![Buy me a coffee](https://hackernoon.com/hn-images/1*lHomguUE0eH9Y7xHHH82hw.png)](http://buymeacoff.ee/schedutron)

Did you like the read? Medium doesn’t offer the Partner Program in my country, so I ask people to buy me a coffee instead.