Swift code suspension point in asynchronous & synchronous code


I keep reading that synchronous function code runs to completion without interruption, but asynchronous functions can define potential suspension points via async/await. My question is: why can't synchronous function code be interrupted at any point? Can't the OS scheduler suspend the thread/process at any moment and give the CPU to a higher-priority thread/process (just as happens in any OS)? What am I misunderstanding here?


There are 3 answers

Rob Napier

Your intuition here is correct. The structured concurrency system does not interrupt synchronous code. But the thread scheduler, and also GCD queues, are independent of that. This is a major reason to be very careful mixing structured concurrency (async/await) and GCD or traditional threads. You should think of structured concurrency as providing a higher-level abstraction over the threading system.

And to be clear, all of this only holds within the process. The OS is absolutely capable of suspending the process (or any of its threads) at any time.

This is very similar to saying that a let constant is immutable: it is possible to write code that violates that guarantee by writing directly to memory, and the OS can even change the value from outside the process (this is how debuggers work). It is "immutable" only in terms of Swift's rules. The system is certainly able to modify the memory.

But within the structured concurrency abstraction, you can be sure that suspension will only occur at certain points, and the compiler will help make this illusion as strong as it can, particularly if you enable Strict Concurrency checking.
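As a sketch of what those well-defined suspension points look like (the function names here are hypothetical, just for illustration): each `await` marks a potential suspension point, and the synchronous code between awaits runs without the concurrency runtime suspending it, even though the OS may still preempt the underlying thread.

```swift
// Each `await` marks a potential suspension point. The code between
// two awaits runs to completion on its task without the concurrency
// runtime suspending it (the OS may still preempt the thread itself).
func fetchAndProcess() async -> Int {
    let a = await produceValue()   // possible suspension here
    let b = a * 2                  // synchronous: no suspension possible
    let c = await produceValue()   // possible suspension here
    return b + c
}

// Hypothetical async producer used for illustration.
func produceValue() async -> Int {
    try? await Task.sleep(nanoseconds: 1_000_000) // yields the thread rather than blocking it
    return 21
}
```

Calling `await fetchAndProcess()` can suspend only at the two marked points; the multiplication and addition in between are guaranteed to run without interleaving from other tasks on that task's executor.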

Alexander's comments made me remember another important point: Threads are useful even when there is only one CPU. They are fundamentally about concurrency, not parallelism. They permit logically distinct operations to run independently on the same CPU. Similarly, Tasks are useful even when there is only one thread. They permit logically distinct operations to run independently on the same thread. This is especially interesting when considering Swift on embedded devices, which may not even support multiple threads. For more on the history that led to Swift's concurrency model, search for "coroutines." While Swift Concurrency has some unique features, at its heart it's based on decades of work on coroutines.

Alexander

Adding to Rob Napier's answer:

Preemptible threads help make better use of the CPU.

Async/await helps make better use of threads.

Threads cost memory (including kernel memory, which can't be paged IIRC), and have higher scheduling costs (because the CPU needs to context switch from your thread into kernel mode, and then context switch onto the next thread).

Async/await lets much of the scheduling happen in userspace: the system can switch from one blocked async Task to another async Task on the same thread, without the overhead of kernel-level scheduling.
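A rough sketch of that thread economy: far more tasks than pool threads can be in flight at once, because a task that is suspended at an `await` occupies no thread at all.

```swift
// Launch far more tasks than there are threads in the cooperative pool
// (roughly one per core). Each task suspends at `Task.sleep`; while
// suspended it holds no thread, so all 500 can be "in flight" at once
// on a handful of pool threads.
func runManyTasks() async -> Int {
    await withTaskGroup(of: Int.self) { group in
        for i in 0..<500 {
            group.addTask {
                try? await Task.sleep(nanoseconds: 10_000_000) // 10 ms; thread is released
                return i
            }
        }
        var count = 0
        for await _ in group { count += 1 }
        return count
    }
}
```

With blocking threads, 500 concurrent 10 ms sleeps would need 500 threads (or be serialized); here they all overlap on the small cooperative pool.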

Rob

There are two basic multithreading models, preemptive and cooperative. The former offers richer capabilities, including the opportunity for a thread to be suspended in favor of another, higher-priority thread of the same process. But it also introduces greater risk of races, deadlocks, etc. It can make it harder to reason about our code, because we have no assurance of when or where a thread will be suspended in favor of some other thread. The async-await of Swift concurrency relies upon the cooperative multithreading pattern to help us reason about the thread-safety of our actors.

Admittedly, the cooperative threading model imposes certain simplifying constraints and assumptions, namely that context switches can only happen at well-defined points in our code. In practice, while it introduces certain limitations, a cooperative multithreading system makes it much easier to reason about our code while still enjoying concurrency (and even parallelism). It is a concurrency system free of some of the more subtle pitfalls that preemptive multithreading can introduce.
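For instance, a run of synchronous statements inside an actor method executes with no suspension point between them, so other callers can never observe a half-finished update. A toy example (the `Counter` actor is hypothetical):

```swift
// A toy counter actor. The two mutations inside `incrementTwice`
// happen back-to-back with no `await` between them, so there is no
// suspension point at which another caller could slip in and observe
// an odd (half-updated) value.
actor Counter {
    private var value = 0

    func incrementTwice() {
        value += 1
        value += 1   // no suspension point between these: atomic w.r.t. the actor
    }

    func read() -> Int { value }
}
```

Under preemptive multithreading, guaranteeing this would require an explicit lock around both increments; here the guarantee falls out of the model, because actor context switches can only happen at an `await`.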

For more information about the cooperative threading model underlying the Swift concurrency system, see the WWDC 2021 video "Swift concurrency: Behind the scenes."


For the sake of comparison, consider this example where I submitted twelve low-priority synchronous CPU-intensive tasks (.utility QoS, in green), waited 0.2 seconds, submitted the same twelve tasks as medium-priority (.userInitiated QoS, in orange), waited another 0.2 seconds, and then submitted the same twelve tasks as high-priority (.userInteractive or high QoS, in red). This was run on a 20-core device. The L, M, and H signposts in the “submit” lane represent where the tasks in question were created. The intervals for each task represent a synchronous calculation of pi using the Leibniz series (i.e., just some arbitrary compute-intensive calculation, each performing the same calculation to the same number of decimal places, so the total work done for each item is equivalent).

The Instruments “Points of Interest” graph below shows the above process repeated for three different tech stacks. The first run implements it with 12 tasks per priority in the Swift concurrency system. The second run is the same basic idea, but submitted via GCD (three separate queues, each with its own QoS, each running 12 tasks). The third run submits these 36 jobs to separate, manually created Thread instances with three different qualityOfService settings.
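A simplified sketch of the three submission styles described above (the signposting and the actual pi calculation are omitted; `busyWork` is a hypothetical stand-in for the compute-intensive work):

```swift
import Foundation

// Stand-in for the synchronous pi calculation: some CPU-bound work
// (a partial harmonic sum, which converges to roughly 14.39 here).
@discardableResult
func busyWork() -> Double {
    var x = 0.0
    for i in 1...1_000_000 { x += 1.0 / Double(i) }
    return x
}

// 1. Swift concurrency: twelve unstructured tasks at a given priority.
for _ in 0..<12 { Task(priority: .utility) { busyWork() } }

// 2. GCD: twelve work items on a concurrent queue with its own QoS.
let queue = DispatchQueue(label: "work.utility", qos: .utility, attributes: .concurrent)
for _ in 0..<12 { queue.async { busyWork() } }

// 3. Twelve manually created threads with a qualityOfService.
for _ in 0..<12 {
    let thread = Thread { busyWork() }
    thread.qualityOfService = .utility
    thread.start()
}
```

The experiment repeats this pattern at .userInitiated and .userInteractive; only the first variant runs on Swift concurrency's cooperative thread pool, which is what produces the different preemption behavior discussed below.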

[Instruments “Points of Interest” graph comparing the Swift concurrency, GCD, and Thread runs]

We can see that in the Swift concurrency example, launching the “medium” priority tasks had no impact on the previously launched “low” priority tasks. But in the GCD and Thread examples, items subsequently launched on a higher-priority queue preempted lower-priority items already underway on a lower-priority queue. (In the GCD example, this is the behavior you experience if you dispatch these items to different queues; if you instead dispatch everything to the same low-priority queue, GCD will attempt to elevate the queue’s QoS to match the items submitted to it rather than preempting.) This notwithstanding, GCD and Thread cancelation is still cooperative.

To illustrate the point about submitting tasks to separate concurrent dispatch queues of increasing QoS, we can watch the CPU utilization of each GCD dispatch queue: the utilization of the “medium” priority threads dropped once some of the “high” priority tasks were launched:

[Instruments graph of per-queue CPU utilization, showing the “medium” QoS queue’s utilization dropping after the “high” QoS tasks launch]