Does parallel programming in Swift eliminate value type optimizations?


As I understand it, value types in Swift can be more performant because they are stored on the stack as opposed to the heap. But if you make many calls to DispatchQueue.sync or DispatchQueue.async, does this not render the benefits of value types moot, because closures are stored on the heap?
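For example, something like this (the type and queue label are just placeholders):

```swift
import Dispatch

struct Point {          // a plain value type
    var x: Double
    var y: Double
}

let queue = DispatchQueue(label: "com.example.worker")
let point = Point(x: 1, y: 2)

queue.async {
    // `point` is captured by the closure, and the closure itself
    // is a reference-counted, heap-allocated object.
    print(point.x + point.y)
}
```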


There are 2 answers

Rob Napier (best answer)

"As I understand it, value types in Swift can be more performant because they are stored on the stack as opposed to the heap."

Sometimes. Often not. For example, String includes heap-allocated storage. Many value types have hidden heap-allocated storage (this is actually really common). So you may not be getting the performance gain you're expecting for many types, but in many cases you're not losing it via closures either.
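For example (a sketch; `Person` here is a made-up type), this is a value type on the surface, but most of its storage still lives on the heap:

```swift
struct Person {
    var name: String         // String's character data is heap-allocated
                             // (beyond the small-string optimization)
    var nicknames: [String]  // Array's element buffer is heap-allocated
}

let a = Person(name: "Grace Hopper", nicknames: ["Amazing Grace"])
var b = a                    // copies the struct, but the String/Array
                             // buffers are shared via copy-on-write
b.nicknames.append("Grandma COBOL")  // only now is the array buffer copied

print(a.nicknames.count, b.nicknames.count)  // 1 2
```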

Value types are about behavior, not performance (and of course you need to distinguish between value types and value semantics, which are different, and can have impacts on performance). So the nice thing about value types and DispatchQueue is that you know you're not going to accidentally modify a value on multiple queues, because you know you have your own independent copy. By the time you've paid the overhead of dispatching to a queue (which is optimized, but still not cheap), the extra cost of copying the value type probably is not the major issue.
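For example (a sketch; the explicit capture list forces a copy at the moment the closure is created):

```swift
import Dispatch

struct Settings {
    var retries = 3
}

var settings = Settings()
let queue = DispatchQueue(label: "com.example.background")

queue.async { [settings] in
    // This closure has its own independent copy of `settings`;
    // nothing done on another queue can change it.
    print(settings.retries)  // prints 3
}

settings.retries = 5         // does not affect the captured copy
```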

In my experience, it is very difficult to reason about Swift performance, particularly due to copy-on-write optimizations. But the fact that apparent "value types" can have hidden internal reference types also makes performance analysis very tricky. You often have to know and rely on internal details that are subject to change. To even begin getting your head around Swift performance, you should definitely watch Understanding Swift Performance (possibly a couple of times). If you're bringing any performance intuitions from C++, you have to throw almost all of that away for Swift. It just does so many things differently.
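To make the copy-on-write point concrete, here is a small sketch (exactly when a copy happens is an implementation detail; `MyCOWType` and `Buffer` are made-up names showing the pattern you can build yourself with isKnownUniquelyReferenced):

```swift
let original = Array(repeating: 0, count: 1_000_000)
var copy = original          // cheap: both values share one heap buffer
copy[0] = 1                  // copy-on-write: the buffer is duplicated
                             // just before the first mutation
print(copy[0], original[0])  // prints 1 0; `original` is unaffected

// The same pattern for your own value types:
final class Buffer {
    var storage: [Int] = []
}

struct MyCOWType {
    private var buffer = Buffer()

    mutating func append(_ value: Int) {
        if !isKnownUniquelyReferenced(&buffer) {
            let fresh = Buffer()
            fresh.storage = buffer.storage  // copy before writing if shared
            buffer = fresh
        }
        buffer.storage.append(value)
    }
}

var first = MyCOWType()
first.append(10)
var second = first           // shares the Buffer until one side mutates
second.append(20)            // `second` copies its storage here; `first` is unaffected
```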

zneak

I suspect that your view of performance metrics and optimization doesn't entirely match the Swift model.

First, it does look like you've got that point right, but in general the terms "stack-allocated" and "heap-allocated" are misleading. Value types can be part of reference types and live on the heap. Likewise, things that presumably go to the heap don't really have to go to the heap: a reference-counted object that provably doesn't need reference counting could be allocated on the stack without anyone noticing. In other languages like C++, the preferred terminology is "automatic storage" ("stack") and "dynamic storage" ("heap"). Of course, Swift doesn't have these concepts (it only has value types and reference types), but they're useful for describing performance characteristics.
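To illustrate both directions (a sketch; `Size` and `Window` are made-up types):

```swift
struct Size {                // a value type
    var width = 0.0
    var height = 0.0
}

final class Window {         // a reference type
    var size = Size()        // this Size lives inside the Window's heap allocation
}

func layout() {
    var local = Size(width: 100, height: 50)  // typically automatic ("stack") storage
    let window = Window()                     // dynamic ("heap") storage
    window.size = local                       // copies the value into heap memory
    local.width = 200                         // the copy inside `window` is unaffected
    print(local.width, window.size.width)     // 200.0 100.0
}
```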

Escaping closures need dynamic storage because their lifetime can't be tied to a stack frame. However, the performance price that you pay to call a function that takes an escaping closure is uniform, regardless of how many variables need to be captured, because a closure will always be allocated and that closure can have storage for any number of values.

In other words, all of your captured value-typed objects are grouped in a single dynamic allocation, and the performance cost of allocating memory does not scale with the amount that you're requesting. Therefore, you should consider that there is a (small) speed cost associated with escaping closures themselves, but that cost does not scale with the number of values that the closure captures. Aside from that unavoidable upfront cost, there should be no degradation of performance for value types.
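A small sketch of that shape (the function and struct names here are made up):

```swift
import Dispatch

struct A { var x = 1 }
struct B { var y = 2.0 }
struct C { var z = "three" }

func doWorkLater(_ work: @escaping () -> Void) {
    DispatchQueue.global().async { work() }
}

let a = A(), b = B(), c = C()

// All three captured values end up in the closure's single
// heap-allocated context; adding more captures doesn't add
// more allocations.
doWorkLater {
    print(a.x, b.y, c.z)
}
```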

Additionally, as Rob said, every non-trivial value type (strings, arrays, dictionaries, sets, etc.) is actually a wrapper around a reference type, so for these objects, value types had more of a semantic advantage than a performance advantage to begin with.
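You can see the "small header plus heap buffer" layout indirectly from the size of the value part itself (the exact numbers are implementation details; the ones in the comments are typical for current 64-bit platforms):

```swift
// The "value" is only a small, fixed-size header; the elements or
// characters live in a separately allocated, reference-counted buffer.
print(MemoryLayout<[Int]>.size)          // 8  (one pointer to the buffer)
print(MemoryLayout<String>.size)         // 16
print(MemoryLayout<[String: Int]>.size)  // 8
print(MemoryLayout<Set<Int>>.size)       // 8
```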