> For one thing, Swift is hardly ready for application domains like audio, video or games. No doubt it can make the development process so much faster and safer, but also less performant by exactly that amount.
I've done quite a bit of experimentation with the performance characteristics of Swift, and I think that's a slight mischaracterization of the situation.
For instance, I built a toy data-driven ECS implementation in Swift to see just what kind of performance could be squeezed out of it, and it was possible to achieve quite impressive performance, more in the neighborhood of C/C++ than a managed language, especially when dipping into the unsafe portion of the language for critical sections.
But it's a double-edged sword: while it's possible to write high-performance Swift code, in practice that's only achievable through profiling. I was hoping to discover a rules-based approach (i.e., a checklist of performance third rails to avoid), and while there were some takeaways, it was extremely difficult to predict what would incur a heavy performance penalty.
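To give a flavor of what "dipping into unsafe" meant in my case, here's a minimal sketch (with made-up component types, not my actual benchmark code). In a hot loop over densely packed component arrays, going through unsafe buffer pointers sidesteps the per-element bounds and copy-on-write checks that safe Array access can otherwise incur:

```swift
// Made-up component types -- illustration only.
struct Position { var x: Float = 0, y: Float = 0 }
struct Velocity { var dx: Float = 0, dy: Float = 0 }

// Hot loop over densely packed component storage. The unsafe buffer
// pointers skip the per-element safety checks of the Array subscript.
func integrate(_ positions: inout [Position], _ velocities: [Velocity], dt: Float) {
    precondition(positions.count == velocities.count)
    positions.withUnsafeMutableBufferPointer { pos in
        velocities.withUnsafeBufferPointer { vel in
            for i in 0..<pos.count {
                pos[i].x += vel[i].dx * dt
                pos[i].y += vel[i].dy * dt
            }
        }
    }
}
```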
Currently it seems like the main limiting factor in Swift is ARC: it uses atomic operations to ensure thread-safe reference counts, and this, like any use of synchronization, is very expensive. The ARC penalty can be largely avoided by avoiding reference types, and there also seems to be a lot of potential for improving ARC's performance, as discussed in this thread:
https://forums.swift.org/t/swift-performance/28776
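To make "avoiding reference types" concrete, here's a minimal sketch (the types are made up): moving class instances around tends to generate atomic retain/release traffic, while the equivalent struct is just copied, with no reference counting at all.

```swift
// Made-up types for illustration.
final class EnemyRef { var health = 100 }   // reference type: ARC-managed
struct EnemyValue { var health = 100 }      // value type: no refcount

func damageAll(_ enemies: [EnemyRef]) {
    for enemy in enemies {
        // Reading a class element out of the array typically costs an
        // atomic retain/release pair (the optimizer can sometimes
        // eliminate it, but that's hard to predict -- see above).
        enemy.health -= 1
    }
}

func damageAll(_ enemies: inout [EnemyValue]) {
    for i in enemies.indices {
        // Plain index arithmetic and in-place mutation: no ARC traffic.
        enemies[i].health -= 1
    }
}
```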
> Currently it seems like the main limiting factor in Swift is ARC: it uses atomic operations to ensure thread-safe reference counts
This is exactly what Rust avoids by having both Arc and plain-vanilla Rc. Plus reference counts are only updated when the ownership situation changes, not for any reads/writes to the object.
Rust also backs up this design with the Send and Sync traits, which statically prevent programmers from, say, accidentally sending an Rc<T> between threads when they really should have used an Arc<T> instead.
Now I'm curious: what is the difference between automatic ref counting and "vanilla" ref counting? And where does the C++ shared pointer fit between these two?
ARC as in "Atomic Reference Counting". ARC uses atomic operations to increment and decrement reference counts, which means these operations must be synchronized between threads. Synchronization between threads/cores tends to be an expensive operation.
This is required for reference counting objects between threads. Otherwise, you might have one thread try to release an object at the same time another thread is trying to increment the reference count. It's just overkill for objects which are only ever referenced from a single thread.
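As a toy illustration of why the atomicity matters (this is not how the Swift runtime is actually implemented, and it leans on the separate swift-atomics package): without an atomic read-modify-write, two threads can read the same old count and one update gets lost, which is exactly the release-vs-retain race described above.

```swift
import Atomics  // swift-atomics package (https://github.com/apple/swift-atomics)

// Toy refcount, for illustration only -- not the real Swift runtime.
final class ToyRefCount {
    private let count = ManagedAtomic<Int>(1)

    func retain() {
        // Atomic increment: safe from any thread, but it forces the
        // cores to synchronize on this memory location -- that's the cost.
        count.wrappingIncrement(ordering: .relaxed)
    }

    /// Returns true when the last reference was just dropped.
    func release() -> Bool {
        // loadThenWrappingDecrement returns the value *before* the
        // decrement, so a prior value of 1 means this was the last ref.
        count.loadThenWrappingDecrement(ordering: .acquiringAndReleasing) == 1
    }
}
```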
It's an overloaded acronym to be sure. Atomic reference counting is a familiar concept in systems programming languages like C++ and Rust. It just so happens that Apple's automatic reference counting is also atomic.
It's a bit less confusing in practice - the full types are std::rc::Rc and std::sync::Arc (where std::sync is all multithreading stuff, and you have to actually use that name to get access to Arc in your code), and both are well documented (including spelling out the acronym):
https://doc.rust-lang.org/std/rc/struct.Rc.html
https://doc.rust-lang.org/std/sync/struct.Arc.html
...I could see this causing merry hell if trying to do advanced interop between Swift and Rust, though, and it's admittedly probably going to be a minor stumbling block for Apple-first devs. (I managed to avoid confusion, but I just port to Apple targets; they're not my bread and butter.)
> GP: For one thing, Swift is hardly ready for application domains like audio, video or games.
> For instance, I built a toy data-driven ECS implementation in Swift to see just what kind of performance could be squeezed out of Swift, and it was possible to achieve quite impressive performance
I also have a pure-Swift ECS game engine [0] where I haven't had to worry about performance yet. It's meant to be 2D-only, and I haven't really put it to the test yet with truly complex 2D games, such as massive worlds with terrain deformation like Terraria (which was/is done in C# if I'm not mistaken) or Lemmings. In fact it's probably very sloppy, but I was surprised to see it handling 3000+ sprites on screen at 60 FPS, on an iPhone X.
- They were all distinct objects; SpriteKit sprites with GameplayKit components.
- Each entity was executing a couple components every frame.
- The components were checking other components in their entity to find the touch location and rotate their sprite towards it (roughly sketched after this list).
- Everything was reference types with multiple levels of inheritance, including generics.
- It was all Swift code and Apple APIs.
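For flavor, the per-frame component logic was roughly along these lines -- a reconstruction with made-up names (FaceTouchComponent, TouchTrackingComponent), not the engine's actual code:

```swift
import CoreGraphics
import SpriteKit
import GameplayKit

// Rough reconstruction of the logic described above; names are made up.
final class FaceTouchComponent: GKComponent {
    override func update(deltaTime seconds: TimeInterval) {
        // Look up sibling components on the same entity.
        guard let node = entity?.component(ofType: GKSKNodeComponent.self)?.node,
              let touch = entity?.component(ofType: TouchTrackingComponent.self)?.location
        else { return }
        // Rotate the sprite to face the current touch location
        // (any art-orientation offset omitted).
        node.zRotation = atan2(touch.y - node.position.y,
                               touch.x - node.position.x)
    }
}

// Hypothetical sibling component that an input handler keeps updated.
final class TouchTrackingComponent: GKComponent {
    var location: CGPoint?
}
```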
Is that impressive? I'm a newb at all this, but given Swift's reputation for high overhead that's perpetuated by comments like GP's, I thought it was good enough for my current and planned purposes.
And performance can only improve as Swift becomes more efficient in future versions (as it has in previous releases). If/when I ever run into a point where Swift itself is the problem, I could interop with ObjC/C/C++.
SwiftUI and Combine have also given me renewed hope for what can be achieved with pure Swift.
I actually spend more time fighting Apple's bugs than Swift performance issues. :)