Loved this article. It showed me how little I know about how operating systems implement concurrency primitives, and it motivated me to do a bunch of research and learn more.
Notably the claim about how atomic operations invalidate the cache line in every other CPU's cache. Wow! Shared data can really be a performance limitation.
This article is really well written. I like how it introduces a concept (bughouse chess), then uses it to help describe an emotion the author has been feeling with respect to popular culture more broadly.
I also think bughouse seems cool (aside from the issues mentioned), and want to give it a shot now. Probably in-person.
Came here to say the same. The last sentence of their deep post suggests playing in person.
I can’t imagine playing Bughouse online. It is the most fun you’ll have playing chess and it’s all about the interpersonal experiences.
In the simpler times of the mid-'90s, on autumn Sundays my college flatmates and I would drink beer, watch football, and play Bughouse. High fives, smack talking, wild sacrifice tactics… soooo much fun!
I do admire the commenter that took it to hardcore levels too — a different path.
This is a great example of how ADTs can be implemented in C by emulating classes, despite the loss in brevity.
For the first item on reference counting, batched memory management is a possible alternative that still fits the C style. Something like an arena allocator approximates a memory lifetime, which can be a powerful safety tool. When you free the allocator, all pages are freed at once. Not only is this less error-prone, but it can decrease performance. There's no need to allocate and free each reference-counted pointer, nor store reference counts, when one can free the entire allocator after argument parsing is done.
This also reduces the amount of fallible error handling: the callee doesn't need to free anything because the allocator is owned by the caller.
Of course, the use of allocators does not make sense in every setting, but for common lifetimes such as: once per frame, the length of a specific algorithm, or even application scope, it’s an awesome tool!
> This is a great example of how ADTs can be implemented in C by emulating classes, despite the loss in brevity.
I don't see it that way, mostly because ADTs don't require automatic destructors or GC, etc., but also because I never considered a unique/shared pointer type to be an abstract data type.
> When you free the allocator, all pages are freed at once. Not only is this less error prone, but it can decrease performance.
How does it decrease performance? My experience with arenas is that they increase performance at the cost of a little extra memory usage.
While I tend towards the side of the article, I find it difficult to agree with (or follow) many of the points it makes, which is a bit disappointing.
For example, under “ChatGPT Is So Popular”, they disagree with the premise, then use the argument that “ChatGPT was marketed with lies” as evidence. The latter argument is well researched, but it is simply out of place, leaving nothing to support their disagreement.
This is an interesting study. I wonder how the LLM option might compare to human-written responses in the same format (but with higher latency), or even to having a physical human in the room. Given some of the points from the conclusion about teachers being able to detect LLM-inspired work, I wonder if either of these options may, at times, be better forms of learning due to improved quality.