Hacker News
Concurrency Glossary (slikts.github.io)
197 points by slikts on Nov 21, 2018 | 28 comments



I tried to make something similar, but more visual, specifically for Python (draft)[1].

Composability is what matters here: when you start combining these par/async constructs into big programs, it can become a mess to reason about.

[1] https://github.com/ptytb/pyroboros/blob/master/Pyroboros.pdf


That looks nice, but Python is an odd choice for studying synchronization primitives considering it has the GIL.
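
For example (a rough sketch; busy() and the timing harness are made up just to illustrate the point), CPU-bound threads in CPython don't actually run in parallel:

    import threading
    import time

    def busy(n=5_000_000):
        # Pure-Python CPU-bound loop; the GIL lets only one thread execute
        # Python bytecode at a time, so adding threads doesn't add speed.
        while n:
            n -= 1

    start = time.perf_counter()
    threads = [threading.Thread(target=busy) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"4 threads:  {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    for _ in range(4):
        busy()
    print(f"sequential: {time.perf_counter() - start:.2f}s")
    # On CPython the two timings come out roughly the same (the threaded run is
    # often slower due to GIL contention); with multiprocessing they would not.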


Thank you, especially for mentioning the GIL; I should have added it there too.


Nice resource! It explains the difference between parallelism and concurrency quite clearly.

Some things can be improved though:

> The difference between green and lightweight threads is that green threads aren't necessarily lightweight; an example of green threads is classic Java Virtual Machine threads, while Go goroutines are an example of lightweight threads.

Saying that the difference between green and lightweight threads is that green threads aren't necessarily lightweight isn't helpful. Some explanation of what they are and when they differ would be useful.


Thanks, and yeah, the part about lightweight threads needs to be expanded. Other missing parts are reactive vs interactive, linearizable vs serializable, and push vs pull.


The main difference between a green and a non-green thread is that a green thread is scheduled in userspace. Also, JVM threads haven't been green since the early 2000s: there's a 1:1 mapping between Java threads and actual kernel threads, and as far as I'm aware, those Java threads are never migrated between different kernel threads.
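
To make "scheduled in userspace" concrete, here's a toy cooperative round-robin scheduler built on Python generators (worker/run are names invented for the example): the kernel only ever sees one ordinary thread, and the interleaving is decided entirely by this loop.

    from collections import deque

    def worker(name, steps):
        # Each `yield` is a cooperative scheduling point handled entirely in
        # userspace; nothing here ever blocks in the kernel.
        for i in range(steps):
            print(f"{name}: step {i}")
            yield

    def run(tasks):
        # Round-robin scheduler: the kernel sees one ordinary thread the whole
        # time, while this loop decides which "green" task runs next.
        ready = deque(tasks)
        while ready:
            task = ready.popleft()
            try:
                next(task)
                ready.append(task)
            except StopIteration:
                pass

    run([worker("a", 2), worker("b", 3)])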


You're right, I qualified it with "classic JVM threads", but a less ancient example would be better; it's just not immediately clear which green thread implementations are also relatively heavyweight like JVM green threads used to be.


I believe the distinction between heavyweight and green threads here is not really the correct comparison. It's more of a comparison between kernel managed (non-green) threads and userspace managed (green) threads. One could argue that Goroutines are green threads that are executed on non-green threads. It's also not infeasible to imagine a system where kernel threads are actually lightweight - single address-space unikernels come to mind.


I'll rework that section; thanks for the feedback.


I found only one occurrence of the word task in the document. The entire paradigm of task parallelism is omitted.

The style is heavy. I consider myself skilled in the area but struggled to map wording to my conceptual framework or experience. For a novice, most of the text is probably totally impenetrable. As an example:

> Locking can have coarse- or fine-grained synchronization granularity, where coarse-grained granularity trades concurrency for deterministic execution while fine-grained granularity is more concurrent but less predictable.

That's a property of the program (or the programmer), not of locking as such. Fine-grained locking is of course as reliable and predictable as coarse-grained, but when an operation involves multiple updates that must appear atomic (a transactional guarantee, for example), coarse-grained locking is straightforward and fine-grained locking is more difficult to get right. There is no way, however, that the programmer should accept possibly incorrect updates on the basis that the program now runs faster.
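
To make that concrete, here's the classic transfer sketch (Account and the transfer_* functions are invented names): coarse-grained locking makes the multi-update transfer trivially atomic, while fine-grained locking lets unrelated transfers run concurrently but puts lock ordering on the programmer.

    import threading

    class Account:
        def __init__(self, balance):
            self.balance = balance
            self.lock = threading.Lock()  # fine-grained: one lock per account

    bank_lock = threading.Lock()  # coarse-grained: one lock for everything

    def transfer_coarse(src, dst, amount):
        # Straightforward: the whole multi-update transfer is atomic under a
        # single lock, but unrelated transfers get serialized too.
        with bank_lock:
            src.balance -= amount
            dst.balance += amount

    def transfer_fine(src, dst, amount):
        # More concurrent, but now correctness depends on acquiring the two
        # account locks in a consistent order; getting it wrong can deadlock.
        first, second = sorted((src, dst), key=id)
        with first.lock, second.lock:
            src.balance -= amount
            dst.balance += amount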

> Declarative concurrency models reduce the need to make trade-offs between concurrency and correctness ...

Now that's a huge statement. How can that trade-off be justified in the first place (it can be!)? Do we really have a need and accept that trade-off with fine-grained locking above?

I would not insist my interpretation is right but that's exactly my point.


The trade-offs of locks are things like accidental deadlocks; message passing removes this trade-off but is less flexible, and dataflow further constrains the use of explicit synchronization. It's part of the larger argument for declarative approaches enabling equational reasoning.
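
As a rough sketch of what removing that trade-off looks like (counter/sender/inbox are made-up names): the mutable state is confined to one thread and only reached via a queue, so deadlock-prone explicit locking never enters the picture.

    import queue
    import threading

    def counter(inbox):
        # All mutable state lives in this one thread; other threads can only
        # send messages, so no lock around `total` is needed.
        total = 0
        while True:
            msg = inbox.get()
            if msg is None:  # sentinel: stop
                break
            total += msg
        print("total:", total)

    def sender(inbox, n):
        for _ in range(n):
            inbox.put(1)

    inbox = queue.Queue()
    owner = threading.Thread(target=counter, args=(inbox,))
    owner.start()

    senders = [threading.Thread(target=sender, args=(inbox, 1000)) for _ in range(4)]
    for t in senders:
        t.start()
    for t in senders:
        t.join()
    inbox.put(None)
    owner.join()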

Task/data parallelism deserve to be mentioned because they're common terms; thanks for pointing it out.


Woah! 112 points and no comments. Has anybody actually read it? Or is everyone upvoting in hopes that someone knowledgeable will comment about the quality of the content?


For what it's worth, the feedback I've had so far elsewhere suggests that there aren't particular inaccuracies. A somewhat contentious part is defining concurrency in terms of order independence, but the main source for that is Peter Van Roy (https://www.info.ucl.ac.be/~pvr/VanRoyChapter.pdf section 4.3), so it's of good provenance.


It's mostly an unstructured set of unrelated concepts. I have no idea why people would upvote that.


I know most of these concepts, and the valuable part was filling in the gaps I'd missed. I would also like to read about work-stealing thread pools in a similar manner.

I liked the explanations.


The relations could definitely be made more clear, but if you could point out what specific terms you find unrelated, I could explain how they fit together. The basic organization is working up the ladder of abstraction; for example, scheduling is a fundamental concept, so it comes before other concepts that rely on it (which is all of them).


The author admitted to vote-brigading on IRC. Some 50 or so of the votes are from the author's group of upvoting friends and alternate accounts.


That's some fevered imagination.


Don't lie; it makes you look foolish.

    07:48 <xkapastel> HAX
    07:48 <xkapastel> `slikts: a self post made it to #1
    07:48 <xkapastel> what kind of witchcraft is this
    07:48 <`slikts> xkapastel: just need to get initial traction
    07:48 <xkapastel> must be nice having a botnet
    07:49 <xkapastel> tbh :)
    ...
    07:52 <simpson> `slikts: So *did* you stuff the ballot box with upvotes?
    07:52 <xkapastel> there's some voting ring
    07:52 <xkapastel> the first post was a smiley from one of his friends, but afaik people do this sort of thing a lot so it's not really a big deal
    07:52 <`slikts> simpson: very slightly
    07:53 <`slikts> I don't have anything that could be called a botnet
    07:53 <`slikts> but from past experiences it takes just a few votes to get past the initial bump
    07:53 <xkapastel> too bad, a botnet is a good application of concurrent programming
    07:54 <`slikts> also strategic timing counts
    07:54 <`slikts> around now is the prime time to share content
    07:54 <`slikts> middle of the week when US is getting up
That "first post was a smiley" indeed. [1]

[1] https://news.ycombinator.com/item?id=18502760


"Very slightly" means that I shared that I'd submitted it here; realistically a couple of upvotes could have come from that, but I don't know. The talk about botnets and whatnot is just hyperbole. You inferring alternative accounts and pulling numbers out of thin air is ridiculous.


> The internal vs external distinction also maps to other pairs of concepts like synchronous vs asynchronous, pull vs push, direct vs inverted, opaque vs transparent and local vs remote or distributed.

This sentence is superfluous, as none of the listed pairs are mentioned anywhere else ever again, besides sync vs async.


It's a major recurring theme; for example, anything that refers to implicit or explicit is part of it. The list you quoted is a placeholder to be elaborated.


I personally prefer to call this asynchronous computing, broken down into parallel (executes at the same time) and concurrent (execution is interleaved) models.


Asynchronicity is just how sequential models are extended to support concurrency. Parallelism and concurrency are orthogonal, so a concurrent model can be executed in parallel.
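
As a quick illustration of that orthogonality (work() is just a placeholder): the same set of order-independent tasks can be run interleaved on a single worker or in parallel across processes, without changing the program's logic.

    from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

    def work(x):
        # An independent unit of work: the tasks are concurrent by construction,
        # since no task depends on another's result or ordering.
        return x * x

    if __name__ == "__main__":
        tasks = range(8)

        # The same concurrent structure executed without parallelism (one worker)...
        with ThreadPoolExecutor(max_workers=1) as pool:
            print(list(pool.map(work, tasks)))

        # ...or in parallel across processes, with no change to the program logic.
        with ProcessPoolExecutor(max_workers=4) as pool:
            print(list(pool.map(work, tasks)))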


> Asynchronicity is just how sequential models are extended to support concurrency.

Asynchronicity just means not existing or occurring at the same time, i.e. not synchronised or without synchronisation.

How you expand that definition to computer science is a matter of opinion. I personally take the opinion that an asynchronous operation could be either concurrent or parallel.

> Parallelism and concurrency are orthogonal, so a concurrent model can be executed in parallel.

Completely agree with that.


Dichotomous pairs! Sounds simple enough.


The wording is indeed pretentious, hopefully improved in future iterations.


Beautiful :)



