> Asynchrony: the possibility for tasks to run out of order and still be correct.
> Concurrency: the ability of a system to progress multiple tasks at a time, be it via parallelism or task switching.
> Parallelism: the ability of a system to execute more than one task simultaneously at the physical level.
For more, I'd look up Rob Pike's talks on Go concurrency.
Asynchrony is when things don't happen at the same time or in the same phase, i.e. it is the opposite of synchronous. It can describe a lack of coordination or concurrence in time, often with one event or process occurring independently of another.
The correctness statement is not helpful. When things happen asynchronously, you do not have guarantees about order, which may be relevant to the "correctness of your program".
Asynchrony means things happen out of order, interleaved, interrupted, preempted, etc., but it could still be just one thing executing at a time, sequentially.
Parallelism means the physical time spent is less than the sum of the total time spent, because things happen simultaneously.
Okay, but don't go with this definition.
But... that's everything, and why it's included.
Undefined behavior from asynchronous computing is not worth study or investment, except to avoid it.
Virtually all of the effort for the last few decades (from super-scalar processors through map/reduce algorithms and Nvidia fabrics) involves enabling non-SSE operations that are correct.
So yes, as an abstract term outside the context of computing today, asynchrony does not guarantee correctness - that's the difficulty. But the only asynchronous computing we care about offers correctness guarantees of some sort (often a new type, e.g., "eventually consistent").
In other contexts these words don't describe disjoint sets of things, so it's important to clearly define your terms when talking about software.
One issue with the definition for concurrency given in the article would seem to be that no concurrent systems can deadlock, since as defined all concurrent systems can progress tasks. Lamport uses the word concurrency for something else: "Two events are concurrent if neither can causally affect the other."
Probably the notion of (a)causality is what the author was alluding to in the "Two files" example: saving two files where order does not matter. If the code had instead been "save file A; read contents of file A;" then, similarly to the client connect/server accept example, the "save" statement and the "read" statement would not be concurrent under Lamport's terminology, as the "save" causally affects the "read."
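A minimal TypeScript sketch of that distinction, assuming hypothetical in-memory saveFile/readFile stubs (not from any real library), just to make the causal relationships concrete:

```ts
// In-memory stand-in for a filesystem (hypothetical, for illustration only).
const disk = new Map<string, string>();

async function saveFile(name: string, data: string): Promise<void> {
  disk.set(name, data);
}

async function readFile(name: string): Promise<string> {
  return disk.get(name) ?? "";
}

// Concurrent in Lamport's sense: neither save can causally affect the other,
// so any completion order yields a correct result.
async function saveBoth(): Promise<void> {
  await Promise.all([saveFile("A", "aaa"), saveFile("B", "bbb")]);
}

// Not concurrent: the read observes the effect of the save, so the save is
// ordered before the read.
async function saveThenRead(): Promise<string> {
  await saveFile("A", "aaa");
  return readFile("A");
}
```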
It's just that the causal relationship between two tasks is a different concept from how those tasks are composed together in a software model, which is a different concept from how those tasks are physically orchestrated on bare metal, and also different from the ordering of events.
Therefore I think this definition makes the most sense in practical terms. Defining concurrency as the superset is a useful construct because you have to deal with the same issues in both cases. And differentiating asynchrony and parallelism makes sense because it changes the trade-off of latency and energy consumption (if the bandwidth is fixed).
For single threaded programs, whether it is JS's event loop, or Racket's cooperative threads, or something similar, if Δt is small enough then only one task will be seen to progress.
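A small TypeScript sketch of that, assuming a JS-style event loop: the two tasks interleave their steps by yielding back to the scheduler, but at any instant only one of them is actually executing.

```ts
// Cooperative interleaving on a single thread: each task yields to the event
// loop between steps, so the steps interleave, yet only one task runs at a time.
async function task(name: string): Promise<void> {
  for (let step = 1; step <= 3; step++) {
    console.log(`${name}: step ${step}`);
    // Yield to the event loop so the other task can be scheduled.
    await new Promise<void>((resolve) => setTimeout(resolve, 0));
  }
}

async function main(): Promise<void> {
  // Both tasks make progress over time, but never simultaneously.
  await Promise.all([task("A"), task("B")]);
}

main();
```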
No thanks.
In ecosystems with good distributed system stories, what this looks like in practice is that concurrency is your (the application developers') problem, and parallelism is the scheduler designer's problem.
I think there needs to be a stricter definition here.
Concurrency is the ability of a system to chop a task into many tiny tasks. A side effect of this is that if the system chops all tasks into tiny tasks and runs them all in a sort of shuffled way it looks like parallelism.
Asynchrony means that the requesting agent is not blocked waiting for the result of a request it has submitted.
Asynchronous abstractions may provide a synchronous way to wait for the asynchronously submitted result.
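A TypeScript sketch of that reading, with fetchUser as a hypothetical stand-in for any asynchronous request: submitting does not block the caller, and the returned promise is the optional, synchronous-looking way to wait for the result later.

```ts
// Hypothetical async request; a placeholder for real I/O.
async function fetchUser(id: number): Promise<string> {
  return `user-${id}`;
}

function doOtherWork(): void {
  console.log("doing unrelated work");
}

async function handler(): Promise<void> {
  // Submit the request; the requester is not blocked here.
  const pending = fetchUser(42);

  doOtherWork(); // proceeds while the request is in flight

  // Optionally wait, in a synchronous-looking way, for the submitted result.
  const user = await pending;
  console.log(user);
}

handler();
```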
It's true that it's possible - two async tasks can be bound together in sequence, just as with `Promise.then()` et al.
... but it's not necessarily the case, hence the partial order, and the "possibility for tasks to run out of order".
For example - `a.then(b)` might bind tasks `a` and `b` together asynchronously, such that `a` takes place, and then `b` takes place - but after `a` has taken place, and before `b` has taken place, there may or may not be other asynchronous tasks interleaved between `a` and `b`.
The ordering between `a`, `b`, and these interleaved events is not defined at all, and thus we have a partial order, in which we can bind `a` and `b` together in sequence, but have no idea how these two events are ordered in relation to all the other asynchronous tasks being managed by the runtime.
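A small TypeScript illustration of that partial order: the `.then()` chain guarantees `a` before `b`, but says nothing about where the unrelated task `c` lands (with a typical microtask queue it happens to run between them).

```ts
const log = (label: string) => () => console.log(label);

const a = Promise.resolve().then(log("a"));
const b = a.then(log("b"));                 // b is ordered after a
const c = Promise.resolve().then(log("c")); // c is unordered relative to a and b

// Typical output: a, c, b, done; only "a before b" was actually guaranteed.
Promise.all([b, c]).then(() => console.log("done"));
```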
I don't mean "promise.then", whereby the issuance of the next request is gated on the completion of the first.
An example might be async writes to a file. If we write "abc" at the start of the file in one request and "123" starting at the second byte in a second request, there can be a guarantee that the result will be "a123", and not "abc3", without gating on the first request completing before starting the other.
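A sketch of that scenario in TypeScript against Node's fs/promises API. Treat it purely as an illustration of submitting two positional writes without gating one on the other; whether the combined result is actually guaranteed depends on the platform and filesystem, and Node's docs caution against multiple in-flight write() calls on the same handle.

```ts
import { open } from "node:fs/promises";

async function writeBoth(path: string): Promise<void> {
  const handle = await open(path, "w");
  try {
    // Issue both positional writes; neither request waits on the other.
    const first = handle.write(Buffer.from("abc"), 0, 3, 0);  // bytes 0..2
    const second = handle.write(Buffer.from("123"), 0, 3, 1); // bytes 1..3
    await Promise.all([first, second]);
  } finally {
    await handle.close();
  }
}

writeBoth("example.txt");
```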
async doesn't mean out of order; it means the request initiator doesn't synchronize on the completion as a single operation.
That being said, I agree we don’t need a new term to express “Zig has a function in the async API that throws a compilation error when you run it in a non-concurrent execution. Zig lets you say that.” It’s fine to do that without proposing new theory.
For Lamport, "concurrent" does not mean what it means to us colloquially or informally (like "meanwhile"). Concurrency in Lamport's formal definition is only about order. If one task depends on or is affected by another, then the first is ordered after the second. Otherwise, they are deemed to be "concurrent", even if one happens years later or earlier.