Overall this article is accurate and well-researched. Thanks to Daroc Alden for due diligence. Here are a couple of minor corrections:<p>> When using an Io.Threaded instance, the async() function doesn't actually do anything asynchronously — it just runs the provided function right away.<p>While this is a legal implementation strategy, this is not what std.Io.Threaded does. By default, it will use a configurably sized thread pool to dispatch async tasks. It can, however, be statically initialized with init_single_threaded in which case it does have the behavior described in the article.<p>The only other issue I spotted is:<p>> For that use case, the Io interface provides a separate function, asyncConcurrent() that explicitly asks for the provided function to be run in parallel.<p>There was a brief moment where we had asyncConcurrent() but it has since been renamed more simply to concurrent().
Hey Andrew, question for you about something the article lightly touches on but doesn't really discuss further:<p>> If the programmer uses async() where they should have used asyncConcurrent(), that is a bug. Zig's new model does not (and cannot) prevent programmers from writing incorrect code, so there are still some subtleties to keep in mind when adapting existing Zig code to use the new interface.<p>What class of bug occurs if the wrong function is called? Is it "UB" depending on the IO model provided, a logic issue, or something else?
A deadlock.<p>For example, the function is called immediately, rather than being run in a separate thread, causing it to block forever on accept(), because the connect() is after the call to async().<p>If concurrent() is used instead, the I/O implementation will spawn a new thread for the function, so that the accept() is handled by the new thread, or it will return error.ConcurrencyUnavailable.<p>async() is infallible. concurrent() is fallible.
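To make the failure mode concrete, here is a toy Python sketch of the same situation. `async_inline` and `concurrent` are made-up stand-ins for the two Zig functions, and a queue stands in for the socket: running the task inline would block forever, while spawning it on a thread lets the later "connect" happen.

```python
import queue
import threading

# Hypothetical stand-in for an Io whose async() just runs the function inline:
# the call doesn't return until fn does.
def async_inline(fn, *args):
    fn(*args)

# Hypothetical stand-in for concurrent(): the function gets its own thread.
def concurrent(fn, *args):
    t = threading.Thread(target=fn, args=args)
    t.start()
    return t

q = queue.Queue()

def server(q):
    msg = q.get()        # stands in for accept(): blocks until a "client" arrives
    q.put(msg.upper())   # reply

# async_inline(server, q)  # would deadlock: server blocks on q.get() before any put
task = concurrent(server, q)  # server waits in its own thread instead
q.put("hello")                # the "connect()" that only happens after the spawn
task.join()
result = q.get()
assert result == "HELLO"
```

Uncommenting the `async_inline` call hangs the program for exactly the reason described: the blocking wait happens before the code that would unblock it ever runs.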
This design seems very similar to async in Scala, except that in Scala the execution context is an implicit parameter rather than an explicit parameter. I did not find this API to be significantly better for many use cases than writing threads and communicating over a concurrent queue. There were significant downsides as well, because the program behavior was highly dependent on the execution context. It led to spooky-action-at-a-distance problems where unrelated tasks could interfere with each other, and management of the execution context was a pain. My sense, though, is that the Zig team has little experience with Scala and thus does not realize the extent to which this is not a novel approach, nor is it a panacea.
I think this design is very reasonable. However, I find Zig's explanation of it pretty confusing: they've taken pains to emphasize that it solves the function coloring problem, which it doesn't: it pushes I/O into an effect type, which essentially behaves as a token that callers need to retain. This is a form of coloring, albeit one that's much more ergonomic.<p>(To my understanding this is pretty similar to how Go solves asynchronicity, except that in Go's case the "token" is managed by the runtime.)
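For what it's worth, the "token" framing can be sketched in a few lines of Python (all names here are illustrative, not Zig's actual API): the same call sites run unchanged under a blocking implementation and a thread-pool one, because the only thing that varies is the io object threaded through them.

```python
from concurrent.futures import ThreadPoolExecutor

class BlockingIo:
    # "async" that just runs the function inline and hands back a done future
    def async_(self, fn, *args):
        value = fn(*args)
        class Done:
            def await_(self):
                return value
        return Done()

class ThreadedIo:
    # real concurrency via a thread pool
    def __init__(self):
        self.pool = ThreadPoolExecutor()
    def async_(self, fn, *args):
        fut = self.pool.submit(fn, *args)
        class Pending:
            def await_(self):
                return fut.result()
        return Pending()

def save_file(io, name):          # "colored" only by its io parameter
    return f"saved {name}"

def save_both(io):                # identical call sites either way
    a = io.async_(save_file, io, "a.txt")
    b = io.async_(save_file, io, "b.txt")
    return [a.await_(), b.await_()]

assert save_both(BlockingIo()) == ["saved a.txt", "saved b.txt"]
assert save_both(ThreadedIo()) == ["saved a.txt", "saved b.txt"]
```

The "color" hasn't vanished: any function that wants I/O must accept and forward the token, which is the retention requirement described above.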
If calling the same function with a different argument would be considered 'function coloring', every function in a program is 'colored' and the word loses its meaning ;)<p>Zig actually also had solved the coloring problem in the old and abandoned async-await solution because the compiler simply stamped out a sync- or async-version of the same function based on the calling context (this works because everything is a single compilation unit).
> Zig actually also had solved the coloring problem in the old and abandoned async-await solution because the compiler simply stamped out a sync- or async-version of the same function based on the calling context (this works because everything is a single compilation unit).<p>AFAIK this still leaked through function pointers, which were still sync or async (and this was not visible in their type)
> If calling the same function with a different argument would be considered 'function coloring', then every function in a program is 'colored' and the word loses its meaning ;)<p>Well, yes, but in this case the colors (= effects) are actually important. The implications of passing an effect through a system are nontrivial, which is why some languages choose to promote that effect to syntax (Rust) and others choose to make it a latent invariant (Java, with runtime exceptions). Zig chooses another path not unlike Haskell's IO.
The subject of the function coloring article was callback APIs in Node, so an argument you need to pass to your IO functions is very much in the spirit of colored functions and has the same limitations.
> If calling the same function with a different argument would be considered 'function coloring', then every function in a program is 'colored' and the word loses its meaning ;)<p>I mean, the concept of "function coloring" in the first place is itself an artificial distinction invented to complain about the incongruent methods of dealing with "do I/O immediately" versus "tell me when the I/O is done"--two methods of I/O that are so very different that it really requires very different designs of your application on top of those I/O methods: in a sync I/O case, I'm going to design my parser to output a DOM because there's little benefit to not doing so; in an async I/O case, I'm instead going to have a streaming API.<p>I'm still somewhat surprised that "function coloring" has become the default lens to understand the semantics of async, because it's a rather big misdirection from the fundamental tradeoffs of different implementation designs.
If your function suddenly requires a (currently) unconstructable instance "Magic" which you now have to pass in from somewhere top level, that indeed suffers from the same issue as async/await. Aka function coloring.<p>But most functions don't. They require some POD or float, string or whatever that can be easily and cheaply constructed in place.
Actually it seems like they just colored everything async and you pick whether you have worker threads or not.<p>I do wonder if there's more magic to it than that because it's not like that isn't trivially possible in other languages. The issue is it's actually a huge foot gun when you mix things like this.<p>For example your code can run fine synchronously but will deadlock asynchronously because you don't account for methods running in parallel.<p>Or said another way, some code is thread safe and some code isn't. Coloring actually helps with that.
Agreed. The Haskeller in me screams "You've just implemented the IO monad without language support".
The function coloring problem actually comes up when you implement the async part using stackless coroutines (e.g. in Rust) or callbacks (e.g. in Javascript).<p>Zig's new I/O does neither of those for now, which is why it doesn't suffer from it, but at the same time it didn't "solve" the problem, it just sidestepped it by providing an implementation that has similar features but not exactly the same tradeoffs.
How are the tradeoffs meaningfully different? Imagine that, instead of passing an `Io` object around, you just had to add an `async` keyword to the function, and that was simply syntactic sugar for an implied `Io` argument, and you could use an `await` keyword as syntactic sugar to pass whatever `Io` object the caller has to the callee.<p>I don't see how that's <i>not</i> the exact same situation.
In the JS example, a synchronous function cannot poll the result of a Promise. This is meaningfully different when implementing loops and streams. E.g. a game loop, an animation frame, polling a stream.<p>A great example is React Suspense. To suspend a component, the render function throws a Promise. To trigger a parent Error Boundary, the render function throws an error. To resume a component, the render function returns a result. React never made the suspense API public because it's a footgun.<p>If a JS Promise were inspectable, a synchronous render function could poll its result, and suspended components would not need to use throw to try and extend the language.
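For contrast, Python's `concurrent.futures.Future` is inspectable from synchronous code, which is exactly what a game loop or animation frame needs: the loop can poll `done()` each frame and keep rendering in the meantime. A minimal sketch:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_load():
    time.sleep(0.05)   # stands in for fetching an asset over the wire
    return "asset"

pool = ThreadPoolExecutor()
fut = pool.submit(slow_load)

while not fut.done():  # synchronous poll: no await needed
    time.sleep(0.01)   # render a frame, advance an animation, etc.

result = fut.result()  # guaranteed not to block at this point
assert result == "asset"
```

A JS Promise offers no equivalent of `done()`, which is why sync render functions resort to the throw trick described above.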
I see. I guess JS is the only language with the coloring problem, then, which is strange because it's one of the few with a built-in event loop.<p>This Io business is isomorphic to async/await in Rust or Python [1]. Go also has a built-in "event loop"-type thing, but decidedly does <i>not</i> have a coloring problem. I can't think of any languages besides JS that do.<p>[1]: <a href="https://news.ycombinator.com/item?id=46126310">https://news.ycombinator.com/item?id=46126310</a>
Maybe I have this wrong, but I believe the difference is that you can create an Io instance in a function that has none.
In Rust, you can always create a new tokio runtime and use that to call an async function from a sync function. Ditto with Python: just create a new asyncio event loop and call `run`. That's actually exactly what an Io object in Zig is, but with a new name.<p>Looking back at the original function coloring post [1], it says:<p>> It is better. I will take async-await over bare callbacks or futures any day of the week. But we’re lying to ourselves if we think all of our troubles are gone. As soon as you start trying to write higher-order functions, or reuse code, you’re right back to realizing color is still there, bleeding all over your codebase.<p>So if this is isomorphic to async/await, it does not "solve" the coloring problem as originally stated, but I'm starting to think it's not much of a problem at all. Some functions just have different signatures from other functions. It was only a huge problem for JavaScript because the ecosystem at large decided to change the type signatures of some giant portion of all functions at once, migrating from callbacks to async.<p>[1]: <a href="https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/" rel="nofollow">https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...</a>
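The sync-to-async bridge mentioned above looks like this in Python; standing up a fresh event loop inside a plain function is morally the same move as constructing a runtime or an Io on the spot:

```python
import asyncio

async def fetch():       # an "async-colored" function
    await asyncio.sleep(0)
    return 42

def sync_caller():
    # A plain sync function can still run it by creating a new event loop,
    # the escape hatch this comment describes for Python and Rust alike.
    return asyncio.run(fetch())

assert sync_caller() == 42
```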
It's sans-io at the language level, I like the concept.<p>So I did a bit of research into how this works in Zig under the hood, in terms of compilation.<p>First things first, Zig does compile async fns to a state machine: <a href="https://github.com/ziglang/zig/issues/23446" rel="nofollow">https://github.com/ziglang/zig/issues/23446</a><p>The compiler decides at compile time which color to compile the function as (potentially both). That's a neat idea, but... <a href="https://github.com/ziglang/zig/issues/23367" rel="nofollow">https://github.com/ziglang/zig/issues/23367</a><p>> It would be checked illegal behavior to make an indirect call through a pointer to a restricted function type when the value of that pointer is not in the set of possible callees that were analyzed during compilation.<p>That's... a pretty nasty trade-off. Object safety in Rust is really annoying for async, and this smells a lot like it. The main difference is that it's vaguely late-bound in a magical way; you might get an unexpected runtime error and - even worse - potentially not have the tools to force the compiler to add a fn to the set of callees.<p>I still think sans-io at the language level might be the future, but this isn't a complete solution. Maybe we should be simply compiling all fns to state machines (with the Rust polling implementation detail, a sans-io interface could be used to make such functions trivially sync - just do the syscall and return a completed future).
There is a token you must pass around, sure, but because you use the same token for both async and sync code, I think analogizing with the typical async function color problem is incorrect.
Having used zig a bit as a hobby. Why is it more ergonomic? Using await vs passing a token have similar ergonomics to me. The one thing you could say is that using some kind of token makes it dead simple to have different tokens. But that's really not something I run into often at all when using async.
> The one thing you could say is that using some kind of token makes it dead simple to have different tokens. But that's really not something I run into often at all when using async.<p>It's valuable to library authors who can now write code that's agnostic of the users' choice of runtime, while still being able to express that asynchronicity is possible for certain code paths.
One thing the old Zig async/await system theoretically allowed me to do, which I'm not certain how to accomplish with this new io system without manually implementing it myself, is suspend/resume. Where you could suspend the frame of a function and resume it later. I've held off on taking a stab at OS dev in Zig because I was really, really hoping I could take advantage of that neat feature: configure a device or submit a command to a queue, suspend the function that submitted the command, and resume it when an interrupt from the device is received. That was my idea, anyway. Idk if that would play out well in practice, but it was an interesting idea I wanted to try.
Can you create a thread pool consisting of one thread, and suspend / resume the thread?
what's the point of implementing cooperative "multithreading" (coroutines) with preemptive one (async)?
I find this example quite interesting:<p><pre><code> var a_future = io.async(saveFile, .{io, data, "saveA.txt"});
var b_future = io.async(saveFile, .{io, data, "saveB.txt"});
const a_result = a_future.await(io);
const b_result = b_future.await(io);
</code></pre>
In Rust or Python, if you make a coroutine (by calling an async function, for example), then that coroutine will not generally be guaranteed to make progress unless someone is waiting for it (i.e. polling it as needed). In contrast, if you stick the coroutine in a task, the task gets scheduled by the runtime and makes progress when the runtime is able to schedule it. But creating a task is an explicit operation and can, if the programmer wants, be done in a structured way (often called “structured concurrency”) where tasks are never created outside of some scope that contains them.<p>From this example, if the example allows the thing that is “io.async”ed to progress all by itself, then I guess it’s creating a task that lives until it finishes or is cancelled by getting destroyed.<p>This is certainly a <i>valid</i> design, but it’s not the direction that other languages seem to be choosing.
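The coroutine-versus-task distinction being drawn here is observable in a few lines of asyncio: a bare coroutine makes no progress until awaited, while a task progresses as soon as the event loop gets control.

```python
import asyncio

log = []

async def work(name):
    log.append(name)

async def main():
    coro = work("bare")                        # a bare coroutine: inert until awaited
    task = asyncio.create_task(work("task"))   # a task: scheduled by the runtime
    await asyncio.sleep(0)                     # yield to the event loop once
    assert log == ["task"]                     # the task ran on its own
    await coro                                 # the bare coroutine only runs here
    assert log == ["task", "bare"]
    await task                                 # already finished; returns immediately

asyncio.run(main())
```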
C# works like this as well, no? In fact C# can (will?) run the async function on the calling thread until a yield is hit.
This is how JS works
It's not guaranteed in Zig either.<p>Neither task future is guaranteed to do anything until .await(io) is called on it. Whether it starts immediately (possibly on the same thread), or queued on a thread pool, or yields to an event loop, is entirely dependent on the Io runtime the user chooses.
It’s not guaranteed, but, according to the article, that’s how it works in the Evented model:<p>> When using an Io.Threaded instance, the async() function doesn't actually do anything asynchronously — it just runs the provided function right away. So, with that version of the interface, the function first saves file A and then file B. With an Io.Evented instance, the operations are actually asynchronous, and the program can save both files at once.<p>Andrew Kelley’s blog (<a href="https://andrewkelley.me/post/zig-new-async-io-text-version.html" rel="nofollow">https://andrewkelley.me/post/zig-new-async-io-text-version.h...</a>) discusses io.concurrent, which forces actual concurrency, and it’s distinctly non-structured. It even seems to require the caller to make sure that they don’t mess up and keep a task alive longer than whatever objects the task might reference:<p><pre><code> var producer_task = try io.concurrent(producer, .{
io, &queue, "never gonna give you up",
});
defer producer_task.cancel(io) catch {};
</code></pre>
Having personally contemplated this design space a little bit, I think I like Zig’s approach a bit more than I like the corresponding ideas in C and C++, as Zig at least has defer and tries to be somewhat helpful in avoiding the really obvious screwups. But I think I prefer Rust’s approach or an actual GC/ref-counting system (Python, Go, JS, etc) even more: outside of toy examples, it’s fairly common for asynchronous operations to conceptually outlast single function calls, and it’s really really easy to fail to accurately analyze the lifetime of some object, and having the language prevent code from accessing something beyond its lifetime is very, very nice. Both the Rust approach of statically verifying the lifetime and the GC approach of automatically extending the lifetime mostly solve the problem.<p>But this stuff is brand new in Zig, and I’ve never written Zig code at all, and maybe it will actually work very well.
Ah, I think we might have been talking over each other. I'm referring to the interface not guaranteeing anything, not the particular implementation. The Io interface itself doesn't guarantee that anything will have started until the call to await returns.
I’m excited to see how this turns out. I work with Go every day and I think Io corrects a lot of its mistakes. One thing I am curious about is whether there is any plan for channels in Zig. In Go I often wish IO had been implemented via channels. It’s weird that there’s a select keyword in the language, but you can’t use it on sockets.
Wrapping every IO operation into a channel operation is fairly expensive. You can get an idea of how fast it would work now by just doing it, using a goroutine to feed a series of IO operations to some other goroutine.<p>It wouldn't be quite as bad as the perennial "I thought Go is fast why is it slow when I spawn a full goroutine and multiple channel operations to add two integers together a hundred million times" question, but it would still be a fairly expensive operation. See also the fact that Go had fairly sensible iteration semantics before the recent iteration support was added by doing a range across a channel... as long as you don't mind running a full channel operation and internal context switch for every single thing being iterated, which in fact quite a lot of us do mind.<p>(To optimize pure Python, one of the tricks is to ensure that you get the maximum value out of all of the relatively expensive individual operations Python does. For example, it's already handling exceptions on every opcode, so you could win in some cases by using exceptions cleverly to skip running some code selectively. Go channels are similar; they're <i>relatively</i> expensive, on the order of dozens of cycles, so you want to make sure you're getting sufficient value for that. You don't have to go super crazy, they're not like a millisecond per operation or something, but you do want to get value for the cost, by either moving non-trivial amount of work through them or by taking strong advantage of their many-to-many coordination capability. IO often involves moving around small byte slices, even perhaps one byte, and that's not good value for the cost. Moving kilobytes at a time through them is generally pretty decent value but not all IO looks like that and you don't want to write that into the IO spec directly.)
> One thing I am curious about is whether there is any plan for channels in Zig.<p>The Zig std.Io equivalent of Golang channels is std.Io.Queue[0]. You can do the equivalent of:<p><pre><code> type T interface{}
fooChan := make(chan T)
barChan := make(chan T)
select {
case foo := <- fooChan:
// handle foo
case bar := <- barChan:
// handle bar
}
</code></pre>
in Zig like:<p><pre><code> const T = void;
var foo_queue: std.Io.Queue(T) = undefined;
var bar_queue: std.Io.Queue(T) = undefined;
var get_foo = io.async(std.Io.Queue(T).getOne, .{ &foo_queue, io });
defer get_foo.cancel(io) catch {};
var get_bar = io.async(std.Io.Queue(T).getOne, .{ &bar_queue, io });
defer get_bar.cancel(io) catch {};
switch (try io.select(.{
.foo = &get_foo,
.bar = &get_bar,
})) {
.foo => |foo| {
// handle foo
},
.bar => |bar| {
// handle bar
},
}
</code></pre>
Obviously not quite as ergonomic, but the trade off of being able to use any IO runtime, and to do this style of concurrency without a runtime garbage collector is really interesting.<p>[0] <a href="https://ziglang.org/documentation/master/std/#std.Io.Queue" rel="nofollow">https://ziglang.org/documentation/master/std/#std.Io.Queue</a>.
Have you tried Odin? It's a great language that's also a "better C" but takes more Go inspiration than Zig.
Second vote for Odin but with a small caveat.<p>Odin doesn't (and won't ever according to its creator) implement specific concurrency strategies. No async, coroutines, channels, fibers, etc... The creator sees concurrency strategy (as well as memory management) as something that's higher level than what he wants the language to be.<p>Which is fine by me, but I know lots of people are looking for "killer" features.
At least Go didn't take the dark path of having async / await keywords. In C# that is a real nightmare, and sync-over-async anti-patterns become necessary unless you're willing to rewrite everything. I'm glad Zig took this "colorless" approach.
One of the harms Go has done is to make people think its concurrency model is at all special. “Goroutines” are green threads and a “channel” is just a thread-safe queue, which Zig has in its stdlib <a href="https://ziglang.org/documentation/master/std/#std.Io.Queue" rel="nofollow">https://ziglang.org/documentation/master/std/#std.Io.Queue</a>
A channel is not just a thread-safe queue. It's a thread-safe queue that can be used in a select call. Select is the distinguishing feature, not the queuing. I don't know enough Zig to know whether you can write a bit of code that says "<i>either</i> pull from this queue <i>or</i> that queue when they are ready"; if so, then yes they are an adequate replacement, if not, no they are not.<p>Of course even if that exact queue is not itself selectable, you can still implement a Go channel with select capabilities in Zig. I'm sure one exists somewhere already. Go doesn't get access to any magic CPU opcodes that nobody else does. And languages (or libraries in languages where that is possible) can implement more capable "select" variants than Go ships with that can select on more types of things (although not necessarily for "free", depending on exactly what is involved). But it is more than a queue, which is also why Go channel operations are a bit to the expensive side, they're implementing more functionality than a simple queue.
> I don't know enough Zig to know whether you can write a bit of code that says "either pull from this queue or that queue when they are ready"; if so, then yes they are an adequate replacement, if not, no they are not.<p>Thanks for giving me a reason to peek into how Zig does things now.<p>Zig has a generic select function[1] that works with futures. As is common, Blub's language feature is Zig's comptime function. Then the io implementation has a select function[2] that "Blocks until one of the futures from the list has a result ready, such that awaiting it will not block. Returns that index." and the generic select switches on that and returns the result. Details unclear tho.<p>[1] <a href="https://ziglang.org/documentation/master/std/#std.Io.select" rel="nofollow">https://ziglang.org/documentation/master/std/#std.Io.select</a><p>[2] <a href="https://ziglang.org/documentation/master/std/#std.Io.VTable" rel="nofollow">https://ziglang.org/documentation/master/std/#std.Io.VTable</a>
Getting a simple future from multiple queues and then waiting for the first one is not a match for Go channel semantics. If you do a select on three channels, you will receive a result from one of them, but you don't get any future claim on the other two channels. Other goroutines could pick them up. And if another goroutine does get something from those channels, that is a guaranteed one-time communication and the original goroutine now can not get access to that value; the future does not "resolve".<p>Channel semantics don't match futures semantics. As the name implies, channels are streams, futures are a single future value that may or may not have resolved yet.<p>Again, I'm sure nothing stops Zig from implementing Go channels in half-a-dozen different ways, but it's definitely not as easy as "oh just wrap a future around the .get of a threaded queue".<p>By a similar argument it should be observed that channels don't naively implement futures either. It's fairly easy to make a future out of a channel and a couple of simple methods; I think I see about 1 library a month going by that "implements futures" in Go. But it's something that has to be done because channels aren't futures and futures aren't channels.<p>(Note that I'm not making any arguments about whether one or the other is <i>better</i>. I think such arguments are actually quite difficult because while both are quite different in practice, they also both fairly fully cover the solution space and it isn't clear to me there's globally an advantage to one or the other. But they are certainly <i>different</i>.)
> channels aren't futures and futures aren't channels.<p>In my mind a queue.getOne ~= a <- on a Go channel. Idk how you wrap the getOne call in a Future to hand it to Zig's select but that seems like it would be a straightforward pattern once this is all done.<p>I really do appreciate you being strict about the semantics. Tbh the biggest thing I feel fuzzy on in all this is how go/zig actually go about finding the first completed future in a select, but other than that am I missing something?<p><a href="https://ziglang.org/documentation/master/std/#std.Io.Queue.getOne" rel="nofollow">https://ziglang.org/documentation/master/std/#std.Io.Queue.g...</a>
Maybe I'm missing something, but how do you get a `Future` for receiving from a channel?<p>Even better, how would I write my own `Future` in a way that supports this `select` and is compatible with any reasonable `Io` implementation?
If we're just arguing about the true nature of Scotsmen, isn't "select a channel" merely a convenience around awaiting a condition?
This is not a "true Scotsman" argument. It's the distinctive characteristic of Go channels. Threaded queues where you can call ".get()" from another thread, but that operation is blocking and you can't try any other queues, then you can't write:<p><pre><code> select {
case result := <-resultChan:
// whatever
case <-cxt.Done():
// our context either timed out or was cancelled
}
</code></pre>
or any more elaborate structure.<p>Or, to put it a different way, when someone says "I implement Go channels in X Language" I don't look for whether they have a threaded queue but whether they have a select equivalent. Odds are that there's already a dozen "threaded queues" in X Language anyhow, but select is less common.<p>Again note the difference between the word "distinctive" and "unique". No individual feature of Go is unique, of course, because again, Go does not have special unique access to Go CPU opcodes that no one else can use. It's the more defining characteristic compared to the more mundane and normal threaded queue.<p>Of course you can implement this a number of ways. It is not equivalent to a naive condition wait, but probably with enough work you could implement them more or less with a condition, possibly with some additional compiler assistance to make it easier to use, since you'd need to be combining several together in some manner.
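One common way to build that select without compiler help is to fan every source into a single tagged notification queue. A rough Python sketch (and, as noted above, not a full match for Go's semantics: there is no send-side select, and a losing source has already been consumed by its pump thread):

```python
import queue
import threading

results = queue.Queue()

def pump(tag, src):
    # Forward one item from a source queue, labeled with its origin.
    results.put((tag, src.get()))

a, b = queue.Queue(), queue.Queue()
threading.Thread(target=pump, args=("a", a), daemon=True).start()
threading.Thread(target=pump, args=("b", b), daemon=True).start()

b.put("from b")              # only source b ever produces anything
tag, value = results.get()   # "select": block until the first ready source
assert (tag, value) == ("b", "from b")
```

This captures the "wait on any of several sources" shape while illustrating the semantic gap: a real Go select leaves the unchosen channels untouched, whereas the pump here has irrevocably taken the item.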
It's more akin to awaiting *any* condition from a list.
What other mainstream languages have pre-emptive green threads without function coloring? I can only think of Erlang.
I'm told modern Java (Loom?) does. But I think that might be an exhaustive list, sadly.
Maybe not mainstream, but Racket.
It was special. CSP wasn't anywhere near the common vocabulary back in 2009. Channels provide a different way of handling synchronization.<p>Everything is "just another thing" if you ignore the advantage of abstraction.
Is there any way to implement structured concurrency on top of the std.Io primitive?
<p><pre><code> var group: Io.Group = .init;
defer group.cancel(io);
</code></pre>
If you see this pattern, you are doing structured concurrency.<p>Same thing with:<p><pre><code> var future = io.async(foo, .{});
defer future.cancel(io);</code></pre>
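The same spawn-then-defer-cancel discipline can be sketched in asyncio (Python 3.11's `asyncio.TaskGroup` packages this up, but the underlying shape is just try/finally):

```python
import asyncio

async def worker(n):
    await asyncio.sleep(0)
    return n * 2

async def main():
    # Spawn inside a scope...
    tasks = [asyncio.create_task(worker(n)) for n in (1, 2)]
    try:
        return sum(await asyncio.gather(*tasks))
    finally:
        for t in tasks:   # ...and guarantee nothing outlives it: the "defer cancel"
            t.cancel()    # no-op for tasks that already finished

total = asyncio.run(main())
assert total == 6
```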
Interesting to see Zig tackle async. The io_uring-first approach makes sense for modern systems, but the challenge is always making async ergonomic without sacrificing Zig's explicit control philosophy. Curious how colored functions will play out in practice.
It looks like a promising idea, though I'm a bit skeptical that they can actually make it work transparently with other executors (stackless coroutines, for example), and it probably won't work with code that uses FFI anyway.
I'm excited to see where this goes. I recently did some io_uring work in zig and it was a pain to get right.<p>Although, it does seem like dependency injection is becoming a popular trend in zig, first with Allocator and now with Io. I wonder if a dependency injection framework within the std could reduce the amount of boilerplate all of our functions will now require. Every struct or bare fn now needs (2) fields/parameters by default.
> Every struct or bare fn now needs (2) fields/parameters by default.<p>Storing interfaces as fields in structs is becoming a bit of an anti-pattern in Zig. There are still use cases for it, but you should think twice about it being your go-to strategy. There's been a recent shift in the standard library toward "unmanaged" containers, which don't store a copy of the Allocator interface, and instead Allocators are passed to any member function that allocates.<p>Previously, one would write:<p><pre><code> var list: std.ArrayList(u32) = .init(allocator);
defer list.deinit();
for (0..count) |i| {
    try list.append(@intCast(i));
}
</code></pre>
Now, it's:<p><pre><code> var list: std.ArrayList(u32) = .empty;
defer list.deinit(allocator);
for (0..count) |i| {
    try list.append(allocator, @intCast(i));
}
</code></pre>
Or better yet:<p><pre><code> var list: std.ArrayList(u32) = .empty;
defer list.deinit(allocator);
try list.ensureUnusedCapacity(allocator, count); // Allocate up front
for (0..count) |i| {
    list.appendAssumeCapacity(@intCast(i)); // No try or allocator necessary here
}</code></pre>
I think a good compromise between a DI framework and having to pass everything individually would be some kind of Context object. It could be created to hold an Allocator, IO implementation, and maybe a Diagnostics struct since Zig doesn't like attaching additional information to errors. Then the whole Context struct or parts of it could be passed around as needed.
Yes, and it's good that way.<p>Please, anything but a dependency injection framework. All parameters and dependencies should be explicit.
I think and hope that they don’t do that. As far as I remember their mantra was "no magic, you can see everything which is happening". They wanted to be a simple and obvious language.
That's fair, but the same argument can be made for Go's verbose error handling. In that case we could argue that `try` is magical, although I don't think anyone would want to take that away.
This seems a lot like what the Scala libraries ZIO or Kyo are doing for concurrency, just without the functional effect part.
Love it, async code is a major pita in most languages.
This is a bad explanation because it doesn't explain how the concurrency actually works. Is it based on stacks? Is there a heavy runtime? Is it stackless and everything is compiled twice?<p>IMO every low-level language's async thing is terrible and half-baked, and I hate that this sort of rushed job is now considered de rigueur.<p>(IMO we need a language that makes the call stack just another explicit data structure, like assembly, and has linearity, "existential lifetimes", and locations that change type over the control flow, to approach the question. No language is very close.)
> Languages that don't make a syntactical distinction (such as Haskell) essentially solve the problem by making everything asynchronous<p>What the heck did I just read. I can only guess they confused Haskell for OCaml or something; the former is notorious for requiring that all I/O is represented as values of some type encoding the full I/O computation. There's still coloring since you can't hide it, only promote it to a more general color.<p>Plus, isn't Go the go-to example of this model nowadays?
I like Zig and I like their approach in this case.<p>From the article:<p><pre><code> std.Io.Threaded - based on a thread pool.
-fno-single-threaded - supports concurrency and cancellation.
-fsingle-threaded - does not support concurrency or cancellation.
std.Io.Evented - work-in-progress [...]
</code></pre>
Should `std.Io.Threaded` not be split into `std.Io.Threaded` and `std.Io.Sequential` instead? Single threaded is another word for "not threaded", or am I wrong here?
I like the look of this direction. I am not a fan of the `async` keyword that has become so popular in some languages that then pollutes the codebase.
In JavaScript, I love the `async` keyword as it's a good indicator that something goes over the wire.
Async always confused me as to when a function would actually create a new thread or not.
Async usually ends up being a coloring function that knows no bounds once it is used.
I’ve never really understood the issue with this. I find it quite useful to know what functions may do something async vs which ones are guaranteed to run without stopping.<p>In my current job, I mostly write (non-async) python, and I find it to be a performance footgun that you cannot trivially tell when a method call will trigger I/O, which makes it incredibly easy for our devs to end up with N+1-style queries without realizing it.<p>With async/await, devs are always forced into awareness of where these operations do and don’t occur, and are much more likely to manage them effectively.<p>FWIW: The zig approach also seems great here, as the explicit Io function argument seems likely to force a similar acknowledgement from the developer. And without introducing new syntax at that! Am excited to see how well it works in practice.
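The N+1 footgun described here is easy to reproduce in a few lines; `query` below is a fake stand-in for a database round trip, and the point is that the call site gives no hint that I/O happens:

```python
queries = 0

def query(sql):
    # Stand-in for a database round trip; just counts how often we're called.
    global queries
    queries += 1
    return {"name": "item"}

class LazyRow:
    def __init__(self, id):
        self.id = id
    @property
    def name(self):   # looks like a cheap field read at the call site...
        return query(f"SELECT name FROM t WHERE id={self.id}")["name"]  # ...but hits the DB

rows = [LazyRow(i) for i in range(10)]
names = [r.name for r in rows]   # 10 hidden queries, one per row
assert queries == 10
```

With async/await (or an explicit io argument), the `r.name` access could not silently do this: each query site would have to be marked, making the N+1 pattern visible in the code.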
In my (Rust-colored) opinion, the async keyword has two main problems:<p>1) It tracks code property which is usually omitted in sync code (i.e. most languages do not mark functions with "does IO"). Why IO is more important than "may panic", "uses bounded stack", "may perform allocations", etc.?<p>2) It implements an ad-hoc problem-specific effect system with various warts. And working around those warts requires re-implementation of half of the language.
> Why IO is more important than "may panic", "uses bounded stack", "may perform allocations", etc.?<p>Rust could use these markers as well.
Is this Django? I could maybe see that argument there. Some frameworks and ORMs can muddy that distinction. But for most of the code I've written, it's really clear whether something will lead to I/O or not.
I've watched many changes over time where a non-async function uses an async call, then the function eventually becomes marked as async. Once the majority of functions get marked as async, what was the point of that boilerplate?
Pro tip: use postfix keyword notation.<p>Eg.<p>doSomethingAsync().defer<p>This removes stupid parentheses because of precedence rules.<p>Biggest issue with async/await in other languages.