I feel like it's worthless to keep up with Zig until they reach 1.0.<p>That thing, right here, is probably going to be rewritten 5 times and whatnot.<p>If you are actively using Zig (for some reason?), I guess it's great news, but for the Grand Majority of the devs in here, it's like an announcement that it's raining in Kuldīga...<p>So m'yeah. I was following Zig for a while, but I just don't think I am going to see a 1.0 release in my lifetime.
IME Zig's breaking changes are quite manageable for a lot of application types, since most of the breakage these days happens in the stdlib and not in the language. And if you just want to read and write files, the high-level file-io interfaces are nearly identical; they just moved to a different namespace and now require a std.Io pointer to be passed in.<p>And tbh, I take a 'living' language any day over a language that's ossified because of strict backward compatibility requirements. When updating a 3rd-party dependency to a new major version it's also expected that the code needs to be fixed (except in Zig those breaking changes are in the minor versions, but for 0.x that's also expected).<p>I actually hope that even after 1.x, Zig will have a strategy to keep the stdlib lean by aggressively removing deprecated interfaces (maybe via separate stdlib interface versions, e.g. `const std = @import("std/v1");`; those versions could be slim compatibility wrappers around a single core stdlib implementation).
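To make that concrete, a minimal sketch of what such a versioned shim could look like (entirely hypothetical: `std/v1` is not a real import path today, and `openFileRead` is a made-up wrapper):

```zig
// Hypothetical std/v1.zig: a frozen v1 surface that forwards to the
// current core stdlib, so old code keeps compiling unchanged.
const core = @import("std");

// v1 keeps the io-less signature; a newer stdlib version might instead
// require a std.Io to be threaded through explicitly.
pub fn openFileRead(path: []const u8) !core.fs.File {
    return core.fs.cwd().openFile(path, .{});
}
```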
> I take a 'living' language any day over a language that's ossified because of strict backward compatibility requirements<p>Maybe you would, but >95% of serious projects wouldn't. The typical lifetime of a codebase intended for a lasting application is over 15 or 20 years (in industrial control or aerospace, where low-level languages are commonly used, codebases typically last for over 30 years), and while such changes are manageable early on, they become less so over time.<p>You say "strict" as if it were out of some kind of stubborn principle, when in fact backward compatibility is one of the things people who write "serious" software want most. Backward compatibility is so popular that at some point it's hard to find any feature that is in high enough demand to justify breaking it. Even in established languages there's always a group of people who want something badly enough that they don't mind breaking compatibility for it, but they're almost always a rather small minority. Furthermore, a good record of preserving compatibility in the past makes a language more attractive even for greenfield projects written by people who care about backward compatibility, who, in "serious" software, make up the majority. When you pick a language for such a project, the expectation of how the language will evolve over the next 20 years is a major concern on day one (a startup might not care, but most such software is not written by startups).
> The typical lifetime of a codebase intended for a lasting application is over 15 or 20 years (in industrial control or aerospace).<p>Either those applications are actively maintained, or they aren't. Part of the active maintenance is to decide whether to upgrade to a new compiler toolchain version (when in doubt, "never change a running system"); old compiler toolchains won't suddenly stop working.<p>FWIW, trying to build a 20 or 30 year old C or C++ application in a modern compiler also isn't exactly trivial, depending on the complexity of the code base, especially when there's UB lurking in the code or the code depends on specific compiler bugs being present. Changing <i>anything</i> in a project setup always comes with risks attached.
> Part of the active maintenance is to decide whether to upgrade to a new compiler toolchain version<p>Of course, but you want to make that as easy as you can. Compatibility is never binary (which is why I hate semantic versioning), but you should strive for the greatest compatibility for the greatest portion of users.<p>> FWIW, trying to build a 20 or 30 year old C or C++ application in a modern compiler also isn't exactly trivial<p>I know that well (especially for C++; in C the situation is somewhat different), and the backward compatibility of C++ compilers leaves much to be desired.
You could pin versions, and probably should. However, willful disregard of prior interfaces encourages developers' code to follow suit.<p>It's not like Clojure or Common Lisp, where decades-old software still runs, mostly unmodified, the same today, any changes mainly being due to code written for a different environment or even compiler implementation.
This is largely because they take breaking user code way more seriously. A lot of code written in these languages seems to have a similar timelessness too. Software can be “done”.
I would also add that Rust manages this very well. Editions let you do breaking changes without actually breaking any code, since any package (crate) needs to specify the edition it uses. So when in 30 years you're writing code in Rust 2055, you can still import a crate that hasn't been updated since 2015 :)
Sure, but considering that Zig is a modern C alternative, one should not and cannot afford to forget that C has been successful <i>also</i> because it stayed small and consistent for so long.<p>The entire C language, C ABI, and standard library specs, combined, are probably fewer words than the Promise spec in ECMAScript 262.<p>A small language that stays consistent and predictable lets developers evolve best practices, patterns, design choices, and tooling around it. C has achieved all that.<p>No evolving language has anywhere near that freedom.<p>I don't want an ever-evolving Zig either, for what it's worth. And I like Zig.<p>I don't think any developer can resolve all of the design tensions a programming language has; you can't make it ergonomic in every direction at once.<p>But a small, modern, stable C would still be welcome, besides Odin.
I really love Zig the language, but I'm distancing myself from the stdlib. I dislike the breakage, but I also started questioning the quality of the code recently. I was working on an alternative I/O framework for Zig over the last few months, and I kept finding problems that eventually led to me trying not to depend on the stdlib at all. Even in the code announced here, the context-switching assembly is wrong: it doesn't mark all the necessary registers as clobbered. I mentioned this several times to the guys. The fact that it's still unchanged just shows me a lack of testing.
It sounds like Zig would benefit from someone like you on the inside, as a member or active contributor, reviewing and participating in the development of the standard library.<p>Zig is one of my favorite new languages, I really like the cross-compiler too. I'm not a regular user yet but I'm hopeful for its long-term success as a language and ecosystem. It's still early days, beta/dev level instability is expected, and even fundamental changes in design. I think community input and feedback can be particularly valuable at this stage.
I'm confused. The register clobbering is an issue in the compiler, not in the stdlib implementation, right? Or are you saying the stdlib has inline assembly in these IO implementations somewhere? I couldn't find it and I can't think why you'd need it.<p>If it's a compiler-frontend-to-LLVM interaction bug, I think you are commenting in the wrong spot; it should go in a separate issue, not in the PR about the io_uring backend. Also, interaction bugs where a compiler frontend triggers a bug in LLVM aren't uncommon, since Rust was the first major frontend other than clang to exercise those code paths. Indeed the (your?) fix in LLVM for this issue mentions Rust is impacted too.<p>I agree with the higher-level points about stability, and I don't like Zig not being a safe language in this day and age, but I think your criticism about quality is a bit harsh if the source of this complaint is that they haven't put in a workaround for an LLVM bug.
There is the one issue which I fixed in LLVM, but it should be fixed in Zig as well, because the clobber list in Zig is typed and gives you the false impression that adding x30 there is valid. But there is also another issue: x18 is a general-purpose register outside of Darwin and Windows and needs to be marked as clobbered on other systems. And yes, look at the linked changes; the stdlib has inline assembly for the context switching.
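For readers following along, a rough sketch of the shape of the problem (the function and asm body here are made up for illustration; only the clobber list is the point, and the typed-clobber syntax approximates current Zig master):

```zig
// Hypothetical AArch64 stack switch. Outside Darwin and Windows, x18 is
// an ordinary general-purpose register and must be declared clobbered;
// x30 (the link register) typechecks in the typed clobber list, but was
// mishandled by LLVM until the fix mentioned above.
fn switchStack(new_sp: u64) void {
    asm volatile (
        \\mov sp, %[sp]
        :
        : [sp] "r" (new_sp),
        : .{ .x18 = true, .x30 = true, .memory = true });
}
```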
To each his own, but while I can certainly understand the hesitancy of an architect to pick Zig for a project that is projected to hit 100k+ lines of code, I really think you're missing out. There <i>is</i> a business case for using Zig today.<p>True in general, but in the cloud especially, saving server resources can make a significant impact on the bottom line. There are not <i>nearly</i> enough performance engineers who understand how to take inefficient systems and make improvements to move towards theoretical maximum efficiency. When the system is written in an inefficient language like Python or Node, fundamentally <i>you have no choice</i> but to start to move the hotpath behind FFI and drop down to a systems language. At that point your choices are basically C, C++, Rust, or Zig. Of the four choices, Zig today is already the simplest to learn, with fewer footguns, easier to work with, easier to read and write, and easier to test. And you're not going to write 100k LOC of optimized hotpath code. And when you understand the cost savings involved in reducing your compute needs, sometimes by more than 90%, by getting the hotpath optimized, you understand that there is <i>very much indeed</i> a business case for learning Zig today.
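To sketch what "hotpath behind FFI" looks like in practice (a minimal, illustrative example; the function name and build line are mine, not from any particular project): Zig exports a plain C-ABI symbol, you build it as a shared library, and Python or Node calls it through the usual FFI mechanisms.

```zig
// hotpath.zig. Build as a shared library, e.g.:
//   zig build-lib hotpath.zig -dynamic -O ReleaseFast
// The exported symbol has a C ABI, so Python's ctypes (or a Node FFI
// binding) can call it directly on a raw buffer.
export fn sum_f64(ptr: [*]const f64, len: usize) f64 {
    var total: f64 = 0;
    for (ptr[0..len]) |x| total += x;
    return total;
}
```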
As a counter-argument to this: I was able to replicate the subset of Zig that I wanted using C23. And in the end I have absolute stability unless I break things to “improve”.<p>Personally, it is a huge pain to rewrite things and update dependencies because the code I am depending on is moving out from under me. I also found this to be a big problem in Rust.<p>Another huge upside is that you have access to the best of everything. As an example, I am heavily using fuzz testing, and I can very easily use honggfuzz, which is the best fuzzer according to all the research I could find, and also according to my experience so far.<p>From this perspective, it doesn't make sense to use Zig over C for professional work. If I am writing a lot of code then I don't want to rewrite it. If I am writing a very small amount of code with no dependencies, then it doesn't matter what I use, and this is the only case where I think Zig might make sense.
To add another point to this: whatever people write online isn't correct all the time. I thought Zig compiled super fast, but found that C with a good build system and well-split header/implementation files is basically instant to compile. You can use ThinLTO with a cache to get instant recompilation for release builds too.<p>Real example: I had to wait several seconds to compile and run benchmarks for a library in Zig; it recompiles instantly (<100ms) with C.<p>Zig does compile everything as a single compilation unit, and that might have some advantages, but in practice it is a hard disadvantage, and I never saw anyone pointing this out online.<p>For people like me building something from scratch, I would really recommend learning C with the Modern C book and trying to do it in C.
I was also thinking that breakage doesn't matter that much, but my opinion changed very quickly around 10k lines of code. At some point I really stopped caring about every piece and just wanted to forget about it and move on.
>with fewer footguns, easier to work with, easier to read and write, and easier to test.<p>With the exception of fewer footguns, where Rust definitely takes the cake and Zig comes in second, I'd say Zig is in last place on all of these. This really screams that you aren't aware of the C/C++ testing/tooling ecosystem.<p>I say this as a fan of Zig, by the way.
> ...in the cloud especially, saving server resources can make a significant impact on the bottom line. There are not nearly enough performance engineers who understand how to take inefficient systems and make improvements to move towards theoretical maximum efficiency.<p>That's a very good point, actually. However...<p>> with fewer footguns<p>..the Crab People[0] would <i>definitely</i> quibble with that particular claim of yours.<p>[0] <a href="https://en.wikipedia.org/wiki/Crab_People" rel="nofollow">https://en.wikipedia.org/wiki/Crab_People</a> of course.
I would quibble with all of the claims other than easier to learn.<p>I really see no advantage for Zig over Rust after you get past the first two weeks.
Coming from Go, I'm really disappointed in Rust compile times. I realize they're comparable to C++, and that you can structure your crates to minimize compile times, but I don't care. I want instant compilation.<p>Zig is trying to get me instant compilation, and I see that as a huge advantage for Zig (even past the first two weeks).<p>I'll probably stick with Rust as my "low level language" due to its safety, type system, maturity, library ecosystem, and career opportunities.<p>But I remain jealous of Zig's willingness to do extreme things to make compilation faster.
On every Go production project I worked on or near, the <i>incremental</i> compile time was slower than C++ and Rust.<p>A full build was definitely much faster, but that's not as useful, especially when using a build system with shared networked caching (Bazel, for example).<p>Yes, those projects were a bloated mess, as they always seem to be.
The key with C++ is to keep coding while compiling. Otherwise... yeah, you're blocked.
Eh, I'd say that Rust has a different set of footguns. You're correct that you won't run into use-after-free footguns, but Rust doesn't protect you from memory leaks, unsafe code is still unsafe, and the borrow checker and Rust's language complexity are their own kind of footguns.<p>But I digress. I was thinking of Zig in comparison to C when I wrote that. I don't have a problem conceding that point, but I still believe the overall argument is correct to point to Zig specifically in the case of writing code to optimize a hotpath behind FFI; it is much easier to get to more optimal code and cross-compilation is easier to boot (i.e. to support Darwin/AppleSilicon for dev laptops, and both Linux/x64 and Linux/arm64 for cloud servers).
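To make the cross-compilation point concrete: with Zig it's just a target flag, with no extra toolchains to install (file and output names here are illustrative):

```sh
# Same source, three targets:
zig build-lib hotpath.zig -dynamic -O ReleaseFast -target aarch64-macos
zig build-lib hotpath.zig -dynamic -O ReleaseFast -target x86_64-linux-gnu
zig build-lib hotpath.zig -dynamic -O ReleaseFast -target aarch64-linux-gnu
```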
> but Rust doesn't protect you from memory leaks<p>In theory no. In practice it really does.<p>> unsafe code is still unsafe<p>OK, but most Rust code is not unsafe, while all Zig code is unsafe.<p>> and the borrow checker and Rust's language complexity are their own kind of footguns<p>Please elaborate. They are something to learn, but I don't see the footgun. A footgun is a surprising defect that's pointed at your foot and easy to trigger (i.e. doing something wrong and your foot blows off). I can't think how the borrow checker causes that when it's the exact opposite: you can't ever create a footgun without going unsafe, because it won't even compile.<p>> but I still believe the overall argument is correct to point to Zig specifically in the case of writing code to optimize a hotpath behind FFI; it is much easier to get to more optimal code and cross-compilation is easier to boot (i.e. to support Darwin/AppleSilicon for dev laptops, and both Linux/x64 and Linux/arm64 for cloud servers).<p>I agree cross-compilation with Zig is significantly easier, but Rust isn't <i>that</i> hard, especially with the cross-rs crate making it significantly simpler. On performance, Rust is going to be better: Zig makes you choose between safety and performance, and even in unsafe mode there are various things that give Rust better codegen. For example, Zig follows the C path of manual noalias annotations, which has proven to be non-scalable and difficult to get right. Rust applies noalias to all mutable references automatically, because mutable aliasing isn't allowed in the language.
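To illustrate the difference (a made-up kernel; `saxpy` is not from any real codebase): in Zig the caller manually promises that the pointers don't alias, and a wrong promise is UB, which is exactly the scaling problem.

```zig
// The noalias annotations let the optimizer vectorize freely, but the
// compiler cannot verify them; in Rust, &mut carries the same guarantee
// implicitly and the borrow checker enforces it.
fn saxpy(n: usize, a: f32, noalias x: [*]const f32, noalias y: [*]f32) void {
    for (0..n) |i| {
        y[i] += a * x[i];
    }
}
```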
> a footgun is a surprising defect that's pointed at your foot and easy to trigger<p>Close, but not the way I think of a footgun. A footgun is code that was written in a naive way, <i>looks</i> correct, gets submitted, and you find out <i>after</i> submitting it that it was erroneous. Good design makes it easy for people to do the right thing and difficult to do the wrong thing.<p>In Rust it is <i>extremely</i> easy to hit the borrow checker, <i>including for code which is otherwise safe and which you know is safe</i>. You walk on eggshells around the borrow checker, hoping that it won't fire and shoot you in the foot and force you to rewrite. It is not a runtime footgun; it is a devtime footgun.<p>Which, to be fair, is sometimes desired: when you have a 1m+ LOC codebase, dozens of junior engineers working on it, and requirements for memory safety and low latency, that's a fair enough trade-off.<p>But in Zig, you can just call defer on a deinit function, as sketched below. Complexity is the eternal enemy, and this is just a much simpler approach. The price of that simplicity is that you need to behave like an adult, which, if the codebase (a hotpath optimization) is <1k LOC, I think is eminently reasonable.
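The pattern in question, as a minimal sketch (names illustrative):

```zig
const std = @import("std");

// Pair every acquire with a deferred release. The `defer` runs on every
// exit path out of the function, including early returns through `try`.
fn process(allocator: std.mem.Allocator) !void {
    const buf = try allocator.alloc(u8, 4096);
    defer allocator.free(buf);
    // ... fill and use buf; nothing leaks even if a later `try` fails ...
}
```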
> A footgun is code that was written in a naive way, looks correct, gets submitted, and you find out after submitting it that it was erroneous.<p>You're contradicting yourself a bit here, I think. Erroneous code generally won't compile in Rust, whereas in Zig it will happily do so. Also, Zig has plenty of footguns (e.g. forgetting to call defer on a deinit, but also misusing noalias, or having an out-of-bounds access result in memory corruption). IMHO the Zig footgun story with respect to UB is largely unchanged relative to C/C++. It's mildly better, but it's closer to C/C++ than to a safe language, and UB is a huge-ass footgun in any moderate-complexity codebase.
As an example of this: I was using Polars in Rust as a dependency in a fairly large project.<p>It has issues like panicking or segfaulting when using some data types (Arrow array types) in the wrong place.<p>It is extremely difficult to write an Arrow implementation in Rust.<p>It is much easier to do it in Zig or C (without strict aliasing).<p>I also had the same experience with glommio in Rust.<p>Also, the binary that we produce takes several minutes to compile and is above 30MB. This is an insane amount of bloat, and unfortunately I don't think there is another feasible way of doing this kind of work in Rust, because it is so hard to write proper low-level code.<p>I don't personally agree that noalias is bad. I found it is the only way to do it. It is much harder to write code with pointers that implicitly alias, as C has by default and Rust has as the only option, and you don't ever need to use noalias except in some rare places.<p>To make it clear: I mean the huge footgun in Rust is producing a ton of bloat and subpar code, because you can't write much yourself and you end up depending on too many libraries.
Not the GP, but I've noticed this: if you don't anticipate how you might need to mutate or share state in the future, you can have a "footgun" that forces large-scale code changes for relatively small feature-level changes, because of Rust's strictness. It's not a footgun in the sense that your code does something you don't expect; it's a footgun in that your maintenance burden and ability to change code are not what you expect (and it's easy to trigger). I'm sure if you are really expert with Rust, you see it coming and don't use patterns that will cause waves of changes (but if you're expert at any language you avoid its footguns).
It's possible to do memory safety analysis for Zig. I think you could pretty easily add a noalias checker on top of this:<p><a href="https://github.com/ityonemo/clr" rel="nofollow">https://github.com/ityonemo/clr</a>
> Of the four choices, Zig today is already the simplest to learn,<p>Yes, with an almost complete lack of documentation and learning materials, it is definitely the easiest language to learn.
For reference, here's where Zig's documentation lives:<p><a href="https://ziglang.org/learn/" rel="nofollow">https://ziglang.org/learn/</a><p>I remember when learning Zig, the documentation for the language itself was extensive, complete, and easily greppable due to being all on one page.<p>The standard library was a lot less intuitive, but I suspect that has more to do with the amount of churn it's still going through.<p>The build system also needs more extensive documentation in the same way that the stdlib does, but it has a guide that got me reasonably far with what came out of the box.
For what it's worth, Bun is written in Zig (<a href="https://bun.sh/" rel="nofollow">https://bun.sh/</a>). The language isn't exactly in an early stage.
We (ZML) have been back to following Zig master since std.Io was introduced. It's not that bad, tbh. Also, most changes really feel like actual improvements to the language on a day-to-day basis.
No shame in waiting for 1.0. Especially if you want to read docs rather than the code itself.
Actually, reading the code instead of the documentation is one of the nice parts of Zig.<p>It is such a readable language that I found it easier to learn the API from the source than in most languages.<p>Zig has this on its side. Reading the unit tests directly in the code gives, most of the time, a good example too.
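For those who haven't seen it: Zig keeps unit tests inline, right next to the implementation, which is why reading them works as documentation (this snippet is illustrative, not from the stdlib):

```zig
const std = @import("std");

fn double(x: i32) i32 {
    return x * 2;
}

// `zig test` runs blocks like this; in the stdlib they sit beside the
// functions they exercise, doubling as usage examples.
test "double doubles its argument" {
    try std.testing.expectEqual(@as(i32, 42), double(21));
}
```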
You're assuming that 1.0 will bring about stability. For all we know version 1.0 could make way for version 2.0 soon after.<p>Though perhaps the Zig developers have promised this will not happen.
I wouldn't have expected graphic sex slang to be acceptable as an HN user name.<p>This would translate as ~"eats pussy", where "broûter" is a verb reserved for animals feeding on grass, implying a hefty bush.
> but for the Grand Majority of the devs in here, it's like an announcement that it's raining in Kuldīga...<p>Lol, I'll borrow this.
Please stop posting 0-information-content complaints.
I'm so sorry to hear about your diagnosis whatever it is :-P.
People might be triggered by the word "worthless", but I totally get your point.<p>I hear great things about the language, but I only have so many hours in the day and so many usable neurons to spend in that day. Someday it would be nice to play with it.<p>The easiest way to embrace any new language is to have a compelling reason to use it. I've not hit that point yet.
I mean, you're right that many of us still can't use the language yet, but I think we can still applaud progress towards major features even while it's less than stable.<p>Kudos, Zig contributors!
Pretty typical jaded HN comment there, chief. "This language's churn is more than I prefer -- why would anyone use it?" If you're not interested, just downvote and move on. Wondering out loud why anyone would actively use it ("for some reasons?") is a lame waste of bytes.
That comment you're complaining about is a useful signal for someone like me who only watches Zig from the far periphery. I feel like I'm getting good mileage out of it, just like I do from other, different ones. I'm glad it's in the mix.
An AI will be able to handle updating your code for 95% of your breaking changes
No it won't.<p>LLMs are good at dealing with things they've seen before, not at novel things.<p>When novel things arise, you will either have to burn a shed ton of tokens on "reasoning", hand-hold them (so you're doing advanced find-and-replace in this example, where you have to be incredibly precise and detailed about your language, to the point it might be quicker to just make the changes yourself), or wait until the next trained model that has seen the new pattern emerges. Or, quite often, all of the above.
Apologies, but your information is either outdated from lack of experience with the latest frontier models, or you don't realize that 99.9% of the work you do is not novel in any capacity. Have you only used Copilot, or something? Because that's what it sounds like, given that the performance of the latest models (Opus 4.6 max-effort, gpt-5.3-Codex) is nothing short of astonishing.<p>Real-world example: Claude isn't familiar with the latest Zig, so I had it write a language guide for 0.15.2 (here: <a href="https://gist.github.com/pmarreck/44d95e869036027f9edf332ce9a94583" rel="nofollow">https://gist.github.com/pmarreck/44d95e869036027f9edf332ce9a...</a>) which points out all the differences, and that's been extremely helpful: I don't even have to touch a line of code to do the updates myself.<p>On top of that, for any Zig dependency I pull in which is written against an earlier version, I have forked it and applied these updates correctly (or it has, under my guidance, really), 100% of the time.<p>On the off chance that guide is not in its context, it has seen the expected warning or error message, googled it, and made the correct fix, 100% of the time. Which is exactly what a human would do.<p>Let's play the falsifiability game: find me a real-world example of an upgrade to a newer API from the just-previous API that a modern LLM will fail to do correctly. Your choice of beer or coffee awaits you if you provide a link to it.
> so I had it write a language guide for 0.15.2<p>Tbh, while impressive that it appears to work, that guide looks very tailored to the Zig stdlib subset used in your projects and also looks like a lot more work than just fixing the errors manually ;) For a large code base which would amortise the cost of this guide I still wouldn't trust the automatic update without carefully reviewing each change.
I've been building a project in Zig 0.16 with Claude as a learning experiment. It's a fairly non-trivial project (a BitTorrent-compliant p2p downloader for model weights on top of Hugging Face Xet); whenever it doesn't know the syntax or makes errors, it literally reads the standard library code to understand and fix it. The project works too!
Just have to wait a few months until a new model with updated pretrained knowledge comes out.
Eh, I've had good luck porting codebases to newer versions of Bevy by pointing CC at the migration guide, and that is harder to test than a language migration (since much of the changed behaviour would have been at runtime).<p>I still wouldn't want to deal with that much churn in my language, but I fully believe an agent could handle the majority, if not all, of the migration between versions.