49 comments

  • Fiveplus1 day ago
    I've been following C3 for some time now, and I really appreciate the discipline in the design philosophy here.

    Neither does it force a new memory model on you, nor does it try to be C++. The killer feature for me is the full ABI compatibility. The fact that I no longer have to write bindings and can just mix C3 files into my existing C build system reduces the friction to near zero.

    Kudos to the maintainer for sticking to the evolution, not revolution vision. If you are looking for a weekend language to learn that doesn't require resetting your brain but feels more modern than C99, I highly recommend giving this a shot. Great work by the team.
    • reactordev1 day ago
      But can I still write a library in C3 and export the symbols to use in bindings?

      The only thing stopping me from just going full C the rest of my career is cstrings and dangling pointers to raw memory that isn’t cleaned up when the process ends.
      • sureglymop1 day ago
        Maybe I misunderstand but if the process ends its entire virtual address space is gone no? Did you mean subprocess or something different?
        • reactordev1 day ago
          On some OS’s
          • loeg1 day ago
            If it isn't cleaned up by process exit, it's not really a process, is it? Just another co-routine running in the bare metal kernel or whatever.
          • On which one, specifically, does ending a process not clean up the memory?
            • reactordev23 hours ago
              Any flat-memory RTOS. Not everything is *nix.

              For example microcontrollers or aerospace systems.
              • flohofwoe16 hours ago
                Tbh on such a bare bones system I would use my own trivial arena bump allocator and only do a single malloc at startup and a single free before shutdown (if at all, because why even use the C stdlib on embedded systems instead of talking directly to the OS or hardware)
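                A minimal sketch of that kind of bump allocator (the pool size and names are illustrative, not from any particular RTOS):

                  #include <stddef.h>
                  #include <stdint.h>

                  /* One statically sized pool, handed out by bumping an offset.
                     "Freeing" everything is just resetting the offset. */
                  static uint8_t pool[64 * 1024];   /* illustrative size */
                  static size_t  used;

                  static void *arena_alloc(size_t n)
                  {
                      n = (n + 7u) & ~(size_t)7u;   /* round up to 8-byte alignment */
                      if (used + n > sizeof pool)
                          return NULL;              /* out of arena */
                      void *p = &pool[used];
                      used += n;
                      return p;
                  }

                  static void arena_reset(void)
                  {
                      used = 0;                     /* the single "free" at shutdown */
                  }

                  int main(void)
                  {
                      int *xs = arena_alloc(16 * sizeof *xs);   /* use the pool */
                      (void)xs;
                      arena_reset();
                      return 0;
                  }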
              • CyberDildonics23 hours ago
                  Can you link to one that has individual virtual memory processes where the memory isn't freed? It sounds like what you're talking about is just leaking memory, and processes have nothing to do with it.
                • reactordev14 hours ago
                  Virtual memory requires pages and this sucker doesn’t have them. Only a heap that you can use with heap_x.c.

                  Everything is manual.

                  I get you people are trying to be cheeky and point out all modern OS’s don’t have this problem, but C runs on a crap ton of other systems. Some of these “OS” are really nothing more than a coroutine from pid 0.

                  I have 30 years experience in this field.
                  • sph14 hours ago
                    Yeah, I think I get your problem. I am prototyping a message-passing actor platform running in a flat address space, and virtual memory is the only way I can do cleanup after a process ends (by keeping track of which pages were allocated to a process and freeing them when it terminates).

                    Without virtual memory, I would either need to force the use of a garbage collector (which is an interesting challenge in itself to design a GC for a flat address space full of stackless coroutines), or require languages with much stricter memory semantics such as Rust so I can be sure everything is released at the end (though all languages are designed for isolated virtual memory and not even Rust might help without serious re-engineering).

                    Do you keep notes of these types of platforms you’re working on? Sounds fun.
                    • reactordev12 hours ago
                      Not anything I can share. I’m trying to modernize these systems but man oh man was the early 80s tech brutal. Rust is something we looked into heavily and are trying to champion but bureaucracy prevents us. Flight sims have to integrate with it in order to read/write data and it’s 1000x worse than SimConnect from MSFS.

                      The good news is that this work is dying out. There isn’t a need to modernize old war birds anymore.
              • mort9613 hours ago
                RTOSes I'm aware of call them tasks rather than processes, specifically because they don't provide the sort of isolation that a "proper" OS does.
              • avadodin21 hours ago
                Why is something running on an RTOS even able to leak memory? If your design is going to be dirty, you've got to account for that. In 30 years, I've never seen a memory leak in the wild. Set up a memory pool, memory limits, garbage collectors or just switch to an OS/language that will better handle that for you. Rust is favored among C++ users, but even Python could be a better fit for your use case.
                • reactordev14 hours ago
                  Python is not an option in this environment. Correct your tone.
                  • avadodin4 hours ago
                    I don't see anything wrong with my tone. I could have been snarky about it.

                    I provided the C solutions as well, but an interpreter written in C could at least allocate objects and threads within the interpreter context and not leak memory, allowing you to restart it along with any services within, which is apparently better than whatever framework people sharing this sentiment are using.

                    I'm genuinely curious. What kind of mission-critical embedded real-time design dynamically(!) allocates objects and threads and then loses track of them?

                    PS: On topic, I really like the decisions made in C3
                • irishcoffee18 hours ago
                  I think the short answer is that it is very hard, time-consuming, and expensive to develop and prove out formal verification build/test toolchains.

                  I haven't looked at C3 yet, but I imagine it can't be used in a formally verified toolchain either, unless the toolchain can compile the C3 bits somehow.
      • paulddraper1 day ago
        > But can I still write a library in C3 and export the symbols to use in bindings?

        Yes, it has the same ABI.
      • d-lisp1 day ago
        > dangling pointers to raw memory that [are not] cleaned

        How do you feel about building special constructs to automatically handle these?
        • reactordev1 day ago
          I totally can but my gripe is about not wanting to.
          • brabel17 hours ago
            C3 has a @pool annotation that makes a block use an arena to allocate; that should help, since all memory is freed upon exiting the block.
    • astrange1 day ago
      Is full ABI compatibility important? I'm having a hard time seeing why.

      I mean… C isn't even an unsafe language. It's just that C implementations and ABIs are unsafe. Some fat pointers, less insanely unsafe varargs implementations, UBSan on by default, MTE… soon you're doing pretty well! (Exceptions apply.)
      • flohofwoe16 hours ago
        How would you integrate C3 with other programming languages (not just C), or even talk to operating systems, if you don't implement a common ABI?

        And the various system ABIs supported by C compilers are the de facto standards for that (contrary to popular belief there is no such thing as a "C ABI" - those ABIs are commonly defined by OS and CPU vendors; C compilers need to implement those ABIs just like any other compiler toolchain if they want to talk to operating system interfaces or call into libraries compiled with different compilers from different languages).
  • Fraterkes1 day ago
    Dumb question about contracts: I was reading the docs (https://c3-lang.org/language-common/contracts/) and this jumped out:

    "Contracts are optional pre- and post-condition checks that the compiler may use for static analysis, runtime checks and optimization. Note that conforming C3 compilers are not obliged to use pre- and post-conditions at all.

    However, violating either pre- or post-conditions is unspecified behaviour, and a compiler may optimize code as if they are always true – even if a potential bug may cause them to be violated.

    In safe mode, pre- and post-conditions are checked using runtime asserts."

    So I'm probably missing something, but it reads to me like you're adding checks to your code, except there's no guarantee that they will run and whether it's at compile or runtime. And sometimes instead of catching a mistake, these checks will instead silently *introduce* undefined behaviour into your program. Isn't that kinda bad? How are you supposed to use this stuff reliably?

    (otherwise C3 seems really cool!)
    • tialaramex1 day ago
      Contracts are a way to express invariants: "This shall always be true."

      There are three main things you could do with these invariants; the exact details of how to do them, and whether people should be allowed to specify which of these things to do, and if so whether they can pick only for a whole program, per-file, per-function, or whatever, is separate.

      1. Ignore the invariants. You wrote them down, a human can read them, but the machine doesn't care. You might just as well use comments or annotate the documentation, and indeed some people do.

      2. Check the invariants. If the invariant *wasn't* true then something went wrong and we might tell somebody about that.

      3. Assume these invariants are always true. Therefore the optimiser may use them to emit machine code which is smaller or faster, but which only works if these invariants were correct.

      So for example maybe a language lets you say only that the whole program is checked, or that the whole program can be assumed true, or maybe the language lets you pick: function A's contract about pointer validity we're going to check at runtime, but for function B's contract that you must pick an odd number, we will use assumption. We did tell you about that odd number requirement, so have the optimiser emit the slightly faster machine code which doesn't work for N=0 -- because zero isn't an odd number, assumption means it's now fine to use that code.
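      A minimal C sketch of the difference between 2 and 3 (the GCC/Clang builtin is just a stand-in for whatever a contract system would emit; the function names are made up):

        #include <assert.h>

        /* 2. Check the invariant: abort loudly if n is not odd. */
        int half_checked(int n)
        {
            assert(n % 2 != 0);
            return (n - 1) / 2;
        }

        /* 3. Assume the invariant: the optimiser may exploit "n is odd"
           (e.g. simplify the division), and calling this with an even n,
           such as 0, is undefined behaviour. */
        int half_assumed(int n)
        {
            if (n % 2 == 0)
                __builtin_unreachable();   /* GCC/Clang extension */
            return (n - 1) / 2;
        }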
      • Fraterkes1 day ago
        I guess the reason I found it surprising is that I would only use 3 (i.e. risk introducing UB) for invariants that I was *very* certain were true, whereas I would mostly use 2 for invariants that I had reason to believe might not always be true. It struck me as odd that you'd use the same tool for scenarios that feel like opposites, especially when you can just switch between these modes with a compiler flag.
        • skavi16 hours ago
          It feels pretty clear to me that these Contracts should only be used for the "very certain" case. Writing code for a specific compiler flag seems very sketchy, so the programmer should assume the harshest interpretation.

          The runtime check thing just sounds like a debugging feature.
      • bryanlarsen1 day ago
        In other words, in production mode it makes your code faster and less safe; in debug mode it makes your code slower and more safe.

        That's a valid trade-off to make. But it's unexpected for a language that bills itself as "The Ergonomic, Safe and Familiar Evolution of C".

        Those pre/post-conditions are written by humans (or an LLM). Occasionally they're going to be wrong, and occasionally they're not going to be caught in testing.

        It's also unexpected for a feature that naive programmers would expect to make a program more safe.

        To be clear, this sounds like a good feature; it's more about expectations management. A good example of that done well is Rust's unsafe keyword.
        • MarsIronPI1 day ago
          > That's a valid trade-off to make. But it's unexpected for a language that bills itself as "The Ergonomic, Safe and Familiar Evolution of C".

          No, I think this is a very ergonomic feature. It fits nicely because it allows better compilers to use the constraints to optimize more confidently than equivalently-smart C compilers.
          • bryanlarsen1 day ago
            I'll give you "more ergonomic" if you'll give me "less safe".
            • fc417fc8021 day ago
              I'd argue it's no less safe than the status quo, just easier to use. The standard "assert" can be switched off. There's "__builtin_unreachable". My personal utility library has "assume" which switches between the two based on NDEBUG.

              C is a knife. Knives are sharp. If that's a problem then C is the wrong language.
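              A sketch of that kind of macro (the exact spelling is illustrative, not anyone's actual library):

                #include <assert.h>

                /* In release builds (NDEBUG defined), promise the condition to the
                   optimiser; in debug builds, check it at runtime. */
                #ifdef NDEBUG
                  #if defined(__GNUC__) || defined(__clang__)
                    #define assume(cond) ((cond) ? (void)0 : __builtin_unreachable())
                  #elif defined(_MSC_VER)
                    #define assume(cond) __assume(cond)
                  #else
                    #define assume(cond) ((void)0)   /* no-op fallback */
                  #endif
                #else
                  #define assume(cond) assert(cond)
                #endif

                int divide(int a, int b)
                {
                    assume(b != 0);   /* checked in debug, assumed in release */
                    return a / b;
                }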
              • bryanlarsen20 hours ago
                But people are looking at C3, Odin & Zig because they've determined that C is the wrong language for them; many have determined that it's too sharp. C3 has "safe" in its title, so they're expecting fewer sharp edges.

                I'm not asking for useful optimizations like constraints to go away, I'm asking for them to be properly communicated as being sharp. If you use "unsafe" incorrectly in your Rust code, you invite UB. But because of the keyword they chose, it's hardly surprising.
      • lerno1 day ago
        Maybe also worth mentioning is that some static analysis is done using these contracts as well. With more coming.
      • atombender15 hours ago
        Is C3 using a different terminology than standard design by contract?

        Design by contract (as implemented by Eiffel, Ada, etc.) divides the set of conditions into three: preconditions, postconditions, and invariants. Pre- and postconditions are not invariants but predicate checks on input and output parameters.

        Invariants are conditions expressed on types, and which must be checked on construction and modification. E.g. for a "time range" struct with start/end dates, the invariant should be that the start must precede the end.
      • ajuc1 day ago
        So the compiler could have debug mode where it checks the invariants and release mode where it assumes they are true and optimizes around that without checking?
        • esrauch1 day ago
          Yes, and that same pattern already does exist in C and C++. Asserts that are checked in debug builds but presumed true for optimization in release builds.
          • Not unless you write your own assert macro using C23 unreachable(), GNU C __builtin_unreachable(), MSVC __assume(0), or the like. The standard one is defined[1] to either explicitly check or completely ignore its argument.

            [1] https://port70.net/~nsz/c/c11/n1570.html#7.2
            • esrauch1 day ago
              Yeah, I meant it's common for projects to make their own 'assume' macros.

              In Rust you can wrap core::hint::assert_unchecked similarly.
      • chowells1 day ago
        You've described three different features with three different sets of semantics. Which set of semantics is honored? Unknown!

        This is not software engineering. This is an appeal to faith. Software engineering requires precise semantics, not whatever the compiler feels like doing. You can't even declare that this feature has no semantics, because it actually introduces a vector for UB. This is the sort of "feature" that should not be in any language selling itself as an improved C. It would be far better to reduce the scope to the point where the feature can have precise semantics.
        • tialaramex1 day ago
          > Which set of semantics is honored?

          Typically it's configurable. For example, C++26 seems to be intending that you'll pick a compiler flag to say whether you want its do-nothing semantics, its "tell me about the problem and press on" semantics, or just exit immediately and report that. They're not intending (in the standard at least) to have the assume semantic, because that is, as you'd expect, controversial. Likewise, more fine-grained configuration they're hoping will be taken up as a vendor extension.

          My understanding is that C3 will likely offer the coarse configuration as part of their ordinary fast versus safe settings. Do I think that's a good idea? No, but that's definitely not "Unknown".
          • fc417fc80223 hours ago
            Any idea how the situation is handled where third party code was written to expect a certain semantic? Is this just one more rough edge to watch out for when integrating something?
        • dnautics1 day ago
          Not enforced for any given implementation is hardly "unknown". Presumably the tin comes with a label saying what's inside.
    • - "Note that conforming C3 compilers are not obliged to use pre- and post-conditions at all." means a compiler doesn't have to use the conditions to select how the code will be compiled, or whether there's a compile-time error.

      - "However, violating either pre- or post-conditions is unspecified behaviour, and a compiler may optimize code as if they are always true – even if a potential bug may cause them to be violated." basically, it just states the obvious: the compiler assumes a true condition is what the code is meant to address. It won't guess how to compile the code when the condition is false.

      - "In safe mode, pre- and post-conditions are checked using runtime asserts." means that there's a 'mode' to activate the conditions during run-time analysis, which implies there's a mode to turn it off. This allows the conditions to stay in the source code without affecting runtime performance when compiled for production/release.
    • riazrizvi1 day ago
      It’s giving you an expression capability so that you can state your intent, in a standardized way, that other tooling can build off. But it’s recognizing that the degree of enforcement depends on applied context. A big company team might want to enforce them rigidly, but a widely used tool like Visual Studio would not want to prevent code from running, so that folks who are introducing themselves to the paradigm can start to see how it would work, through warnings, while still being able to run code.
      • SkiFire131 day ago
        This is not just expressing intent. The documentation clearly states that it's UB to violate them, so you need to be extra careful when using them.
        • riazrizvi1 day ago
          Perhaps another helpful paradigm is traffic/construction cones with ‘do not cross’ messages. Sometimes nothing happens, other times you run into wet concrete, other times you get a ticket. They’re just plastic objects, easy to move, but you are not meant to cross them in your vehicle. While concrete bollards are a thing, they are only preferable in some situations.
          • SkiFire1314 hours ago
            I don't think this analogy fully respects the situation here. These pre/post conditions are not just adding a warning to not do something, they also add a potentially bigger danger if they are broken. It's as if you also added a trap behind the construction cone which can do more damage than stepping on the wet concrete!
        • Phil_Latio1 day ago
          > documentation clearly states that it's UB to violate them

          Only in "fast" mode. The developer has the choice:

          > Compilation has two modes: "safe" and "fast". Safe mode will insert checks for out-of-bounds access, null-pointer deref, shifting by negative numbers, division by zero, violation of contracts and asserts.
          • SkiFire1314 hours ago
            > The developer has the choice

            The developer has the choice between fast or safe. They don't have a choice for checking pre/post conditions, or at least avoiding UB when they are broken, while getting the other benefits of the "fast" mode.

            And all in all the biggest issue is that these can be misinterpreted as a safety feature, while they actually add more possibilities for UB!
            • Phil_Latio8 hours ago
              Well, the C3 developer could add more fine-grained control if people need it...

              I don't really see what your problem is. It's not so much different than disabling asserts in production. Some people don't do that, because they would rather crash than walk into an invalid program state - and that's fine too. It largely depends on the project in question.
    • pkulak1 day ago
      It seems to me like a way to standardize what happens all the time anyway. Compilers are always looking for ways to optimize, and that generally means making assumptions. Specifying those assumptions in the code, instead of in flags to the compiler, seems like a win.
    • Windeycastle1 day ago
      The way I reason about it is that the contracts are more soft conditions that you expect to not really reach. If something always has to be true, even in not-safe mode, you use "actual" code inside the function/macro to check that condition and fail in the desired way.
      • coldtea1 day ago
        > *The way I reason about it is that the contracts are more soft conditions that you expect to not really reach*

        What's the difference from an assert then?
        • lerno1 day ago
          The difference from an assert is that for "require" they are compiled into the caller frame, so things like stack traces (which are available in safe mode) will point exactly to where the violation happened.

          Because they are inlined at the call site, static analysis will already pick up some obvious violations.

          Finally, these contracts may be used to compile-time check otherwise untyped arguments to macros.
      • cwillu1 day ago
        “However, violating either pre- or post-conditions is unspecified behaviour, and a compiler may optimize code as if they are always true – even if a potential bug may cause them to be violated”

        This implies that a compiler would be permitted to remove precisely that actual code that checks the condition in non-safe mode.

        Seems like a deliberately introduced footgun.
        • cloud-oak1 day ago
          My understanding of this was that the UB starts only after the value is passed/returned. So if foo() has a contract to only return positive integers, the code within foo can check and ensure this, but if the calling code does it, the compiler might optimize it away.
          • cwillu16 hours ago
            Assuming that is correct, it's still exactly the same footgun. Checks like that are introduced to guard against bugs: you are strictly safer to not declare such a constraint.
          • gku1 day ago
            *Unspecified* behavior != UB: https://en.wikipedia.org/wiki/Unspecified_behavior
    • bluecalm1 day ago
      I think they are there to help the compiler, so the optimizer might (but doesn't have to) assume they are true. It's sometimes very useful to be able to do so, for example if you know that two numbers are always different or that some value is always less than x. In standard C it's impossible to express, but major compilers have a way to do it as extensions. GCC for example has:

        if (x) __builtin_unreachable();

      C3 makes it a language construct. If you want runtime checks for safety you can use assert. The compiler turns those into asserts in safe/debug mode because that helps catch bugs in non-performance-critical builds.
      • florianist1 day ago
        In the current C standard that's unreachable() from <stddef.h>
        • bluecalm20 hours ago
          Thank you, I've just recently read the list of new features and missed this one!
    • fuzztester1 day ago
      Design by contract is good. I've used it in some projects.

      https://en.wikipedia.org/wiki/Design_by_contract

      I first came across it when I was reading Bertrand Meyer's book, Object-oriented Software Construction.

      https://en.wikipedia.org/wiki/Object-Oriented_Software_Construction

      From the start of the article:

      [ Object-Oriented Software Construction, also called OOSC, is a book by Bertrand Meyer, widely considered a foundational text of object-oriented programming.[citation needed] The first edition was published in 1988; the second edition, extensively revised and expanded (more than 1300 pages), in 1997. Many translations are available including Dutch (first edition only), French (1+2), German (1), Italian (1), Japanese (1+2), Persian (1), Polish (2), Romanian (1), Russian (2), Serbian (2), and Spanish (2).[1] The book has been cited thousands of times. As of 15 December 2011, The Association for Computing Machinery's (ACM) Guide to Computing Literature counts 2,233 citations,[2] for the second edition alone in computer science journals and technical books; Google Scholar lists 7,305 citations. As of September 2006, the book is number 35 in the list of all-time most cited works (books, articles, etc.) in computer science literature, with 1,260 citations.[3] The book won a Jolt award in 1994.[4] The second edition is available online free.[5] ]

      https://en.wikipedia.org/wiki/Bertrand_Meyer
  • The GitHub project has more details: https://github.com/c3lang/c3c

    Some ways C3 differs from C:

    - No mandatory header files
    - New semantic macro system
    - Module-based namespacing
    - Slices
    - Operator overloading
    - Compile-time reflection
    - Enhanced compile-time execution
    - Generics via generic modules
    - "Result"-based zero-overhead error handling
    - Defer
    - Value methods
    - Associated enum data
    - No preprocessor
    - Less undefined behavior, with added runtime checks in "safe" mode
    - Limited operator overloading (to enable userland dynamic arrays)
    - Optional pre- and post-conditions
    • sprash1 day ago
      So far so good. The feature set is a bit random though. Things I personally miss are function overloading, default values in parameters and tuple returns.
      • cardanome1 day ago
        > default values in parameters

        C3 has you covered:

        https://c3-lang.org/language-fundamentals/functions/#function-arguments

        It also has operator overloading and methods which you could use in place of function overloading, I guess.
  • loeg1 day ago
    They named Result (or Expected) *Optional*? No, no, "optional" means "T or empty." Not "T or E."

    https://c3-lang.org/language-fundamentals/functions/#functions-and-optional-returns
    • turtletontine1 day ago
      I think there *is* an important difference here from both Option<T> and Result<T, E>: the C3 optional doesn't allow an arbitrary error type, it's just a C-style integer error code. I think that makes a lot of sense and fits perfectly with their "evolution, not revolution" philosophy. And the fact that the syntax is ‘type?’ rather than ‘Optional<type>’ also eases any confusion.
      • loeg1 day ago
        Sure, there is a restriction on the type of E. This is similar to Zig's result ADT, I think?
    • jeremyjh1 day ago
      From what I can see there, you never write the word “Optional” in your code. This is just what they named the feature, which stays close to C semantics without the burden of *out params.
    • fwip1 day ago
      I share your distaste for arbitrarily renaming concepts. However, I think if you only have one of the two in the language, Optional is the clearer name.

      A result is already the informal name of the outcome or return value of every regular operation or function call, whereas an Optional is clearly not a regular thing.

      I also think, from a pragmatic systems-design point of view, it might make sense to only support the Either/Result pattern. It's not too much boilerplate to add a `faultdef KeyNotInMap`, and then it's clear to the consumer why a real answer was not returned.
      • loeg1 day ago
        It's not just arbitrary renaming (Rust Result vs C++ Expected is fine) -- it's choosing a *conflicting* name for an extremely common abstract data type. If they wanted to call it "Elephant," great, but Optional is a well-known concept and Result/Expected isn't the same thing.

        (I don't really object to the idea of skipping a real Optional<T> type in a language in favor of just Result<T, ()>.)
        • jeremyjh1 day ago
          I would guess the two languages have little overlap in who they appeal to, and this is a bit closer to C semantics, which is their north star. In C, for functions that don't return a value you often return a null for success or a return code that is a defined error code, which is exactly how Optional works in C3 (but you don't write the word Optional in the function head). If you need to return a value in C you often pass an out param pointer and then still return either null or a defined error value. The improvement is you just write ‘type?’ for the return type to enable this and you don't need an out param, but you still need the define (faultdef).
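          For reference, the classic C shape being described looks roughly like this (the function and its error codes are made up for illustration):

            #include <errno.h>
            #include <stdio.h>

            /* Classic C: status code as the return value, the real result
               delivered through an out parameter. */
            int parse_port(const char *text, int *out_port)
            {
                if (text == NULL || out_port == NULL)
                    return EINVAL;            /* error code: bad arguments */

                int value = 0;
                for (const char *p = text; *p; p++) {
                    if (*p < '0' || *p > '9')
                        return EINVAL;        /* error code: not a number */
                    value = value * 10 + (*p - '0');
                }
                if (value < 1 || value > 65535)
                    return ERANGE;            /* error code: out of range */

                *out_port = value;            /* the actual result */
                return 0;                     /* 0 == success */
            }

            int main(void)
            {
                int port;
                if (parse_port("8080", &port) == 0)
                    printf("port = %d\n", port);
                return 0;
            }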
          • fc417fc80223 hours ago
            It's not about C semantics, it's about the name of a commonly understood concept. What you just described there is the "result" pattern, not the "optional" pattern. Of course the designers are free to call it whatever they want, but swapping common terminology like that is a blunder from my perspective.
            • jeremyjh22 hours ago
              This is not exactly the same as the Result pattern. It doesn't require an ADT feature and does not require the programmer to specify a type for the possible error value.

              Also, syntactically it is quite different: it means you add exactly one character to the function head to denote that it's possible to return an error.

              So, calling that feature "Result" could also be confusing to people who have not yet learned this language.

              Tell me, was it a blunder when Rust swapped in "Result" for the commonly understood name "Either" from OCaml/Haskell?
              • fc417fc80211 hours ago
                > It doesn't require an ADT feature and does not require the programmer to specify a type for the possible error value.

                I don't think that really matters? Result is "A or error" whereas optional is "A or nil".

                Admittedly my wording was sloppy. It's technically a subset of the pattern when taken literally. But there's a very strong convention for the error type in C, so at least personally I don't find the restriction off-putting.

                To me the issue is the name clash. This is most definitely not the "optional" pattern. I actually prefer C++'s "expected" over "result" as far as name clarity goes. "Maybe" would presumably also work.

                At the end of the day it's all a non-issue thanks to the syntax. I might not agree with what you expressed but I also realize a name that only shows up in the docs isn't going to pose a problem in practice. Probably more than half the languages out there confuse or otherwise subtly screw up remainder, modulus, and a few closely related mathematical concepts, but that doesn't get in the way of doing things.
    • paulddraper1 day ago
      Oof.

      You can name it "Result" or (questionably) "Either."

      Not "Option," "Optional," or "Maybe;" those are something else.
  • xprnio1 day ago
    Tsoding did a bunch of livestreams using C3. Over 30h worth if anyone's interested.

    https://youtube.com/playlist?list=PLpM-Dvs8t0VYwdrsI_O-7wpo-_MMIdeX8
  • impoppy1 day ago
    I haven’t tried C3 myself, but I happened to interact a lot with Christopher Lerno, Ginger Bill and multiple Zig maintainers before. Was great to learn that C3, Odin and Zig weren’t competing with each other but instead learn from each other and discuss various trade-offs they made when designing their languages. Generally was a very pleasant experience to learn from them on how and why they implemented building differently or what itch they were scratching when choosing or refusing to implement certain features.
    • SV_BubbleTime1 day ago
      Which one is most reasonable for embedded?
      • abyesilyurt14 hours ago
        C3 was quite easy to get running. I have a minimal project to use C3 for ESP32-C3 chips here: https://github.com/abyesilyurt/c3-for-c3
    • ivanjermakov1 day ago
      > weren’t competing with each other but instead learn from each other

      What is the difference? Using polite words to communicate?
      • impoppy1 day ago
        Competing with each other would be trying to one-up each other feature-wise, whereas what I have witnessed was things like discussing trade-offs made in different languages and juggling around ideas on if some feature from language A would make sense in language B too.
      • edvinbesic1 day ago
        You catch more flies with honey than with vinegar
        • ibuildbots1 day ago
          why would I want to attract bugs? Vinegar keeps them away from me
          • card_zero1 day ago
            It isn't even true.

            https://xkcd.com/357/

            "Head over heels" is another weird idiom. *I'm so in love, I'm standing in a normal orientation.*
            • atombender14 hours ago
              "Head over heels" is actually a corruption of "heels over head".

              It's one of those corruptions which flips the meaning (ironically, in this case!) on its head, or just becomes meaningless over time as it's reinterpreted (like "the exception that proves the rule" or "begs the question").
        • timeon1 day ago
          But then you are left with less honey.
  • rixed1 day ago
    Just browsed the doc to get the answers to two burning questions, which I will dump here in case it saves some time for others:

      - uses LLVM (so: as portable as LLVM)
      - sadly, does not support tagged enums

    Apart from that it adds a few very desirable things, such as introspection and macros.
    • flohofwoe16 hours ago
      IMHO the downsides of tagged unions (e.g. what Rust confusingly calls "enums") are big enough that they should only be used rarely, if at all, in a systems programming language, since they're shoehorning a dynamic type system concept back into an otherwise statically typed language.

      A tagged union always needs at least as much memory as the biggest type, but even worse, they nudge the programmer towards 'any-types', which basically moves the type checking from compile-time to run-time, but then why use a statically typed language at all?

      And even if they are useful in some rare situations, are the advantages big enough to justify wasting 'syntax surface' instead of rolling your own tagged unions when needed?
      • rixed10 hours ago
        Tagged unions (not enums, sorry) are not a dynamic type system concept. Actually, I would not be able to name a single dynamically typed language that has them.

        As for the memory allocation, I can't see why any object should have the size of the largest alternative. When I do the manual equivalent of a tagged union in C (i.e. a struct with a tag followed by a union) I malloc only the required size, and a function receiving a pointer to this object had better not assume any size before looking at the tag. Oh, you mean when the object is automatically allocated on the stack, or stored in an array? Yes then, sure. But that's going to be small change if it's on the stack, and for the array, well, there is no way around it; if it does not suit your design then have only the tags in the array?

        Tagged unions are a thing, whether the language helps or not. When I program in a language that has them, they are probably a sizeable fraction of all the types I define. I believe they are fundamental to programming, and I'd prefer the language to help with syntax and some basic sanity checks; like a dynamic sizeof that reads the tag so it's easier to malloc the right amount, or a syntax that makes it impossible to access the wrong field (i.e. any lightweight pattern matching will do).

        In other words, I couldn't really figure out the downside you had in mind :)
      • sph14 hours ago
        Tagged enums != *any* type (i.e. runtime casting)

        Tagged enums are everywhere. I am writing a micro kernel in C and how I wish I had tagged enums instead of writing the same boilerplate of

          enum foo_type {
              FOO_POINTER,
              FOO_INT,
              FOO_FLOAT,
          };

          struct foo {
              enum foo_type type;
              union {
                  void *val_pointer;
                  int val_int;
                  float val_float;
              };
          };
        • flohofwoe13 hours ago
          > ...runtime casting...

          ...what else is a select on a tagged union than 'runtime casting', though? You have a single 'sum type' and you don't know which concrete type it actually is at runtime until you look at the tag and 'cast' to the concrete type associated with the tag. The fact that some languages have syntax sugar for the selection doesn't make the runtime overhead magically disappear.
          • sph12 hours ago
            Not sure why you call it runtime *overhead*. That's core logic, nothing fancy, to have a pointer to a number of possible types. That's what `void *` is, and sometimes you want a little logic to restrict the space of possibilities to a few different choices.

            Not having syntactic sugar for this ultra-common use case doesn't make it disappear. It just makes it more tedious.

            There are many implementations and names, and what I refer to as runtime casting/any type, which is unnecessary for low-level programming, is the one that uses types and reflection at runtime to be 100% sure you are casting to the correct type. Like Go's pattern (syntax might be a bit off):

              var s *string
              var unknown interface{}

              // panics at runtime if unknown is not a string pointer
              s = unknown.(*string)

            This is overkill for low-level programming and has much higher overhead (i.e. having to store type info in the binary, fat pointers, etc.) than tagged unions, which are the bread and butter of computing.
  • xen012 hours ago
    Reading through, something small caught me by surprise.

    https://c3-lang.org/language-common/arrays/#fixed-size-multi-dimensional-arrays

    Multi-dimensional arrays are not declared in the same way they are accessed; the order of dimensions is reversed.

    *Accessing the multi-dimensional fixed array has inverted array index order to when the array was declared.*

    That is, the last element of 'int[3][10] x = {...}' is accessed with 'x[9][2]'.

    This seems bizarre to me. What am I missing? Are there other languages that do this?
    • lerno4 hours ago
      Please consider a variable `List{int}[3] x`: this is an array of 3 List{int}. If we do `x[1]` we will get a List{int}, the middle element in the array. If we then further index this with [5], like `x[1][5]`, we will get the 5th element of that list.

      If we look at `int*`, the dereference will peel off the `*`, resulting in `int`.

      So, the way C3 types are declared, the innermost one is to the left and the outermost to the right. Indexing or dereferencing will peel off the rightmost part.

      C does this differently: we place `*` and `[]` not on the type but on the variable, in the order it must be unpacked. So given `int (*foo)[4]`, we first dereference it (from the inside), giving int[4], then index from the right.

      If we wanted to extract a standalone type from this, we'd have `int(*)[4]` for a pointer to an array of 4 integers. For "left is innermost", the declaration would instead be `int[4]*`. With left-is-innermost we can easily describe a pointer to an array of int pointers (which happens in C3 since arrays don't implicitly decay): int*[4]*. In C that would be `int*(*)[4]`, which is generally regarded as less easy to read, not least because you need to think about which of * or [] has priority.

      That said, I do think that C has a really nice ordering to subscripts, but it was unfortunately not possible to retain it.
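      For reference, a tiny compilable C example of the two C-side declarations mentioned above (variable names are arbitrary):

        #include <stdio.h>

        int main(void)
        {
            int block[4] = {1, 2, 3, 4};
            int x = 0, y = 0;

            int (*p)[4] = &block;           /* p: pointer to an array of 4 int */
            int *a[4]   = {&x, &y, &x, &y}; /* a: array of 4 pointers to int   */

            /* declared inside-out on the variable, used outside-in */
            printf("%d %d\n", (*p)[2], *a[1]);   /* prints "3 0" */
            return 0;
        }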
    • sph11 hours ago
      File it with the footgun of the two different array slicing syntaxes: https://c3-lang.org/language-common/arrays/#slicing-arrays

      I have already opened a discussion about this with the author, and I must say I agree to disagree that a language needs arr[start..end] (inclusive) as well as arr[start:len] (up to len-1), and if you use the wrong one, you've now lost a foot and your memory is corrupted.
      • xen011 hours ago
        The closed intervals for slices caught my eye as well, but I simply filed that under 'that's a weird quirk' rather than 'wtf?'.

        It would require more thinking on my end to change that to either 'this is an acceptable choice' or 'this is a terrible idea'.

        But the array indices being reversed on declaration? I cannot think of an upside to that at all.
  • gritzko1 day ago
    I wonder, at which point it is worth it to make a language? I personally implemented generics, slices and error propagation in C… that takes some work, but doable. Obviously, C stdlib goes to the trash bin, but there is not much value in it anyway. Not much code, and very obsolete.

    Meanwhile, a compiler is an enormously complicated story. I personally never ever want to write a compiler, cause I already had more fun than I ever wanted working with distributed systems. While idiomatic C was not the way forward, my choice was a C dialect and Go for higher-level things.

    How can we estimate these things? Or let's have fun, yolo?
    • contificate1 day ago
      > Meanwhile, a compiler is an enormously complicated story.

      I don't intend to downplay the effort involved in creating a large project, but it's evident to me that there's a class of "better C" languages for which LLVM is very well suited.

      On purely recreational grounds, one can get something small off the ground in an afternoon with LLVM. It's very enjoyable and has a low barrier to entry, really.
      • norir1 day ago
        Yes, this is fine for basic exploration but, in the long run, I think LLVM taketh at least as much as it giveth. The proliferation of LLVM has created the perception that writing machine code is an extremely difficult endeavor that should not be pursued by mere mortals. In truth, you can get going writing x86_64 assembly in a day. With a few weeks of effort, it is possible to emit all of the basic x86_64 instructions. I have heard aarch64 is even easier but I only have experience with x86_64.

        What you then realize is that it is possible to generate quality machine code much faster than LLVM and using far fewer resources. I believe both that LLVM has been holding back compiler evolution and that it is close to, if not already at, peak popularity. As LLMs improve, the need for tighter feedback loops will necessitate moving off the bloat of LLVM. Moreover, for all of the magic of LLVM's optimization passes, it does very little to prevent the user from writing incorrect code. I believe we will demand more from a compiler backend than LLVM can ever deliver.

        The main selling point of LLVM is that you gain access to all of the targets, but this is for me a weak point in its favor. Firstly, one can write a quality self-hosting compiler with O(20) instructions. Adding new backends should be trivial. Moreover, the more you are thinking about cross-platform portability, the more you are worrying about hypothetical problems as well as the problems of people other than yourself. Get your compiler working well first on your machine and then worry about other machines.
        • contificate1 day ago
          I agree. I've found that, for the languages I'm interested in compiling (strict functional languages), a custom backend is desirable simply because LLVM isn't well suited for various things you might like to do when compiling functional programming languages (particularly related to custom register conventions, split stacks, etc.).

          I'm particularly fond of the organisation of the OCaml compiler: it doesn't really follow a classical separation of concerns, but emits good quality code. E.g. its instruction selection is just pattern matching expressed in the language, various liveness properties of the target instructions are expressed for the virtual IR (as they know which one-to-one instruction mapping they'll use later - as opposed to doing register allocation strictly after instruction selection), garbage collection checks are threaded in after the fact (calls to caml_call_gc), and its register allocator is a simple variant of Chow et al.'s priority graph colouring (expressed rather tersely; ~223 lines, ignoring the related infrastructure for spilling, restoring, etc.).

          --

          As a huge aside, I believe the hobby compiler space could benefit from someone implementing a syntactic subset of LLVM, capable of compiling real programs. You'd get test suites for free and the option to switch to stock LLVM if desired. Projects like Hare are probably a good fit for such an idea: you could switch out the backend for stock LLVM if you want.
        • > Adding new backends should be trivial.

          Sounds like famous last words :-P

          And I don't really know about faster once you start to handle all the edge cases that invariably crop up.

          Case in point: gcc
          • fc417fc80223 hours ago
            It's the classic pattern where you redefine the task as only 80% of the original.
        • cardanome1 day ago
          That is why the Hare language uses QBE instead: https://c9x.me/compile/

          Sure, it can't do all the optimizations LLVM can, but it is radically simpler and easier to use.
          • christophilus1 day ago
            Hare is a very pleasant language to use, and I like the way the code looks vs something like Zig. I also like that it uses QBE for the reasons they explained.

            That said, I suspect it’ll never be more than a small niche if it doesn’t target Mac and Windows.
        • sixthDot22 hours ago
          If only it were just about emitting byte code to a file and then calling the linker... you also have the problem of debug information, optimizer passes, the amount of tests required to prove the output byte code is valid, etc.
      • fuzztester1 day ago
        > On purely recreational grounds, one can get something small off the ground in an afternoon with LLVM. It's very enjoyable and has a low barrier to entry, really.

        Is there something analogous for those wanting to create language interpreters, not compilers? And preferably for interpreters one wants to develop in Python?

        Doesn't have to be literally just an afternoon, it could even be a few weeks, but something that will ease the task for PL newbies? The tasks of lexing and parsing, I mean.
        • fredrikholm1 day ago
          <a href="https:&#x2F;&#x2F;craftinginterpreters.com&#x2F;introduction.html" rel="nofollow">https:&#x2F;&#x2F;craftinginterpreters.com&#x2F;introduction.html</a><p>AST interpreter in Java from scratch, followed by the same language in a tight bytecode VM in C.<p>Great book; very good introduction to the subject.
        • contificate1 day ago
          There are quite neat lexer and parser generators for Python that can ease the barrier to entry. For example, I've used PLY now and then for very small things.

          On the non-generated side, lexer creation is largely mechanical - even if you write it by hand. For example, if you vaguely understand the idea of expressing a disjunctive regular expression as a state machine (its DFA), you can plug that into skeleton algorithms and get a lexer out (for example, the algorithm shown in Reps' "'Maximal-Munch' Tokenization in Linear Time" paper). For parsing, taking a day or two to really understand Pratt parsing is incredibly valuable. Then, recursive descent is fairly intuitive to learn and implement, and Pratt parsing is a nice way to structure your parser for the more expressive parts of your language's grammar.

          Nowadays, Python has a match (pattern matching) construct - even if its semantics are somewhat questionable (and potentially error-prone). Overall, though, I don't find Python too unenjoyable for compiler-related programming: dataclasses (and match) have really improved the situation.
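          To make the Pratt idea concrete, a toy sketch in C (not Python, and not from any project mentioned here) that handles only single-digit numbers, + - * /, and parentheses:

            #include <stdio.h>

            /* A minimal Pratt-style expression evaluator: binding powers drive the loop. */
            static const char *src;

            static int peek(void) { return *src; }
            static int next(void) { return *src++; }

            static int lbp(int op)   /* left binding power of an operator */
            {
                switch (op) {
                case '+': case '-': return 10;
                case '*': case '/': return 20;
                default:            return 0;   /* anything else ends the loop */
                }
            }

            static int parse_expr(int min_bp)
            {
                int lhs;

                /* "null denotation": a number or a parenthesised sub-expression */
                if (peek() == '(') {
                    next();
                    lhs = parse_expr(0);
                    next();                      /* consume ')' (no error handling) */
                } else {
                    lhs = next() - '0';          /* single-digit number */
                }

                /* "left denotation": keep folding while the next operator binds
                   more tightly than min_bp */
                while (lbp(peek()) > min_bp) {
                    int op  = next();
                    int rhs = parse_expr(lbp(op));   /* left-associative */
                    switch (op) {
                    case '+': lhs += rhs; break;
                    case '-': lhs -= rhs; break;
                    case '*': lhs *= rhs; break;
                    case '/': lhs /= rhs; break;
                    }
                }
                return lhs;
            }

            int main(void)
            {
                src = "1+2*(3+4)-5";
                printf("%d\n", parse_expr(0));   /* prints 10 */
                return 0;
            }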
        • gritzko1 day ago
          I am a big fan of Ragel[1]. That is a high-performance parser generator. In fact, it can generate different types of parsers; very powerful. Unfortunately, it takes a lot of skill to operate. I wrote a *parser generator generator* to make it all smooth[2], but after 8 years I still can't call it effortless. A colleague of mine once "broke the internet" with a Ragel bug. So, think twice. Still, for weekend activities I highly recommend it, just for the way of thinking it embodies.

          [1]: https://www.colm.net/open-source/ragel/

          [2]: https://github.com/gritzko/librdx/blob/master/rdx/JDR.lex
          • sph13 hours ago
            The worst part of designing a language is the parsing stage.

            Simple enough to do it by hand, but there's a lot of boilerplate and bureaucracy involved that is painfully time-wasting unless you know exactly what syntax you are going for.

            But if you adopt a parser generator such as Flex/Bison, you'll find yourself learning and debugging an obtuse language that has to be forcefully bent to your needs, and I hope your knowledge of parsing theory is up to scratch when you're faced with shift-reduce conflicts or have to decide whether LR or LALR(1) or whatever is most appropriate to your syntax.

            Not even PEG is gonna come to your rescue.
          • fuzztester1 day ago
            Is this the same Ragel that Zed Shaw wrote about in one of his posts back in the day, during the Ruby and Rails heyday? I vaguely remember that article. I think he used it for Mongrel, his web server.

            https://github.com/mongrel/mongrel
        • fuzztester1 day ago
          Thanks to those who replied.
    • mamcx1 day ago
      > I wonder, at which point it is worth it to make a language?

      AT ANY POINT.

      Nothing else could yield more improvements than a new language. It is the ONLY way to make a paradigm (shift) *stick*. It is the ONLY way to turn "discipline" into "normal work".

      Example:

      "Everyone *knows* that it is hard to mutate things":

      * Option 1: DISCIPLINE

      * Option 2: you have "let" and you have "var" (or equivalent) and remove the MILLIONS of times where somebody somewhere must *think* "does this var mutate or not?".

      "Manually managing memory is hard":

      * Option 1: DISCIPLINE

      * Option 2: No need; for TRILLIONS of objects across ALL the codebases with any form of automatic memory management, across ALL the developers and ALL their apps, very close to 100% of them never worry about it

      * Option 3: And now I can be sure about doing this with more safety, across threads and such

      ---

      Making actual progress with a language is hard, because there is a fractal of competing things in sore need of improvement, and a big subset of users are anti-progress and prefer to suffer decades of C (for example) rather than some gradual progress with something like Pascal (where a "string" *exists*).

      Plus, a language needs to coordinate syntax (important) with the std library (important) with how frameworks will end up (important) with compile-time AND runtime outcomes (important) with tooling (important).

      Miss any of these dearly and you blew it.

      But there is no other kind of project (apart from an OS, filesystem, or DB) where the potential positive impact will extend as far into the future.
    • Windeycastle1 day ago
      This actually started with Christoffer (the C3 author) contributing to C2, but he wasn't satisfied with the development speed there and wanted to try his own things and move forward more quickly. Apparently, together with LLVM, it was doable to write a new compiler for what is a successor to C2.
    • Mawr1 day ago
      At the point you want to interface with people outside of your direct influence. That's the value of a language — a shared understanding.

      So long as only you use your custom C dialect, all is fine. Trouble starts when you'd like others to use it too or when you'd like to use libraries written by people who used a different language, e.g. C.
  • syn-nine1 day ago
    I've enjoyed using the C3 language to make some simple games [0] and found it really easy to pick up. The only thing that I got hung up on at first was the temporary memory arenas, which I didn't know existed and ultimately really liked.

    [0] https://github.com/Syn-Nine/c3-mini-games
    • fatty_patty891 day ago
      I took a look at your GitHub and saw you implemented the same games in multiple languages. Which one did you like the most, and why?
      • syn-nine23 hours ago
        To be honest, my favorite language is my own language Tentacode [0], closely followed by my recent experimental language Gar [1]. Tenta is not publicly released yet, but the source can be downloaded on GitHub [2]. I've been experimenting with making games in a bunch of languages to inform the design of Tenta by seeing how much I can strip away and still successfully, efficiently make a meaningful game.

        [0] https://tentacode.org/docs/language/basic_types/

        [1] https://github.com/Syn-Nine/gar-lang

        [2] https://github.com/Syn-Nine/tentacode/tree/llvm
  • antfarm1 day ago
    This is what a website for a programming language should look like.
    • GaryBluto1 day ago
      I'd argue the opposite. My first thought once the page had loaded was that it looked childish and amateurish, the addition of a Discord chat link in the site navigation only reinforcing that perspective.
      • Nothing childish about putting the important stuff up front and a link to contact the people who made it.
  • chuckadams1 day ago
    I was expecting something very bare-bones, but was pleasantly surprised to see a rich list of new and useful features. And maybe it&#x27;s not the deepest of things to highlight, but what really made me giggle is how C3&#x27;s analogue to exceptions are called Excuses.
  • cb3211 day ago
    I see from `test&#x2F;test_suite&#x2F;compile_time_introspection&#x2F;paramsof.c3t` that there is a way to get names &amp; types of function parameters [1]. The language also seems to support default values { e.g. `int foo(int a, int b = 2) {...}` } and even call with keyword arguments&#x2F;named parameters [2], but I couldn&#x27;t find any `defaultof` or similar thing in the code. Does anyone know if this is just an oversight &#x2F; temporary omission?<p>[1] <a href="https:&#x2F;&#x2F;github.com&#x2F;c3lang&#x2F;c3c&#x2F;blob&#x2F;master&#x2F;test&#x2F;test_suite&#x2F;compile_time_introspection&#x2F;paramsof.c3t" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;c3lang&#x2F;c3c&#x2F;blob&#x2F;master&#x2F;test&#x2F;test_suite&#x2F;co...</a><p>[2] <a href="https:&#x2F;&#x2F;c3-lang.org&#x2F;language-fundamentals&#x2F;functions&#x2F;" rel="nofollow">https:&#x2F;&#x2F;c3-lang.org&#x2F;language-fundamentals&#x2F;functions&#x2F;</a>
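    A minimal sketch of the default-value and named-argument behavior described above (the `foo` function is hypothetical, and the `b: 10` named-argument spelling is my reading of the linked functions page, so treat it as an assumption rather than confirmed syntax):

        import std::io;

        // `b` has a default value; callers may omit it or pass it by name.
        fn int foo(int a, int b = 2)
        {
            return a + b;
        }

        fn void main()
        {
            io::printfn("%d", foo(1));        // default b = 2 -> prints 3
            io::printfn("%d", foo(1, 10));    // positional override -> prints 11
            io::printfn("%d", foo(1, b: 10)); // named argument -> prints 11
        }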
    • Windeycastle1 day ago
      I don&#x27;t think it is available no, and it&#x27;s the first time I heard about such an idea. Thinking on it, this would allow such cursed code (love that :D). I&#x27;ll put it up for discussion in the Discord as I&#x27;m interested in hearing whether `.defaultof` is a good idea or not.
      • cb3211 day ago
        One application of such a feature would be something like a &quot;cligen.c3&quot; (like the Nim <a href="https:&#x2F;&#x2F;github.com&#x2F;c-blake&#x2F;cligen" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;c-blake&#x2F;cligen</a> or its &#x2F;python&#x2F;cg.py port or etc.). Mostly it just seems a more complete signature extraction, though. Any other kind of documentation system might benefit.
  • maxloh1 day ago
    Google is making a similar attempt with C++ called Carbon Language: <a href="https:&#x2F;&#x2F;github.com&#x2F;carbon-language&#x2F;carbon-lang" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;carbon-language&#x2F;carbon-lang</a>
    • Windeycastle1 day ago
      C3 is more targeting C instead of C++. I&#x27;m interested to see where both C3 and Carbon will be in a few years&#x2F;decades.
  • chadcmulligan1 day ago
    It&#x27;s funny seeing the problems with C that Niklaus Wirth pointed out originally still being worked on. He solved them with Pascal and its OO successors, though for some reason it&#x27;s still not cool.<p>I suppose it has less of the ability to blow your foot off and so isn&#x27;t a very dangerous way to code, therefore not cool. If any of you younger folk haven&#x27;t looked at it, I&#x27;d suggest having a look: there is Delphi [1] - a cross-platform dev environment that addresses all these problems and compiles in less than a second - or there&#x27;s the free, open-source alternative Lazarus [2]. They also compile to mobile platforms and even the Raspberry Pi (Lazarus) or Arduino.<p>If you like contracts then Ada is the way to go, but I haven&#x27;t used it for many years, so I&#x27;m not sure what the state of the compilers is.<p>[1] <a href="https:&#x2F;&#x2F;www.embarcadero.com&#x2F;products&#x2F;delphi" rel="nofollow">https:&#x2F;&#x2F;www.embarcadero.com&#x2F;products&#x2F;delphi</a><p>[2] <a href="https:&#x2F;&#x2F;www.lazarus-ide.org" rel="nofollow">https:&#x2F;&#x2F;www.lazarus-ide.org</a>
    • jll2914 hours ago
      Both Pascal and C (and their offspring) are wonderful gifts for us to receive from their designers, and I enjoy writing code in both.<p>&gt; It&#x27;s funny seeing the problems with C Niklaus Wirth pointed out originally still trying to be solved. He solved them with pascal and its OO successors, though for some reason it&#x27;s not cool still.<p>Here&#x27;s Brian Kernighan&#x27;s view on the shortcomings of Pascal resulting from a practical book project idea:<p><a href="https:&#x2F;&#x2F;www.lysator.liu.se&#x2F;c&#x2F;bwk-on-pascal.html" rel="nofollow">https:&#x2F;&#x2F;www.lysator.liu.se&#x2F;c&#x2F;bwk-on-pascal.html</a><p>Not sure to what extent the latest Oberon or Ada have addressed all of these, since I&#x27;ve not kept up with Ada news.
      • chadcmulligan13 hours ago
        Isn&#x27;t that interesting - I do vaguely recall this from many years ago. These complaints have mostly been addressed a long time ago; the solutions were mostly stolen from C where applicable. I refer to Delphi, but I think Lazarus is the same. These are the dot points from the summary:<p>- Since the size of an array is part of its type, it is not possible to write general-purpose routines, that is, to deal with arrays of different sizes. In particular, string handling is very difficult.<p>There&#x27;s a TArray&lt;T&gt; type now; it uses generics and can be declared if you like, plus lots of other structured types - lists, stacks, etc. - though the original array type is still available for backwards compatibility. There was also an &quot;array of&quot; parameter without a size, but TArray is mostly used now.<p>- The lack of static variables, initialization and a way to communicate non-hierarchically combine to destroy the ``locality&#x27;&#x27; of a program - variables require much more scope than they ought to.<p>Statics are now a thing.<p>- The one-pass nature of the language forces procedures and functions to be presented in an unnatural order; the enforced separation of various declarations scatters program components that logically belong together.<p>This can still be an issue, though the one pass is why the compiler is fast.<p>- The lack of separate compilation impedes the development of large programs and makes the use of libraries impossible.<p>Not an issue any more; it has packages and libraries.<p>- The order of logical expression evaluation cannot be controlled, which leads to convoluted code and extraneous variables.<p>Not an issue any more; it uses the C method.<p>- The &#x27;case&#x27; statement is emasculated because there is no default clause.<p>It has one now, though a case with string alternatives still doesn&#x27;t exist in Delphi; Lazarus has it.<p>- The standard I&#x2F;O is defective. There is no sensible provision for dealing with files or program arguments as part of the standard language, and no extension mechanism.<p>Many different sorts of file access now - random, binary, etc.<p>- The language lacks most of the tools needed for assembling large programs, most notably file inclusion.<p>Not true any more; it has packages and include files (though limited). The macro facility is very limited, nothing like C&#x27;s, but it&#x27;s not really needed - you can have inline functions for the performance boost macros would give you (stolen from C++).<p>- There is no escape.<p>This refers to the type system; you can use casts just like C now.<p>Just as a counterpoint, C still doesn&#x27;t have a standard string type. Delphi has generics now like C++, and many of the things that are external libraries in C&#x2F;C++ are just included. If you really need high performance then C is still better, but what I&#x27;ve done in the past is just rewrite bits in C, though the need for this is very infrequent. If you look at comparable things for Delphi in C++ - Qt&#x27;s slots and signals, for example - the Delphi solution is so much more elegant, and Qt is perhaps the only commercial cross-platform library comparable to Delphi&#x27;s Firemonkey. It&#x27;s really worth a look; times have changed. There&#x27;s a reason MS hired away Anders Hejlsberg to architect C# and then TypeScript.
    • atombender13 hours ago
      I would argue that Go is the closest spiritual descendant of Wirth&#x27;s languages. If you changed braces into BEGIN&#x2F;END and so on, it would look a ton like Oberon or Modula 2&#x2F;3.<p>It adds features (goroutines, channels, slices), changes some (modules become packages), the generics are a little different, and it eschews some of Wirth&#x27;s pragmatic type safety ideas (like range types). It even has &quot;:=&quot; for assignment.<p>The general spirit is the same, I think: small language, simple compiler (compared to many other languages), &quot;dumb&quot; type system, GC, engineering-focused rather than type-theory-focused.
      • chadcmulligan2 hours ago
        The part of Delphi that is interesting, and isn&#x27;t really mentioned much for some reason, is the component library - VCL (windows only) and Firemonkey (Cross platform). Like the language does what&#x27;s needed, garbage collection would be nice, and is on iOS, but the really nice part is the ability to make things by dragging and dropping visual and non visual components, and making your own components in the same language.
  • jadbox1 day ago
    Can anyone who has actually used C3, Odin, and Zig talk about how to think of these three?
    • sph1 day ago
      Zig feels too much in flux. It has some incredible ideas, but I really don&#x27;t like it syntax-wise, and I <i>really</i> don&#x27;t like how the author is so stubbornly in favour of unused-variables-as-errors, which I believe is the worst thing ever invented and drives me up the wall. Documentation was still pretty bad last I checked, and that&#x27;s the bare minimum before I can seriously adopt a new language.<p>C3 feels like home for C developers; there is a real market for language evolutions rather than revolutions (imagine TypeScript). The issue is that pretty much nobody knows about C3, most posts about it never get any traction on HN, and it&#x27;s hard to choose a language with no mind share for anything more serious than toys.<p>Odin is quite nice and has some hype behind it, deservedly. It feels like a nice improvement over C without throwing the baby out with the bathwater; perhaps one negative is that it&#x27;s so opinionated it feels less like a general-purpose language than the others (with the main dev focused on graphics, there&#x27;s a lot of syntax sugar for that use case, which feels out of place for anyone who is not writing desktop UI or games). Also, while I agree with the author&#x27;s choice not to rewrite the compiler itself in Odin, as most other languages do, it doesn&#x27;t inspire much confidence that the author would rather develop in C++ than eat his own dog food.<p>I must admit I don&#x27;t keep up with alternative languages much any more, because I believe the Lindy effect to be a force multiplier, and for serious applications it&#x27;s better to stick with something that is known to work, despite its shortcomings. You only have a few points you can spend on innovation, and if you&#x27;re developing a complex application, at the very least you want a rock-solid base to build upon. This is why I&#x27;m still sticking with C for very low-level programming.<p>Still, all three languages are worth your time.
      • Zambyte1 day ago
        &gt; and I really don&#x27;t like how the author is so stubbornly in favour of unused-variables-as-errors<p>FWIW, they also have a goal to emit as much output as possible, even in the face of compilation errors. They have stated that even syntax errors should have the compiler exit with a non-zero exit code, but still produce an executable that will give you a syntax error at runtime. The point of this being to allow you to iterate quickly, but force things like CI to fail.
        • chuckadams12 hours ago
          The Eclipse Java Compiler is similar to this, and Haskell can defer type errors to runtime. You wouldn&#x27;t make production builds that way, but it&#x27;s otherwise a perfectly valid mode for a compiler to operate in.
        • bluecalm1 day ago
          Why not just use compiler warnings and configure CI with -Werror (the same as your release build)?<p>It seems like trying to fix the world of undisciplined developers at the cost of a common use case (experimenting and temporarily accepting warnings).
  • This looks like Zig. What problems does this solve that Zig doesn&#x27;t? Potato - potaato?
    • cardanome1 day ago
      C3 is more comparable to Odin or the Better C mode in D, in that it tries to be a pragmatic evolution, not a revolution, of C.<p>Here is a comparison to Zig in terms of features: <a href="https:&#x2F;&#x2F;c3-lang.org&#x2F;faq&#x2F;compare-languages&#x2F;#zig" rel="nofollow">https:&#x2F;&#x2F;c3-lang.org&#x2F;faq&#x2F;compare-languages&#x2F;#zig</a><p>And yes, they are all system programming languages with a similar level of abstraction that are suited to similar problems. It is good to have choice. It is like asking what you need Ruby for when you have Python.
    • Alifatisk1 day ago
      Looks can be deceiving.<p>C3 provides a module system for cleaner code organization across files, unlike Zig, where files act as modules with nesting limitations.<p>C3 offers first-class lambdas and dynamic interfaces for flexible runtime polymorphism without Zig&#x27;s struct-based workarounds.<p>C3&#x27;s operator overloading enables intuitive math types like vectors, which Zig avoids to prevent hidden control flow.
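      For illustration, here is a small vector type with a user-defined `+`. The `@operator(+)` attribute and the method form are my understanding of C3's operator overloading and should be treated as assumptions rather than the definitive syntax:

          import std::io;

          struct Vec2i
          {
              int x;
              int y;
          }

          // Overload `+` for Vec2i via a method marked with the operator attribute.
          fn Vec2i Vec2i.add(Vec2i self, Vec2i other) @operator(+)
          {
              return { self.x + other.x, self.y + other.y };
          }

          fn void main()
          {
              Vec2i a = { 1, 2 };
              Vec2i b = { 3, 4 };
              Vec2i c = a + b;                // resolves to Vec2i.add
              io::printfn("%d %d", c.x, c.y); // 4 6
          }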
    • ogogmad1 day ago
      Zig doesn&#x27;t have operator overloading. This does.
  • sea-gold1 day ago
    Previous discussions:<p>July 2025 (159 comments, 143 points): <a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=44532527">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=44532527</a>
  • lalitmaganti1 day ago
    Was an interesting read, but sad to see that there is nothing special for matching or destructuring tagged unions (beyond the special-cased optional type). That&#x27;s one of the things from Rust I miss the most in my day-to-day work with C&#x2F;C++.
    • lerno1 day ago
      Still looking for a good design: <a href="https:&#x2F;&#x2F;github.com&#x2F;c3lang&#x2F;c3c&#x2F;issues&#x2F;829" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;c3lang&#x2F;c3c&#x2F;issues&#x2F;829</a>
      • nxobject1 day ago
        Did you manage to get more discussion on the Discord?
        • lerno4 hours ago
          I don&#x27;t recall exactly, I think we rehashed the same points.
  • logicprog1 day ago
    This seems pretty neat! Still holding out for a language with Go&#x27;s runtime and compilation and performance characteristics, but language syntax and semantics like Gleam... Maybe one day
    • cardanome1 day ago
      It is called OCaml. Fast compilation and state of the art statically typed functional programming.<p>Sure it is a bit more complex than Gleam and the syntax is different but you can manage.
      • logicprog5 hours ago
        I like OCaml in theory a lot! This is not a bad suggestion. The problem is it doesn&#x27;t have the awesome concurrency model of Go (just barely got regular threads recently), and IMHO the build and package management situation for OCaml isn&#x27;t very good. Plus, I don&#x27;t know, I just subjectively don&#x27;t like using it, and the ecosystem isn&#x27;t very good. Ecosystem is very important for me.
    • Mawr1 day ago
      Unfortunately the current trend among new languages seems to be eschewing GC; a clear mistake IMO — we don&#x27;t really need yet another low-level systems programming language, but we badly need <i>the</i> go-to GC&#x27;d lang — one that&#x27;d take the faults of Java and Go into account.
      • brabel17 hours ago
        There are lots of languages that already do that: Kotlin, Dart, TypeScript, OCaml, D, Haskell, and the list goes on! Non-GC languages, OTOH, are rare and we absolutely need more of them!
    • Alifatisk1 day ago
      Borgo could be your thing? <a href="https:&#x2F;&#x2F;borgo-lang.github.io" rel="nofollow">https:&#x2F;&#x2F;borgo-lang.github.io</a>
      • logicprog1 day ago
        It looked really good! But seems kinda dead, and I don&#x27;t know the Go ecosystem well enough to know if dead things can sort of keep working forever
        • bmacho17 hours ago
          There is also Dingo <a href="https:&#x2F;&#x2F;github.com&#x2F;MadAppGang&#x2F;dingo" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;MadAppGang&#x2F;dingo</a> <a href="https:&#x2F;&#x2F;dingolang.com" rel="nofollow">https:&#x2F;&#x2F;dingolang.com</a> (heavy AI project)
    • sparky4pro1 day ago
      Xgo
    • pelorat1 day ago
      That would be C# ?
      • throwaway17_1723 hours ago
          I understood the sibling comment recommending OCaml and, to a lesser extent, Borgo, but OP is looking for a high-level functional programming language, given Gleam as the reference point. How does C# fit here?<p>I do think the compilation speed and runtime are at least in the same ballpark, but C#, while a perfectly fine language, is definitely not a functional language in syntax or semantics.
  • netbioserror1 day ago
    This is neat and I wish C3 well. But using Nim has shown me the light on maybe the most important innovation I&#x27;ve seen in a native-compiled systems language: Everything, even heap-allocated data, having value semantics by default.<p>In Nim, strings and seqs exist on the heap, but are managed by simple value-semantic wrappers on the stack, where the pointer&#x27;s lifetime is easy to statically analyze. Moves and destroys can be automatic by default. All string ops return string, there are no special derivative types. Seq ops return seq, there are no special derivative types. Do you pay the price of the occasional copy? Yes. But there are opt-in trapdoors to allocate RC- or manually-managed strings and seqs. Otherwise, the default mode of interacting with heap data is an absolute breeze.<p>For the life of me, I don&#x27;t know why other languages haven&#x27;t leaned harder into such a transformative feature.
    • sirwhinesalot1 day ago
      NOTE: I&#x27;m a fan of value semantics, mostly devil&#x27;s advocate here.<p>Those implicit copies have downsides that make them a bad fit for various reasons.<p>Swift doesn&#x27;t enforce value semantics, but most types in the standard library do follow them (even dictionaries and such), and those types go out of their way to use copy-on-write to try and avoid unnecessary copying as much as possible. Even with that optimization there are too many implicit copies! (it could be argued the copy-on-write makes it worse since it makes it harder to predict when they happen).<p>Implicit copies of very large datastructures are almost always unwanted, effectively a bug, and having the compiler check this (as in Rust or a C++ type without a copy constructor) can help detect said bugs. It&#x27;s not all that dissimilar to NULL checking. NULL checking requires lots of extra annoying machinery but it avoids so many bugs it is worthwhile doing.<p>So you have to have a plan on how to avoid unnecessary copying. &quot;Move-only&quot; types is one way, but then the question is which types do you make move-only? Copying a small vector is usually fine, but a huge one probably not. You have to make the decision for each heap-allocated type if you want it move-only or implicitly copyable (with the caveats above) which is not trivial. You can also add &quot;view&quot; types like slices, but now you need to worry about tracking lifetimes.<p>For these new C alternative languages, implicit heap copies are a big nono. They have very few implicit calls. There are no destructors, allocators are explicit. Implicit copies could be supported with a default temp allocator that follows a stack discipline, but now you are imposing a specific structure to the temp allocator.<p>It&#x27;s not something that can just be added to any language.
      • netbioserror1 day ago
        And so the size of your data structures matters. I&#x27;m processing lots of data frames, but each represents a few dozen kilobytes and, in the worst case, a large composite of data might add up to a couple dozen megabytes. It&#x27;s running on a server with tons of processing power and memory to spare. I could force my worst-case copying scenario in parallel on each core, and our bottleneck would still be the database hits before it all starts.<p>It&#x27;s a tradeoff I am more than willing to take, if it means the processing semantics are basically straight out of the textbook with no extra memory-semantic noise. That textbook clarity is very important to my company&#x27;s business, more than saving the server a couple hundred milliseconds on a 1-second process that does not have the request volume to justify the savings.
        • sirwhinesalot1 day ago
          It&#x27;s not just the size of the data but also the amount of copies. Consider a huge tree structure: even if each node is small, doing individual &quot;malloc-style&quot; allocations for millions of nodes would cause a huge performance hit.<p>Obviously for your use case it&#x27;s not a problem but other use cases are a different story. Games in particular are very sensitive to performance spikes. Even a naive tracing GC would do better than hitting such an implicit copy every few frames.
  • WalterBright1 day ago
    The front page reads like a D language checklist :-)
    • sureglymop1 day ago
      Check it out on the comparisons page: <a href="https:&#x2F;&#x2F;c3-lang.org&#x2F;faq&#x2F;compare-languages&#x2F;#d" rel="nofollow">https:&#x2F;&#x2F;c3-lang.org&#x2F;faq&#x2F;compare-languages&#x2F;#d</a><p>I think they&#x27;re aware of and like D :)
      • WalterBright22 hours ago
        It was very nice of the C3 crowd to write that comparison. Thanks for pointing it out to me!<p>I agree that D has gotten a bit complex. We&#x27;re introducing the notion of &quot;editions&quot; in order to dispose of obsolete and unnecessary features.
        • gautamcgoel12 hours ago
          What old features would you like to dispose of in a modern redesign of D? What would this new edition look like?
  • I see &#x27;fn void main()&#x27;. There&#x27;s probably a good reason, but why the &#x27;fn&#x27;? It doesn&#x27;t really add anything, because &#x27;void main()&#x27; already communicates that it&#x27;s a function.<p>The main draw of C (to me) is its terseness and its avoidance of &#x27;filler&#x27; syntax words.<p>I admit I didn&#x27;t (yet) look much further into it, but this first thing jumped out at me and slightly diminished my desire to look further into C3...
    • chippiewill1 day ago
      It&#x27;s to remove a syntax ambiguity with c-style function declarations <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Most_vexing_parse" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Most_vexing_parse</a><p>The syntax ambiguity adds a lot of complexity to the grammar that makes parsing a lot more complicated than it needs to be.<p>Sticking `fn` in front fixes a lot of problems.
    • cardanome1 day ago
      <a href="https:&#x2F;&#x2F;c3.handmade.network&#x2F;blog&#x2F;p&#x2F;8886-why_does_c3_use_%2527fn%2527" rel="nofollow">https:&#x2F;&#x2F;c3.handmade.network&#x2F;blog&#x2F;p&#x2F;8886-why_does_c3_use_%252...</a>
  • ulimn1 day ago
    I am not a C programmer but I always find these low level languages intriguing.<p>I am wondering though: when does one pick C3 for a task&#x2F;problem?
    • Windeycastle1 day ago
      C3 is pretty much aimed at those who would use C for a task but prefer something a bit more modern. I find it more fun to program in compared to plain C, and it&#x27;s definitely simpler than reaching for C++.
  • mr_00ff001 day ago
    Anyone use zig vs C3? Seems like a lot of overlap, curious about people’s experiences with both
    • lerno1 day ago
      Maybe these two could be interesting?<p><a href="https:&#x2F;&#x2F;lowbytefox.dev&#x2F;blog&#x2F;from-zig-to-c3&#x2F;" rel="nofollow">https:&#x2F;&#x2F;lowbytefox.dev&#x2F;blog&#x2F;from-zig-to-c3&#x2F;</a><p><a href="https:&#x2F;&#x2F;alloc.dev&#x2F;2025&#x2F;05&#x2F;29&#x2F;learning_c3" rel="nofollow">https:&#x2F;&#x2F;alloc.dev&#x2F;2025&#x2F;05&#x2F;29&#x2F;learning_c3</a>
  • stacktraceyo1 day ago
    My university had us program in C++ with RESOLVE, which looks very similar to the contract stuff here.
  • epage1 day ago
    I think the switch statement design is a foot gun: defaults to fall-through when empty and break when there is a body.<p><a href="https:&#x2F;&#x2F;c3-lang.org&#x2F;language-overview&#x2F;examples&#x2F;#enum-and-switch" rel="nofollow">https:&#x2F;&#x2F;c3-lang.org&#x2F;language-overview&#x2F;examples&#x2F;#enum-and-swi...</a>
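    A short sketch of the rule being described - empty cases fall through, cases with a body break implicitly, and `nextcase` is the explicit fall-through - based on my reading of the linked examples page:

        import std::io;

        fn void describe(int x)
        {
            switch (x)
            {
                case 0:                  // empty body: falls through into case 1
                case 1:
                    io::printn("small"); // body present: breaks implicitly, no fallthrough into case 2
                case 2:
                    nextcase;            // explicit fall-through into case 3
                case 3:
                    io::printn("medium");
                default:
                    io::printn("large");
            }
        }

        fn void main()
        {
            describe(0); // small
            describe(2); // medium
            describe(9); // large
        }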
    • gkbrk1 day ago
      This feels very natural though, in a &quot;principle of least surprise&quot; kinda way. This is what you&#x27;d expect a properly designed switch statement to do.
      • Least surprise to whom? Are there any other mainstream languages that behave this way?<p>I think consistency is the best correlate of least surprise, so having case statements that sometimes fall through and sometimes don&#x27;t seems awful.
      • trinix9121 day ago
        Sadly many of us are so used to the C fall-through behavior that this would be quite a surprise.<p>Personally, I&#x27;d rather see a different syntax switch (perhaps something like the Java pattern switch) or no switch at all than one that looks the same as in all C-style languages but works just slightly differently.
      • epage1 day ago
        It reads naturally, but I can see people getting tripped up writing this. It&#x27;s worse for changing existing code. Refactor in a way that removes a body? You&#x27;ll likely forget to add a break.
    • riazrizvi1 day ago
      If I aimed and shot a gun at my foot and a bullet didn’t go through it, I would trash the gun.
      • all21 day ago
        To be fair, I would probably toss the gun away if a bullet went through my foot.
      • fuzztester1 day ago
        Er, Forth guy as a, would foot trash I the and bullet gun the. Word!
    • bluecalm1 day ago
      I agree it&#x27;s not the best choice. I mean it&#x27;s true that you almost always want fall-through when the body is empty and break where it isn&#x27;t but maybe it would be better to at least require explicit break (or fall-through keyword) and just make it a compiler error if one is missing and the body is not empty. That would be the least surprising design imo.
  • I keep thinking that perhaps LLMs will bring writing code in these lower-level-but-far-better-performing languages back into vogue. Why have Claude generate a Python service when you could write a Rust or C3 service, with the compiler doing a lot of the heavy lifting around memory bugs?
    • dotancohen1 day ago
      <p><pre><code> &gt; Why have claude generate a python service when you could write a rust or C3 service with compiler doing a lot of heavy lifting around memory bugs? </code></pre> The architecture of my current project is actually a Python&#x2F;Qt application which is a thin wrapper around an LLM generated Rust application. I go over almost every line of the LLM generated Rust myself, but that machine is far more skilled at generating quality Rust than I currently am. But I am using this as an opportunity to learn.
      • all21 day ago
        &gt; that machine is far more skilled at generating quality Rust than I currently am. But I am using this as an opportunity to learn.<p>I&#x27;m currently doing this with golang. It is not that bad of an experience. LLMs do struggle with concurrency, though. My current project has proved to be pretty challenging for LLMs to chew through.
    • notimetorelax1 day ago
      Having worked with Rust over the past couple of years, I can say that it is hands down a much better fit for LLMs than Python, thanks to its explicitness and type information. This provides a lot of context for the LLM to incrementally grow the codebase. You still have to watch it, of course. But the experience is very pleasant.
    • klysm1 day ago
      Because there’s more python on the internet to interpolate from. LLMs are not equally good at all languages
      • yencabulator1 day ago
          You can throw Claude at a completely private Rust code base with very specific niche requirements and conventions that are not otherwise common in Rust, and it will demonstrate a remarkably strong ability to explain it and program according to the local idioms. I think your statement is based on liking a popular language, not on evidence.
        • all21 day ago
          I find that having a code-base properly scaffolded really, really helps a model handle implementing new features or performing bug-fixes. There&#x27;s this grey area between greenfield and established that I hit every time I try to take a new project to a more stable state. I&#x27;m still trying to sort out how to get through that grey area.
          • yencabulator1 day ago
            I had Claude nearly one-shot (well, sequence-of-oneshots) a fairly complex multi-language file pretty-printer, but only after giving it a very specific 150-line TODO file with examples of correct results, so I think pure greenfield is very achievable if you steer it well enough. I did have to really focus on writing the tasks to be such that there wasn&#x27;t much room for going off the rails, thought about their ordering, etc; it was pretty far from vibecoding, produced a strict data-driven test suite, etc.<p>But ultimately, I agree with you, in most projects, having enough existing style, arranged in a fairly specific way, for Claude to imitate makes results a lot better. Or at least, until you get to that &quot;good-looking codebase&quot;, you have to steer it a lot more explicitly, to the level of telling it what function signatures to use, what files to edit, etc.<p>Currently on another project, I&#x27;ve had Claude make ~10 development spikes on specific ~5 high-uncertainty features on separate branches, without ever telling it what the main project structure really is. Some of the spikes implement the same functionality with e.g. different libraries, as I&#x27;m exploring my options (ML inference as a library is still a shitshow). I think that approach has some whiff of &quot;future of programming&quot; to it. Previously I would have spent more effort studying the frameworks up front and committed to a choice harder, now it&#x27;s &quot;let&#x27;s see if this is good enough&quot;.
      • venuur1 day ago
        That’s been my experience. LLMs excel at languages that are popular. JavaScript and Python are two great examples.
    • sonnig1 day ago
      I think the same. It sounds quite more practical to have LLMs code in languages whose compilers provide as much compile-time guardrails as possible (Rust, Haskell?). Ironically in some ways this applies to humans writing code as well, but there you run into the (IMO very small) problem of having to write a bit more code than with more dynamic languages.
    • gitaarik22 hours ago
      You still want to be able to easily review the LLM generated code. At least I want to.
    • HighGoldstein1 day ago
      It seems cynically fitting that the future we&#x27;re getting and deserve is one where we&#x27;ve automated the creation of memory bugs with AI.
  • shevy-java17 hours ago
    Its successor will be C4!
  • jksmith1 day ago
    Windows is a horse that is becoming less and less rideable. Be great to get this to build on ReactOS as just a hobby or side effort.
  • masavik22 hours ago
    Does C3 use LLVM?
  • pelorat1 day ago
    Delete &quot;fn&quot; and I might get onboard
    • hmry1 day ago
      The price of not requiring fn is mandatory forward declarations, header files, and a slower parser (because then the syntax requires knowing whether every symbol is a type or a function&#x2F;variable before it can be parsed correctly).<p>I think that trade-off is absolutely not worth it. I&#x27;ll take order-independent declarations and fast modules over strictly sticking to C syntax any day.
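      A minimal example of what that buys you - a function used before it is defined, with no forward declaration or header (the module name `demo` is arbitrary):

          module demo;
          import std::io;

          fn void main()
          {
              greet(); // fine: declarations are order-independent, no prototype needed
          }

          fn void greet()
          {
              io::printn("hello from C3");
          }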
    • cardanome1 day ago
      It has many usability benefits: <a href="https:&#x2F;&#x2F;c3.handmade.network&#x2F;blog&#x2F;p&#x2F;8886-why_does_c3_use_%2527fn%2527" rel="nofollow">https:&#x2F;&#x2F;c3.handmade.network&#x2F;blog&#x2F;p&#x2F;8886-why_does_c3_use_%252...</a>
  • gigatexal1 day ago
    Why have features if the compiler doesn’t produce programs that enforce them?<p>I’m seeing a lot of this in the docs:<p>“However, just like for const the compiler might not detect whether the annotation is correct or not! This program might compile, but will behave strangely:”
  • turnsout1 day ago
    Looks interesting, but operator overloading is an anti-pattern. Leading with that is like leading with &quot;full support for null pointers.&quot;
  • oulipo21 day ago
    Interesting! If I were starting a project in a C-like language today, I&#x27;d have chosen Zig. Could you give a rough overview of how C3 differs from something like Zig?
  • bluecalm1 day ago
    I like the idea of strict improvements to C without taking anything away or making any controversial (as far as language design goes) choices.<p>One thing I am wondering is why new low-level languages remove goto (Zig, C3, Nim). I think it&#x27;s sometimes the cleanest, most readable and most maintainable solution. I get that&#x27;s rare, but especially when you are expressing a low-level algorithm operating on arrays&#x2F;blocks&#x2F;bit streams it can be useful. It&#x27;s not that you can&#x27;t express it with &quot;structured&quot; constructs, but sometimes it&#x27;s just not the best way. I get removing backwards goto when you provide alternative constructs for state machines, but a forward one is useful in other contexts.<p>Is it a purely ideological choice or does it make the compiler simpler&#x2F;faster?
    • lerno1 day ago
      In C3 it&#x27;s complicated. On one hand the lack of goto means defers are more straightforward, but the biggest problem is that once you have a different way to handle cleanup and you have labelled break&#x2F;continue, and you have the nextcase to jump to arbitrary cases, there&#x27;s very little left for goto to do.<p>It is limited to when you want to jump from within an if statement out across some statements and run the remaining code. It saves one level of indentation, but being so rare, it&#x27;s hard to justify the complexity.<p>I keep going back and see if I find some usecase that could motivate putting goto back in but it so far nothing. The &quot;nextcase&quot; allows C3 to express arbitrary jumps back and forth, although not as straightforward as goto.
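      As a small illustration of why `defer` covers most goto-style cleanup, here is a sketch (`some_condition` is just a hypothetical stand-in):

          import std::io;

          fn bool some_condition()
          {
              return false;
          }

          fn void work()
          {
              io::printn("acquire resource");
              defer io::printn("release resource"); // runs on every exit path of this scope

              if (some_condition()) return;         // early return: the defer still runs
              io::printn("do the actual work");
          }

          fn void main()
          {
              work();
          }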
      • bluecalm20 hours ago
        Looking at my own code, one case I wouldn&#x27;t get with defer and a better switch is avoiding a flag pattern.<p>For example, when you iterate over a block and check whether positions are 0 (+ do some work): once you encounter a non-zero you jump to a different &quot;non-empty&quot; section, but if it&#x27;s zeros to the end you jump over &quot;non-empty&quot; to go to the end section. Without goto you need to set a flag and add another conditional there.<p>Other than that, it&#x27;s what you mentioned: flattening the if structure is nice. When you have a few simple cases, then a complicated one, and a finishing section at the end, it&#x27;s just cleaner and easier to read with goto. It could be handled with a switch statement, but not everything is &quot;switchable&quot;, and the way most people write it it&#x27;s another 2 indentation levels (1 with a convention of not indenting cases, but I see the C3 docs avoid that).<p>I get it&#x27;s rare, but goto (other than for error handling) is rare and I don&#x27;t think people have a tendency to abuse it. If anything, people abuse &quot;structured&quot; constructs, building an arrow pattern with multiple indentation levels for no good reason.
        • lerno3 hours ago
          That&#x27;s the kind of thing I was thinking about. You can solve that with a switch in C3, but it&#x27;s not as nice. However, this accounts for no more than 1% of all my goto uses (from a quick inspection), which is too little to build a feature from (discipline is needed to prevent a language from ballooning, it&#x27;s hard to say no). I am looking for some use for it that can redeem its inclusion.
  • It seems great to have something that is C but cleaned up, although Clay had all that with templates and move semantics.<p>I think leaving out move semantics and destructors is inexcusable at this point. They are not only fundamental, but they don&#x27;t affect things like standard libraries, runtimes, or ABIs.
  • zephen1 day ago
    &quot; the C-like for programmers who like C.&quot;<p>Sounds intriguing. But then, the first thing I noticed in their example is a double-colon scope operator.<p>I understand that it&#x27;s part of the culture (and Rust, C#, and many other languages), but I find the syntax itself ugly.<p>I dunno. Maybe I have the visual equivalent of misophonia, in addition to the auditory version, but :: and x &lt;&lt; y &lt;&lt; z &lt;&lt; whatever and things like that just grate.<p>I like C. But I abhor C++ with a passion, partly because of what, to me, is jarring syntax. A lot of languages have subsequently adopted this sort of syntax, but it really didn&#x27;t have that much thought put into it at the beginning, other than that Stroustrup went out of his way to use different symbols for different kinds of hierarchies, because some people were confused.<p>Source: <a href="https:&#x2F;&#x2F;medium.com&#x2F;@alexander.michaud&#x2F;the-double-colon-operator-75543555fc08" rel="nofollow">https:&#x2F;&#x2F;medium.com&#x2F;@alexander.michaud&#x2F;the-double-colon-opera...</a><p>Think about that. The designer of a language that is practically focused on polymorphism went out of his way to _not_ overload the &#x27;.&#x27; operator for two things that are about as close semantically as things ever get (hierarchical relationships), simply because some of his customers found that overloading to be confusing. (And yet, &#x27;&lt;&lt;&#x27; is used for completely disparate things in the same language, but, of course, apparently, that is not at all confusing.)<p>I saw in another comment here just now that one of the differentiators between zig and C3 is that C3 allows operator overloading.<p>Honestly, that&#x27;s in C3&#x27;s favor (in my book), so why don&#x27;t they start by overloading &#x27;.&#x27; and get rid of &#x27;::&#x27; ?
    • lerno1 day ago
      Two reasons, the second being the important one: (1) If I read &quot;io.print&quot;, is this &quot;the print function in the module io&quot; or &quot;the print method on the variable io&quot;? There tends to be an overlap in naming here, so that&#x27;s a downside. (2) Parsing and semantic checking are much easier if the namespace is clear from the grammar.<p>In particular, C3&#x27;s &quot;path shortening&quot;, where you&#x27;re allowed to write `file::open(&quot;foo.txt&quot;)` rather than having to use the full `std::io::file::open(&quot;foo.txt&quot;)`, is only made possible because the namespace is distinct at the grammar level.<p>If we play with changing the syntax because it isn&#x27;t as elegant as `file.open(&quot;foo.txt&quot;)`, we&#x27;d have to pay by actually writing `std.io.file.open(&quot;foo.txt&quot;)` or change to a flat module system. That is a fairly steep semantic cost to pay for a nicer namespace separator.<p>I might have overlooked some options; if so, let me know.
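      To make the path-shortening point concrete, here is a tiny sketch using the standard library's own `printn` (my understanding is that the full path and the trailing sub-path both resolve to the same function):

          import std::io;

          fn void main()
          {
              std::io::printn("called via the full path");
              io::printn("called via the shortened sub-path"); // same function
          }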
      • alcover1 day ago
        &gt; (1) If I read &quot;io.print&quot;, is this &quot;the print function in the module io&quot; or &quot;the print method for the variable io&quot;<p>I don&#x27;t see the issue. Just look up the id? Moreover, if modules are seen as objects, the meaning is quite the same.<p>&gt; checking is much easier if the namespace is clear from the grammar.<p>Again (this time by the checker), just look up the symbol table?
        • lerno1 day ago
          Let&#x27;s say we find foo::bar(). Then we know that the path is &lt;some path&gt;::foo and the function is `bar`, so we search for all modules matching the substring ::foo, and depending on whether (1) we get multiple matches, (2) we only get a match that is not properly visible, (3) we get a match that isn&#x27;t imported, (4) we get no match, or (5) we get a visible match, we print different things. In cases 1-4, we give good errors to allow the user to take the proper action.<p>If instead we had foo.bar(), we cannot know if this is the method &quot;bar&quot; on a local or global foo, or a function &quot;bar()&quot; in a path matching the substring &quot;foo&quot;. Consequently we cannot properly issue (4), since we don&#x27;t know what the intent was.<p>So far, not so bad. Let&#x27;s say it&#x27;s instead foo::baz::bar(). In the :: case, we don&#x27;t have any change in complexity; we simply match ::foo::baz instead.<p>However, for foo.baz.bar(), we get more cases, and let us also bring in the possibility of a function pointer being invoked:<p>1. It is invoking the method bar() on the global baz in a module that ends with &quot;foo&quot;<p>2. It is calling a function pointer stored in the member bar on the global variable baz in a module that ends with &quot;foo&quot;<p>3. It is calling the function bar() in a module that ends with &quot;foo.baz&quot;<p>4. It is calling the function pointer stored in the global bar in a module that ends with &quot;foo.baz&quot;<p>5. It is invoking the method bar on the member baz of the local foo<p>6. It is calling a function pointer stored in the member bar in the member baz of the local foo<p>This might seem doable, but note that for every module we have that has a struct, we need to speculatively dive into it to see if it might give a match. And then give a good error message to the user if everything fails.<p>Note here that if we had yet another level, `foo.bar.baz.abc()`, then the number of combinations to search increases yet again.
          • zephen23 hours ago
            I think you are overcomplicating this.<p>This is exactly the syntax Python uses, and there is no &quot;search&quot; per se.<p>Either an identifier is in the current namespace or not.<p>And if it is in the current namespace, there can only be one.<p>The only time multiple namespaces are searched is when you are scoped within a function or class which might have a local variable or member of the same name.<p>&gt; find foo::bar(), then we know that the path is &lt;some path&gt;::foo, the function is `bar` consequently we search for all modules matching the substring ::foo,<p>The only reason you need to have a search and think about all the possibilities is that you are deliberately allowing implicit lookups. Again, in Python:<p>1) Everything is explicit; but 2) you can easily create shorthand aliases when you want.<p>&gt; note that for every module we have that has a struct, we need to speculatively dive into it to see if it might give a match. And then give a good error message to the user if everything fails.<p>Only if you rely on search, as opposed to, you know, if you &#x27;import foo&#x27; then &#x27;foo&#x27; refers to what you imported.
      • zephen1 day ago
        I have never found either (1) or (2) to be a problem in hundreds of thousands of lines of Python.<p>&gt; In particular, C3&#x27;s &quot;path shortening&quot; ... we&#x27;d have to pay by actually writing `std.io.file.open(&quot;foo.txt&quot;)` or change to a flat module system.<p>You can easily and explicitly shorten paths in other languages. For example, in Python &quot;from mypackage.mysubpackage import mymodule; mymodule.myfunc()&quot;<p>Python even gracefully handles name collisions by allowing you to change the name of the local alias, e.g. &quot;from my_other_package.mysubpackage import mymodule as other_module&quot;<p>I find the &quot;from .. import&quot; to be really handy to understand touchpoints for other modules, and it is not very verbose, because you can have a comma-separated list of things that you are aliasing into the current namespace.<p>(You can also use &quot;from some_module import *&quot; to bring everything in, which is highly useful for exploratory programming but is an anti-pattern for production software.)
        • lerno1 day ago
          Of course you can explicitly shorten paths. I was talking about C3&#x27;s path shortening which is doing this for you. This means you do not <i>need</i> to alias imports, which is otherwise how languages do it.<p>I don&#x27;t want to get too far into details, but it&#x27;s understandable that people misunderstand it if they haven&#x27;t used it, as it&#x27;s a novel approach not used by any other language.
          • zephen23 hours ago
            Oh, I understand it. I just think that (a) explicit is better than implicit; and (b) the amount of characters that Python requires to keep imports explicit is truly minimal, and is a huge aid to figuring out where things came from.
    • As a long time lisper I can&#x27;t stand how much syntax languages have and I think of excess syntax as a sign of a childish mind, but what can you do?
      • It&#x27;s horses for courses, right? Pick the right language for the job. LISP was designed for &#x27;60s-era GOFAI, with code not differentiated from data, but a COBOL, FORTRAN, or even BASIC programmer would presumably (and justifiably, from the perspective of those typical use cases) regard LISP as the toy&#x2F;unserious language.
      • zephen1 day ago
        As I get older, I realize that everybody&#x27;s sweet spot is a little different.<p>Lisp and APL both have their adherents.<p>I personally find a bit more syntax than lisp to be nice. Occasionally I long for the homoiconicity of lisp; otoh, many of the arguments for it fall flat with me. For example, DSLs -- yeah, no, it&#x27;s hard enough to get semi-technical people to use DSLs to start with, never mind lisp-like ones.
    • Namespaces to me are more about naming-conflict resolution and code readability, and I think of them more as prefixes to namespace member names, as opposed to those member names being part of a hierarchy.<p>It also helps code readability to know that a::b is referring to a namespace, without having to go look up the definition of &quot;a&quot;, while a.b is a variable access.
      • zephen1 day ago
        &gt; Namespaces to me are more about naming conflict resolution and code readability, and I think of them more as prefixes to namespace member names, as opposed to those member names being part of a hierarchy.<p>That&#x27;s a perspective. Are we talking about the &#x27;bar&#x27; that comes from &#x27;foo&#x27; or are we talking about the &#x27;bar&#x27; that comes from &#x27;baz&#x27;?<p>But another perspective is that &#x27;foo&#x27; is important and provides several facilities that are intimately related to foo, so &#x27;bar&#x27; is simply one of the features of foo.<p>&gt; It also helps code readability to know that a::b is referring to a namespace<p>For you, perhaps. As someone who reads a lot of Python, I don&#x27;t personally find this argument persuasive.
        • Right - I just wanted to point out that not everyone is going to conceptualize namespaces and members, the same as structs and members, as both being about &quot;hierarchy&quot;.<p>I&#x27;m generally of the camp that code is written once, read many times, and that anything that adds to readability is therefore a win.
          • zephen1 day ago
            &gt; anything that adds to readability is therefore a win.<p>Right, the entire question is whether &#x27;::&#x27; ever adds to readability.<p>For me, it&#x27;s a huge negative.<p>Obviously, YMMV.
  • We have solved the &quot;better C&quot; issue, but nobody seems keen on solving the &quot;better compiler&quot; issue.<p>Why do we still have to recompile the whole program every time we make a change? The only project I am aware of that wants to tackle this is Zig, with binary patching, and that&#x27;s imo where we should focus our efforts.<p>C3 does look interesting tho, the idea of ABI compatibility with C is pretty ingenious: you get to tap into C&#x27;s ecosystem for free
    • flohofwoe1 day ago
      &gt; Why do we still have to recompile the whole program everytime we make a change<p>That problem was solved decades ago via object files and linkers. Zig needs a different approach because its language features depend on compiling the entire source code as a single compilation unit, but I don&#x27;t think that C3 has that same &quot;restriction&quot; (not sure though).
    • norir1 day ago
      To a large extent, this problem is primarily due to slow compilation. It is possible to write a direct to machine code compiler that compiles at greater than one million lines per second. That is more code than I am likely to write in my lifetime. A fast compiler with no need for incremental compilation is a superior default and can always be adapted to add incrementalism when truly needed.
    • sirwhinesalot1 day ago
      C3 doesn&#x27;t have a recompile everything model, in fact it&#x27;s pretty much designed around supporting separate compilation and dynamic linking (unlike Zig and Odin), it even supports Objective-C style dynamic calls.
    • muth024461 day ago
      Separate compilation is one solution to the problem of slow compilation.<p>Binary patching is another one. It feels a bit messy, and I am sceptical that it can be maintained, assuming it works at all.<p>I think a much better approach would be to make the compilers faster. Why does compiling 1M LOC take more than 1s in unoptimized mode for any language? My guess is that part of the blame lies with bloated backends and metaprogramming (including compile-time evaluation, templates, etc.)
      • norir1 day ago
        Ha, I did not see your post before making mine. You are correct in your assessment of the blame.<p>Moreover, I view optimization as an anti-pattern in general, especially for a low level language. It is better to directly write the optimal solution and not be dependent on the compiler. If there is a real hotspot that you have identified through profiling and you don&#x27;t know how to optimize it, then you can run the hotspot through an optimizing compiler and copy what it does.
    • &gt; Why do we still have to recompile the whole program everytime we make a change<p>Are you talking about compiling, or linking, or both?<p>GNU ld has supported incremental linking for ages, and make systems only recompile things based on file level dependencies.<p>I guess recompilation could perhaps be smarter based on what changed or was added&#x2F;deleted to a module definition (e.g C header file), but this would seem difficult to get right. Maybe you just add a new function to a module, so no need to recompile other modules that use it, right? Except what if there is now a name clash and they would fail if recompiled?
    • jlarocco1 day ago
      &gt; Why do we still have to recompile the whole program everytime we make a change, the only project i am aware of who wants to tackle this is Zig<p>Lisp solved that problem 60 years ago.<p>A meta answer to your question, I guess.
  • fatty_patty891 day ago
    not being able to do alias i32 = int; was the biggest turn off for me
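    For context, a sketch of what does compile - the keyword is `alias` in recent compiler versions (`def` in older ones), and my understanding (an assumption, not verified against the latest release) is that `i32` specifically is rejected because user-defined type names must start with an upper-case letter:

        alias Int32 = int;

        fn Int32 twice(Int32 x)
        {
            return x * 2;
        }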
  • Rohansi21 hours ago
    c3 is as easy as 1,2,3
  • maximgeorge1 day ago
    [dead]
  • bigbadfeline1 day ago
    [flagged]
  • gdsdfe1 day ago
    Honestly if your programming language does not compile to wasm I don&#x27;t care for it. That&#x27;s my new rule.
    • Alifatisk1 day ago
      &gt; We do want WASM to be working really well, so if you’re interested in writing something in WASM please reach out to the C3 development team and we’ll help you get things working.
    • lerno1 day ago
      It does compile to WASM.
  • TZubiri1 day ago
    If I had a dollar for every C successor...
  • bofia1 day ago
    Does this compile to Rust with an LLVM?
    • Alifatisk1 day ago
      No, C3 does not compile to Rust or use Rust in its compilation process.