This is a good write-up and I agree with pretty much all of it.<p>Two comments:<p>- LLVM IR is actually remarkably stable these days. I was able to rebase Fil-C from LLVM 17 to 20 in a single day of work. In other projects I’ve maintained an LLVM pass that worked across multiple LLVM versions and it was straightforward to do.<p>- LICM register pressure is a big issue, especially when the source isn’t C or C++. I don’t think the problem here is necessarily LICM. It might be that regalloc needs to be taught to rematerialize.
> It might be that regalloc needs to be taught to rematerialize<p>It knows how to rematerialize, and has for a long time, but the backend is generally more local/has less visibility than the optimizer. This causes it to struggle to consistently undo bad decisions LICM may have made.
> It knows how to rematerialize<p>That's very cool, I didn't realize that.<p>> but the backend is generally more local/has less visibility than the optimizer<p>I don't really buy that. It's operating on SSA, so it has exactly the same view as LICM in practice (to my knowledge, LICM doesn't cross function boundaries).<p>LICM can't possibly know the cost of hoisting. Regalloc does have decent visibility into cost. Hence this feels like a regalloc remat problem to me.
<i>"LLVM IR is actually remarkably stable these days."</i><p>I'm by no means an LLVM expert, but my takeaway from when I played with it a couple of years ago was that it is more like the union of different languages. Every tool and component in the LLVM universe has its own set of rules and requirements for the LLVM IR that it understands. The IR is more like a common vocabulary than a common language.<p>My initial bewilderment at LLVM IR not being stable between versions gave way to the understanding that this freedom was necessary.<p>Do you think I misunderstood?
> like the union of different languages<p>No. Here are two good ways to think about it:<p>1. It's the C programming language represented in SSA form and with some of the UB in the C spec given a strict definition.<p>2. It's a low-level representation. It's suitable for lowering other languages to. Theoretically, you could lower anything to it since it's Turing-complete. Practically, it's only suitable for lowering sufficiently statically-typed languages to it.<p>> Every tool and component in the LLVM universe had its own set of rules and requirements for the LLVM IR that it understands.<p>Definitely not. All of those tools have a shared understanding of what happens when LLVM IR executes on a particular target and data layout.<p>The only flexibility is that you're allowed to alter <i>some</i> of the semantics on a per-target and per-datalayout basis. Targets have limited power to change semantics (for example, they cannot change what "add" means). Data layout is its own IR, and that IR has its own semantics - and everything that deals with LLVM IR has to deal with the data layout "IR" and has to understand it the same way.<p>> My bewilderment about LLVM IR not being stable between versions had given way to understanding that this freedom was necessary.<p>Not parsing this statement very well, but bottom line: LLVM IR is remarkably stable because of Hyrum's law within the LLVM project's repository. There's a TON of code in LLVM that deals with LLVM IR. So, it's super hard to change even the smallest things about how LLVM IR works or what it means, because any such change would surely break at least one of the many things in the LLVM project's repo.
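As a concrete illustration of the "C in SSA form" framing, here's a hand-written sketch of a tiny function in textual LLVM IR (the exact IR clang emits will differ); the `nsw` flags are one example of C's signed-overflow UB being given a strict definition (overflow yields a poison value):

```llvm
define i32 @add_mul(i32 %a, i32 %b) {
entry:
  %sum = add nsw i32 %a, %b     ; nsw: signed overflow produces poison
  %prod = mul nsw i32 %sum, %b
  ret i32 %prod
}
```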
> 1. It's the C programming language represented as SSA form and with some of the UB in the C spec given a strict definition.<p>This is becoming steadily less true over time, as LLVM IR is growing somewhat more divorced from C/C++, but that's probably a good way to start thinking about it if you're comfortable with C's corner case semantics.<p>(In terms of frontends, I've seen "Rust needs/wants this" as much as Clang these days, and Flang and Julia are also pretty relevant for some things.)<p>There's currently a working group in LLVM on building better, LLVM-based semantics, and the current topic du jour of that WG is a byte type proposal.
> This is becoming steadily less true over time, as LLVM IR is growing somewhat more divorced from C/C++, but that's probably a good way to start thinking about it if you're comfortable with C's corner case semantics.<p>First of all, you're right. I'm going to reply with amusing pedantry, but I'm not really disagreeing.<p>I feel like in some ways LLVM is becoming more like C-in-SSA...<p>> and the current topic du jour of that WG is a byte type proposal.<p>That's a case of becoming more like C! C has pointer provenance and the idea that byte copies can copy "more" than just the 8 bits, somehow.<p>(The C provenance proposal may be in a state where it's not officially part of the spec - I'm not sure exactly - but it's <i>effectively</i> part of the language in the sense that a lot of us already consider it to be part of the language.)
The C pointer provenance model is still in TS form and is largely constructed by trying to retroactively justify the semantics of existing compilers (which all follow some form of pointer provenance, just not necessarily coherently). This is still an area where we have a decent idea of what we want the semantics to be, but it's challenging to come up with a working formalization.<p>I'd have to double-check, but my recollection is that the current TS doesn't actually require that you be able to implement a user-written memcpy; rather, it's something where the authors threw up their hands and said "we hope compilers support this, but we can't specify how." In that sense, the byte type is going beyond what C does.
> The C pointer provenance is still in TS form and is largely constructed by trying to retroactively justify the semantics of existing compilers<p>That's my understanding too<p>> I'd have to double-check, but my recollection is that the current TS doesn't actually require that you be able to implement user-written memcpy, rather it's just something that the authors threw their hands up and said "we hope compilers support this, but we can't specify how."<p>That's also my understanding<p>> In that sense, byte type is going beyond what C does.<p>I disagree, but only because I probably define "C" differently than you.<p>"C", to me, isn't what the spec describes. If you define "C" as what the spec describes, then almost zero C programs are "C". (Source: in the process of making Fil-C, I experimented with various points on the spectrum here and have high confidence that to compile any real C program you need to go far beyond what the spec promises.)<p>To me, when we say "C", we are really talking about:<p>- What real C programs expect to happen.<p>- What real C compilers (like LLVM) make happen.<p>In that sense, the byte type is a case of LLVM hardening the guarantee that it already makes to real C programs.<p>So, LLVM having a byte type is a necessary component of LLVM supporting C-as-everyone-practically-uses-it.<p>Also, I would guess that we wouldn't be talking about the byte type if it wasn't for C. Type-safe languages with well-defined semantics have no need for allowing the user to write a byte-copy loop that does the right thing if it copies data of arbitrary type.<p>(Please correct me if I'm wrong, this is fun)
Bytewise copy just works with the TS. What it does not support is tracking provenance across the copy and doing optimizations based on it. What we hope is that compilers drop these optimizations, because they are unsound.
Thanks for your detailed answer. You encouraged me to give it another try and have a closer look this time.
This take makes sense in the context of MLIR's creation, which introduced dialects: namespaces within the IR. Given that it was created by Chris Lattner, I would guess he saw these problems with LLVM as well.
There is a rematerialization pass; there is no real reason to couple it with register allocation. LLVM's regalloc is already somewhat subpar.<p>What would be neat is to expose all the right knobs and levers so that frontend writers can benchmark a number of possibilities and choose the right values.<p>I can understand this is easier said than done, of course.
I asked the guy working on compiler-rt to change one boolean so the LLVM 18 build would work on macOS, and he locked the whole issue down as "heated", and it's still not fixed four years later.<p>I love LLVM though. clang-tidy, ASAN, UBSAN, LSAN, MSAN, and TSAN are AMAZING. If you are coding C and C++ and NOT using clang-tidy, you are doing it wrong.<p>My biggest problem with LLVM right now is that -fbounds-safety is only available on Xcode/AppleClang and not LLVM Clang. MSAN and LSAN are only available on LLVM and not Xcode/AppleClang. Also, Xcode doesn't ship clang-tidy, clang-format, or llvm-symbolizer. It's kind of a mess on macOS right now. I basically rolled my own Darwin LLVM for LSAN and clang-tidy support.<p>The situation on Linux is even weirder. RHEL doesn't ship libcxx, but Fedora does. No distro has libcxx instrumented for MSAN at the moment, which means rolling your own.<p>What would be amazing is if some distro would just ship native LLVM with all the things working out of the box. Fedora is really close right now, but I still have to build compiler-rt manually for MSAN support.
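For reference, the "roll your own" MSAN-instrumented libc++ build I mean is roughly this (following the upstream MemorySanitizer docs; directory names and the trailing compile flags are illustrative):

```shell
# From an llvm-project checkout: build libc++/libc++abi with MSAN.
cmake -G Ninja -S runtimes -B build-msan \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_C_COMPILER=clang \
  -DCMAKE_CXX_COMPILER=clang++ \
  -DLLVM_ENABLE_RUNTIMES="libcxx;libcxxabi;libunwind" \
  -DLLVM_USE_SANITIZER=MemoryWithOrigins
ninja -C build-msan cxx cxxabi

# Then build your own code against the instrumented runtime:
#   clang++ -fsanitize=memory -stdlib=libc++ \
#     -isystem build-msan/include/c++/v1 \
#     -L build-msan/lib -Wl,-rpath,build-msan/lib ...
```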
Given some of the discussions I've been stuck in over the past couple of weeks, one of the things I especially want to see built out for LLVM is a comprehensive executable test suite that starts not from C but from LLVM IR. If you've ever tried working on your own backend, one of the things you notice is there's not a lot of documentation about all of the SelectionDAG stuff (or GlobalISel), and there is also a lot of semi-generic "support X operation on top of Y operation if X isn't supported." And the precise semantics of X or Y aren't clearly documented, so it's quite easy to build the wrong thing.
> This is somewhat unsurprising, as code review … may not provide immediate value to the person reviewing (or their employer).<p>If you get “credit” for contributing when you review, maybe people (and even employers, though that is perhaps less likely) would find <i>doing</i> reviews to be more valuable.<p>Not sure what that looks like; maybe whatever shows up in GitHub is already enough.
Honestly, the same phenomenon is a problem inside companies as well. My employer credits review quality and quantity relatively well (i.e., in annual performance review), but it still isn't a strong enough motivator to really get the rate up to a satisfactory level.
Six years ago I was building LLVM pretty regularly on an 8 GB Dell 9360 laptop whilst on a compiler-related contract. (Still have it actually - that thing is weirdly indestructible for a cheap ultrabook.)<p>Build time wasn’t great, but it was tolerable, so long as you reduced link parallelism to squeeze inside the memory constraints.<p>Is it still possible to compile LLVM on such a machine, or is 8 GB no longer workable at all?
> or is 8Gb no longer workable<p>LLVM compiles in less than an hour on my old M1 Mac in all the build configurations I have tried so far.
If you don't build with parallelism and have a couple gigs of swap available, it should work (although you might need to set some command line flags to use the right linker settings).
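Concretely, a low-memory configure might look something like this (the flag names are real CMake options; the values are guesses for an 8 GB machine, and the target list should match your hardware):

```shell
# Limit link parallelism (the memory hog) and use lld,
# which needs far less memory than BFD ld. Requires Ninja
# for LLVM_PARALLEL_LINK_JOBS to take effect.
cmake -G Ninja -S llvm -B build \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_USE_LINKER=lld \
  -DLLVM_PARALLEL_LINK_JOBS=1 \
  -DLLVM_TARGETS_TO_BUILD=X86
ninja -C build
```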
>Compilation time<p>I remember part of the selling point of LLVM during its early stage was compilation time being so much faster than GCC.<p>LLVM started about 15 years after GCC, and LLVM is 23 years old already. I wonder if something new will pop up again.
A few months ago someone wrote a much faster -O0 backend for LLVM, though it seems it didn't get much attention upstream: <a href="https://discourse.llvm.org/t/tpde-llvm-10-20x-faster-llvm-o0-back-end/86664" rel="nofollow">https://discourse.llvm.org/t/tpde-llvm-10-20x-faster-llvm-o0...</a><p>Discussion: <a href="https://news.ycombinator.com/item?id=45072481">https://news.ycombinator.com/item?id=45072481</a><p>There are also codegen projects that don't use LLVM IR that are faster like Cranelift: <a href="https://github.com/bytecodealliance/wasmtime/tree/main/cranelift" rel="nofollow">https://github.com/bytecodealliance/wasmtime/tree/main/crane...</a>
> Considering LLVM is 23 years old already. I wonder if something new again will pop up<p>LLVM is actually really, really good at what it does (compiling C/C++ code). Not perfect, but good enough that it would take tens of thousands of competent man-hours to match it.
FWIW, the article says "Frontends are somewhat insulated from this because they can use the largely stable C API." but that's not been my/our experience. There are parts of the API that are somewhat stable, but other parts (e.g. Orc) that change wildly.
Yes, the Orc C API follows different rules from the rest of the C API (<a href="https://github.com/llvm/llvm-project/blob/501416a755d1b85ca10507301aac273843a15a39/llvm/include/llvm-c/Orc.h#L20-L23" rel="nofollow">https://github.com/llvm/llvm-project/blob/501416a755d1b85ca1...</a>).
I know, but even if it's not breaking promises, the constant stream of changes still makes it rather painful to utilize LLVM. It doesn't help that unless you embed LLVM, you have to deal with a lot of different LLVM versions out there...
FWIW eventual stability is a goal, but there's going to be more churn as we work towards full arbitrary program execution (<a href="https://www.youtube.com/watch?v=qgtA-bWC_vM" rel="nofollow">https://www.youtube.com/watch?v=qgtA-bWC_vM</a> covers some recent progress).<p>If you're looking for stability in practice: the ORC LLJIT API is your best bet at the moment (or sticking to MCJIT until it's removed).
> There are thousands of contributors and the distribution is relatively flat (that is, it’s not the case that a small handful of people is responsible for the majority of contributions.)<p>This certainly varies across different parts of llvm-project. In flang, there's very much a "long tail". 80% of its 654K lines are attributed to the 17 contributors responsible for 1% or more of them, according to "git blame", out of 355 total.
That was ambiguously phrased. The point I was trying to make here is that we don't have the situation that is very common for open-source projects, where a project might nominally have 100 contributors, but in reality it's one person doing 95% of the changes.<p>LLVM of course has plenty of contributors that only ever landed one change, but the thing that matters for project health is that the group of "top contributors" is fairly large.<p>(And yes, this does differ by subproject, e.g. lld is an example of a subproject where one contributor is more active than everyone else combined.)
My main concern with LLVM is that it adds 30+ million lines of code dependency to any language that relies on it.<p>Part of the reason I'm not ready to go all in on Rust is that I'm not willing to externalize that much complexity in the programs I make.
What language do you typically use?
QBE might be more to your liking: <a href="https://c9x.me/compile/" rel="nofollow">https://c9x.me/compile/</a><p>It is used in the Hare language.
Hey Nikita, if you're reading this, Thank You! for your contributions to PHP!<p>We miss you!
Compile times are an issue, not only for LLVM itself but also for its users, with Rust as a prime example. Rust has horrible compile times for anything larger, which makes it a real PITA to use.
It's amazing to me that this is trusted to build so much of our software. It's basically impossible to audit, yet Rust is supposed to be safe. It's a pipe dream that it will ever be complete or that Rust will deprecate it. I think infinite churn is the point.
Rust does its own testing, and regularly helps fix issues in LLVM (which usually also benefits clang users and other LLVM languages).<p>Optimizing compilers are basically impossible to audit, but there are tools like alive2 for checking them.
Go is sometimes criticised for not using LLVM but I think they made the right choice.<p>For starters the tooling would be much slower if it required LLVM.
> I think infinite churn is the point.<p>That would require the LLVM devs to be stupid and/or evil. As that is not the case, your supposition is not true either. They might be willing to accept churn in the service of other goals, but they don't have churn as a goal unto itself.