For a good example of this sort of pattern in the real world, take a look at the Zig compiler source code. I'm sure others do it too, but Zig definitely does. I have a now very outdated series on some of the Zig internals: <a href="https://mitchellh.com/zig/parser" rel="nofollow">https://mitchellh.com/zig/parser</a> And Andrew's old DoD talk is very good and relevant to this: <a href="https://vimeo.com/649009599" rel="nofollow">https://vimeo.com/649009599</a><p>More generally, I believe it's fair to call this a form of handle-based design: <a href="https://en.wikipedia.org/wiki/Handle_(computing)" rel="nofollow">https://en.wikipedia.org/wiki/Handle_(computing)</a> Handles are EXTREMELY useful for a variety of reasons and imo woefully underused above the lowest system level.
My hypothesis is that handles are underused because programming languages make it very easy to dereference a pointer (you just need the pointer) whereas "dereferencing" a handle requires also having the lookup table in hand at the same time, and that little bit of extra friction is too much for most people. It's not that pointers <i>don't</i> require extra machinery to be dereferenced, it's just that that machinery (virtual memory) is managed by the operating system, and so it's invisible in the language.<p>My current research is about how to make handles just as convenient to use as pointers are, via a form of context: like a souped-up version of context in Odin or Jai if one is familiar with those, or like a souped-up version of coeffects if one has a more academic background.
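The friction can be made concrete with a small sketch (all names here are illustrative, not from any particular codebase): a handle is just an index, and "dereferencing" it requires having the owning table in scope, whereas a pointer deref needs only the pointer itself.

```rust
// A handle is just an index: cheap to copy, meaningless on its own.
struct NodeId(u32);

// The lookup table the handle is relative to.
struct Arena {
    nodes: Vec<String>, // the payload type is illustrative
}

impl Arena {
    fn alloc(&mut self, value: String) -> NodeId {
        self.nodes.push(value);
        NodeId(self.nodes.len() as u32 - 1)
    }

    // The extra "machinery": unlike a pointer deref, resolving a
    // handle requires `&self` -- the arena must be in hand.
    fn get(&self, id: &NodeId) -> &String {
        &self.nodes[id.0 as usize]
    }
}

fn main() {
    let mut arena = Arena { nodes: Vec::new() };
    let id = arena.alloc("hello".to_string());
    // You cannot resolve `id` without `arena` in scope -- that is
    // exactly the bit of friction the comment above describes.
    assert_eq!(*arena.get(&id), "hello");
}
```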
I think it's a generic programming problem: pointers are easier because a dereference gives you both the pointee's type and its location (memory). With index-based handles into containers, you can no longer say that given a handle `H` (type H = u32) you can recover a type `T`. Worse, you've also introduced the notion of "where": even if for each type `T` there exists a unique handle type `H`, you don't know which container instance that handle belongs to. What you need is a unique handle type per container instance, so that "Handle of Pool<T>" != "Handle of Pool<T>" unless the Pool is bound to the same variable.<p>As far as I know, no language allows expressing that kind of thing.
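A hedged sketch of how close ordinary types get (names are made up for illustration): in Rust you can tag handles with a zero-sized phantom type, which makes handles from differently-tagged pools incompatible at compile time. But the distinction is per tag type, not per container instance, which is exactly the gap described above.

```rust
use std::marker::PhantomData;

// A handle tagged with a zero-sized marker type: Handle<ExprTag> and
// Handle<StmtTag> are distinct types, so mixing them is a compile error.
#[derive(Clone, Copy)]
struct Handle<Tag> {
    index: u32,
    _tag: PhantomData<Tag>,
}

struct Pool<T, Tag> {
    items: Vec<T>,
    _tag: PhantomData<Tag>,
}

impl<T, Tag> Pool<T, Tag> {
    fn new() -> Self {
        Pool { items: Vec::new(), _tag: PhantomData }
    }
    fn insert(&mut self, item: T) -> Handle<Tag> {
        self.items.push(item);
        Handle { index: (self.items.len() - 1) as u32, _tag: PhantomData }
    }
    fn get(&self, h: &Handle<Tag>) -> &T {
        &self.items[h.index as usize]
    }
}

// Two marker types give two incompatible handle spaces...
enum ExprTag {}
enum StmtTag {}

fn main() {
    let mut exprs: Pool<&str, ExprTag> = Pool::new();
    let mut stmts: Pool<&str, StmtTag> = Pool::new();
    let e = exprs.insert("1 + 2");
    let s = stmts.insert("return");
    assert_eq!(*exprs.get(&e), "1 + 2");
    assert_eq!(*stmts.get(&s), "return");
    // exprs.get(&s); // compile error: expected Handle<ExprTag>
    // ...but two Pool<&str, ExprTag> *instances* still share one handle
    // type, so a handle from one silently indexes into the other --
    // the per-instance distinction the comment asks for is missing.
}
```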
> What you need is a unique handle type per container instance.<p>You can do this with path-dependent types in Scala, or more verbosely with modules in OCaml. The hard part is keeping the container name in scope wherever these handle types are used: many type definitions will need to reference the container handle types. I'm currently trying to structure code this way in my pet compiler written in OCaml.
I think actually Scala does exactly this style of inferring the container instance from its type: <a href="https://docs.scala-lang.org/scala3/book/ca-context-parameters.html" rel="nofollow">https://docs.scala-lang.org/scala3/book/ca-context-parameter...</a><p>But from what I understand (being a nonexpert on Scala), this scheme actually causes a lot of problems. I think I've even heard that it adds more undecidability to the type system? So I'm exploring ways of managing context that don't depend on inferring backward from the type.
Great summary and I think your argument is sound.
What I like about this writeup is that it surfaces a tension most “let’s build a compiler” tutorials skip: the AST is both a data structure and a UX boundary. Super-flat layouts are fantastic for cache and memory, but they’re hostile to all the things humans care about (debuggable shapes, easy instrumentation, ad-hoc traversals, “just print this node and its children” in a debugger). A lot of production compilers quietly solve that by having two tiers: a nice, inefficient tree for diagnostics and early passes, and increasingly flattened / interned / arena-allocated forms as you move toward optimization and codegen.<p>The interesting question for me is where the crossover is now that IDEs and incremental compilation dominate the workload. If your front-end is effectively a long-running service, it might be worth keeping a friendlier AST around and only using a super-flat representation for hot paths like analysis passes or bulk refactors. Otherwise you risk saving a few hundred MB while spending engineer-months making every new pass fight the layout.
It looks like, overall, this design gets the parser about twice as fast as a simple one that creates tree-like ASTs.<p>That's not nothing. But a parser is rarely the most time-intensive part of a production compiler. And the parser <i>does</i> get iterated on a <i>lot</i> in languages that are evolving and adding new syntax.<p>Given that, I'd be inclined to take the performance hit and stick with a simpler AST representation if that yields a more hackable, maintainable compiler front end.
That's a good caution. However, traversing a flat AST (iterating a "struct of arrays" rather than a pointer-based tree) is also going to be faster. So the next steps of the compiler, say type checking and code emitting, will also be faster. But how much, or whether it's worth it even then, I'm not sure.
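A minimal sketch of why a flat layout helps later passes (the node shapes here are invented, not the article's): if nodes live in one array in postorder, with children referenced by index, a pass like evaluation becomes a single forward scan instead of pointer chasing.

```rust
// Hypothetical flat AST: nodes stored in one slice in postorder,
// children referenced by index into that same slice.
#[derive(Clone, Copy)]
enum Node {
    Num(i64),
    Add { lhs: u32, rhs: u32 }, // indices of the operand nodes
}

// One linear pass over the buffer: because children precede parents
// in postorder, their results are already in `vals` when we need them.
fn eval_all(nodes: &[Node]) -> Vec<i64> {
    let mut vals = Vec::with_capacity(nodes.len());
    for node in nodes {
        let v = match *node {
            Node::Num(n) => n,
            Node::Add { lhs, rhs } => vals[lhs as usize] + vals[rhs as usize],
        };
        vals.push(v);
    }
    vals
}

fn main() {
    // (1 + 2) + 4, laid out in postorder:
    let nodes = [
        Node::Num(1),
        Node::Num(2),
        Node::Add { lhs: 0, rhs: 1 },
        Node::Num(4),
        Node::Add { lhs: 2, rhs: 3 },
    ];
    // The root is the last node; its value is 7.
    assert_eq!(eval_all(&nodes).last(), Some(&7));
}
```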
Usually yes, but it's still a neat trick to be aware of. For interpreted scripting languages, parsing can actually be a significant slowdown. Even more so when we start going into text-based network protocols, which also need a parser (is CSS a programming language or a network protocol? :) )
(author here) I agree that it's a lot of complexity, and I acknowledge this in the article: You can get quite far with just a bump allocator.<p>I didn't go into this at all, but the main benefit of this design is how well it interacts with CPU cache. This has almost no effect on the parser, because you're typically just writing the AST, not reading it. I believe that subsequent stages benefit much more from faster traversal.<p>(By the way, I am a huge fan of your work. Crafting interpreters was my introduction to programming languages!)
It'd also have been interesting to see some overall profiling data of the initial program & some discussion of which optimisations to investigate based on that profiling data.<p>When investigating performance issues it's often very helpful to run with profiling instrumentation enabled and start by looking at some top-down "cumulative sum" profiler output to get a big picture view of which functions/phases are consuming most of the running time, to see where it may be worth spending some effort.<p>Getting familiar with linux's perf [1] tool is also helpful, both in terms of interpreting summary statistics from perf stat (instructions per cycle, page faults, cache misses, etc) that can give clues what to focus on, but also being able to use it to annotate source line by line with time spent.<p>I'm not familiar with rust, but e.g. the rustc compiler dev guide has a tutorial on how to profile rustc using perf [2]<p>[1] Brendan Gregg's Linux perf examples is an excellent place to start <a href="https://www.brendangregg.com/perf.html" rel="nofollow">https://www.brendangregg.com/perf.html</a>
[2] <a href="https://rustc-dev-guide.rust-lang.org/profiling/with_perf.html" rel="nofollow">https://rustc-dev-guide.rust-lang.org/profiling/with_perf.ht...</a>
For anyone confused by why the text says the performance is improving between each graph but the lines don't seem to show that: the color assigned to each key and the scale both change between graphs.
FWIW I think Clang IR does something like this in a lot of places. It is relatively common to see child nodes stored inline following parent nodes. The APIs more or less abstract this away from consumers like static analysis tools, though.<p>E.g., <a href="https://github.com/llvm/llvm-project/blob/62e00a03fba029f82d421fb3c814a6a02d59b8c8/clang/include/clang/AST/Expr.h#L2887-L2927" rel="nofollow">https://github.com/llvm/llvm-project/blob/62e00a03fba029f82d...</a><p>and<p><a href="https://github.com/llvm/llvm-project/blob/62e00a03fba029f82d421fb3c814a6a02d59b8c8/llvm/include/llvm/Support/TrailingObjects.h#L10-L21" rel="nofollow">https://github.com/llvm/llvm-project/blob/62e00a03fba029f82d...</a>
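For readers unfamiliar with the trailing-objects idea, here is a simplified sketch (not Clang's actual API): a variable-length node stores a header followed immediately by its children in the same flat buffer, and an accessor hides the layout from consumers.

```rust
// Sketch of inline trailing children in a flat buffer. The layout of
// a call node is: [callee, num_args, arg0, arg1, ...] -- one
// contiguous run of words, children stored right after the header.
struct FlatBuf {
    words: Vec<u32>,
}

impl FlatBuf {
    // Append a call node and return its offset in the buffer.
    fn push_call(&mut self, callee: u32, args: &[u32]) -> usize {
        let at = self.words.len();
        self.words.push(callee);
        self.words.push(args.len() as u32);
        self.words.extend_from_slice(args);
        at
    }

    // Consumers get an ordinary slice; the inline layout is abstracted
    // away, much like the accessor-based APIs the comment mentions.
    fn call_args(&self, at: usize) -> &[u32] {
        let n = self.words[at + 1] as usize;
        &self.words[at + 2..at + 2 + n]
    }
}

fn main() {
    let mut buf = FlatBuf { words: Vec::new() };
    let call = buf.push_call(7, &[10, 20, 30]);
    assert_eq!(buf.call_args(call), &[10, 20, 30]);
}
```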
We need to flatten everything. Thanks for the great write up
I want to find some way to link this work to Sea of Nodes representations, but I'm a bit out of my depths to try to do so.
<a href="https://v8.dev/blog/leaving-the-sea-of-nodes" rel="nofollow">https://v8.dev/blog/leaving-the-sea-of-nodes</a>
Personally I think this is a neat trick to organize memory, but don't these kinds of objects packed together in flat buffers bypass the entire lifetime and safety mechanism of Rust?<p>I mean if you make an off-by-one error on indices, essentially you are reading the contents of another node.
This is a common argument about Rust. Unlike pointer confusion, with index confusion, you still get bounds checking in the containing collection, and you also avoid type confusion (the wrong index element will still have the same type as the object you intended to access). So there are still some benefits.
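A small sketch of those two benefits (type names here are made up): newtype indices turn one class of index confusion into a compile error, and `Vec` indexing still bounds-checks the rest at runtime.

```rust
// Newtype indices: an ExprId can't be passed where a StmtId is
// expected, so cross-kind index confusion is a compile error.
#[derive(Clone, Copy)]
struct ExprId(usize);
#[derive(Clone, Copy)]
struct StmtId(usize);

struct Ast {
    exprs: Vec<String>,
    stmts: Vec<String>,
}

impl Ast {
    fn expr(&self, id: ExprId) -> &str {
        &self.exprs[id.0]
    }
    fn stmt(&self, id: StmtId) -> &str {
        &self.stmts[id.0]
    }
}

fn main() {
    let ast = Ast {
        exprs: vec!["x + 1".into()],
        stmts: vec!["let x = 0;".into()],
    };
    let e = ExprId(0);
    // ast.stmt(e); // compile error: expected StmtId, found ExprId
    assert_eq!(ast.expr(e), "x + 1");
    assert_eq!(ast.stmt(StmtId(0)), "let x = 0;");
    // An out-of-range index panics on the bounds check instead of
    // silently reading another object's memory:
    assert!(std::panic::catch_unwind(|| ast.expr(ExprId(99))).is_err());
}
```

Note this catches the index being of the wrong kind or out of range, but, as the reply below points out, an in-range index of the right kind that simply points at the wrong node is not caught.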
> This is a common argument about Rust.<p>Because it is a correct one. std::vector can do this in debug mode, and Zig and a bunch of other languages do as well. But that's not the point of why memory safety is important. You opt out of all aliasing and lifetime checks this way, which means an easy off-by-one indexing bug (which would be a compiler error otherwise) silently propagates through the system, and you can reference wrong nodes, invalid nodes, uninitialized nodes.<p>It's even worse in some respects than a dangling pointer, because you can quickly tell a dangling pointer contains garbage data, but here the data looks plausible.<p>I am not sure this is a critique of Rust: this certainly goes against the grain of how the language has been designed. In the case of Rust it might even make things easier, since lifetimes are no longer checked for individual elements, but it is also less safe.