One thing nobody's named in the thread yet: WASM's validator is linear-time, single-pass, with no dataflow joins. That constraint is what gives the operand stack its weird shape.
Every block, loop, and if carries a function-type signature. The operand stack at block entry has to match the parameter type, and at exit has to match the result type. Inside a block the validator only sees pushes and pops within that frame; it never has to merge stacks from sibling control-flow paths, because each path's exit type is independently checked against the same expected signature.<p>JVM went the other way: arbitrary control flow plus a verifier that does dataflow with type merges across joins. That's expensive enough that JVMs do it once at class load and cache the verified state. WASM specifically didn't want that bill; fast startup was a hard requirement.<p>So the prefix/postfix debate elsewhere in the thread is downstream of this. The encoded form is postfix because that's what trivially admits a linear validator; the textual LISP form is sugar for the same expression trees inside a typed frame. dup isn't missing for aesthetic reasons either: local.tee n followed by local.get n already gives you dup-equivalence through typed locals, and any stack op that didn't reduce to typed locals would either duplicate what locals already do, or break the validator's linearity guarantee.
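The tee/get spelling of dup can be sanity-checked with a toy stack interpreter (a hypothetical mini-model for illustration, not real Wasm semantics):

```python
# Minimal sketch: local.tee n followed by local.get n leaves two copies of
# the top value on the stack -- the effect of a classic dup, routed through
# a typed local. The instruction names mirror Wasm; the machine is a toy.
def run(instrs, locals_):
    stack = []
    for op, n in instrs:
        if op == "i32.const":
            stack.append(n)
        elif op == "local.tee":   # store top of stack into local n, keep it on the stack
            locals_[n] = stack[-1]
        elif op == "local.get":   # push a copy of local n
            stack.append(locals_[n])
    return stack

# "dup" spelled with typed locals:
assert run([("i32.const", 7), ("local.tee", 0), ("local.get", 0)], {}) == [7, 7]
```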
The whole point of having a type system is to endow program syntax with verifiable annotations that make validation easier. So if you wished to allow for otherwise "expensive" validation without overly impacting startup speed, the natural way of doing that is to extend the type system itself, in a way that offloads burden from the verifier to the producer. Arguably, this is exactly what WASM did when it implemented SSA phi-nodes via block return values and then extended those to multiple-value returns.
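The "annotation as phi" idea can be sketched in a few lines (a toy checker, nothing like the real validator): each arm of an `if` is compared to the declared result types on its own, so no dataflow join across paths is ever computed.

```python
# Toy sketch (hypothetical type checker, not the real Wasm validator):
# the block's declared result types play the role of an SSA phi, so the
# validator checks each path against the annotation instead of merging
# the two paths' types with each other.
def check_if(declared, then_types, else_types):
    # Each arm's exit types are compared to the annotation independently;
    # then_types and else_types are never unified directly.
    return then_types == declared and else_types == declared

assert check_if(("i32",), ("i32",), ("i32",))                     # single value
assert check_if(("i32", "f64"), ("i32", "f64"), ("i32", "f64"))   # multi-value
assert not check_if(("i32",), ("i32",), ("f64",))                 # else arm fails
```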
Calling Wasm a stack machine is misleading.<p>It’s closer to a structured IR that uses a stack encoding, not a machine where the stack is the primary state. The absence of real stack operations (dup, swap, etc.) is not accidental — it shows the stack isn’t meant to be observable.<p>If you build even a tiny real stack CPU (simulator + assembler + traces), the difference becomes obvious very quickly: the stack stops being syntax and starts being semantics.<p>So the real question isn’t “is Wasm a stack machine?” but “why does it avoid being one?”<p>My take: because it’s designed for validation and compilation, not execution as a first-class machine model.<p>And that’s fine — but then we should call it what it is.
I'm trying to implement a WASM-to-C compiler, and because of that not-quite-stack behavior, I can actually guarantee that it will always build an expression, and I never have to discard or reset stack values! Everything stays within that function, which is very neat, and I think it's one of the reasons WAT, the textual format, is so neat: you can represent it with an S-expression.
Check out wastrel <a href="https://codeberg.org/andywingo/wastrel" rel="nofollow">https://codeberg.org/andywingo/wastrel</a><p>Of note, WASM 3 has garbage collection (GC), multi-memory, exception handling, tail calls, and more, which can be challenging to implement.
Compiling WASM to C is a really good option: <a href="https://00f.net/2023/12/11/webassembly-compilation-to-c/" rel="nofollow">https://00f.net/2023/12/11/webassembly-compilation-to-c/</a>
Shameless plug… compiling it to Go is a great option too: <a href="https://github.com/ncruces/wasm2go" rel="nofollow">https://github.com/ncruces/wasm2go</a><p>I've used it to translate SQLite (with a few extensions) and, that I know of, it's been used (to varying degrees of success) to translate the MARISA trie library (C++), libghostty (Zig), zlib, Perl, and QuickJS.<p>More on-topic, I use a mix of an unevaluated expression stack and a stack-to-locals approach to translate Wasm.
But how do you handle arguments or loop index variables? Is your liveness the entire function? Do you have to compile <i>all</i> the WASM chunks together in order to do any optimization? That seems ... problematic.<p>Edit: Yep. In the article referenced from the original: <a href="http://troubles.md/posts/wasm-is-not-a-stack-machine/" rel="nofollow">http://troubles.md/posts/wasm-is-not-a-stack-machine/</a><p>Double edit: Some of this has already been fixed in WASM: <a href="https://github.com/WebAssembly/multi-value" rel="nofollow">https://github.com/WebAssembly/multi-value</a>
I like to read assembly a lot, and I don't really see the point in WASM trying to be a stack machine. None of our computers are stack machines.<p>Not to mention that compiler backends are missing tons of optimizations even on mainstream targets on real hardware, I just don't think WASM makes sense economically. They should have just picked RISC-V and called it a day.
The series of articles linked at the end (troubles.md/posts/wasm-is-not-a-stack-machine/) is even more interesting, imo.<p>Very well articulated and concise critique by somebody who seems to have a great amount of knowledge and experience with the topics.
The author seems to complain about a lack of stack-manipulation instructions like dup and rot, but at least for me that's what I would expect from an average programming-language stack machine. Even Java, which does have those instructions, mostly doesn't use them --- reuse happens via local variables.<p>The way I see it, the difference between register and stack VMs is all about the instruction encoding. Register VMs have fatter instructions in exchange for needing fewer LOAD and STORE operations. Despite the name, register VMs also have a stack.
> Despite the name, register VMs also have a stack.<p>Out of curiosity, what do you think about this: in spite of the name, stack machines also have yet another stack. OK, I don't like that wording, but locals are basically the stack frames people know from their computer-architecture class, I think.<p>It doesn't change the fact that Wasm operations have the execution stack as one or more of their operands. Seems like a stack machine to me too, though I don't know more about why the specific design of Wasm would make optimizing compilers harder to write than the JVM's, as the article suggests (I think?).
As you said, it's more like CPU stack frames. In a register VM all instructions can read and write to any position in the stack frame. In a stack VM, most instructions only read and write to the top but they are often combined with LOAD and STORE instructions, which can read and write to any position in the stack frame.
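That difference can be made concrete with a toy model (hypothetical encodings, just for illustration): the same add, once as a single register-style three-address instruction over frame slots, once as stack ops bracketed by LOAD/STORE.

```python
# Toy sketch: register VM vs. stack VM over the same frame of locals.
def reg_add(frame, dst, a, b):
    # Register VM: one fat instruction that names all three frame slots.
    frame[dst] = frame[a] + frame[b]

def stack_add(frame, dst, a, b):
    # Stack VM: LOAD, LOAD, ADD, STORE -- only ADD touches the stack top.
    stack = []
    stack.append(frame[a])                    # LOAD a
    stack.append(frame[b])                    # LOAD b
    stack.append(stack.pop() + stack.pop())   # ADD on the top of stack
    frame[dst] = stack.pop()                  # STORE dst

f1, f2 = [0, 2, 3], [0, 2, 3]
reg_add(f1, 0, 1, 2)
stack_add(f2, 0, 1, 2)
assert f1 == f2 == [5, 2, 3]
```

Same result, same frame; the stack version just spends extra instructions shuttling values to and from the top.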
Java does use dup in some cases, e.g.<p><pre><code> public static void test() {
new Object();
}
0: new #2 // class java/lang/Object
3: dup
4: invokespecial #1 // Method java/lang/Object."<init>":()V
7: pop
8: return</code></pre>
I don't really disagree with the main premise of the article, which is that WASM is not really a stack <i>language</i>, but this part just gave me pause:<p>> In textual Wasm, for example, they are instead represented in a LISP-like notation – not any less or more efficient<p>The text format, at least when it comes to instructions, is 1-to-1 with the binary format. The LISP-like syntax is mainly just syntax sugar[1].<p><pre><code> ‘(’ plaininstr instrs ‘)’ ≡ instrs plaininstr
</code></pre>
So (in theory, as far as I understand it) you can just do `(local.get 2 local.get 0 local.get 1)` to mean `local.get 0 local.get 1 local.get 2`, and it works for (almost) any instruction.<p>Unfortunately, in my limited testing, tools like `wat2wasm` and Binaryen's `wasm-as` don't seem to adhere to (my perhaps faulty understanding of) the spec, and demand all instructions in a folded block be folded and have the "correct" amount of arguments, which makes Binaryen do weird things like<p><pre><code> (return
(tuple.make ;; Binaryen only pseudoinstruction
(local.get 0) ;; or w/e expression
(local.get 1) ;; or w/e expression
)
)
</code></pre>
when this is perfectly valid<p><pre><code> local.get 0
local.get 1
return
</code></pre>
tl;dr: the LISP syntax is just syntax sugar. The textual format is as "stack-like" as the binary format.<p>Edit: An example that is easily done with the stack syntax and not with lisp syntax is the following:<p><pre><code> call function_that_returns_multivalue
local.set 2 ;; last return
local.set 1 ;;
local.set 0 ;; first return
</code></pre>
In LISP syntax this would be<p><pre><code> (local.set 0
(local.set 1
(local.set 2
(call function_that_returns_multivalue
( ;; whatever input parameters
)))))
</code></pre>
I have not yet tried this with Binaryen but I doubt it flies.<p>[1]: <a href="https://webassembly.github.io/spec/core/text/instructions.html#folded-instructions" rel="nofollow">https://webassembly.github.io/spec/core/text/instructions.ht...</a>
FWIW if you are looking for examples of WebAssembly written in the textual format, take a look at:<p><a href="https://raw.githubusercontent.com/soegaard/webracket/refs/heads/main/runtime-wasm.rkt" rel="nofollow">https://raw.githubusercontent.com/soegaard/webracket/refs/he...</a><p>As a small example, here is a definition of `$car` which extracts the first value from a pair.<p><pre><code> (func $car (type $Prim1)
(param $v (ref eq))
(result (ref eq))
(if (result (ref eq))
(ref.test (ref $Pair) (local.get $v))
(then (struct.get $Pair $a (ref.cast (ref $Pair) (local.get $v))))
(else (call $raise-pair-expected (local.get $v))
(unreachable))))</code></pre>
I can't speak to Binaryen, but afaik WABT's wat2wasm and wasm-tools's wat2wasm (aka wasm-tools parse) are both 100% spec-correct in this respect. Parsing the Wasm text format doesn't require any knowledge of the type of each instruction. If you have a counterexample would love to see it!<p>There are some cool edge cases if you want to <i>print</i> a mismatched multi-value instruction sequence in the folded form (which WABT and wasm-tools again handle "correctly," but not identically to each other, and not particularly meaningfully).
> tl;dr: the LISP syntax is just syntax sugar. The textual format is as "stack-like" as the binary format.<p>Not that you're technically wrong, but I think you're begging the question.<p>Stack-based languages/encodings, in a colloquial sense, are equated to postfix notation, e.g. `a b +` instead of the infix `a + b`. Both LISP and textual Wasm use prefix notation, e.g. `(+ a b)`. None of the three is any more foundational than the others -- all notations can encode all expression trees, and postfix and prefix notations in particular have the same coding efficiency.<p>So sure, the LISP syntax is sugar, but for <i>what</i>? It's not sugar for a stack program, because prefix notation in general can't represent an arbitrary stack program; it's sugar for a mathematical expression. Which is encoded in postfix notation in binary, sure, but that's just an implementation detail, and prefix notation could've been selected when Wasm was born with few adverse consequences.
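The "same tree, two linear spellings" point is easy to demonstrate with a toy evaluator (a sketch, not Wasm): `a b +` in postfix and `(+ a b)` in prefix denote the same expression.

```python
# Sketch: one expression tree, a + b, in two linear notations.
def eval_postfix(tokens):
    # Postfix ("stack") reading: operands pushed, operator pops and pushes.
    stack = []
    for t in tokens:
        if t == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:
            stack.append(t)
    return stack.pop()

def eval_prefix(expr):
    # Prefix ("LISP") reading: a nested tuple like ("+", a, b).
    if isinstance(expr, tuple):
        _, a, b = expr
        return eval_prefix(a) + eval_prefix(b)
    return expr

assert eval_postfix([2, 3, "+"]) == eval_prefix(("+", 2, 3)) == 5
```

For pure expression trees the two are interchangeable; the divergence only shows up for stack programs that aren't trees (multi-value shuffles and the like), which is exactly the point above.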
I am saying that textual wasm uses `a b +` (just like binary wasm) and `(+ a b)` is just a nicety.<p>It is explicitly sugar for the stack operations, per my reading of the spec.
I have reread this several times but might be missing something, so I am begging the question: what exactly makes the LISP syntax sugar for something that isn't a stack machine? Or did I misread that?<p>If not, I think the OP is making the same point we all are (any program can be translated for execution on any machine), so bringing it up in the blog seems like a weak point, which I agree with.
The lack of a dup opcode in Wasm as mentioned in the post is quite annoying when trying to generate compact code. I wish something like it had made it into the spec.
I am sad about WASM. It was a promise of epic greatness.<p>It has failed to deliver that - so much is clear now. You rarely see any awesome success story shown with regard to WASM nowadays. What happened to the old promises? "Electron will be SUPER fast thanks to WASM" or "use any language, WASM unifies it all for the larger browser ecosystem".<p>It feels as if WASM is on a path towards extinction. Sure, it is mentioned, it is used, but let's be honest - only a few people really use it. And that won't change either.
You're just feeling the "trough of disillusionment", after the initial hype of the technology wears off and reality sets in. If you pay attention and actually look at what it's being used for, it's clear that Wasm has been very successful and will be around for decades. It's not perfect, it has problems both solvable and not, but it has not failed and its usage is growing year after year. Let's be honest - you don't know what you don't know, and making grandiose statements backed up by no experience or understanding is worse than useless. I suggest you learn about the thing, if you're interested in what's good about it.
Just recently I've compiled my side-project Rust game engine to WASM and it runs beautifully in the browser, as well as SSH2, giving a fully featured SSH implementation in the browser over a WebSocket transport.<p>It can obviously do amazing things, but the expectation that it would replace webdev frontend code was always a huge misconception. Though recent developments have made DOM access without a JavaScript translation layer possible, so that might change!<p>I'd say the hype is still very much alive.
There used to be hype about Wasm, now it's a technology as any other. It's still used, and used a lot; it just doesn't get focused on as much.
Well there is Google Sheets, Microsoft Office, Figma, and some other heavier web apps.
The only failure of WASM is that it was overhyped beyond all reason. It's "just another" virtual instruction set, and for that it turned out pretty great. It's supported by Clang and by all browsers. That's already enough to make the whole idea work.<p>You don't hear much about it because for the people using it, web+wasm is "just another porting target", like Windows, macOS or Linux. WASM has become 'normal' and that's a good thing.<p>The main risk these days for WASM is feature creep, the spec is getting bloated with optional features (garbage collection etc...).
Why would you write in C++, then wrap it in a C++-to-JavaScript wrapper, then wrap that in a JavaScript-to-C++ wrapper?
Looks like you're getting downvoted, but the folks at Mozilla seem to agree and are working towards making WASM more first-class in the browser: <a href="https://hacks.mozilla.org/2026/02/making-webassembly-a-first-class-language-on-the-web/" rel="nofollow">https://hacks.mozilla.org/2026/02/making-webassembly-a-first...</a>
That's specifically about string-marshalling overhead, which is only a problem when trying to talk to the DOM from the WASM side (which arguably is a silly idea to begin with, but to each their own I guess).