When I studied compiler theory, a large part of compilation involved a lexical analyser (e.g. `flex`) and a syntax analyser (e.g. `bison`) that would produce an internal representation of the input code (the AST), used to generate the compiled output.<p>It seems that the terminology has evolved, as we now speak more broadly of frontends and backends.<p>So I'm wondering: are Bison and Flex (or equivalent tools) still used by modern compilers? Or are those stages built directly into GCC, LLVM, ...?
The other answers are great, but let me just add that C++ <i>cannot</i> be parsed with conventional LL/LALR/LR parsers, because the syntax is ambiguous and requires disambiguation via type checking (i.e., there may be multiple parse trees but at most one will type check). The classic example is `a * b;`, which is a multiplication expression if `a` names a variable but a declaration of the pointer `b` if `a` names a type.<p>There was some research on parsing C++ with GLR, but I don't think it ever made it into production compilers.<p>Other, saner languages with unambiguous grammars may still choose to hand-write their parsers for all the reasons mentioned in the sibling comments. However, I would note that, even when using a parsing library, almost every compiler in existence uses its own AST rather than reusing the parse tree generated by the parser library. That's something you would only ever do in a compiler class.<p>Also, I wouldn't say that frontend/backend is an evolution of the previous terminology; it's just that parsing is not considered an "interesting" problem by most of the community, so the focus has moved elsewhere, from AST design through optimization and code generation.
Note that depending on which parsing lib you use, it may produce nodes of your own custom AST type.<p>Personally I love the (Rust) combo of logos for lexing, chumsky for parsing, and ariadne for error reporting. Chumsky has options for error recovery and good performance; ariadne's output is gorgeous (there is another Rust alternative, miette, and both are good).<p>The only thing chumsky is lacking is incremental parsing. There is a chumsky-inspired library for incremental parsing called incpa, though.
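For anyone who hasn't tried logos, a minimal sketch of the lexing half (the token set here is invented for illustration, and the attribute syntax assumes a recent logos release; chumsky would then consume these tokens):

    use logos::Logos;

    // logos derives the entire lexer from these attributes.
    #[derive(Logos, Debug, PartialEq)]
    #[logos(skip r"[ \t\n]+")] // skip whitespace between tokens
    enum Token {
        #[token("+")]
        Plus,
        // The callback converts the matched slice into the payload.
        #[regex("[0-9]+", |lex| lex.slice().parse::<i64>().ok())]
        Number(i64),
    }

    fn main() {
        // The lexer is an iterator; recent versions yield Results so
        // that unrecognized input shows up as an error item.
        for tok in Token::lexer("1 + 23") {
            println!("{:?}", tok);
        }
    }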
GLR C++ parsers were for a short time in use on production code at Mozilla, in refactoring tools: Oink (and its fork, pork). Not quite sure what ended that, but I don't think it was any issue with parsing.
I disagree. It is interesting; that is why there are many languages out there without an LSP.
This was in the olden days, when your language's type system would maybe look like C's if you were serious, and be even less of a thing if you were not.<p>The hard part about compiling Rust is not really parsing; it's the type system, including parts like borrow checking, generics, trait solving (which is Turing-complete by itself), name resolution, and drop checking, and of course all of these features interact in fun and often surprising ways. Also macros. Also all the "magic" types in the stdlib that require special compiler support.<p>This is why e.g. `rustc` has several different intermediate representations. You no longer have "the" AST; you have token trees, HIR, THIR, and MIR, and then that's lowered to LLVM or Cranelift or libgccjit. Important parts of the type checking happen at each stage.
Not sure about GCC, but in general there has been a big move away from using parser generators like flex/bison/ANTLR/etc, and towards using handwritten recursive descent parsers. Clang (which is the C/C++ frontend for LLVM) does this, and so does rustc.
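To make "handwritten recursive descent" concrete, here is a toy expression parser in that style: one function per grammar rule, each returning an AST node. This is only an illustration of the technique, not code from Clang or rustc:

    // Grammar:
    //   expr   -> term (('+' | '-') term)*
    //   term   -> factor (('*' | '/') factor)*
    //   factor -> NUMBER | '(' expr ')'
    #[derive(Debug)]
    enum Expr {
        Num(i64),
        Binary(char, Box<Expr>, Box<Expr>),
    }

    struct Parser<'a> {
        src: &'a [u8], // no separate lexer, to keep the sketch short
        pos: usize,
    }

    impl<'a> Parser<'a> {
        fn peek(&self) -> Option<u8> {
            self.src.get(self.pos).copied()
        }

        // One function per rule: expr handles '+'/'-' chains.
        fn expr(&mut self) -> Result<Expr, String> {
            let mut lhs = self.term()?;
            while matches!(self.peek(), Some(b'+') | Some(b'-')) {
                let op = self.peek().unwrap() as char;
                self.pos += 1;
                let rhs = self.term()?;
                lhs = Expr::Binary(op, Box::new(lhs), Box::new(rhs));
            }
            Ok(lhs)
        }

        fn term(&mut self) -> Result<Expr, String> {
            let mut lhs = self.factor()?;
            while matches!(self.peek(), Some(b'*') | Some(b'/')) {
                let op = self.peek().unwrap() as char;
                self.pos += 1;
                let rhs = self.factor()?;
                lhs = Expr::Binary(op, Box::new(lhs), Box::new(rhs));
            }
            Ok(lhs)
        }

        fn factor(&mut self) -> Result<Expr, String> {
            match self.peek() {
                Some(b'(') => {
                    self.pos += 1;
                    let e = self.expr()?;
                    if self.peek() != Some(b')') {
                        // A precise, hand-tailored message: the big
                        // win of writing the parser yourself.
                        return Err(format!("expected ')' at byte {}", self.pos));
                    }
                    self.pos += 1;
                    Ok(e)
                }
                Some(c) if c.is_ascii_digit() => {
                    let start = self.pos;
                    while matches!(self.peek(), Some(c) if c.is_ascii_digit()) {
                        self.pos += 1;
                    }
                    let text = std::str::from_utf8(&self.src[start..self.pos]).unwrap();
                    Ok(Expr::Num(text.parse().unwrap()))
                }
                _ => Err(format!("expected a number or '(' at byte {}", self.pos)),
            }
        }
    }

    fn main() {
        let mut p = Parser { src: b"1+2*(3+4)", pos: 0 };
        println!("{:?}", p.expr());
    }

The appeal is that every decision point is ordinary code, so handling a context-sensitive corner or emitting a better error message is just another branch.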
I don't know a single mainstream language that uses parser generators. Python used to, and even they have moved away.<p>AFAIK the reason is solely error messages: the customization available with handwritten parsers is just way better for the user.
I believe that GCC also moved to a handwritten parser, at least for C++, a couple of decades ago.
Table-driven parsers with custom per-statement tokenizers are still common in surviving Fortran compilers, with the exception of flang-new in LLVM. There I used a custom parser combinator library, inspired by a prototype written with Haskell's Parsec, to implement a recursive descent algorithm with backtracking on failure. I'm still happy with the results, especially with the fact that it's all very strongly typed and coupled with the parse tree definition.
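For readers who haven't seen the technique: a combinator library builds big parsers out of small functions, and backtracking falls out naturally because a failed alternative never consumes input. A tiny Rust sketch of the idea (hypothetical names; flang's actual library is C++17 and far richer):

    // A parser is a function from input to Some((value, rest)) on
    // success, or None on failure (failure consumes nothing).
    type PResult<'a, T> = Option<(T, &'a str)>;

    // Match an exact literal at the front of the input.
    fn lit<'a>(s: &'a str) -> impl Fn(&'a str) -> PResult<'a, &'a str> {
        move |input: &'a str| input.strip_prefix(s).map(|rest| (s, rest))
    }

    // Try `a`; on failure, backtrack and try `b` on the ORIGINAL input.
    fn alt<'a, T>(
        a: impl Fn(&'a str) -> PResult<'a, T>,
        b: impl Fn(&'a str) -> PResult<'a, T>,
    ) -> impl Fn(&'a str) -> PResult<'a, T> {
        move |input: &'a str| a(input).or_else(|| b(input))
    }

    // Run `a`, then `b` on what is left, pairing the results.
    fn seq<'a, T, U>(
        a: impl Fn(&'a str) -> PResult<'a, T>,
        b: impl Fn(&'a str) -> PResult<'a, U>,
    ) -> impl Fn(&'a str) -> PResult<'a, (T, U)> {
        move |input: &'a str| {
            let (x, rest) = a(input)?;
            let (y, rest) = b(rest)?;
            Some(((x, y), rest))
        }
    }

    fn main() {
        // Alternation backtracks: the first success wins, so order matters.
        let end = alt(lit("enddo"), lit("end"));
        println!("{:?}", end("enddo")); // Some(("enddo", ""))
        println!("{:?}", end("end"));   // Some(("end", ""))
        let pair = seq(lit("end"), lit("do"));
        println!("{:?}", pair("enddo")); // Some((("end", "do"), ""))
    }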
Not really. Here’s a comparison of different languages: <a href="https://notes.eatonphil.com/parser-generators-vs-handwritten-parsers-survey-2021.html" rel="nofollow">https://notes.eatonphil.com/parser-generators-vs-handwritten...</a><p>Most roll their own for three reasons: performance, context sensitivity, and error handling. Bison/Menhir et al. make it easy to write a grammar and get started, but in exchange you get less flexibility overall: it becomes difficult to handle context-sensitive parts, do error recovery, and give the user meaningful errors that describe exactly what’s wrong. Usually, if there’s a small syntax error, we want to tell the user how to fix it instead of just printing "Syntax error", and that requires being able to repair the input and keep parsing.<p>Menhir has a new mode where the parser is driven externally; your code drives the entire thing, which takes a lot more machinery than fire-and-forget but also affords you more flexibility.
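To make the error-recovery point concrete: the classic trick in handwritten parsers is "panic mode", where you record a diagnostic, skip ahead to a synchronization token such as `;`, and keep going, so one typo doesn't swallow every later error. A toy Rust sketch (the token handling is invented for illustration):

    // Panic-mode recovery: collect many diagnostics per run instead
    // of stopping at the first syntax error.
    struct Parser {
        tokens: Vec<String>,
        pos: usize,
        errors: Vec<String>,
    }

    impl Parser {
        fn parse_program(&mut self) {
            while self.pos < self.tokens.len() {
                if let Err(msg) = self.parse_statement() {
                    self.errors.push(msg);
                    self.synchronize(); // skip to a likely statement boundary
                }
            }
        }

        // Skip to just past the next ';', where a fresh statement is
        // likely to start, then resume normal parsing.
        fn synchronize(&mut self) {
            while self.pos < self.tokens.len() {
                let done = self.tokens[self.pos] == ";";
                self.pos += 1;
                if done {
                    break;
                }
            }
        }

        // Toy rule: a statement is `print <word> ;`.
        fn parse_statement(&mut self) -> Result<(), String> {
            if self.tokens.get(self.pos).map(String::as_str) != Some("print") {
                return Err(format!("token {}: expected `print`", self.pos));
            }
            self.pos += 2; // consume `print` and its argument
            if self.tokens.get(self.pos).map(String::as_str) == Some(";") {
                self.pos += 1;
                Ok(())
            } else {
                Err(format!("token {}: expected `;`", self.pos))
            }
        }
    }

    fn main() {
        let toks = ["print", "x", ";", "oops", ";", "print", "y", ";"];
        let mut p = Parser {
            tokens: toks.iter().map(|s| s.to_string()).collect(),
            pos: 0,
            errors: Vec::new(),
        };
        p.parse_program();
        // Reports the bad statement AND still parses everything after it.
        for e in &p.errors {
            eprintln!("error: {}", e);
        }
    }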
If you're parsing a <i>new</i> language that you're trying to define, I do recommend using a parser generator to check your grammar, even if your "real" parser is handwritten for good reasons. A parser generator will insist on your grammar being unambiguous, or at least tell you where it is ambiguous. Without this sanity check, your unconstrained handwritten parser is almost guaranteed to not actually parse the language you think it parses.
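The canonical catch is the dangling else. Feed bison a grammar like this (a sketch; the token names are invented):

    %token IF EXPR ELSE OTHER
    %%
    stmt : IF EXPR stmt
         | IF EXPR stmt ELSE stmt
         | OTHER
         ;

bison reports a shift/reduce conflict, because on input like `IF EXPR IF EXPR OTHER ELSE OTHER` the `ELSE` can attach to either `IF`, so there are two parse trees. A handwritten parser would just silently pick one, and you might never notice your spec is ambiguous.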
"Frontend" as used by mainstream compilers is slightly broader than just lexing/parsing.<p>In typical modern compilers "frontend" is basically everything involving analyzing the source language and producing a compiler-internal IR, so lexing, parsing, semantic analysis and type checking, etc. And "backend" means everything involving producing machine code from the IR, so optimization and instruction selection.<p>In the context of Rust, rustc is the frontend (and it is already a very big and complicated Rust program, much more complicated than just a Rust lexer/parser would be), and then LLVM (typically bundled with rustc though some distros package them separately) is the backend (and is another very big and complicated C++ program).