31 comments

  • simonw18 hours ago
    I think the most interesting thing about this is how it demonstrates that a very particular kind of project is now massively more feasible: library porting projects that can be executed against implementation-independent tests.<p>The big unlock here is <a href="https:&#x2F;&#x2F;github.com&#x2F;html5lib&#x2F;html5lib-tests" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;html5lib&#x2F;html5lib-tests</a> - a collection of 9,000+ HTML5 parser tests that are their own independent file format, e.g. this one: <a href="https:&#x2F;&#x2F;github.com&#x2F;html5lib&#x2F;html5lib-tests&#x2F;blob&#x2F;master&#x2F;tree-construction&#x2F;blocks.dat" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;html5lib&#x2F;html5lib-tests&#x2F;blob&#x2F;master&#x2F;tree-...</a><p>The Servo html5ever Rust codebase uses them. Emil&#x27;s JustHTML Python library used them too. Now my JavaScript version gets to tap into the same collection.<p>This meant that I could set a coding agent loose to crunch away on porting that Python code to JavaScript and have it keep going until that enormous existing test suite passed.<p>Sadly conformance test suites like html5lib-tests aren&#x27;t that common... but they do exist elsewhere. I think it would be interesting to collect as many of those as possible.
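For readers who haven't seen them, the tree-construction files in html5lib-tests are a simple line-oriented format: a `#data` section with the input HTML, a `#errors` section, and a `#document` section giving the expected tree. A minimal Python sketch of a loader (simplified; the real format has more section types such as `#document-fragment` and some edge cases around trailing newlines, and the sample case below is invented for illustration):

```python
def parse_dat(text):
    """Split an html5lib tree-construction .dat file into test cases.

    Each case is a series of '#section' headers (e.g. #data, #errors,
    #document) followed by payload lines; a new '#data' starts a new case.
    """
    tests, current, section = [], {}, None
    for line in text.splitlines():
        if line.startswith("#"):
            if line == "#data" and current:
                tests.append(current)
                current = {}
            section = line[1:]
            current[section] = []
        elif section is not None:
            current[section].append(line)
    if current:
        tests.append(current)
    # Join payload lines; drop the blank separator line before each case
    return [{k: "\n".join(v).rstrip("\n") for k, v in t.items()}
            for t in tests]

sample = """#data
<p>One<b>Two
#errors
(1,12): expected-closing-tag-but-got-eof
#document
| <html>
|   <head>
|   <body>
|     <p>
|       "One"
|       <b>
|         "Two"

#data
<!DOCTYPE html>x
#errors
#document
| <!DOCTYPE html>
| <html>
|   <head>
|   <body>
|     "x"
"""

cases = parse_dat(sample)
```

A port in any language only needs a loader like this plus a serializer for its own DOM into the `| <tag>` dump format, and the whole suite becomes its acceptance test.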
    • avsm7 hours ago
      The html5lib conformance tests, when combined with the WHATWG specs, are even more powerful! I managed to build a typed version of this in OCaml in a few hours ( <a href="https:&#x2F;&#x2F;anil.recoil.org&#x2F;notes&#x2F;aoah-2025-15" rel="nofollow">https:&#x2F;&#x2F;anil.recoil.org&#x2F;notes&#x2F;aoah-2025-15</a> ) yesterday, but I also left an agent building a pure OCaml HTML5 _validator_ last night.<p>This run has (just in the last hour) combined the html5lib expect tests with <a href="https:&#x2F;&#x2F;github.com&#x2F;validator&#x2F;validator&#x2F;tree&#x2F;main&#x2F;tests" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;validator&#x2F;validator&#x2F;tree&#x2F;main&#x2F;tests</a> (which are a complex mix of RELAX NG schemas and Java code) in order to build a low-dependency pure OCaml HTML5 validator with types and modules.<p>This feels like formal verification in reverse: we&#x27;re starting from a scattered set of facts (the expect tests) and iterating towards more structured specifications, using functional languages like OCaml&#x2F;Haskell as convenient executable pitstops while driving towards proof reconstruction in something like Lean.
      • leafmeal57 minutes ago
        This totally makes me think of Martin Kleppmann&#x27;s recent blog post about how AI will make verified software much easier to use in practice! <a href="https:&#x2F;&#x2F;martin.kleppmann.com&#x2F;2025&#x2F;12&#x2F;08&#x2F;ai-formal-verification.html" rel="nofollow">https:&#x2F;&#x2F;martin.kleppmann.com&#x2F;2025&#x2F;12&#x2F;08&#x2F;ai-formal-verificati...</a>
    • Havoc6 hours ago
      I was struggling yesterday with porting something (Python-&gt;Rust). The LLM couldn&#x27;t figure out what was wrong with the Rust one no matter how I came at it (I even gave it Wireshark traces). And since it was vibecoded, I had no idea either. Eventually I copied the Python source into the Rust project and asked it to compare... immediate success.<p>Turns out they&#x27;re quite good at that sort of pattern matching across languages. Makes sense from a latent-space perspective, I guess.
    • gwking17 hours ago
      I’ve idly wondered about this sort of thing quite a bit. The next step would seem to be taking a project’s implementation-dependent tests, converting them to an implementation-independent format, verifying them against the original project, and then conducting the port.
      • skissane14 hours ago
        Give coding agent some software. Ask it to write tests that maximise code coverage (source coverage if you have source code; if not, binary coverage). Consider using concolic fuzzing. Then give another agent the generated test suite, and ask it to write an implementation that passes. Automated software cloning. I wonder what results you might get?
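A toy sketch of the coverage signal skissane describes: run candidate inputs under a trace hook and keep generating inputs until the set of executed lines stops growing. (This uses a bare `sys.settrace` hook for illustration; a real setup would use a tool like coverage.py, and binary coverage needs different instrumentation entirely.)

```python
import sys

def measure_line_coverage(fn, *args):
    """Run fn(*args) under a trace hook and record which line numbers
    executed. A crude stand-in for a real coverage tool."""
    hit = set()

    def tracer(frame, event, arg):
        if event == "line":
            hit.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        fn(*args)
    finally:
        sys.settrace(None)
    return hit

# Example target: a tiny branchy function the agent is writing tests for.
def classify(n):
    if n < 0:
        return "neg"
    if n == 0:
        return "zero"
    return "pos"

only_pos = measure_line_coverage(classify, 5)
with_neg = only_pos | measure_line_coverage(classify, -3)
# The agent's job in this scheme: keep proposing inputs until the
# union of hit lines stops growing, then hand the suite to the cloner.
```

The second input reaches the negative branch, so the union of covered lines strictly grows; that delta is the feedback signal.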
        • gaigalas11 hours ago
          &gt; Ask it to write tests that maximise code coverage<p>That is significantly harder to do than writing an implementation from tests, especially for codebases that previously didn&#x27;t have any testing infrastructure.
          • skissane10 hours ago
            Give a coding agent a codebase with no tests and tell it to write some, and it will. If you don’t tell it which framework to use, it will just pick one. There’s no denying you’ll get much better results if an experienced developer provides it with some prompting on how to test than if you just let it decide for itself.
            • joshstrange19 minutes ago
              This is a hilariously naive take.<p>If you’ve actually tried this, and actually read the results, you’d know this does not work well. It might write a few decent tests but get ready for an impressive number of tests and cases but no real coverage.<p>I did this literally 2 days ago and it churned for a while and spit out hundreds of tests! Great news right? Well, no, they did stupid things like “Create an instance of the class (new MyClass), now make sure it’s the right class type”. It also created multiple tests that created maps then asserted the values existed and matched… matched the maps it created in the test… without ever touching the underlying code it was supposed to be testing.<p>I’ve tested this on new codebases, old codebases, and vibe coded codebases, the results vary slightly and you absolutely can use LLMs to help with writing tests, no doubt, but “Just throw an agent at it” does not work.
            • gaigalas9 hours ago
              Have you tried? Beyond the first tests, going all the way up to decent coverage.
      • pbowyer11 hours ago
        I think I&#x27;ve asked this before on HN but is there a language-independent test format? There are multiple libraries (think date&#x2F;time manipulation for a good example) where the tests should be the same across all languages, but every library has developed its own test suite.<p>Having a standard test input&#x2F;output format would let test definitions be shared between libraries.
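One existing answer: html5lib's tokenizer tests are already plain JSON, which is about as language-neutral as it gets. A sketch of what a shared format plus a per-language runner could look like (the file layout and function names here are invented for illustration, not an existing standard):

```python
import datetime
import json

# A hypothetical language-neutral test file: pure data, no code.
CASES = json.loads("""
[
  {"name": "adds days across month end", "fn": "add_days",
   "input": ["2024-02-28", 2], "expected": "2024-03-01"},
  {"name": "leap year", "fn": "is_leap",
   "input": [2024], "expected": true}
]
""")

# Each language binds the abstract fn names to its own implementation.
def add_days(iso, n):
    d = datetime.date.fromisoformat(iso)
    return (d + datetime.timedelta(days=n)).isoformat()

def is_leap(y):
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

BINDINGS = {"add_days": add_days, "is_leap": is_leap}

def run(cases):
    """Return (name, got, expected) for every failing case."""
    failures = []
    for case in cases:
        got = BINDINGS[case["fn"]](*case["input"])
        if got != case["expected"]:
            failures.append((case["name"], got, case["expected"]))
    return failures
```

The JSON file is the shared artifact; only the small bindings table has to be rewritten per language.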
        • sfjailbird7 hours ago
          Like Cucumber?<p><a href="https:&#x2F;&#x2F;www.google.com&#x2F;search?q=cucumber+testing+framework" rel="nofollow">https:&#x2F;&#x2F;www.google.com&#x2F;search?q=cucumber+testing+framework</a>
        • sciurus5 hours ago
          <a href="https:&#x2F;&#x2F;testanything.org&#x2F;" rel="nofollow">https:&#x2F;&#x2F;testanything.org&#x2F;</a> ?
        • k__7 hours ago
          Maybe tape?
      • cr125rider15 hours ago
        I’ve got to imagine that with a suite of end-to-end tests (probably the most common kind is fixture file in, assert against output fixture file) it would be very hard to nail all of the possible branches and paths. Like the example here, thousands of well-made tests are required.
    • pplonski867 hours ago
      This is amazing. Porting a library from one language to another is easy for LLMs; LLMs are tireless and know coding syntax very well. What I like in machine learning benchmarks is that agents develop and test many solutions, and this search process is very human-like. Yesterday, I was looking into MLE-Bench for benchmarking coding agents on machine learning tasks from Kaggle: <a href="https:&#x2F;&#x2F;github.com&#x2F;openai&#x2F;mle-bench" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;openai&#x2F;mle-bench</a> There are many projects that provide agents whose performance is simply incredible; they can solve several Kaggle competitions in under 24 hours and reach medal positions. I think this is already above human level. I was reading the ML-Master article, and they describe AI4AI, where AI is used to create AI systems: <a href="https:&#x2F;&#x2F;arxiv.org&#x2F;abs&#x2F;2506.16499" rel="nofollow">https:&#x2F;&#x2F;arxiv.org&#x2F;abs&#x2F;2506.16499</a>
    • exclipy5 hours ago
      Can you port tsc to go in a few hours?
    • bzmrgonz7 hours ago
      I see it as a learning or training tool for AI. The same way we use mock exams&#x2F;tests to verify our skill and knowledge absorption and prepare for the real thing or career, this could be one of many obstacles in an obstacle course which a coding AI would have to navigate in order to &quot;graduate&quot;.
    • tracnar11 hours ago
      If you&#x27;re porting a library, you can use the original implementation as an &#x27;oracle&#x27; for your tests. Which means you only need a way to write&#x2F;generate inputs, then verify the output matches the original implementation.<p>It doesn&#x27;t work for everything of course but it&#x27;s a nice way to get bug-for-bug compatible rewrites.
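A sketch of this oracle idea, with a toy normalization function standing in for the real library: generate inputs, freeze the original's outputs as plain JSON, and let the port in any language replay the file as its test suite.

```python
import json
import random
import unicodedata

# Toy stand-in for the library being ported: the "oracle".
def oracle_normalize(s):
    return unicodedata.normalize("NFC", s).casefold()

# Step 1: generate inputs and capture the oracle's outputs as data.
random.seed(42)
inputs = ["".join(random.choice("AbÇ éß ") for _ in range(6))
          for _ in range(200)]
pairs = [{"input": s, "expected": oracle_normalize(s)} for s in inputs]

# The JSON string is the language-neutral artifact: write it to disk and
# the Rust/JS/OCaml port tests itself against it, no Python required.
golden_json = json.dumps(pairs, ensure_ascii=False)

# Step 2: any implementation replays the file. Here we replay against
# the oracle itself, which should trivially agree with its own capture.
reloaded = json.loads(golden_json)
failures = [p for p in reloaded
            if oracle_normalize(p["input"]) != p["expected"]]
```

Seeding the generator makes the captured suite reproducible, so the golden file can be regenerated and diffed whenever the oracle changes.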
    • aadishv16 hours ago
      I wonder if this makes AI models particularly well-suited to ML tasks, or at least ML <i>implementation</i> tasks, where you are given a target architecture and dataset and have to implement and train the given architecture on the given dataset. There are strong signals to the model, such as loss, which are essentially a slightly less restricted version of &quot;tests&quot;.
      • montroser14 hours ago
        We&#x27;ve been doing this at work a bunch with great success. The most impressive moment to me was when the model we were training did a type of overfitting, and rather than just claiming victory (as it all too often does), this time Claude went and added a bunch of more robust, human-grade examples to our training data and holdout set, and kept iterating until the model effectively learned the actual crux of what we were trying to teach it.
      • simonw16 hours ago
        I&#x27;m certain this is the case. Iterating on ML models can actually be pretty tedious - lots of different parameters to try out, then you have to wait a bunch, then exercise the models, then change parameters and try again.<p>Coding agents are <i>fantastic</i> at these kinds of loops.
    • heavyset_go17 hours ago
      This is one of the reasons I&#x27;m keeping tests to myself for a current project. Usually I release libraries as open source, but I&#x27;ve been rethinking that, as well.
      • simonw17 hours ago
        Oddly enough my conclusion is the opposite: I should invest <i>more</i> of my open source development work in creating language-independent test suites, because they can be used to quickly create all sorts of useful follow-on projects.
        • heavyset_go15 hours ago
          I&#x27;m not that generous with my time lol
          • cortesoft15 hours ago
            Isn&#x27;t the point that you might be one of the people who benefits from one of those follow on projects? That is kind of the whole point of open source.<p>Why are you making your stuff open source in the first place if you don&#x27;t want other people to build off of it?
            • heavyset_go14 hours ago
              &gt; <i>Why are you making your stuff open source in the first place if you don&#x27;t want other people to build off of it?</i><p>Because I enjoy the craft. I will enjoy it less if I know I&#x27;m being ripped off, likely for profit, hence my deliberate choices of licenses, what gets released and what gets siloed.<p>I&#x27;m happy if someone builds off of my work, as long as it&#x27;s on my own terms.
            • nicoburns5 hours ago
              If you don&#x27;t trust the AI-generated code yourself, then you won&#x27;t benefit from it. And in fact all it does is take resources from the project that you work on, the one that&#x27;s generating all the value in the first place.<p>There are strong parallels to the image generation models that generate images in the style of Studio Ghibli films. Does that benefit Studio Ghibli? I&#x27;d argue not. And if we&#x27;re not careful, it will undermine the business model that produced the artwork in the first place (artwork which the AI is not currently capable of producing on its own).
            • bgwalter14 hours ago
              Open source has three main purposes, in decreasing order of importance:<p>1) Ensuring that there is no malicious code and enabling you to build it yourself.<p>2) Making modifications <i>for yourself</i> (Stallman&#x27;s printer is the famous example).<p>3) Using other people&#x27;s code in your own projects.<p>Item 3) is wildly over-propagandized as the sole reason for open source. Hard forks have traditionally led to massive flame wars.<p>We are now being told by corporations and their &quot;AI&quot; shills that we should diligently publish everything for free so the IP thieves can profit more easily. There is no reason to oblige them. Hiding test suites in order to make translations more difficult is a great first step.
              • inejge12 hours ago
                &gt; Hard forks have traditionally led to massive flame wars.<p>Provided that the project is popular <i>and</i> has a community, especially a contributor community (the two don&#x27;t have to go together.) Most projects aren&#x27;t that prominent.
              • visarga13 hours ago
                I think the only non-slop parts of the web are: open source, wikipedia, arXiv, some game worlds and social network comments in well behaved&#x2F;moderated communities. What do they share in common? They all allow building on top, they are social first, people come together for interaction and collaboration.<p>The rest is enshittified web, focused on attention grabbing, retention dark patterns and misinformation. They all exist to make a profit off our backs.<p>A pattern I see is that we moved on from passive consumption and now want interactivity, sociality and reuse. We like to create together.
    • cies18 hours ago
      This is an interesting case. It may be good to feed it to other models and see how they do.<p>Also: it may be interesting to port it to other languages too and see how they do.<p>JS and Py are both runtime-typed and very well &quot;spoken&quot; by LLMs. Other languages may require a lot more &quot;work&quot; (data types, etc.) to get the port done.
  • cxr18 hours ago
    Few know that Firefox&#x27;s HTML5 parser was originally written in Java, and only afterward semi-mechanically translated (pre-LLMs) to the dialect of C++ used in the Gecko codebase.<p>This blog post isn&#x27;t really about HTML parsers, however. The JustHTML port described in this blog post was a worthwhile exercise as a demonstration on its own.<p>Even so, I suspect that for this particular application, it would have been more productive&#x2F;valuable to port the Java codebase to TypeScript rather than using the already vibe coded JustHTML as a starting point. Most of the value of what is demonstrated by JustHTML&#x27;s existence in either form comes from Stenström&#x27;s initial work.
    • simonw17 hours ago
      Whoa... it looks like the Firefox HTML5 parser is still maintained as Java to this day!<p>Here&#x27;s the relevant folder:<p><a href="https:&#x2F;&#x2F;github.com&#x2F;mozilla-firefox&#x2F;firefox&#x2F;tree&#x2F;main&#x2F;parser&#x2F;html&#x2F;java" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;mozilla-firefox&#x2F;firefox&#x2F;tree&#x2F;main&#x2F;parser&#x2F;...</a><p><pre><code> make translate # perform the Java-to-C++ translation from the remote # sources </code></pre> And active commits to that javasrc folder - the last was in November: <a href="https:&#x2F;&#x2F;github.com&#x2F;mozilla-firefox&#x2F;firefox&#x2F;commits&#x2F;main&#x2F;parser&#x2F;html&#x2F;javasrc" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;mozilla-firefox&#x2F;firefox&#x2F;commits&#x2F;main&#x2F;pars...</a>
      • cxr17 hours ago
        I have secretly held the belief for a while that the Java implementation should be mechanically translated to TypeScript and then fixed up, annotated, and maintained not just primarily but entirely in that form; the requisite R&amp;D&#x2F;tooling should be created to:<p>(a) permit a fully mechanical, on-the-fly rederivation of the canonical TypeScript sources into Java, for Java consumers that need it (a lot like the ts-&gt;js step that happens for execution on JS engines), and<p>(b) compiler support that can go straight from the TypeScript subset used in the parser to a binary that&#x27;s as performant as the current native implementation, without requiring any intermediate C++ form to be emitted or reviewed&#x2F;vetted&#x2F;maintained by hand<p>(Sidenote: Hejlsberg is being weird&#x2F;not entirely forthcoming about the overall goals wrt the announcement last year about porting the TypeScript compiler to Go. We&#x27;re due for an announcement that they&#x27;ve done something like lifted the Go compilers&#x27; backends out of the golang.org toolchain, strapped the legacy tsc frontend on top, allowing the TypeScript compiler to continue to be developed and maintained in TypeScript while executing with the performance previously seen mostly with tools written in Go vs those making do with running on V8.)<p>I agree with the overall conclusion of the post that what is demonstrated there is a good use case for LLMs. It might even be the best use for them, albeit something to be undertaken&#x2F;maintained as part of the original project. It wouldn&#x27;t be hugely surprising if that turned out to be the dominant use of LLM-powered coding assistants when everything shakes out (all the other promises that have been made for and about them notwithstanding).<p>No real reason that they couldn&#x27;t play a significant role in the project I outlined above.
      • simonw16 hours ago
        I just blogged about this <a href="https:&#x2F;&#x2F;simonwillison.net&#x2F;2025&#x2F;Dec&#x2F;17&#x2F;firefox-parser&#x2F;" rel="nofollow">https:&#x2F;&#x2F;simonwillison.net&#x2F;2025&#x2F;Dec&#x2F;17&#x2F;firefox-parser&#x2F;</a><p>... and then when I checked the henri-sivonen tag <a href="https:&#x2F;&#x2F;simonwillison.net&#x2F;tags&#x2F;henri-sivonen&#x2F;" rel="nofollow">https:&#x2F;&#x2F;simonwillison.net&#x2F;tags&#x2F;henri-sivonen&#x2F;</a> found out I&#x27;d previously written about the exact same thing 16 years earlier!
        • po11 hours ago
          It&#x27;s very nice to have written for so long... I often think I should write more for myself than for others.
    • simonw18 hours ago
      There are certainly dozens of better ways to do what I did here.<p>I picked JustHTML as a base because I really liked the API Emil had designed, and I also thought it would be darkly amusing to take his painstakingly (1,000+ commits, 2 months+ of work) constructed library and see if I could port it directly to JavaScript in an evening, taking advantage of everything he had already figured out.
    • QuantumNomad_16 hours ago
      IANAL. In my opinion, porting code to a different language is still a derivative work of the code you are porting it from, whether done by hand or with an LLM. And in my opinion, the license of the original code still applies, which means that not only should one link to the repo for the code that was ported, but also make sure to adhere to the terms of the license.<p>The MIT family of licenses states that the copyright notice and terms shall be included in all copies of the software.<p>Porting code to a different language is in my opinion not much different from forking a project and making changes to it, small or big.<p>I therefore think the right thing to do is to keep the original copyright notice and license file, adding your additional copyright line to it.<p>So for example if the original project had an MIT license file that said<p>Copyright 2019 Suchandsuch<p>Permission is hereby granted and so on<p>You should keep all of that and add your copyright year and author name on the next line after the original line or lines of the authors of the repo you took the code from.
      • simonw16 hours ago
        I added Emil to my license file: <a href="https:&#x2F;&#x2F;github.com&#x2F;simonw&#x2F;justjshtml&#x2F;blob&#x2F;main&#x2F;LICENSE" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;simonw&#x2F;justjshtml&#x2F;blob&#x2F;main&#x2F;LICENSE</a><p>I&#x27;m not certain I should add the html5ever copyright holders, since I don&#x27;t have a strong understanding of how much of their IP ended up in Emil&#x27;s work - see <a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=46264195#46267059">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=46264195#46267059</a>
        • EmilStenstrom11 hours ago
          My feeling is that my code depends more on the html5lib-tests work than on html5ever. While inspired by it, I think the macro-based Rust code is different enough from the source that it’s new work. I’m guessing we’ll never know.
    • fergie7 hours ago
      Surely for debugging and auditing it&#x27;s always better to write libs in JavaScript? Also, given that much of TypeScript&#x27;s utility is for improving the developer experience, is it still as relevant for machine-generated code?
  • aster0id16 hours ago
    &gt; Code is so cheap it’s practically free. Code that works continues to carry a cost, but that cost has plummeted now that coding agents can check their work as they go.<p>I personally think that even before LLMs, the cost of code wasn&#x27;t necessarily the cost of typing out the characters in the right order, but having a human actually understand it to the extent that changes can be made. This continues to be true for the most part. You can vibe code your way into a lot of working code, but you&#x27;ll inevitably hit a hairy bug or a real world context dependency that the LLM just cannot solve, and that is when you need a human to actually understand everything inside out and step in to fix the problem.
    • monkpit15 hours ago
      I wonder if we will trend towards a world where maintainability is just a waste of time and money, when you can just knock together a new flimsy thing quicker and cheaper than maintaining one thing over multiple iterations.
      • doganugurlu10 hours ago
        Without maintainability, adding a new type of input or feature will break existing features.<p>Doesn’t matter how quick it is to write from scratch, if you want varying inputs handled by the same piece of code, you need maintainability.<p>In a way, software development is all about adding new constraints to a system and making sure the old constraints are still satisfied.
      • killingtime7411 hours ago
        I don&#x27;t think most business processes can afford to have that many issues with their code. Customers and contracts will be lost. Reputations will be lost
      • skydhash14 hours ago
        I don’t think that will ever be true. Let’s take a shell session as an example of ad-hoc code: People are still writing programs and scripts. Stuff doesn’t really change that often to warrant starting from scratch. Easier to add a new format to a music player than writing a new player from scratch.
  • jackfranklyn1 hour ago
    The oracle approach mentioned downthread is what makes this practical even without conformance test suites. Run the original, capture input&#x2F;output pairs, use those as your tests. Property-based testing tools like Hypothesis can generate thousands of edge cases automatically.<p>For solo devs this changes the calculus entirely. Supporting multiple languages used to mean maintaining multiple codebases - now you can treat the original as canonical and regenerate ports as needed. The test suite becomes the actual artifact you maintain.
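Hypothesis automates the generate-and-shrink loop; stripped of the library, the core of a property-based oracle test is just this (toy run-length encoders stand in here for the original and the port):

```python
import random

# The "original" implementation, treated as the oracle.
def reference_rle(s):
    """Run-length encode: 'aaab' -> [('a', 3), ('b', 1)]."""
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

# The "port", written differently but meant to agree on every input.
def ported_rle(s):
    out, i = [], 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        out.append((s[i], j - i))
        i = j
    return out

def differential_check(n_cases=500, seed=1):
    """Random-input differential test: return the first input where the
    two implementations disagree, or None. Shrinking the counterexample
    to a minimal one is what tools like Hypothesis add on top."""
    rng = random.Random(seed)
    for _ in range(n_cases):
        s = "".join(rng.choice("ab") for _ in range(rng.randrange(0, 10)))
        if reference_rle(s) != ported_rle(s):
            return s  # counterexample found
    return None
```

A small alphabet and short strings are deliberate: collisions between runs are what expose disagreements, so dense small inputs find bugs faster than long random ones.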
    • solvedd32 minutes ago
      I wonder if I could actually build an app entirely from a set of working acceptance tests...
  • f311a18 hours ago
    From original repository:<p><pre><code> Verified Compliance: Passes all 9k+ tests in the official html5lib-tests suite (used by browser vendors). </code></pre> Yes, browsers do use it. But they handle a lot of stuff differently.<p><pre><code> selectolax 68% No Very Fast CSS selectors C-based (Lexbor). Very fast but less compliant. </code></pre> The original author compares selectolax to html5lib-tests, but the reality is that when you compare selectolax to Chrome output, you get 90%+.<p>One of the tests:<p><pre><code> INPUT: &lt;svg&gt;&lt;foreignObject&gt;&lt;&#x2F;foreignObject&gt;&lt;title&gt;&lt;&#x2F;svg&gt;foo </code></pre> It fails for selectolax:<p><pre><code> Expected: | &lt;html&gt; | &lt;head&gt; | &lt;body&gt; | &lt;svg svg&gt; | &lt;svg foreignObject&gt; | &lt;svg title&gt; | &quot;foo&quot; Actual: | &lt;html&gt; | &lt;head&gt; | &lt;body&gt; | &lt;svg&gt; | &lt;foreignObject&gt; | &lt;title&gt; | &quot;foo&quot; </code></pre> But you get this in Chrome and selectolax:<p><pre><code> &lt;html&gt;&lt;head&gt;&lt;&#x2F;head&gt;&lt;body&gt;&lt;svg&gt;&lt;foreignObject&gt;&lt;&#x2F;foreignObject&gt;&lt;title&gt;&lt;&#x2F;title&gt;&lt;&#x2F;svg&gt;foo &lt;&#x2F;body&gt;&lt;&#x2F;html&gt;</code></pre>
    • EmilStenstrom12 hours ago
      This is a <i>namespacing</i> test. The reason the tag is &lt;svg title&gt; is that the parser is handling the title tag as the SVG version of it. SVG has other handling rules, so unless the parser knows that, it won&#x27;t work right. It would be interesting to run the tests against Chrome as well!<p>You are also looking at the test format of the tree; when serialized to HTML, the svg prefixes will disappear.
  • seinecle11 hours ago
    Remarkable that it echoes, from a different angle, this post from just a few days ago on HN:<p><a href="https:&#x2F;&#x2F;martinalderson.com&#x2F;posts&#x2F;has-the-cost-of-software-just-dropped-90-percent&#x2F;" rel="nofollow">https:&#x2F;&#x2F;martinalderson.com&#x2F;posts&#x2F;has-the-cost-of-software-ju...</a><p>That post was largely dismissed in the comments here on HN. Simon&#x27;s experiment gives the argument new ground to stand on.
    • akie10 hours ago
      The reason is that the post you link to is overly simplistic. The only reason why Simon&#x27;s experiment works is because there is a pre-existing language agnostic testing framework of 9000 tests that the agent can hold itself accountable to. Additionally, there is a pre-existing API design that it can reuse&#x2F;reappropriate.<p>These two preconditions don&#x27;t generally apply to software projects. Most of the time there are vague, underspecified, frequently changing requirements, no test suite, and no API design.<p>If all projects came with 9000 pre-existing tests and fleshed-out API, then sure, the article you linked to could be correct. But that&#x27;s not really the case.
      • jillesvangurp9 hours ago
        If you start with some working software, you could make an LLM generate a lot of tests for the existing functionality and ensure they pass against the existing software and have excellent test coverage. Generating tests and specifications from existing software is relatively easy. It&#x27;s very tedious to do manually but LLMs excel at that type of job.<p>Once you have that, you port over the tests to a new language and generate an implementation that passes all those tests. You might want to do some reviews of the tests but it&#x27;s a good approach. It will likely result in bug for bug compatible software.<p>Where it gets interesting is figuring out what to do with all the bugs you might find along the way.
      • baq8 hours ago
        &gt; pre-existing language agnostic testing framework of 9000 tests<p>if there exists a language specific test harness, you can ask the LLMs to port it before porting the project itself.<p>if it doesn&#x27;t, you can ask the LLM to build one first, for the original project, according to specs.<p>if there are no specs, you can ask the LLM to write the specs according to the available docs.<p>if there are no docs, you can ask the LLM to write them.<p>if all the above sounds ridiculous, I agree. it&#x27;s also effective - go try it.<p>(if there is no source, you can attempt to decompile the binaries. this is hard, but LLMs can use ghidra, too. this is probably unreasonable and ineffective <i>today</i>, though.)
        • philipwhiuk5 hours ago
          &gt; if it doesn&#x27;t, you can ask the LLM to build one first, for the original project, according to specs.<p>And you have no idea if that is necessary and sufficient at this point.<p>You are building on sand.
  • ulrischa1 hour ago
    For converting HTML to Markdown in PHP, markydown is pretty good: <a href="https:&#x2F;&#x2F;devkram.de&#x2F;markydown&#x2F;" rel="nofollow">https:&#x2F;&#x2F;devkram.de&#x2F;markydown&#x2F;</a>
  • minimaxir17 hours ago
    My opinion on the ending open questions:<p>&gt; Does this library represent a legal violation of copyright of either the Rust library or the Python one? Even if this is legal, is it ethical to build a library in this way?<p>Currently, I am experimenting with two projects in Claude Code: a Rust&#x2F;Python port of a Python repo which necessitates a full rewrite to get the desired performance&#x2F;feature improvements, and a Rust&#x2F;Python port of a JavaScript repo mostly because I refuse to install Node (the speed improvement is nice though).<p>In both of those cases, the source repos are permissively licensed (MIT), which I interpret as the developer&#x27;s <i>intent</i> as to how their code should be used. It is in the spirit of open source to produce better code by iterating on existing code, as that&#x27;s how the software ecosystem grows. That would be the case whether a human wrote the porting code or not. If Claude 4.5 Opus can produce better&#x2F;faster code which has the same functionality and passes all the tests, that&#x27;s a win for the ecosystem.<p>As a courtesy and for transparency, I will still link and reference the original project in addition to disclosing the agent use, although those things aren&#x27;t likely required and others may not do the same. That said, I&#x27;m definitely not using an agent to port any GPL-licensed code.
    • throwup23817 hours ago
      <i>&gt; As courtesy and transparency, I will still link and reference the original project in addition to disclosing the Agent use, although those things aren&#x27;t likely required and others may not do the same. That said, I&#x27;m definitely not using an agent to port any GPL-licensed code.</i><p>IANAL but regardless of the license, you have to respect their copyright and it’s hard to argue that an LLM ported library is anything but a derivative work. You would still have to include the original copyright notices and retain the license (again IANAL).
      • minimaxir17 hours ago
        A similar argument could be made about generative AI and whether text&#x2F;image outputs themselves are derivative works, which is a legal point of contention still being argued. It&#x27;s unclear if code text from a generative AI is in scope.
        • throwup23817 hours ago
          That’s a legal point of contention because the nature of language&#x2F;image models is hard to fit into the existing copyright framework. That only really applies to cleanroom-ish one shot requests where the inference input doesn’t contain the copyrighted material in question.<p>It’s a lot easier to argue that it’s a derivative work when you feed the copyrighted code directly into the context and ask it to port it to another language. If the copyrighted code is literally an input to the inference request, that would not escape any judge’s notice. The law may not have any precedent for this technology but judges aren’t automatons beholden to trivially buggy code that can’t adapt.
    • simonw17 hours ago
      That&#x27;s about where I&#x27;m settled on this right now. I feel like authors who select the GPL have made a robust statement about their intent. It may be legal for me to copyright-launder their library (maybe using the trick where one LLM turns their code into a spec and another turns that spec into fresh code) but I wouldn&#x27;t do that because it would subvert the spirit of the license.
      • EmilStenstrom12 hours ago
        Would it be a problem if you maintained the GPL license and released your code as open source?
        • simonw12 hours ago
          Good point, that might actually be fine (especially if you kept copyright for the original authors too.)
          • ZeroGravitas9 hours ago
            Can a human even put GPL on bot written code since it relies on copyright to protect it? Is that like museums adding copyright to scans of public domain paintings in their holdings? Which was fought about in courts for years.
  • swyx17 hours ago
    &gt; How much better would this library be if an expert team hand crafted it over the course of several months?<p>i think the fun conclusion would be: ideally no better, and no worse. that is the state you arrive it IFF you have complete tests and specs (including probably for performance). now a human team handcrafting would undoubtedly make important choices not clarified in specs, thereby extending the spec. i would argue that human chain of thought from deep involvement in building and using the thing is basically 100% of the value of human handcrafting, because otherwise yeah go nuts giving it to an agent.
  • leroman11 hours ago
    The biggest challenge an agent will face with tasks like these is the diminishing quality in relation to the size of the input, specifically I find input of above say 10k tokens dramatically reduced quality of generated output.<p>This specific case worked well, I suspect, since LLMs have a LOT of previous knowledge with HTML, and saw multiple impl and parsing of HTML in the training.<p>Thus I suspect that in real world attempts of similar projects and any non well domain will fail miserably.
    • adastra2211 hours ago
      In my experience it is closer to 25k, but that’s a minor point. What task do you need to do that requires more than that many tokens?<p>No, seriously. If you break your task into bite sized chunks, do you really need more than that at a time? I rarely do.
      • leroman10 hours ago
        What model are you working with where you still get good results at 25k?<p>To your q, I make huge effort in making my prompts as small as possible (to get the best quality output), I go as far as removing imports from source files, writing interfaces and types to use in context instead of fat impl code, write task specific project &#x2F; feature documentation.. (I automate some of these with a library I use to generate prompts from code and other files - think templating language with extra flags). And still for some tasks my prompt size reaches 10k tokens, where I find the output quality not good enough
        • adastra2210 hours ago
          I&#x27;m working with Anthropic models, and my combined system prompt is already 22k. It&#x27;s a big project, lots of skill and agent definitions. Seems to work just fine until it reaches 60k - 70k tokens.
          • leroman5 hours ago
            Interesting, thanks!
  • mNovak13 hours ago
    While this example is explicitly asking for a port (thus a copy), I also find in general that LLM&#x27;s default behavior is to spit out new code from their vast pre-trained encyclopedia, vs adding an import to some library that already serves that purpose.<p>I&#x27;m curious if this will implicitly drive a shift in the usage of packages &#x2F; libraries broadly, and if others think this is a good or bad thing. Maybe it cuts down the surface of upstream supply-chain attacks?
    • MangoToupe12 hours ago
      As a corollary, it might also increase the surface of upstream supply-chain attacks (patched or not)<p>The package import thing seems like a red herring
      • Retr0id12 hours ago
        It&#x27;s going to be fun if someone finds a security vulnerability in a commonly-emitted-by-LLMs code pattern. That&#x27;ll be a lot harder to remediate than &quot;Update dependency xyz&quot;
        • MangoToupe1 hour ago
          &gt; if someone finds a security vulnerability in a commonly-emitted-by-LLMs code pattern<p>how do you distinguish this from injecting a vulnerable dependency to a dependency list?
          • Retr0id13 minutes ago
            You can more easily check for known-vulnerable dependencies
  • cjlm16 hours ago
    Not all AI-assisted ports are quite so successful[0]<p>[0] <a href="https:&#x2F;&#x2F;ammil.industries&#x2F;the-port-i-couldnt-ship&#x2F;" rel="nofollow">https:&#x2F;&#x2F;ammil.industries&#x2F;the-port-i-couldnt-ship&#x2F;</a>
    • zamadatix15 hours ago
      I think a big factor (of many probably) is there is a ~150x difference in bytes of source vs number of tests for them. I.e. I wonder what other projects are easy wins, which are hard ones, and which can be accomplished quickly with a certain approach.<p>It&#x27;d be really interesting if Simon gave a crack at the above and wrote about his findings in doing so. Or at least, I&#x27;d find it interesting :).
  • yobbo6 hours ago
    The problem with translating between languages is that code that &quot;looks the same and runs&quot; are not equivalently idiomatic or &quot;acceptable&quot;. It seems to turn into long files of if-statements, flags and checks and so on. This might be considered idiomatic enough in python, but not something you&#x27;d want to work with in functional or typed code.
  • orange_puff13 hours ago
    This seems really impressive. I am too lazy to replicate this, but I do wonder how important the test suite is for a a port that likely uses straight forward, dependency free python code <a href="https:&#x2F;&#x2F;github.com&#x2F;EmilStenstrom&#x2F;justhtml&#x2F;tree&#x2F;main&#x2F;src&#x2F;justhtml" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;EmilStenstrom&#x2F;justhtml&#x2F;tree&#x2F;main&#x2F;src&#x2F;just...</a><p>It is enormously useful for the author to know that the code works, but my intuition is if you asked an agent to port files slowly, forming its own plan, making commits every feature, it would still get reasonably close, if not there.<p>Basically, I am guessing that this impressive output could have been achieved based on how good models are these days with large amounts of input tokens, without running the code against tests.
    • EmilStenstrom11 hours ago
      I think the reason this was an evening project for Simon is based on both the code and the tests and conjunction. Removing one of them would at least 10x the effort is my guess.
      • simonw6 hours ago
        The biggest value I got from JustHTML here was the API design.<p>I think that represents the bulk of the human work that went into JustHTML - it&#x27;s <i>really</i> nice, and lifting that directly is the thing that let me build my library almost hands-off and end up with a good result.<p>Without that I would have had to think a whole lot more about what I was doing here!
  • xarope13 hours ago
    &quot;If you can reduce a problem to a robust test suite you can set a coding agent loop loose on it with a high degree of confidence that it will eventually succeed&quot;<p>I&#x27;m a bit sad about this; I&#x27;d rather have &quot;had fun&quot; doing the coding, and get AI to create the test cases, than vice versa.
    • EmilStenstrom11 hours ago
      The other way around works as well! ”Get me to 100% test coverage using only integration tests” is a fun prompt!
  • vessenes14 hours ago
    Couple quick points from the read - cool, btw! It&#x27;s not trivial that Simon poked the LLM to get something up and running and working ASAP - that&#x27;s always been a good engineering behavior in my opinion - building on a working core - but I have found it&#x27;s extra helpful&#x2F;needed when it comes to LLM coding - this brings the compiler and tests &quot;in the loop&quot; for the LLM, and helps keep it on the rails - otherwise you may find you get 1,000s of lines of code that don&#x27;t work or are just sort of a goose chase, or all gilding of lilies.<p>As is mentioned in the comments, I think the real story here is two fold - one, we&#x27;re getting longer uninterrupted productive work out of frontier models - yay - and a formal test suite has just gotten vastly more useful in the last few months. I&#x27;d love to see more of these made.
  • ethanpil16 hours ago
    <p><pre><code> &gt; It took two initial prompts and a few tiny follow-ups. GPT-5.2 running in Codex CLI ran uninterrupted for several hours, burned through 1,464,295 input tokens, 97,122,176 cached input tokens and 625,563 output tokens and ended up producing 9,000 lines of fully tested JavaScript across 43 commits. </code></pre> Using a random LLM cost calculator, this amounts to $28.31... pretty reasonable for functional output.<p>I am now confident that within 5-10 years (most&#x2F;all?) junior &amp; mid and many senior dev positions are going to drop out enormously.<p>Source: <a href="https:&#x2F;&#x2F;www.llm-prices.com&#x2F;#it=1464295&amp;cit=97123000&amp;ot=625563&amp;ic=1.75&amp;cic=.175&amp;oc=14" rel="nofollow">https:&#x2F;&#x2F;www.llm-prices.com&#x2F;#it=1464295&amp;cit=97123000&amp;ot=62556...</a>
    • elcritch16 hours ago
      This is for <i>porting</i> an existing project. It’s an ideal case for LLMs. The results are still pretty different for building up a library from scratch.<p>However this changes the economics for languages with smaller ecosystems!
    • afro8816 hours ago
      People say this kind of thing a lot, but in reality the concept of &quot;software engineer&quot; will change and there will still be experience levels with different expectations
    • almostgotcaught16 hours ago
      &gt; I am now confident that within 5-10 years (most&#x2F;all?) junior &amp; mid and many senior dev positions are going to drop out enormously.<p>yes because this is what we do all day every day (port existing libraries from one language to another)....<p>like do y&#x27;all hear yourselves or what?
      • hatefulheart15 hours ago
        I’m afraid the boosters hear nothing.<p>The commenter you’re replying to, in their heart of hearts, truly believes in 5 years that an LLM will be writing the majority of the code for a project like say Postgres or Linux.<p>Worth bearing in mind the boosters said this 5 years ago, and will say this in 5 years time.
  • rcaught6 hours ago
    While I understand the intent of this exercise, couldn&#x27;t someone just wasm compile the Servo html5ever Rust codebase?
  • visarga13 hours ago
    I think specs + tests are the new source of truth, code is disposable and rebuildable. A well tested project is reliable both for humans and AI, a badly tested one is bad for both. When we don&#x27;t test well I call it &quot;vibe testing, or LGTM testing&quot;
  • tantalor17 hours ago
    &gt; Can I even assert copyright over this, given how much of the work was produced by the LLM?<p>No, because it&#x27;s a derivative work of the base library.
    • simonw17 hours ago
      That doesn&#x27;t sound right to me. If it&#x27;s a derivative work I can still assert copyright over the modifications I have made, but not over the original material.
      • tantalor17 hours ago
        You&#x27;re right that derivative works are copyrightable. I got that wrong.<p>I think you can claim the prompt itself. But you didn&#x27;t create the new code. I&#x27;d argue copyright belongs to the original author.
        • simonw17 hours ago
          Something I&#x27;m particularly interested in understanding is where the tipping point here is. At what point is a prompt or the input that accompanies a prompt enough for the result to be copyrightable?<p>This project is the absolute extreme: I handed over exactly 8 prompts, and several of those were just a few words. I count the files on disk as part of the prompts, but those were authored by other people.<p>The US copyright office say &quot;the resulting work is copyrightable only if it contains sufficient human-authored expressive elements&quot; - <a href="https:&#x2F;&#x2F;perkinscoie.com&#x2F;insights&#x2F;update&#x2F;copyright-office-solidifies-stance-copyrightability-ai-generated-works" rel="nofollow">https:&#x2F;&#x2F;perkinscoie.com&#x2F;insights&#x2F;update&#x2F;copyright-office-sol...</a> - but what does that actually mean?<p>Emil&#x27;s JustHTML project involved several months of work and 1,000+ commits - almost all of the code was written by agents but there was an enormous amount of what I&quot;d consider &quot;human-authored expressive elements&quot; guiding that work.<p>Many of my smaller AI-assisted projects use prompts like this one:<p>&gt; Fetch <a href="https:&#x2F;&#x2F;observablehq.com&#x2F;@simonw&#x2F;openai-clip-in-a-browser" rel="nofollow">https:&#x2F;&#x2F;observablehq.com&#x2F;@simonw&#x2F;openai-clip-in-a-browser</a> and analyze it, then build a tool called is-it-a-bird.html which accepts a photo (selected or drag dropped or pasted) and instantly loads and runs CLIP and reports back on similarity to the word “bird” - pick a threshold and show a green background if the photo is likely a bird<p>Result: <a href="https:&#x2F;&#x2F;tools.simonwillison.net&#x2F;is-it-a-bird" rel="nofollow">https:&#x2F;&#x2F;tools.simonwillison.net&#x2F;is-it-a-bird</a><p>It was a short prompt, but the Observable notebook it references was authored by me several years ago. 
The agent also looked at a bunch of other files in my tools repo as part of figuring out what to build.<p>I think that counts as a great deal of &quot;human-authored expressive elements&quot; by me.<p>So yeah, this whole thing is really complicated!
          • tantalor17 hours ago
            This is, of course, forgetting the fact that the model was trained on heaps and heaps of copyrighted work.<p>Laying claim to anything generated is very likely to fail.
            • simonw16 hours ago
              If it turns out you can&#x27;t copyright code that was generated with the help of LLMs a whole bunch of $billion+ companies are going to have to throw away 18+ months of their work.
              • brailsafe15 hours ago
                &gt; If it turns out you can&#x27;t copyright code that was generated with the help of LLMs a whole bunch of $billion+ companies are going to have to throw away 18+ months of their work.<p>Hmm, it is interesting to think about that situation. Intuitively it would seem to me like there&#x27;s some nuance between whether work would need to be &quot;thrown out&quot; or whether it just can&#x27;t be sold as their own creation, marking some kind of divide between code produced and used privately for commercial purposes vs code that is produced and sold&#x2F;provided publicly as a commercial product. The risk in doing the latter, or entirely throwing out the code, seems like it would be a relatively cheap risk that those companies do anyway all the time.<p>However, if I as a small business owner made a tool to help other businesses based on LLM code that used some of my own prior work for context, then sold the code itself as a product or sold a product with it as a dependency, it would be a much greater liability for me if it turned out to include copyrighted &amp;&amp; unlicensed work that was produced by an LLM that further can&#x27;t be claimed as my own.<p>Privately, on servers or in internal tooling not sold commercially, it would perhaps be next to impossible to either identify or enforce those limits. Without explicit attribution to an agent, I have no idea (with certainty anyway) which code anyone on my team has produced with an LLM, and it&#x27;s not available publicly—aside from pure frontend web stuff—so I wonder in what capacity it would even be possible to throw specific chunks out if it was hypothetically enforceable.
      • leprechaun106617 hours ago
        In this case the majority of the work was done by another company on your instruction. When you signed up was there anything in the terms that said you get ownership over the output?
        • simonw17 hours ago
          All of the notable generative AI companies have policies that the won&#x27;t claim copyright over your outputs.<p>They also frequently offer &quot;liability shields&quot; where their legal teams will go to bat for you if you get sued for copyright infringement based on your usage of their terms.<p><a href="https:&#x2F;&#x2F;help.openai.com&#x2F;en&#x2F;articles&#x2F;5008634-will-openai-claim-copyright-over-what-outputs-i-generate-with-the-api" rel="nofollow">https:&#x2F;&#x2F;help.openai.com&#x2F;en&#x2F;articles&#x2F;5008634-will-openai-clai...</a><p><a href="https:&#x2F;&#x2F;www.anthropic.com&#x2F;news&#x2F;expanded-legal-protections-api-improvements" rel="nofollow">https:&#x2F;&#x2F;www.anthropic.com&#x2F;news&#x2F;expanded-legal-protections-ap...</a><p><a href="https:&#x2F;&#x2F;ai.google.dev&#x2F;gemini-api&#x2F;terms#use-generated" rel="nofollow">https:&#x2F;&#x2F;ai.google.dev&#x2F;gemini-api&#x2F;terms#use-generated</a>
  • mirthturtle17 hours ago
    Wild to ask, &quot;Is it legal, ethical, responsible or even harmful to build in this way and publish it?&quot; AFTER building and publishing it. Author made up his mind already, or doesn&#x27;t actually care. Ethics and responsibility should guide one&#x27;s actions, not just be engagement fodder after the fact.
    • simonw17 hours ago
      If I thought this was clear-cut 100% unethical and irresponsible I wouldn&#x27;t have done it. I think there&#x27;s ample room for conversation about this. I&#x27;d like to help instigate that conversation.<p>I&#x27;m ready to take a risk to my own reputation in order to demonstrate that this kind of thing is possible. I think it&#x27;s useful to help people understand that this kind of thing isn&#x27;t just feasible now, it&#x27;s somewhat terrifyingly easy.
  • febed13 hours ago
    What was your prompt to get it to run the test suite and heal tests at every step? I didn’t see that mentioned in your write up. Also, any specific reason you went with Codex over Claude Code?
    • simonw12 hours ago
      All of the prompts I used are in the article. The two most relevant to testing were:<p><pre><code> We are going to create a JavaScript port of ~&#x2F;dev&#x2F;justhtml - an HTML parsing library that passes the full ~&#x2F;dev&#x2F;html5lib-tests test suite. [...] </code></pre> And later:<p><pre><code> Configure GitHub Actions test.yml to run that on every commit, then commit and push </code></pre> Good coding models don&#x27;t need much of a push to get heavily into automated testing.<p>I used Codex for a few reasons:<p>1. Claude was down on Sunday when I kicked off tbis project<p>2. Claude Code is my daily driver and I didn&#x27;t want to burn through my token allowance on an experiment<p>3. I wanted to see how well the new GPT-5.2 could handle a long running project
    • EmilStenstrom12 hours ago
      For me (original author of JustHTML), it was enough the put the instructions on how to run tests in the AGENTS.md. It knows enough about coding to run tests by itself.
  • RobertoG9 hours ago
    I suppose a next experiment could be to reproduce sqlite from its test suite.
    • bambax8 hours ago
      But the SQLite test suite is proprietary (and it seems nobody ever tried to buy it).
  • Mystery-Machine6 hours ago
    &gt; How much better would this library be if an expert team hand crafted it over the course of several months?<p>It&#x27;s an interesting assumption that an expert team would build a better library. I&#x27;d change this question to: would an expert team build this library better?
  • WhyOhWhyQ16 hours ago
    &lt;p&gt;© 2024 Example&lt;&#x2F;p&gt;<p>^Claude still thinks it&#x27;s 2024. This happens to me consistently.
  • fithisux7 hours ago
    I think it is time for all HW vendors to open up their documentation so we can use AI for writing Drivers for niche OS.<p>There are many OSe out there suffering from the same problem. Lack of drivers.<p>AI can change it.
  • EmilStenstrom12 hours ago
    Another interesting experiment is to start from the html5lib-tests suite directly, instead of JustHTML. Worth another experiment?
  • pietz9 hours ago
    Now do the same with Rust, build a Python wrapper and we went full circle :)
  • bgwalter16 hours ago
    I think the decision of SQLite to keep its large test suite private is very wise in the presence of thieves.
    • EmilStenstrom12 hours ago
      Talking about &quot;thieves&quot; is very much going back to the idea that software is the same thing as physical things. When talking about software we have a very simple concept to guide us: the license.<p>The license of html5ever is MIT, meaning the original authors are OK that people do whatever they want with it. I&#x27;ve retained that license and given them acknowledgement (not required by the license) in the README. Simon has done the same, kept the license and given acknowledgement (not required) to me.<p>We&#x27;re all good to go.
  • teppic13 hours ago
    Fuck
  • StarterPro17 hours ago
    YOU didn&#x27;t port shit, the ai did all the work.
    • simonw17 hours ago
      That&#x27;s kind of the whole point of this exercise and my write-up of it.
      • kjgkjhfkjf16 hours ago
        I&#x27;m glad you wrote it up. Thanks! But I feel like the folks behind the HTML5 spec and the comprehensive test suite deserve the lion&#x27;s share of the credit for this (very neat) achievement.<p>Most projects don&#x27;t have a detailed spec at the outset. Decades of experience have shown that trying to build a detailed spec upfront does not work out well for a vast class of projects. And many projects don&#x27;t even have a comprehensive test suite when they go into production!
        • simonw16 hours ago
          I completely agree. I hope I gave them enough credit in the blog post and the GitHub repo.
          • kjgkjhfkjf16 hours ago
            Yep, and I think it is a great way to draw attention to their work!
        • visarga12 hours ago
          Having a comprehensive spec and test suite is an absolute requirement, without it all you got is vibe-testing, LGTM feels. As shown by the OP, you can throw away the code and regenerate it back from tests and specs. Our old manual code is now the new machine code.