I think the most interesting thing about this is how it demonstrates that a very particular kind of project is now massively more feasible: library porting projects that can be executed against implementation-independent tests.<p>The big unlock here is <a href="https://github.com/html5lib/html5lib-tests" rel="nofollow">https://github.com/html5lib/html5lib-tests</a> - a collection of 9,000+ HTML5 parser tests that are their own independent file format, e.g. this one: <a href="https://github.com/html5lib/html5lib-tests/blob/master/tree-construction/blocks.dat" rel="nofollow">https://github.com/html5lib/html5lib-tests/blob/master/tree-...</a><p>The Servo html5ever Rust codebase uses them. Emil's JustHTML Python library used them too. Now my JavaScript version gets to tap into the same collection.<p>This meant that I could set a coding agent loose to crunch away on porting that Python code to JavaScript and have it keep going until that enormous existing test suite passed.<p>Sadly conformance test suites like html5lib-tests aren't that common... but they do exist elsewhere. I think it would be interesting to collect as many of those as possible.
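Those .dat files are simple enough that a reader for the format fits in a few lines of Python. A rough sketch - it only handles the common #data / #errors / #document sections (the real files have a few more section types) and lets the blank separator lines fall into the last section:<p><pre><code> # Naive reader for the html5lib tree-construction .dat test format.
def parse_dat(path):
    tests, current, section = [], {}, None
    for line in open(path, encoding="utf-8"):
        line = line.rstrip("\n")
        if line.startswith("#"):
            section = line[1:]
            if section == "data" and current:
                tests.append(current)  # each #data line starts a new test
                current = {}
            current[section] = []
        elif section is not None:
            current[section].append(line)
    if current:
        tests.append(current)
    return tests

tests = parse_dat("tree-construction/blocks.dat")
print(len(tests), "tests; first input:", "\n".join(tests[0]["data"]))
</code></pre>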
The html5lib conformance tests when combined with the WHATWG specs are even more powerful! I managed to build a typed version of this in OCaml in a few hours ( <a href="https://anil.recoil.org/notes/aoah-2025-15" rel="nofollow">https://anil.recoil.org/notes/aoah-2025-15</a> ) yesterday, but I also left an agent building a pure OCaml HTML5 _validator_ last night.<p>This run has (just in the last hour) combined the html5lib expect tests with <a href="https://github.com/validator/validator/tree/main/tests" rel="nofollow">https://github.com/validator/validator/tree/main/tests</a> (which are a complex mix of RELAX NG schemas and Java code) in order to build a low-dependency pure OCaml HTML5 validator with types and modules.<p>This feels like formal verification in reverse: we're starting from a scattered set of facts (the expect tests) and iterating towards more structured specifications, using functional languages like OCaml/Haskell as convenient executable pitstops while driving towards proof reconstruction in something like Lean.
Was struggling yesterday with porting something (Python->Rust). The LLM couldn't figure out what was wrong with the Rust one no matter how I came at it (I even gave it Wireshark traces). And being vibecoded, I had no idea either. Eventually I copied the Python source into the Rust project and asked it to compare... immediate success<p>Turns out they're quite good at that sort of pattern matching across languages. Makes sense from a latent space perspective I guess
I’ve idly wondered about this sort of thing quite a bit. The next step would seem to be taking a project’s implementation dependent tests, converting them to an independent format and verifying them against the original project, then conducting the port.
Give coding agent some software. Ask it to write tests that maximise code coverage (source coverage if you have source code; if not, binary coverage). Consider using concolic fuzzing. Then give another agent the generated test suite, and ask it to write an implementation that passes. Automated software cloning. I wonder what results you might get?
I think I've asked this before on HN, but is there a language-independent test format? There are multiple libraries (think date/time manipulation for a good example) where the tests should be the same across all languages, but every library has developed its own test suite.<p>Having a standard test input/output format would let test definitions be shared between libraries.
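I don't know of an existing standard, but even an ad-hoc convention would go a long way: a JSON file of input/expected pairs that each language's test harness loads and replays against its own implementation. A purely hypothetical sketch in Python, using the date/time example:<p><pre><code> import json
from datetime import date, timedelta

# Shared, language-neutral fixtures (hypothetical format, not a standard).
CASES = json.loads("""
[
  {"input": {"start": "2024-02-28", "days": 2}, "expected": "2024-03-01"},
  {"input": {"start": "2023-02-28", "days": 2}, "expected": "2023-03-02"}
]
""")

def add_days(start, days):  # the library under test
    return (date.fromisoformat(start) + timedelta(days=days)).isoformat()

for case in CASES:
    got = add_days(**case["input"])
    assert got == case["expected"], (case, got)
print(len(CASES), "shared cases passed")
</code></pre>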
Like Cucumber?<p><a href="https://www.google.com/search?q=cucumber+testing+framework" rel="nofollow">https://www.google.com/search?q=cucumber+testing+framework</a>
<a href="https://testanything.org/" rel="nofollow">https://testanything.org/</a> ?
Maybe tape?
I’ve got to imagine it would be very hard for a suite of end-to-end tests (probably most common is fixture file in, assert against output fixture file) to nail all of the possible branches and paths. Like the example here, thousands of well-made tests are required.
This is amazing. Porting a library from one language to another is easy for LLMs; they are tireless and know coding syntax very well. What I like in machine learning benchmarks is that agents develop and test many solutions, and this search process is very human-like. Yesterday, I was looking into MLE-Bench for benchmarking coding agents on machine learning tasks from Kaggle: <a href="https://github.com/openai/mle-bench" rel="nofollow">https://github.com/openai/mle-bench</a><p>There are many projects that provide agents whose performance is simply incredible; they can solve several Kaggle competitions in under 24 hours and land in a medal position. I think this is already above human level. I was reading the ML-Master article, which describes AI4AI, where AI is used to create AI systems: <a href="https://arxiv.org/abs/2506.16499" rel="nofollow">https://arxiv.org/abs/2506.16499</a>
Can you port tsc to go in a few hours?
I see it as a learning or training tool for AI. The same way we use mock exams/tests to verify our skill and knowledge absorption and prepare for the real thing or career, this could be one of many obstacles in an obstacle course which a coding AI would have to navigate in order to "graduate"
If you're porting a library, you can use the original implementation as an 'oracle' for your tests. Which means you only need a way to write/generate inputs, then verify the output matches the original implementation.<p>It doesn't work for everything of course but it's a nice way to get bug-for-bug compatible rewrites.
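A rough sketch of that loop - generate inputs, run them through both implementations, and diff the results. The CLI entry points here are invented for illustration; the real libraries may expose very different interfaces:<p><pre><code> import random
import string
import subprocess

def random_html(rng, n=40):
    # Cheap generator: mix tag fragments and plain characters.
    bits = ["<p>", "<div>", "</div>", "<svg>", "</svg>", "<!--", "-->", "<b>"]
    return "".join(rng.choice(bits + list(string.ascii_lowercase)) for _ in range(rng.randint(1, n)))

def run(cmd, doc):
    return subprocess.run(cmd, input=doc, capture_output=True, text=True).stdout

rng = random.Random(0)
for _ in range(1000):
    doc = random_html(rng)
    oracle = run(["python", "-m", "justhtml"], doc)  # hypothetical CLI for the original
    port = run(["node", "cli.js"], doc)              # hypothetical CLI for the port
    assert oracle == port, (doc, oracle, port)
print("1000 random documents matched the oracle")
</code></pre>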
I wonder if this makes AI models particularly well-suited to ML tasks, or at least ML <i>implementation</i> tasks, where you are given a target architecture and dataset and have to implement and train the given architecture on the given dataset. There are strong signals to the model, such as loss, which are essentially a slightly less restricted version of "tests".
We've been doing this at work a bunch with great success. The most impressive moment to me was when the model we were training did a type of overfitting, and rather than just claiming victory (as it all too often does), this time Claude went and added a bunch more robust, human-grade examples to our training data and held-out set, and kept iterating until the model effectively learned the actual crux of what we were trying to teach it.
I'm certain this is the case. Iterating on ML models can actually be pretty tedious - lots of different parameters to try out, then you have to wait a bunch, then exercise the models, then change parameters and try again.<p>Coding agents are <i>fantastic</i> at these kinds of loops.
This is one of the reasons I'm keeping tests to myself for a current project. Usually I release libraries as open source, but I've been rethinking that, as well.
This is an interesting case. It may be good to feed it to other models and see how they do.<p>Also: it may be interesting to port it to other languages too and see how they do.<p>JS and Py are both runtime-typed and very well "spoken" by LLMs. Other languages may require a lot more "work" (data types, etc.) to get the port done.
Few know that Firefox's HTML5 parser was originally written in Java, and only afterward semi-mechanically translated (pre-LLMs) to the dialect of C++ used in the Gecko codebase.<p>This blog post isn't really about HTML parsers, however. The JustHTML port described in this blog post was a worthwhile exercise as a demonstration on its own.<p>Even so, I suspect that for this particular application, it would have been more productive/valuable to port the Java codebase to TypeScript rather than using the already vibe coded JustHTML as a starting point. Most of the value of what is demonstrated by JustHTML's existence in either form comes from Stenström's initial work.
Whoa... it looks like the Firefox HTML5 parser is still maintained as Java to this day!<p>Here's the relevant folder:<p><a href="https://github.com/mozilla-firefox/firefox/tree/main/parser/html/java" rel="nofollow">https://github.com/mozilla-firefox/firefox/tree/main/parser/...</a><p><pre><code> make translate # perform the Java-to-C++ translation from the remote
# sources
</code></pre>
And active commits to that javasrc folder - the last was in November: <a href="https://github.com/mozilla-firefox/firefox/commits/main/parser/html/javasrc" rel="nofollow">https://github.com/mozilla-firefox/firefox/commits/main/pars...</a>
I have secretly held the belief for a while that the Java implementation should be mechanically translated to TypeScript and then fixed up, annotated, and maintained not just primarily but entirely in that form; the requisite R&D/tooling should be created to:<p>(a) permit a fully mechanical, on-the-fly rederivation of the canonical TypeScript sources into Java, for Java consumers that need it (a lot like the ts->js step that happens for execution on JS engines), and<p>(b) compiler support that can go straight from the TypeScript subset used in the parser to a binary that's as performant as the current native implementation, without requiring any intermediate C++ form to be emitted or reviewed/vetted/maintained by hand<p>(Sidenote: Hejlsberg is being weird/not entirely forthcoming about the overall goals wrt the announcement last year about porting the TypeScript compiler to Go. We're due for an announcement that they've done something like lifted the Go compilers' backends out of the golang.org toolchain, strapped the legacy tsc frontend on top, allowing the TypeScript compiler to continue to be developed and maintained in TypeScript while executing with the performance previously seen mostly with tools written in Go vs those making do with running on V8.)<p>I agree with the overall conclusion of the post that what is demonstrated there is a good use case for LLMs. It might even be the best use for them, albeit something to be undertaken/maintained as part of the original project. It wouldn't be hugely surprising if that turned out to be the dominant use of LLM-powered coding assistants when everything shakes out (all the other promises that have been made for and about them notwithstanding).<p>No real reason that they couldn't play a significant role in the project I outlined above.
I just blogged about this <a href="https://simonwillison.net/2025/Dec/17/firefox-parser/" rel="nofollow">https://simonwillison.net/2025/Dec/17/firefox-parser/</a><p>... and then when I checked the henri-sivonen tag <a href="https://simonwillison.net/tags/henri-sivonen/" rel="nofollow">https://simonwillison.net/tags/henri-sivonen/</a> found out I'd previously written about the exact same thing 16 years earlier!
There are certainly dozens of better ways to do what I did here.<p>I picked JustHTML as a base because I really liked the API Emil had designed, and I also thought it would be darkly amusing to take his painstakingly (1,000+ commits, 2+ months of work) constructed library and see if I could port it directly to JavaScript in an evening, taking advantage of everything he had already figured out.
IANAL. In my opinion, porting code to a different language is still derivative work of the code you are porting it from. Whether done by hand or with an LLM. And in my opinion, the license of the original code still applies. Which means that not only should one link to the repo for the code that was ported, but also make sure to adhere to the terms to the license.<p>The MIT family of licenses state that the copyright notice and terms shall be included in all copies of the software.<p>Porting code to a different language is in my opinion not much different from forking a project and making changes to it, small or big.<p>I therefore think the right thing to do is to keep the original copyright notice and license file, and adding your additional copyright line to it.<p>So for example if the original project had an MIT license file that said<p>Copyright 2019 Suchandsuch<p>Permission is hereby granted and so on<p>You should keep all of that and add your copyright year and author name on the next line after the original line or lines of the authors of the repo you took the code from.
I added Emil to my license file: <a href="https://github.com/simonw/justjshtml/blob/main/LICENSE" rel="nofollow">https://github.com/simonw/justjshtml/blob/main/LICENSE</a><p>I'm not certain I should add the html5ever copyright holders, since I don't have a strong understanding of how much of their IP ended up in Emil's work - see <a href="https://news.ycombinator.com/item?id=46264195#46267059">https://news.ycombinator.com/item?id=46264195#46267059</a>
Surely for debugging and auditing it's always better to write libs in JavaScript? Also, given that much of TypeScript's utility is for improving the developer experience - is it still as relevant for machine-generated code?
> Code is so cheap it’s practically free. Code that works continues to carry a cost, but that cost has plummeted now that coding agents can check their work as they go.<p>I personally think that even before LLMs, the cost of code wasn't necessarily the cost of typing out the characters in the right order, but having a human actually understand it to the extent that changes can be made. This continues to be true for the most part. You can vibe code your way into a lot of working code, but you'll inevitably hit a hairy bug or a real world context dependency that the LLM just cannot solve, and that is when you need a human to actually understand everything inside out and step in to fix the problem.
The oracle approach mentioned downthread is what makes this practical even without conformance test suites. Run the original, capture input/output pairs, use those as your tests. Property-based testing tools like Hypothesis can generate thousands of edge cases automatically.<p>For solo devs this changes the calculus entirely. Supporting multiple languages used to mean maintaining multiple codebases - now you can treat the original as canonical and regenerate ports as needed. The test suite becomes the actual artifact you maintain.
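A differential test with Hypothesis is only a few lines - parse_original and parse_port here are placeholders for whatever the canonical and ported implementations actually expose:<p><pre><code> from hypothesis import given, settings, strategies as st

def parse_original(html):
    ...  # placeholder: call the original implementation

def parse_port(html):
    ...  # placeholder: call the ported implementation

@settings(max_examples=2000)
@given(st.text())
def test_port_matches_original(html):
    # Hypothesis shrinks any failing input to a minimal counterexample.
    assert parse_port(html) == parse_original(html)
</code></pre>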
From the original repository:<p><pre><code> Verified Compliance: Passes all 9k+ tests in the official html5lib-tests suite (used by browser vendors).
</code></pre>
Yes, browsers do use it. But they handle a lot of stuff differently.<p><pre><code> selectolax 68% No Very Fast CSS selectors C-based (Lexbor). Very fast but less compliant.
</code></pre>
The original author compares selectolax to html5lib-tests, but the reality is that when you compare selectolax to Chrome output, you get 90%+.<p>One of the tests:<p><pre><code> INPUT: <svg><foreignObject></foreignObject><title></svg>foo
</code></pre>
It fails for selectolax:<p><pre><code> Expected:
| <html>
| <head>
| <body>
| <svg svg>
| <svg foreignObject>
| <svg title>
| "foo"
Actual:
| <html>
| <head>
| <body>
| <svg>
| <foreignObject>
| <title>
| "foo"
</code></pre>
But you get this in Chrome and selectolax:<p><pre><code> <html><head></head><body><svg><foreignObject></foreignObject><title></title></svg>foo
</body></html></code></pre>
This is a <i>namespacing</i> test. The reason the tag is <svg title> is that the parser is handling the title tag as the SVG version of it. SVG has other handling rules, so unless the parser knows that, it won't work right. It would be interesting to run the tests against Chrome as well!<p>You are also looking at the tag in the test's serialization format; when serialized to HTML the svg prefixes will disappear.
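If anyone wants to reproduce the comparison, something like this should do it (assuming selectolax's HTMLParser and its .html serialization property):<p><pre><code> from selectolax.parser import HTMLParser

snippet = "<svg><foreignObject></foreignObject><title></svg>foo"
print(HTMLParser(snippet).html)  # serialized HTML; namespace prefixes aren't visible here
# Compare with a browser by loading the same snippet and inspecting
# document.documentElement.outerHTML in the devtools console.
</code></pre>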
Remarkable that it echoes, from a different angle, this post from just a few days ago on HN:<p><a href="https://martinalderson.com/posts/has-the-cost-of-software-just-dropped-90-percent/" rel="nofollow">https://martinalderson.com/posts/has-the-cost-of-software-ju...</a><p>That post was largely dismissed in the comments here on HN. Simon's experiment gives the argument new ground.
The reason is that the post you link to is overly simplistic. The only reason why Simon's experiment works is because there is a pre-existing language agnostic testing framework of 9000 tests that the agent can hold itself accountable to. Additionally, there is a pre-existing API design that it can reuse/reappropriate.<p>These two preconditions don't generally apply to software projects. Most of the time there are vague, underspecified, frequently changing requirements, no test suite, and no API design.<p>If all projects came with 9000 pre-existing tests and fleshed-out API, then sure, the article you linked to could be correct. But that's not really the case.
If you start with some working software, you could make an LLM generate a lot of tests for the existing functionality and ensure they pass against the existing software and have excellent test coverage. Generating tests and specifications from existing software is relatively easy. It's very tedious to do manually but LLMs excel at that type of job.<p>Once you have that, you port over the tests to a new language and generate an implementation that passes all those tests. You might want to do some reviews of the tests but it's a good approach. It will likely result in bug for bug compatible software.<p>Where it gets interesting is figuring out what to do with all the bugs you might find along the way.
> pre-existing language agnostic testing framework of 9000 tests<p>if there exists a language specific test harness, you can ask the LLMs to port it before porting the project itself.<p>if it doesn't, you can ask the LLM to build one first, for the original project, according to specs.<p>if there are no specs, you can ask the LLM to write the specs according to the available docs.<p>if there are no docs, you can ask the LLM to write them.<p>if all the above sounds ridiculous, I agree. it's also effective - go try it.<p>(if there is no source, you can attempt to decompile the binaries. this is hard, but LLMs can use ghidra, too. this is probably unreasonable and ineffective <i>today</i>, though.)
> if it doesn't, you can ask the LLM to build one first, for the original project, according to specs.<p>And you have no idea if that is necessary and sufficient at this point.<p>You are building on sand.
For converting HTML to Markdown in PHP, markydown is pretty good:
<a href="https://devkram.de/markydown/" rel="nofollow">https://devkram.de/markydown/</a>
My opinion on the ending open questions:<p>> Does this library represent a legal violation of copyright of either the Rust library or the Python one? Even if this is legal, is it ethical to build a library in this way?<p>Currently, I am experimenting with two projects in Claude Code: a Rust/Python port of a Python repo which necessitates a full rewrite to get the desired performance/feature improvements, and a Rust/Python port of a JavaScript repo mostly because I refuse to install Node (the speed improvement is nice though).<p>In both of those cases, the source repos are permissively licensed (MIT), which I interpret as the developer's <i>intent</i> as to how their code should be used. It is in the spirit of open source to produce better code by iterating on existing code, as that's how the software ecosystem grows. That would be the case whether a human wrote the porting code or not. If Claude 4.5 Opus can produce better/faster code which has the same functionality and passes all the tests, that's a win for the ecosystem.<p>As a courtesy and for transparency, I will still link and reference the original project in addition to disclosing the Agent use, although those things aren't likely required and others may not do the same. That said, I'm definitely not using an agent to port any GPL-licensed code.
<i>> As courtesy and transparency, I will still link and reference the original project in addition to disclosing the Agent use, although those things aren't likely required and others may not do the same. That said, I'm definitely not using an agent to port any GPL-licensed code.</i><p>IANAL but regardless of the license, you have to respect their copyright and it’s hard to argue that an LLM ported library is anything but a derivative work. You would still have to include the original copyright notices and retain the license (again IANAL).
That's about where I'm settled on this right now. I feel like authors who select the GPL have made a robust statement about their intent. It may be legal for me to copyright-launder their library (maybe using the trick where one LLM turns their code into a spec and another turns that spec into fresh code) but I wouldn't do that because it would subvert the spirit of the license.
> How much better would this library be if an expert team hand crafted it over the course of several months?<p>i think the fun conclusion would be: ideally no better, and no worse. that is the state you arrive at IFF you have complete tests and specs (including probably for performance). now a human team handcrafting would undoubtedly make important choices not clarified in specs, thereby extending the spec. i would argue that human chain of thought from deep involvement in building and using the thing is basically 100% of the value of human handcrafting, because otherwise yeah go nuts giving it to an agent.
The biggest challenge an agent will face with tasks like these is the diminishing quality in relation to the size of the input; specifically, I find that input above, say, 10k tokens dramatically reduces the quality of generated output.<p>This specific case worked well, I suspect, because LLMs have a LOT of prior knowledge of HTML, and saw many HTML implementations and parsers in training.<p>Thus I suspect that real-world attempts at similar projects in any domain that isn't well represented will fail miserably.
In my experience it is closer to 25k, but that’s a minor point. What task do you need to do that requires more than that many tokens?<p>No, seriously. If you break your task into bite sized chunks, do you really need more than that at a time? I rarely do.
What model are you working with where you still get good results at 25k?<p>To your q, I make a huge effort to keep my prompts as small as possible (to get the best-quality output). I go as far as removing imports from source files, writing interfaces and types to use in context instead of fat impl code, and writing task-specific project / feature documentation (I automate some of these with a library I use to generate prompts from code and other files - think templating language with extra flags). And still for some tasks my prompt size reaches 10k tokens, where I find the output quality not good enough.
I'm working with Anthropic models, and my combined system prompt is already 22k. It's a big project, lots of skill and agent definitions. Seems to work just fine until it reaches 60k - 70k tokens.
While this example is explicitly asking for a port (thus a copy), I also find in general that LLMs' default behavior is to spit out new code from their vast pre-trained encyclopedia, vs adding an import for some library that already serves that purpose.<p>I'm curious if this will implicitly drive a shift in the usage of packages / libraries broadly, and if others think this is a good or bad thing. Maybe it cuts down the surface of upstream supply-chain attacks?
Not all AI-assisted ports are quite so successful[0]<p>[0] <a href="https://ammil.industries/the-port-i-couldnt-ship/" rel="nofollow">https://ammil.industries/the-port-i-couldnt-ship/</a>
I think a big factor (of many, probably) is that there is a ~150x difference between bytes of source and the number of tests for them. I.e. I wonder what other projects are easy wins, which are hard ones, and which can be accomplished quickly with a certain approach.<p>It'd be really interesting if Simon took a crack at the above and wrote about his findings in doing so. Or at least, I'd find it interesting :).
The problem with translating between languages is that code that "looks the same and runs" is not equivalently idiomatic or "acceptable". It seems to turn into long files of if-statements, flags, checks and so on. This might be considered idiomatic enough in Python, but not something you'd want to work with in functional or typed code.
This seems really impressive. I am too lazy to replicate this, but I do wonder how important the test suite is for a port that likely uses straightforward, dependency-free Python code
<a href="https://github.com/EmilStenstrom/justhtml/tree/main/src/justhtml" rel="nofollow">https://github.com/EmilStenstrom/justhtml/tree/main/src/just...</a><p>It is enormously useful for the author to know that the code works, but my intuition is if you asked an agent to port files slowly, forming its own plan, making commits every feature, it would still get reasonably close, if not there.<p>Basically, I am guessing that this impressive output could have been achieved based on how good models are these days with large amounts of input tokens, without running the code against tests.
"If you can reduce a problem to a robust test suite you can set a coding agent loop loose on it with a high degree of confidence that it will eventually succeed"<p>I'm a bit sad about this; I'd rather have "had fun" doing the coding, and get AI to create the test cases, than vice versa.
Couple quick points from the read - cool, btw! It's not trivial that Simon poked the LLM to get something up and running and working ASAP - that's always been a good engineering behavior in my opinion - building on a working core - but I have found it's extra helpful/needed when it comes to LLM coding - this brings the compiler and tests "in the loop" for the LLM, and helps keep it on the rails - otherwise you may find you get 1,000s of lines of code that don't work or are just sort of a goose chase, or all gilding of lilies.<p>As is mentioned in the comments, I think the real story here is two fold - one, we're getting longer uninterrupted productive work out of frontier models - yay - and a formal test suite has just gotten vastly more useful in the last few months. I'd love to see more of these made.
<p><pre><code> > It took two initial prompts and a few tiny follow-ups. GPT-5.2 running in Codex CLI ran uninterrupted for several hours, burned through 1,464,295 input tokens, 97,122,176 cached input tokens and 625,563 output tokens and ended up producing 9,000 lines of fully tested JavaScript across 43 commits.
</code></pre>
Using a random LLM cost calculator, this amounts to $28.31... pretty reasonable for functional output.<p>I am now confident that within 5-10 years (most/all?) junior & mid and many senior dev positions are going to drop out enormously.<p>Source: <a href="https://www.llm-prices.com/#it=1464295&cit=97123000&ot=625563&ic=1.75&cic=.175&oc=14" rel="nofollow">https://www.llm-prices.com/#it=1464295&cit=97123000&ot=62556...</a>
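Spelling out the arithmetic with the per-million-token prices encoded in that calculator URL ($1.75 input, $0.175 cached input, $14 output):<p><pre><code> input_toks, cached_toks, output_toks = 1_464_295, 97_122_176, 625_563
cost = (input_toks * 1.75 + cached_toks * 0.175 + output_toks * 14) / 1_000_000
print(f"${cost:.2f}")  # roughly $28, in line with the figure above
</code></pre>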
This is for <i>porting</i> an existing project. It’s an ideal case for LLMs. The results are still pretty different for building up a library from scratch.<p>However this changes the economics for languages with smaller ecosystems!
People say this kind of thing a lot, but in reality the concept of "software engineer" will change and there will still be experience levels with different expectations
> I am now confident that within 5-10 years (most/all?) junior & mid and many senior dev positions are going to drop out enormously.<p>yes because this is what we do all day every day (port existing libraries from one language to another)....<p>like do y'all hear yourselves or what?
I’m afraid the boosters hear nothing.<p>The commenter you’re replying to, in their heart of hearts, truly believes in 5 years that an LLM will be writing the majority of the code for a project like say Postgres or Linux.<p>Worth bearing in mind the boosters said this 5 years ago, and will say this in 5 years time.
While I understand the intent of this exercise, couldn't someone just compile the Servo html5ever Rust codebase to WASM?
I think specs + tests are the new source of truth; code is disposable and rebuildable. A well-tested project is reliable both for humans and AI, a badly tested one is bad for both. When we don't test well I call it "vibe testing", or "LGTM testing"
> Can I even assert copyright over this, given how much of the work was produced by the LLM?<p>No, because it's a derivative work of the base library.
Wild to ask, "Is it legal, ethical, responsible or even harmful to build in this way and publish it?" AFTER building and publishing it. Author made up his mind already, or doesn't actually care. Ethics and responsibility should guide one's actions, not just be engagement fodder after the fact.
If I thought this was clear-cut 100% unethical and irresponsible I wouldn't have done it. I think there's ample room for conversation about this. I'd like to help instigate that conversation.<p>I'm ready to take a risk to my own reputation in order to demonstrate that this kind of thing is possible. I think it's useful to help people understand that this kind of thing isn't just feasible now, it's somewhat terrifyingly easy.
What was your prompt to get it to run the test suite and heal tests at every step? I didn’t see that mentioned in your write up. Also, any specific reason you went with Codex over Claude Code?
All of the prompts I used are in the article. The two most relevant to testing were:<p><pre><code> We are going to create a JavaScript port of ~/dev/justhtml - an HTML parsing library that passes the full ~/dev/html5lib-tests test suite. [...]
</code></pre>
And later:<p><pre><code> Configure GitHub Actions test.yml to run that on every commit, then commit and push
</code></pre>
Good coding models don't need much of a push to get heavily into automated testing.<p>I used Codex for a few reasons:<p>1. Claude was down on Sunday when I kicked off this project<p>2. Claude Code is my daily driver and I didn't want to burn through my token allowance on an experiment<p>3. I wanted to see how well the new GPT-5.2 could handle a long running project
For me (original author of JustHTML), it was enough to put the instructions on how to run tests in the AGENTS.md. It knows enough about coding to run tests by itself.
I suppose a next experiment could be to reproduce sqlite from its test suite.
> How much better would this library be if an expert team hand crafted it over the course of several months?<p>It's an interesting assumption that an expert team would build a better library. I'd change this question to: would an expert team build this library better?
<p>© 2024 Example</p><p>^Claude still thinks it's 2024. This happens to me consistently.
I think it is time for all HW vendors to open up their documentation so we can use AI for writing drivers for niche OSes.<p>There are many OSes out there suffering from the same problem: lack of drivers.<p>AI can change that.
Another interesting experiment is to start from the html5lib-tests suite directly, instead of JustHTML. Worth another experiment?
Now do the same with Rust, build a Python wrapper and we went full circle :)
I think the decision of SQLite to keep its large test suite private is very wise in the presence of thieves.
Talking about "thieves" is very much going back to the idea that software is the same thing as physical things. When talking about software we have a very simple concept to guide us: the license.<p>The license of html5ever is MIT, meaning the original authors are OK that people do whatever they want with it. I've retained that license and given them acknowledgement (not required by the license) in the README. Simon has done the same, kept the license and given acknowledgement (not required) to me.<p>We're all good to go.
Fuck
YOU didn't port shit, the ai did all the work.