> FastRender may not be a production-ready browser, but it represents over a million lines of Rust code, written in a few weeks, that can already render real web pages to a usable degree

I feel that we continue to miss the forest for the trees. Writing (or generating) a million lines of code in Rust should not count as an achievement in and of itself. What matters is whether those lines build, function as expected (especially in edge cases), and perform decently. As far as I can tell, AI has not yet been demonstrated to be useful at those three things.
100%. An equivalent situation would be:

Company X does not have a production-ready product, but they have thousands of employees.

I guess it could be a strange flex about funding but in general it would be a bad signal.
Absolutely.

I think some of these people need to be reminded of the Bill Gates quote about lines of code:

“Measuring programming progress by lines of code is like measuring aircraft building progress by weight.”
SLOC was a bad indicator 20 years ago and it still is today. Don't tell them - once they realize it's a red flag for us they will use some other metric, because they fight for our attention.
> because they fight for our attention.

Not only that, they straight up pay people to just share and write about their thing: https://i.imgur.com/JkvEjkT.png

Most of us probably knew this already - the internet has had paid content for as long as I can remember - but I (perhaps naively) thought that software developers, and Hacker News especially, were more resilient to it. I think all of us have to get better at not trusting what we read unless it's actually substantiated.
Line count also becomes a less useful metric because LLM-generated code tends to be unnecessarily verbose.
I think you missed the point. From the blog post:

> To test this system, we pointed it at an ambitious goal: building a web browser from scratch. The agents ran for close to a week, writing over 1 million lines of code across 1,000 files [...]
>
> Despite the codebase size, new agents can still understand it and make meaningful progress. Hundreds of workers run concurrently, pushing to the same branch with minimal conflicts.

The point is that the agents can comprehend the huge amount of code generated and continue to meaningfully contribute to the goal of the project. We didn't know if that was possible. They wanted to find out. Now we have a data point.

Also, a popular opinion in any vibecoding discussion is that AI can help, but only on greenfield, toy, personal projects. This experiment shows that AI agents can work together on a very complex codebase with ambitious goals. It looks like there was one human plus 2,000 agents, over two months. How much progress do you think a project with 2,000 engineers would make in its first two months?

> What matters is whether those lines build, function as expected (especially in edge cases), and perform decently. As far as I can tell, AI has not yet been demonstrated to be useful at those three things.

They did build. You can give it a try. They did function as expected. How many edge cases would you like it to pass? Perform decently? How could you tell if you didn't try?
simonw, I find it almost shocking that you had the chance to talk directly with the engineer who built this, and even when he directly said things that contradict what Cursor's own CEO said, you didn't push back a single iota.

Is the takeaway here that it's fine for a CEO to claim "it even has a custom JS VM!" on Twitter/X, and then, when the engineer explains "The JavaScript engine isn’t working yet" and "the agents decided to pause it", that's all OK? Not a single pushback about this very obvious contradiction? This is just one example of many. And again, since it seems to keep coming up: no, no one thinks this was supposed to rival Chrome - what a trite way of trying to change the narrative.

I understand you don't want to spook future potential interviewees, but damn if that didn't feel like you were suddenly trying to defend Cursor instead of being curious about what actually happened. It doesn't feel curious; it feels like we're all giving up the fight against unneeded hype, exaggeration, and degradation of quality.

What happened to balanced perspectives, where we don't just take people at their word, and when we notice something is off, we bring it up?

On a separate note, I actually emailed Wilson Lin too, asking if I could ask questions about it. While he initially accepted, I never received any answers. I'm glad you were able to get someone from Cursor to clarify a bit at least, even though we're still just scratching the surface. I just wish we had a bit more integrity in the ecosystem and community, I guess.
Basically your arguments are:

1) The CEO said there was a JS engine, but it didn't work.

2) It didn't build when they published the blog post.

Therefore it lacks integrity! Except that it did build (I took Simon's word for it), and building a browser is beside the point - there are a few other big projects listed (Java LSP, Windows 7 emulator, Excel, etc.).

The blog stated:

> Our goal is to understand how far we can push the frontier of agentic coding for projects that typically take human teams months to complete.
>
> This post describes what we've learned from running hundreds of concurrent agents on a single project, coordinating their work, and watching them write over a million lines of code and trillions of tokens.

They didn't set the goal of building a browser. It's an experiment about coordinating AI agents within the context of a complex software project, yet you complain that they exaggerated about a JS engine?

The blog post itself is one of the first that describes a large-scale experiment with agents: what works, what doesn't. There is very little hype. They didn't say it's game-changing or that Cursor is the best AI tool.
> The blog post itself is one of the first that describes a large-scale experiment with agents: what works, what doesn't. There is very little hype. They didn't say it's game-changing or that Cursor is the best AI tool.

Three days later, I took it upon myself to replicate their experiment, except with just me plus one agent. Here are the results: https://emsh.cat/one-human-one-agent-one-browser/

In short, I think Cursor massively oversold their accomplishment here.
Honestly, grilling him about what the CEO had tweeted didn't even cross my mind.

I wanted to get to the truth of what had actually been built and how. If that contradicts what the CEO said, then great: the truth is now out there, and anyone is free to call that out and use my video as ammunition.

I just had a look to see what Michael Truell had said about the project. Here it is: https://x.com/mntruell/status/2011562190286045552

> We built a browser with GPT-5.2 in Cursor. It ran uninterrupted for one week.

> It's 3M+ lines of code across thousands of files. The rendering engine is from-scratch in Rust with HTML parsing, CSS cascade, layout, text shaping, paint, and a custom JS VM.

> It *kind of* works! It still has issues and is of course very far from Webkit/Chromium parity, but we were astonished that simple websites render quickly and largely correctly.

This doesn't strike me as the world's most dishonest tweet, though it exaggerates what was achieved. There IS a JS VM in there, but it's feature-flagged off. The "from-scratch" claim is misleading because there are libraries handling certain aspects - most notably Taffy - which we discussed in the interview.

I just ran "cloc" and to my surprise it counted 3,036,403 lines (I had thought the 3M was an exaggeration), though only 1,658,651 of that was Rust.

"It *kind of* works" is a fair assessment IMO!

I don't think "Let's talk about your CEO exaggerating what you built on Twitter" would have added much to the interview.

I did make sure to go over the controversies I thought were material to the project, which is why I dug into the dependencies and talked about QuickJS and Taffy.
> Honestly, grilling him about what the CEO had tweeted didn't even cross my mind.

That's not the full extent of what I meant either. I'm assuming you also read the initial blog post they published? It also has a bunch of similarly inaccurate statements.

> I wanted to get to the truth of what had actually been built and how.

It's a shame that you seem to have reviewed it from the point after a human stepped in to fix the codebase, which happened well after they first published the blog post. Maybe now it compiles and builds, but how does that address the fact that it didn't at the time of publishing?

There is a "hole" of two days without commits, presumably when the engineer was busy writing the blog post, and that's the point they "sold" as "this is what was produced by the experiment". To then let them spend more human engineering time patching the codebase, and to review it only after the human fixed it, seems like completely missing the point.

> I don't think "Let's talk about your CEO exaggerating what you built on Twitter" would have added much to the interview.

What would have added a whole lot more to the ecosystem's understanding of how feasible this sort of thing actually is would have been to talk about what that same person you interviewed first wrote in the blog post, and what turned out to be real at the time they published it.
The blog post had just a couple of paragraphs about the browser project, all of them accurate: https://cursor.com/blog/scaling-agents#running-for-weeks

> To test this system, we pointed it at an ambitious goal: building a web browser from scratch. The agents ran for close to a week, writing over 1 million lines of code across 1,000 files. You can explore the source code on GitHub.

> Despite the codebase size, new agents can still understand it and make meaningful progress. Hundreds of workers run concurrently, pushing to the same branch with minimal conflicts.

The commits that knocked the project into shape so other people could build the code were handled by agents as well.

I really don't think there's a notable scandal here.
I don't think there is a "scandal" here either; companies lie and exaggerate all the time and it's becoming normalized. With that said, I think it's important to record when it happens and exactly how it happens, because not only does it help people in the future know what to look out for, it also serves as a historical record to refer to when you start to see repeating patterns.

Agree to disagree about "all of them accurate". I've already made my case elsewhere and it doesn't really help anyone to reiterate here what's already public.
Which two-day gap without commits? The blog was posted on Jan 14, and there were commits from Jan 14 all the way to Jan 18.

https://github.com/wilsonzlin/fastrender/commits/main/
Surely this pushback should be directed at the CEO, not the engineer? The engineer is presumably the one telling the truth.
Either of them. If the CEO refuses to answer, you ask others. If you get a chance to talk with them, you ask them about it. Just ignoring the elephant in the room and hoping that the unclear details get forgotten helps no one except Cursor here.
Is this the project announced a week or two ago by an AI company claiming they had built a browser but it turned out to be a crappy wrapper around Servo that didn’t even build? Or is this another one? I thought it was Anthropic but this says Cursor.
The first paragraph of the article:

> Last week Cursor published Scaling long-running autonomous coding, an article describing their research efforts into coordinating large numbers of autonomous coding agents. One of the projects mentioned in the article was FastRender, a web browser they built from scratch using their agent swarms. I wanted to learn more so I asked Wilson Lin, the engineer behind FastRender, if we could record a conversation about the project. That 47 minute video is now available on YouTube. I’ve included some of the highlights below.
It is the same project, but my impression is that HN exaggerated many of the issues with it.

For example:

- They did eventually get it to build. Unknown to me: were the agents working on it able to build it, or were they blindly writing code? The codebase can't have been _that_ broken, since it didn't take long for them to get it buildable, and they'd produced demo screenshots before that.

- It had a dependency on QuickJS, but also a homegrown JS implementation; apparently (according to this post) QuickJS was intended as a placeholder. I have no idea which, if either, ended up getting used, though I suspect it may not even matter for the static screenshots they were showing off (the sites may not have required JS to show that).

- Some of the dependencies (like Skia and HarfBuzz) are libraries that other browsers also depend on and are not part of browser projects themselves.

- Other dependencies probably shouldn't have been used, but they only represent a fraction of what a browser has to do.

However…

What I don't know, and seemingly nobody else knows, is how functional the rest of the codebase is. It's apparently very slow and fails to render most websites. But is this more like "lots of bugs, but a solid basis", or is it more like "cargo-culted slop; even the stuff that works only works by chance"? I hope someone investigates.
> were the agents working on it able to build it, or were they blindly writing code?

The project was able to build the whole time: the agents were constantly compiling it with the Rust compiler and fixing any compile errors as they occurred.

The GitHub CI builds were failing, and when they first opened the repo people incorrectly assumed that meant the code didn't compile at all.

The biggest problem with the repo when they first released it was that there were no build instructions for end users, so it was hard to try out. They fixed that within 24 hours of the initial release.

> What I don't know, and seemingly nobody else knows, is how functional the rest of the codebase is.

It's functional enough to render web pages - you can build it and run it yourself to see that. I have some screenshots from trying it out here: https://simonwillison.net/2026/Jan/19/scaling-long-running-autonomous-coding/

That said, it's very much intended as a research project into running parallel coding agents, as opposed to a serious browser project intended for end users. At the end of my post I compare it to "hello world" - I think "build a browser" may be the "hello world" of massively parallel coding agent systems, which I find quite amusing.
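The loop the agents were running is conceptually simple. Here's a minimal sketch of that compile-and-fix cycle - the `ask_agent_for_fix` function is a hypothetical stand-in for the LLM call that edits files; the cargo invocation itself is real:

```rust
use std::process::Command;

// Hypothetical stand-in for an LLM call that edits files to fix
// the reported errors.
fn ask_agent_for_fix(errors: &str) {
    println!("asking an agent to fix:\n{errors}");
}

fn main() {
    // Bounded compile-and-fix loop: build, and if the build fails,
    // feed the compiler output back to an agent and try again.
    for attempt in 1..=10 {
        let output = Command::new("cargo")
            .args(["build", "--quiet"])
            .output()
            .expect("failed to run cargo");
        if output.status.success() {
            println!("build succeeded on attempt {attempt}");
            return;
        }
        let errors = String::from_utf8_lossy(&output.stderr);
        ask_agent_for_fix(&errors);
    }
    eprintln!("still failing after 10 attempts");
}
```

Running that inner build step constantly is what kept the codebase compiling even while the GitHub CI was red.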
> it's very much intended as a research project

If so, then the failure of the experiment should be acknowledged.

The failure is described, among other places, at: https://news.ycombinator.com/item?id=46705625

> It's functional enough to render web pages

> FastRender may not be a production-ready browser, but it represents over a million lines of Rust code, written in a few weeks, that can already render real web pages to a usable degree.

This is something that can be done in much less than a million lines of code. There must be a core somewhere in FastRender - probably just a few thousand lines - which puts together existing layout and graphics libraries and makes it render something to the screen.

Doing that in a few weeks isn't impressive, especially not when it's buried in a million lines of spaghetti code.

If you want an example of a real prototype web engine built around radical design choices, head over to https://github.com/DioxusLabs/blitz

I'm pretty sure it renders far better than FastRender (the edits the agents made to Taffy are probably nonsense), and I'm guessing it is at most 50k lines.

Conclusion: in light of the efforts to paper over failures, I'm calling FastRender not a research project but propaganda.
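To illustrate how little glue it takes to get real layout out of an existing library, here's a minimal sketch using Taffy, assuming the Taffy 0.3-era API (exact names vary between versions):

```rust
// Cargo.toml: taffy = "0.3"
use taffy::prelude::*;

fn main() {
    let mut taffy = Taffy::new();

    // Two fixed-size boxes, like block elements on a page.
    let box_style = Style {
        size: Size { width: points(100.0), height: points(20.0) },
        ..Default::default()
    };
    let a = taffy.new_leaf(box_style.clone()).unwrap();
    let b = taffy.new_leaf(box_style).unwrap();

    // A column container, roughly a <body> stacking its children.
    let root = taffy
        .new_with_children(
            Style { flex_direction: FlexDirection::Column, ..Default::default() },
            &[a, b],
        )
        .unwrap();

    taffy.compute_layout(root, Size::MAX_CONTENT).unwrap();

    // The second box lands below the first; paint from these rects.
    let layout = taffy.layout(b).unwrap();
    println!("second box at x={}, y={}", layout.location.x, layout.location.y);
}
```

That's the kind of core I mean: resolve positions with a library, hand the rects to a graphics library, and you render something.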
If you were looking for a good long-term AI benchmark, “build me a Web browser” should last you for a while.
I hear you, but I still struggle with what we are supposed to take from this. If I worked for McDonald's and came up with a way to make 1,000 bad hamburgers in the time it currently takes to make 10 good ones, no one would be that impressed.

"Hello world" is self-justifying: you know it when you see it, and it is what it is because it shows something unambiguous and impossible to mistake.
The thing I took from this is that you can arrange a set of coding agents in a tree of planners and workers and have them churn away on much larger projects than if you use a single coding agent.

This is a new capability - it likely would not have worked prior to GPT-5.1 and Opus 4.5, so we've had models that can do this for less than three months.

It's extremely new: the patterns that work are just starting to be figured out. Wilson had an effectively unlimited token budget from Cursor and got to run experiments that most teams would not be able to afford.
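A toy sketch of the shape I mean - a planner fanning tasks out to a pool of workers and gathering results - with plain threads and channels standing in for LLM-backed agents (the structure and names here are illustrative, not Cursor's actual architecture):

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

fn main() {
    // Planner splits the goal into subtasks.
    let subtasks: Vec<String> = (0..8).map(|i| format!("implement module {i}")).collect();
    let n = subtasks.len();

    let (task_tx, task_rx) = mpsc::channel::<String>();
    let (done_tx, done_rx) = mpsc::channel::<String>();
    let task_rx = Arc::new(Mutex::new(task_rx)); // shared work queue

    // Worker pool: each worker pulls tasks until the queue closes.
    let workers: Vec<_> = (0..4)
        .map(|id| {
            let rx = Arc::clone(&task_rx);
            let done = done_tx.clone();
            thread::spawn(move || loop {
                let task = match rx.lock().unwrap().recv() {
                    Ok(t) => t,
                    Err(_) => break, // channel closed: no more work
                };
                // Stand-in for "an agent does the work here".
                done.send(format!("worker {id} finished: {task}")).unwrap();
            })
        })
        .collect();
    drop(done_tx);

    // Planner fans out the subtasks, then gathers the results.
    for t in subtasks {
        task_tx.send(t).unwrap();
    }
    drop(task_tx); // closing the queue lets the workers exit

    for result in done_rx.iter().take(n) {
        println!("{result}");
    }
    for w in workers {
        w.join().unwrap();
    }
}
```

The interesting part in the real system is that planners can themselves be workers of a higher-level planner, which is what lets the tree scale to hundreds of concurrent agents.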
I welcome living in this absurd time where monkeys with typewriters producing a work of Shakespeare has become a reality.
The reaction to this would have been different two or three years ago, but it looks extremely lame when you open January 2026 Hacker News and this is the kind of thing a tech company is trying to persuade you into thinking is exciting or useful.

You've heard about what people are doing in the medical industry, using AI to accelerate diagnosis and analysis of biological material. In astronomy it's showing us things that no human had ever seen before. You hear about all these things changing the world at large and the smaller worlds of individual people and families.

Then you look at the actual IT industry and we've got... some premade libraries duct-taped together into a crappy browser that barely works. Of course, when the value of this is compared to the cost, the response is that it's fine because it was never actually intended to be useful in the first place. Well, we're actually a step ahead of you there.

The phrase "high on their own supply" describes all the people involved in this very well. I assure you we understand the goal of this project perfectly. It just wasn't a good, worthy, or even interesting goal. The immense amount of resources that went into this should have gone into something better. That's all there is to it.
I wonder if we're heading toward a situation where agent-written code will function as something distinct, like bytecode.
"Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should."<p>I'm curious what is the energy/environmental/financial impact of this "research" effort of cobbling together a browser based on AI model that had been trained on freely available source code of existing browsers.<p>I can't imagine this browser being used outside of tinkering or curiosity toy - so the purpose of the research is just to see whether you can run absurd amount of agents simultaneously and produce something that somewhat works?
I'd love to see what happens if you hook this renderer up to AFL++...
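Something like this afl.rs harness would be the starting point - a sketch, where `parse_and_layout` is a hypothetical stand-in for FastRender's actual entry point, which I haven't checked:

```rust
// Cargo.toml: afl = "*"
// Build and run with cargo-afl:
//   cargo afl build
//   cargo afl fuzz -i in -o out target/debug/fuzz_render

#[macro_use]
extern crate afl;

// Hypothetical stand-in for FastRender's real parse/render entry
// point; the actual crate API will differ.
fn parse_and_layout(_html: &str) { /* hand the input to the engine here */ }

fn main() {
    fuzz!(|data: &[u8]| {
        // Feed untrusted bytes to the HTML path; AFL++ hunts for
        // panics, hangs, and crashes.
        if let Ok(html) = std::str::from_utf8(data) {
            parse_and_layout(html);
        }
    });
}
```

A million lines of agent-written parsing code seems like a target-rich environment.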
> I can't imagine this browser being used as anything other than a tinkering or curiosity toy - so was the purpose of the research just to see whether you can run an absurd number of agents simultaneously and produce something that somewhat works?

Yes, but that is a very interesting question IMO.
Why not help the Servo project instead of just making a proof of concept?
I'm going to propose a law for these AI orchestration systems, based on Greenspun's Tenth Rule:

> Any sufficiently complicated AI orchestration system contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Gas Town.
Isn't it the other way around? Gas Town is an ad hoc, informally-specified, bug-ridden, slow implementation of other AI orchestration systems.
That statement is a bit early, no?