18 comments

  • nicoburns15 hours ago
    Some notes:

    - The docs.rs docs are still building, but the docs from the recent RC are available [0].

    - The Slint project has an example of embedding Servo into Slint [1], which is a good example of how to use the embedding API and should be relatively easy to adapt to any other GUI framework that renders using wgpu.

    - Stylo [2] and WebRender [3] have both also been published to crates.io, and can be useful standalone (Stylo has actually been getting monthly releases for about a year, but we never really publicised that).

    - Ongoing releases on a monthly cadence are planned.

    [0]: https://docs.rs/servo/0.1.0-rc2/servo
    [1]: https://github.com/slint-ui/slint/tree/master/examples/servo
    [2]: https://docs.rs/stylo
    [3]: https://docs.rs/webrender
    • apitman14 hours ago
      Tangent, but Slint is a really cool project. Not being able to dynamically insert widgets from code was the only thing that turned me off of it for my use case.
      • Tmpod8 hours ago
        Agreed, I find Slint really interesting. To me, the biggest pain point is the very limited theming support. It's virtually impossible to make a custom theme without re-implementing most widget logic, which is a shame.
  • simonw14 hours ago
    Here's a vibe-coded "servo-shot" CLI tool which uses this crate to render an image of a web page: https://github.com/simonw/research/tree/main/servo-crate-exploration/servo-shot

        git clone https://github.com/simonw/research
        cd research/servo-crate-exploration/servo-shot
        cargo build
        ./target/debug/servo-shot https://news.ycombinator.com/

    Here's the image it generated: https://gist.github.com/simonw/c2cb4fcb15b0837bbc4540c3d398c65d?permalink_comment_id=6096875#gistcomment-6096875
    • mrbonner8 hours ago
      Cool! I checked the source and noticed that even the LLM prefers a simplified, high-level Rust coding style: use value types such as String, use smart pointers such as reference counting, clone liberally, etc., instead of fighting the borrow checker gatekeepers.

      It is the style I prefer when using Rust. Coming from Python, TypeScript, and even Java, even this high-level Rust already yields an incredible improvement.
      • andrepd7 hours ago
        > Cool! I checked the source and noticed that even LLM prefers simplified, high level Rust coding styles: use value types such as String, use smart pointers such as reference counting, clone liberally, etc… instead of fighting the borrow checker gatekeepers.

        Yeah, that tracks, because the AI is dumb as a bag of bricks. It can apply patterns off Stack Overflow, but can hardly understand the borrow checker.
    • scrame13 hours ago
      That's pretty cool. I'm guessing it would need some tweaking to handle things like cookies, or does it just need a pointer to the cookie jar? I'm not too familiar with Servo.
      • simonw10 hours ago
        It's a VERY simple initial demo; I expect things like cookies would require quite a lot more work.
    • echelon14 hours ago
      This is super useful! I have immediate use for this.

      Do you know if Servo is 100% Rust with no external system dependencies? (i.e., can it get away with rustls only?)

      Can this do JavaScript? (Edit: Rendering SPAs / JavaScript-only UX would be useful.)

      Edit 2: Can it do WebGL? Same rationale, for Three.js-style apps and 3D renders. (This in particular is right up my use case's alley.)
      • simonw13 hours ago
        It depends on stuff like SpiderMonkey, so it's not pure Rust.

        It should be able to render JavaScript, but I've seen it throw bugs on simple pages, no doubt because my vibe-coded thing is crap, not because Servo itself can't handle them.
      • minimaxir13 hours ago
        I have been building/vibe-coding a similar tool and unfortunately came to the conclusion that, in practice, there are just too many features dependent on the full Chrome stack, so it's more pragmatic to use a real Chromium installation despite the file size. Performance/image-generation speed is still fine, though.

        In Rust, the chromiumoxide crate is a performant way to interface with it for screenshots: https://crates.io/crates/chromiumoxide
        • ospider2 hours ago
          > there are just too many features dependent on the full Chrome stack

          Do you mind elaborating on what features are missing?
        • mnutt8 hours ago
          I think you could in theory have a similar webkit-based stripped down headless crate that might have a good tradeoff of features, performance, and size.
      • lastontheboat7 hours ago
        Servo does execute JS and does support WebGL (and some, but not all, of WebGL2 if you enable the feature: https://book.servo.org/design-documentation/experimental-features.html).
  • rafaelmn13 hours ago
    This should be the real benchmark of AI coding skills: how fast do we get the safe/modern infrastructure and tooling that everyone agrees we need but nobody can fund the development of?

    If Anthropic wants marketing for Mythos without publishing it, show us a Servo contrib log or something like that. It aligns nicely with their fundamental infrastructure-safety goals.

    I'd trust that way more than an x% increase on y bench.

    Hire a core contributor on Servo or Rust, give him unlimited model access, and let's see how far we get with each release.
    • mort9613 hours ago
      We do not need vibe-coded critical infrastructure.
      • falcor8413 hours ago
        As I see it, the focus should not be on the coding, but on the testing, and particularly the security evaluation. Particularly for critical infrastructure, I would want us to have a testing approach that is so reliable that it wouldn't matter who/what wrote the code.
        • jbvlkt8 hours ago
          I have been thinking about that lately, and isn't testing and security evaluation a way harder problem than designing and carefully implementing new features? I think that vibe coding automates the easiest step in software development while making the more challenging/expensive steps harder. How are we supposed to debug complex problems in critical infrastructure if no one understands the code? It is possible that in the future agents will be able to do that, but it feels to me that we are not there yet.
        • bawolff13 hours ago
          I don't think that will ever be possible.

          At some point security becomes: the program does the thing the human wanted it to do but didn't realize they didn't actually want.

          No amount of testing can fix logic bugs due to bad specification.
          • skrtskrt11 hours ago
            AI as advanced fuzz testing is ridiculously helpful though. Hardly any bug you can find in this sort of advanced system is a specification logic bug; it's low-level security stuff: finding ways to DDoS a local process, working around OS-level security restrictions, etc.
            • bawolff9 hours ago
              I'm kind of doubtful that AI is all that great at fuzz testing. Putting that aside, though, we are talking about web browsers here. Security issues from bad specification, or from misunderstanding the specification, are relatively common.
            • thephyber10 hours ago
              Re-read the thread you are replying to.

              Each of the last four comments in your thread (including yours) is conflating what they mean by AI.
          • falcor8413 hours ago
            Well, yes, agreed: that is the essential domain complexity.

            But my argument is that we can work to minimize the time we spend on verifying the code-level accidental complexity.
            • bawolff12 hours ago
              Sure, but that is what we've been doing since the early 2000s (e.g. ASLR, read-only stacks, static analysis, etc.).

              And we've had some successes, but I wouldn't expect any game-changing breakthroughs any time soon.
        • mort9613 hours ago
          I disagree. Thorough testing provides some level of confidence that the code is correct, but there's immense value in having infrastructure which some people understand because they wrote it. No amount of process around your vibe slop can provide that.
          • px4313 hours ago
            That's just the status quo, which isn't really holding up in the modern era, IMO.

            I'm sure we'll have vibed infrastructure and slow infrastructure, and one of them will burn down more frequently. Only time will tell who survives the onslaught and who gets dropped, but I personally won't be making any bets on slow infrastructure.
          • falcor8413 hours ago
            I somewhat agree, but even then I would argue that the proper level at which this understanding should reside is the architecture and the data-flow invariants, rather than the code itself. And these can actually be enforced quite well as tests against human-authored diagrammatic specs.
            • t4356212 hours ago
              If you don't fully understand the code, how do you know it implements your architecture exactly, without doing it in a way that has implications you hadn't thought of?

              As a trivial example, I just found a piece of irrelevant crap in some code I generated a couple of weeks ago. It worked in the simple cases, which is why I never spotted it, but it would have had some weird effects in more complicated ones. Perhaps it was my prompting that didn't explain things well enough, but how was I to know I had failed without reading the code?
              • jbvlkt7 hours ago
                Exactly. We have no artifact other than code that can be deterministically converted into a program. That is the reason we still have to read the code. The prompt is not the final product of the development process.
            • mort9612 hours ago
              I disagree. The code itself matters too.
          • irishcoffee2 hours ago
            Who is writing the tests? An LLM? If so, they have little value.
      • rl36 hours ago
        &gt;&gt; <i>...give him unlimited model access</i><p>&gt;<i>We do not need vibe-coded critical infrastructure.</i><p>I think when you have virtually unlimited compute, it affords the ability to really lock down test writing and code review to a degree that isn&#x27;t possible with normal vibe code setups and budgets.<p>That said for truly critical things, I could see a final human review step for a given piece of generated code, followed by a hard lock. That workflow is going to be popular if it already isn&#x27;t.
        • mort966 hours ago
          The availability or lack thereof of compute has absolutely nothing to do with my opinion. More vibe-coded tests don't fix the problem.
          • rl36 hours ago
            It might when an individual function has 50 different models reviewing it, potentially multiple times each.

            Perhaps as part of a complex review chain for said function that's a few hundred LLM invocations total.

            So long as there's a human reviewing it at the end and it gets locked, I'd argue it *ultimately* doesn't matter how the code was initially created.

            There are a lot of reasons it would matter before it gets to that point, just more to do with system-design concerns. Of course, you could also argue safety is an ongoing process that partially derives from system design, and you wouldn't be wrong.

            It occurred to me there's some recent prior art here:

            https://news.ycombinator.com/item?id=47721953

            It's probably fair to say the Linux kernel is critical infra, or at least a component piece of a lot of it.
            • mort966 hours ago
              I do not care how strong your vibes are and how many Claudes you have producing slop and reviewing each other's slop. I do not think vibe coding is appropriate for critical infrastructure. I don't understand why you think telling me you'd have *more* slop would make me appreciate it more.
              • rl36 hours ago
                Fair enough. I respect the commitment to purity.

                In the not-so-distant future you'll probably be one of the few who haven't had their actual coding skills atrophy, and that's a good thing.
                • mort965 hours ago
                  A terrifying thought, but not implausible. IMO, the world needs *more* people with a deep understanding of how stuff works, but that's not the direction we're moving in.
      • rafaelmn13 hours ago
        If you're trusting core contributors without AI, I don't see why you wouldn't trust them with it.

        Hiring a few core devs to work on it should be a rounding error to Anthropic, and a huge flex if they are actually able to deliver.
        • mort9612 hours ago
          I trust people to understand the code they write. I don&#x27;t trust them to understand code they didn&#x27;t write.
        • t4356212 hours ago
          It's extremely tempting to write stuff and not bother to understand it, similar to the way most of us don't decompile our binaries and look at the assembly when we write C/C++.

          So, should I trust an LLM as much as a C compiler?
        • jddj10 hours ago
          What if it impairs judgement?
      • andai11 hours ago
        They&#x27;re getting really good at proofs and theorems, right?
        • IshKebab9 hours ago
          Proofs/theorems and memory-safety vulnerabilities are a special case, because there's an easy way to verify whether the model is bullshitting or not.

          That's not true for coding in general. The best you can do is have unreasonably good test coverage, but the vast majority of code doesn't have that.
      • scrame13 hours ago
        Unfortunately we're going to get it whether or not we need it.
      • teaearlgraycold11 hours ago
        Well if the big players want to tell me their models are nearly AGI they need to put up or shut up. I don&#x27;t want a stochastically downloaded C compiler. I want tech that improves something.
    • nicoburns13 hours ago
      > show us servo contrib log or something like that

      Servo may not be the best project for this experiment, as it has a strict no-AI-contributions policy.
    • Night_Thastus10 hours ago
      The problem with such infrastructure is not the initial development overhead.

      It's the maintenance: the long-term, slow-burn, uninteresting work that must be done continually. Someone needs to be behind it for the long haul or it will never get adopted and used widely.

      Right now, at least, LLMs are not great at that. They're great for quickly *creating* smaller projects. They get less good the older and larger those projects get.
      • rafaelmn10 hours ago
        I mean, the claim is that next-generation models are better and better at executing on larger context. I find that GPT 5.4 xhigh is surprisingly good at analysis even on larger codebases.

        https://x.com/mitchellh/status/2029348087538565612

        Stuff like this, where these models are root-causing nontrivial large-scale bugs, is already there in SOTA.

        I would not be surprised if next-generation models can both resolve those more reliably and implement them better. At that point they would be sufficiently good maintainers.

        They are suggesting that new models can chain multiple newly discovered vulnerabilities into RCE and privilege escalations, etc. You can't do this without larger-scope planning/understanding, not reliably.
    • andai11 hours ago
      Replicating Chromium as a benchmark? ;)

      Replicating Rust would also be a good one. There are many Rust-adjacent languages that ought to exist and would greatly benefit mankind if they were created.
    • dabinat11 hours ago
      The true solution to this is to fund things that are important, especially when billion-dollar companies are making a fortune from them.
    • beepbooptheory10 hours ago
      Oh good, I was worried for a sec that people wouldn't be talking about AI in this thread.
    • manx13 hours ago
      Agreed. Which other software does society need badly?
    • raincole9 hours ago
      Perhaps, you know, not everything, especially not every thread on HN, has to be about AI?

      I read the link twice and no AI or LLM is mentioned. I don't know why people are so eager to chime in and try to steer the conversation towards AI.
  • giovannibonetti13 hours ago
    For those of you using a browser to generate PDFs, the Rust crate you should look into is Typst [1]. Regardless of your application language, you can use its CLI.

    It takes some time to get used to its DSL for writing PDFs, but nowadays, with AI, that shouldn't take too long.

    [1]: https://crates.io/crates/typst
    • realityking9 hours ago
      Just used it to automate some reporting today. Claude Code worked pretty well though sometimes I had to point it to Typst docs to understand what I wanted.
    • andai11 hours ago
      I keep hearing about this one as a LaTeX alternative. I shall have to take a proper look.
      • okanat9 hours ago
        Typst is to LaTeX what Rust is to C++: saner syntax, well-thought-out extensibility (including scripting and macros), *tables that are sane*, a package manager. I am happy that I switched to it for documentation purposes. I am looking forward to compiling web pages with it too.
    • globular-toast8 hours ago
      I recently deployed Typst to generate PDF letters automatically. Being familiar with (La)TeX (I've typeset everything from letters to my PhD thesis), I was shocked at the speed. It's quick enough to use in an HTTP request cycle. The language was also super easy to learn.

      Not sure if it's quite as good as TeX at typesetting, but it seems good enough. When I did my thesis, TikZ was even more valuable. I don't know if there's any replacement for that.
  • phaistra15 hours ago
    Is there a table of implemented RFCs? Something similar to http://caniuse.com where we can see what HTML/JS/CSS standards and features are implemented? If it exists, I can't seem to find it. The closest thing seems to be the "experimental features" page, but it's not quite detailed enough.
    • lastontheboat14 hours ago
      Oh, I forgot that https://arewebrowseryet.com/ exists for this too!
    • lastontheboat15 hours ago
      <a href="https:&#x2F;&#x2F;doc.servo.org&#x2F;apis.html" rel="nofollow">https:&#x2F;&#x2F;doc.servo.org&#x2F;apis.html</a> is auto-generated from WebUDL interfaces that exist in Servo. It&#x27;s not great but better than nothing.
    • sebsebmc3 hours ago
      There's a lot of work between the WPT team and web-features/web-features-mapping that should allow this to work automatically based on WPT results.
    • jszymborski15 hours ago
      Closest is perhaps the web platform tests:

      https://servo.org/wpt/
    • that_lurker15 hours ago
      Their blog has monthly posts on changes: https://servo.org/blog/
  • givemeethekeys13 hours ago
    So, since this is the top post on Hacker News, and the website's description is a bit too high-level for me, what does Servo let me do? By "web technologies", does it mean "put a web browser inside your desktop app"?
    • caminanteblanco13 hours ago
      It's an alternative browser engine, à la Ladybird.
      • swiftcoder13 hours ago
        Specifically, it's the browser engine that spun out of Mozilla's early efforts towards a Rust-based browser, and it is one of the motivating projects for the entire Rust ecosystem.
    • 01HNNWZ0MV43FF10 hours ago
      Yes, Servo is an embeddable web browser / webview, like the Chromium Embedded Framework (CEF).

      Electron = Node.js + CEF

      Tauri = Rust + webview

      Tauri has an experimental branch to use Servo to provide a bundled webview. Currently it relies on a system-level webview, like Edge on Windows, Safari on macOS, and webkit-gtk on Linux.
  • apitman14 hours ago
    > As you can see from the version number, this release is not a 1.0 release. In fact, we still haven't finished discussing what 1.0 means for Servo

    Wait, crate versions go up to 1.0?

    EDIT: Sorry, while crate stability may be an interesting conversation, this isn't the place for it. But I can't delete this comment. Please downvote it. Mods, feel free to delete or demote it.
    • mort9614 hours ago
      The fundamental problem with Rust versioning is that 0.3.5 is compatible with 0.3.6, but not with 0.4.0 or 1.0.0; when the major version is 0, the minor takes the role of major and the patch takes the role of minor. So packages iterate through 0.x versions and eventually reach a version that's "stable".

      If version 0.7 turned out to hit the right API and not require backward-incompatible changes, releasing a version 1.0 would be as disruptive as a major version change to your users, and would communicate through version semantics that it is a breaking change.

      Semver declares that version 0.x is for initial development, where there is no stability guarantee at all. This is the right semantics for a versioning system, but Cargo doesn't follow this part of semver. Providing stability guarantees throughout the 0.x cycle inevitably results in projects getting stuck in 0.x.

      This is one of my biggest gripes with Cargo. But Rust people seem to universally consider it a non-issue, so I don't think it'll ever be fixed.
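      To make the compatibility rule above concrete, here is a minimal sketch of Cargo's default ("caret") matching rule, reimplemented from its documented behavior; this is an illustration, not Cargo's actual code:

```rust
// Sketch of Cargo's default ("caret") compatibility rule: two versions
// are compatible if they agree up to and including the leftmost
// nonzero component of the requirement.
fn caret_compatible(req: (u64, u64, u64), candidate: (u64, u64, u64)) -> bool {
    let (rmaj, rmin, rpat) = req;
    let (cmaj, cmin, cpat) = candidate;
    if rmaj > 0 {
        // ^1.2.3: same major, and (minor, patch) at least as new.
        cmaj == rmaj && (cmin, cpat) >= (rmin, rpat)
    } else if rmin > 0 {
        // ^0.3.5: minor acts as major, patch acts as minor.
        cmaj == 0 && cmin == rmin && cpat >= rpat
    } else {
        // ^0.0.3: only the exact version is compatible.
        candidate == req
    }
}

fn main() {
    assert!(caret_compatible((0, 3, 5), (0, 3, 6))); // ^0.3.5 accepts 0.3.6
    assert!(!caret_compatible((0, 3, 5), (0, 4, 0))); // ...but not 0.4.0
    assert!(!caret_compatible((0, 3, 5), (1, 0, 0))); // ...nor 1.0.0
    assert!(caret_compatible((1, 2, 0), (1, 9, 3))); // ^1.2.0 accepts 1.9.3
    println!("all checks passed");
}
```

      This is why a 0.7 -> 1.0 bump is treated as breaking even when nothing changed: the leftmost nonzero component moved.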
      • kibwen10 hours ago
        <i>&gt; If version 0.7 turned out to hit the right API and not require backward incompatible changes, releasing a version 1.0 would be as disruptive as a major version change</i><p>Nope, this is what the semver trick is for: <a href="https:&#x2F;&#x2F;github.com&#x2F;dtolnay&#x2F;semver-trick" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;dtolnay&#x2F;semver-trick</a><p>TL;DR: You take the 0.7 library, release it as 1.0, then make a 0.7.1 release that does nothing other than depend on 1.0 and re-export all its items. Tada, a compatible 1.0 release that 0.7 users will get automatically when they upgrade.<p>Even more interesting is that you can use this to coordinate only partially-breaking changes, e.g. if you have 100 APIs in your library but only make a breaking change to one, you can re-export the 99 unbroken APIs and only end up making breaking changes in practice for users who actually use the one API with breaking changes.
      • sheepscreek14 hours ago
        > The fundamental problem with Rust versioning is that 0.3.5 is compatible with 0.3.6, but not 0.4.0 or 1.0.0

        That's a feature of semver, not a bug :)

        Long answer: You are right to notice that minor versions within a major release can introduce new APIs and changes, but generally should not break existing APIs until the next major release.

        However, this rule only applies to libraries *after* they reach 1.0.0. Before 1.0.0, one shouldn't expect any APIs to be frozen, really.
        • mort9614 hours ago
          No, it's explicitly not. Semver says:

          > Major version zero (0.y.z) is for initial development. Anything MAY change at any time. The public API SHOULD NOT be considered stable.

          Cargo is explicitly breaking with semver by considering 0.3.5 compatible with 0.3.6.
          • demurgos13 hours ago
            To go further, semver provides semantics and an ordering, but it says nothing about version-requirement syntax. The caret operator to describe a range of versions is not part of the spec; it was introduced by early semver-aware package managers such as npm or gem. Cargo decided to default to the caret operator, but it's still the caret operator.

            In practice, there's no real issue with using the first non-zero component to define the group of API-compatible releases, and most package managers agree on the semantics.
            • steveklabnik12 hours ago
              Thank you.

              Eventually this will get cleared up. I'm closer than I've ever been to actually handling this, but it's been 9 years already, so what's another few months…
      • Starlevel00414 hours ago
        The standard library has a whole bunch of tools to let them test and evolve APIs with required opt-in, but every single ecosystem package has to get it right on the first try, because Cargo will silently, forcibly update packages and those evolution tools aren't available to third-party packages.

        Such a stupid state of affairs.
      • moron4hire14 hours ago
        Personally, I think the 0 major version is a bad idea. I hear the desire not to have to make guarantees about stability in the early stages of development, and that you don't want people depending on it. But hiding that behind "v0.x" doesn't change the fact that you *are* releasing versions and people *are* depending on it.

        If you didn't want people to depend on your package (hence the word "dependency"), then why release it? If your public interface changes, bump that major version number. What are you afraid of? People taking your project seriously?
        • jaapz14 hours ago
          0.x is not that you don't want people depending on it; you just don't want them to come and complain when you quickly introduce some breaking changes. The project is still in development; it might be stable enough for use in "real projects(tm)", but it might also still significantly change. It is up to the user to decide whether they are OK with this.

          1.x communicates (to me at least) that you are pretty happy with the current state of the package and don't see any considerable breaking changes in the future. When 2.x comes around, this is often after 1.x has been in use for a long time and people have raised some pain points that can only be addressed by breaking the API.
          • OtomotO13 hours ago
            But people will complain, so ex falso quodlibet
          • moron4hire13 hours ago
            If you are at the point that other people can use your software, then you should use v1. If you are not ready for v1, then you shouldn't be releasing to other people.

            Take this comment: "The project is still in development, it might be stable enough for use in 'real projects(tm)', but it might also still significantly change." That describes every project. Every project is always in development. Every project is stable until it isn't. And when it isn't, you bump the major number.
            • the__alchemist13 hours ago
              I think we can come up with a reason why bumping the major version on each breaking change isn't an elegant solution either: you would end up with version numbers in the hundreds or thousands.
              • zokier10 hours ago
                Browser version numbers are in the hundreds and it doesn&#x27;t seem to be a problem.
                • the__alchemist10 hours ago
                  Indeed! I think both 0-based versioning and this (maybe?) downside I bring up address the tension between wanting to limit the damage caused by breaking changes and retaining the ability to make them.
        • mort9614 hours ago
          Versioning is communication. I find it useful to communicate, through using version 0.x, "this is not a production-ready library and it may change at any time; I provide no stability guarantees". Why might I release it in that state? Because it might still be useful to people, and people who find it useful may become contributors.
          • moron4hire14 hours ago
            Any project may change at any time. That's why they bump from v1 to v2. But by not using the full precision of the version number, you're not able to communicate as clearly about releases. A minor release may not be 100% compatible with the previous version, but people still expect some degree of similarity, such that migrating is not a difficult task. But going from v0.n to v0.(n+1) uses that field to communicate "hell, anything could happen, YOLO."

            Nobody cares that Chrome's major version is 147.
            • mort9613 hours ago
              By releasing a library with version 1.0, I communicate: "I consider this project to be in a state where it is reasonable to depend on it."

              By releasing a library with version 0.x, I communicate: "I consider this project to be under initial development, and I would advise people not to depend on it unless you want to participate in its initial development."

              I don't understand why people find this difficult or controversial.
              • steveklabnik10 hours ago
                There is additional subtlety here.

                For example, sometimes projects that have a 0.y version get depended on a lot, and so moving to 1.0.0 can be super painful. This is the case with the libc crate in Rust, where the 0.1.0 -> 0.2.0 transition was super painful for the ecosystem. Even though it should be a 1.0.0 crate, it is not, because the pain of causing an ecosystem split isn't considered to be worth the version-number change.
                • mort967 hours ago
                  Oh hey, I recently saw a comment which discussed this exact issue: https://news.ycombinator.com/item?id=47752915
                  • steveklabnik6 hours ago
                    99% of the time this situation is okay, because Cargo allows you to have both 0.1 and 0.2 in the same project as dependencies. It's just packages that call out to external dependencies, like libc, where it enforces the single-version rule.
                    • mort966 hours ago
                      You *can* have both 0.1 and 0.2 in the same project, but you really don't *want* to.
                      • steveklabnik5 hours ago
                        Most of the time, it works so well people don't even notice.

                        The only time you run into a problem is if you try to use values with a type from 0.1 with a function that takes a 0.2 as an argument, or whatever. Then you get a type error.
    • the__alchemist13 hours ago
      Hey - Many Rust libraries adopt [0-based versioning](<a href="https:&#x2F;&#x2F;0ver.org&#x2F;" rel="nofollow">https:&#x2F;&#x2F;0ver.org&#x2F;</a>). That link can describe it more elegantly than I can.
    • Fokamul14 hours ago
      If you want to lure Microslop to migrate all their &quot;great&quot; apps to Servo.<p>Easy, just add bloat code so it will use 5GB of RAM by default, that&#x27;s instant adoption by MS.
  • tracker113 hours ago
    I was a little curious to see if there was any Tauri integration, and it looks like there is (tauri-runtime-verso) ... Not sure where that comes out size-wise compared to Electron at that point though. My main desire there would be for Linux&#x2F;flathub distribution of an app I&#x27;ve been working on.
  • solomatov14 hours ago
    What could this crate be used for?
  • grimgrin14 hours ago
    When Servo is ready I have plans to swap it into qutebrowser, which I&#x27;ve been growing fonder of.
  • Talderigi15 hours ago
    Is Servo production-ready enough to replace or embed alongside engines like WebKit or Blink?
    • bastawhiz15 hours ago
      It depends on your use case. I wouldn&#x27;t use it for a JS-heavy site. But if you have simple static content, it&#x27;s probably enough. It&#x27;s worth testing it out as a standalone app before integrating it as a library.
    • mayama12 hours ago
      It doesn&#x27;t crash as often as it used to a few years ago. JS-heavy sites might not work, and there are layout issues too. And internet gatekeepers like Cloudflare Turnstile don&#x27;t work.
      • andriy_koval11 hours ago
        Why did it crash? Isn&#x27;t Rust supposed to be memory-safe?
        • nablaxcroissant10 hours ago
          Crashes happen for reasons besides memory safety. Web engines are crazy complicated pieces of software, and crashes can happen for any number of reasons. Also, I would be shocked if this was written using purely safe Rust.
        • mkl5 hours ago
          The JS engine is SpiderMonkey, which is C++.
  • nmvk10 hours ago
    Really excited to see this. I contributed to Servo open source 10 years ago, and it was a very cool experience.
  • z3ratul16307113 hours ago
    We&#x27;ve come full circle: they invented Rust to build Servo with it.
  • hybirdss12 hours ago
    feels like we&#x27;re actually getting new browser engines this decade and it&#x27;s kind of strange
    • t4356212 hours ago
      Servo has been in the works for a while though. It hasn&#x27;t been lightning-speed development; it&#x27;s just getting a bit more visible.
  • phplovesong14 hours ago
    Did Firefox drop Servo? I recall they were in the process of a &quot;rewrite in Rust&quot;?
    • dralley14 hours ago
      Firefox incorporated parts of the Servo effort which were able to reach maturity. Stylo (Firefox&#x27;s current CSS engine) and Webrender (the rendering engine) and a few other small components came from the Servo project.<p>Most other parts of Servo were not mature enough to integrate at the time Mozilla decided to end support for the project and didn&#x27;t look like they would be mature enough any time soon. The DOM engine for example was in the early stages of being completely rewritten at the time because the original version had an architecture that made supporting the entire breadth of web standards challenging.<p>Keep in mind that you can continue adding Rust to Firefox without replacing whole components. It&#x27;s not like Mozilla abandoned the idea of using more Rust in Firefox just because they stopped trying to rewrite whole components from the ground up.
    • andruby14 hours ago
      Yes, during the layoff of August 2020<p>Mozilla laid off the full Servo team, but never publicly announced this afaik. Wikipedia includes it here: <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Firefox#cite_ref-120" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Firefox#cite_ref-120</a>
      • Sammi14 hours ago
        Mozilla can&#x27;t help but be its own worst enemy. Ladybird may well never have happened if Mozilla had just kept working on Servo, and Ladybird is most definitely going to outcompete Firefox when it reaches maturity, as Mozilla keeps on burning bridges with open source enthusiasts.
        • zarzavat14 hours ago
          The problem with Mozilla is not just technical but cultural. The organization has been infected with managers. The managers want to keep their jobs more than they want Firefox to succeed. Clearly the solution is for the managers to fire themselves and allow the developers to run the show, but that was not going to happen.<p>Ladybird, by contrast, is a developer-led open source project that has no such constraints. They also don&#x27;t have a product yet but I&#x27;m sure the picture will be radically different in a few years.<p>Conway&#x27;s law in action.
    • estebank13 hours ago
      To add to the other replies, Firefox was explicitly never going to consume all of Servo. It was always meant to be a test bed project where sub-projects could be migrated to Firefox. I suspect that the long term intent <i>might have been</i> for Servo to get to a point where it <i>could</i> become Firefox, but that wasn&#x27;t the stated plan.
    • alarmingfox14 hours ago
      I think they implemented parts of it into their Gecko engine. But they laid off the whole Servo development team in 2020, I believe.<p>Only recently, since it moved over to the Linux Foundation, has Servo started being worked on again.
  • tusharkhatri36913 hours ago
    Sounds great, I&#x27;ll use the crate from now on; it&#x27;s more convenient that way.
  • 9fwfj9r15 hours ago
    It&#x27;s a great move. The early development of Rust aimed to support Servo. However, it&#x27;s still disappointing that the script engine uses SpiderMonkey, which is purely C++.
    • drzaiusx1115 hours ago
      It&#x27;s best not to try and eat the elephant in one bite, which is perhaps where this project went wrong initially. Maybe this is a symptom of learning from past mistakes rather than a flaw.
      • saghm15 hours ago
        My understanding is that the original intent of Servo was to be a way to develop features and port them over to Firefox itself (which did happen with at least a few features), and the relatively slower pace of development is more due to Mozilla laying off everyone who was working on it. (Yes, presumably many of the same people are involved, but I would expect that being able to work on something full time without needing another source of income will end up making progress faster than needing to find time outside of work and balancing it against other things in life, ideally in a way that avoids burnout.)
        • drzaiusx1114 hours ago
          My understanding was that from day one the desire was to make a complete &quot;web rendering &amp; layout engine&quot; and only pivoted to shipping smaller sub-components like Stylo (stylesheets) when it appeared to be &quot;taking too long.&quot; I followed the project from the early days through the layoffs, but I may be misremembering things.
          • saghm11 hours ago
            Interesting, it&#x27;s certainly possible I was never aware of the super early days.
    • swiftcoder15 hours ago
      There are what, 5+ Rust JavaScript engines that claim to be production-ready? Bolting one of those on in place of SpiderMonkey seems like a reasonable future direction.
      • mort9614 hours ago
        What do you mean by &quot;production ready&quot; here exactly? In a web browser context, the JS engine is expected to have a high-performance optimising JIT compiler. Do the existing Rust JS engines have that?
        • 8NNTt8z3QvLT8tp13 hours ago
          There&#x27;s something to be said for the security benefits of not having a JIT, though. Especially if you&#x27;ve used Rust for the engine, you should have pretty solid security.
          • px4313 hours ago
            Yeah, having a code section that is writable and executable is a huge no-no from a security standpoint. JIT is a fundamentally insecure concept, just in general. By definition it&#x27;s trading security for speed.
        • swiftcoder14 hours ago
          I honestly don&#x27;t know, but they do say &quot;production ready&quot; on their marketing pages, so...<p>For an example of what I mean, see JetCrab: <a href="https:&#x2F;&#x2F;jetcrab.com" rel="nofollow">https:&#x2F;&#x2F;jetcrab.com</a>
          • CryZe14 hours ago
            This doesn&#x27;t implement a JS engine, it&#x27;s just a wrapper around boa.
          • mort9614 hours ago
            That page says:<p>&gt; Complete JavaScript execution pipeline from source code parsing to bytecode execution.<p>So it&#x27;s a bytecode interpreter, not a JIT.<p>It might still be production-ready for a bunch of use cases. I may use it as a scripting layer for some pluggable piece of software or a game. I wouldn&#x27;t consider it appropriate for a &quot;production ready web browser&quot; which intends to compete with Firefox and Chrome.<p>EDIT: Also, for some reason all its components are called v8_something? That&#x27;s pretty off-putting, you can&#x27;t just take another project&#x27;s name like that... and from the author&#x27;s Reddit comments it seems to be mostly AI slop anyway. I&#x27;m guessing Claude wrote the &quot;production ready&quot; part on the website, I wouldn&#x27;t trust it.
      • nicoburns13 hours ago
        They&#x27;re all more than 10x slower than SpiderMonkey.
      • depr13 hours ago
        They may be production-ready in some sense but they&#x27;re not ready to be put in Firefox, and&#x2F;or they are v8 bindings.
    • tialaramex15 hours ago
      I mean SpiderMonkey works, and presumably is fairly self-contained, so I can see why replacing that isn&#x27;t attractive unless you believe you can make it significantly better in some way.
  • diath15 hours ago
    Too little too late, now that the new meta is to use system-provided webviews so you don&#x27;t have to ship a big-ass web renderer per app.
    • bastawhiz15 hours ago
      System web views were available as drag and drop components in VB6 two and a half decades ago. There&#x27;s nothing &quot;new&quot; about that as a concept, and plenty of reasons to not want to use Blink&#x2F;WebKit.
      • diath14 hours ago
        &gt; System web views were available as drag and drop components in VB6 two and a half decades ago. There&#x27;s nothing &quot;new&quot; about that as a concept<p>We are in a thread discussing a Rust library; logically, I was referring to the current approach to GUI rendering in the Rust space (such as Tauri and Dioxus).<p>&gt; and plenty of reasons to not want to use Blink&#x2F;WebKit.<p>Such as? Can you name a few objective reasons against Blink&#x2F;WebKit (the technology) that do not involve just not liking Google&#x2F;Apple?
        • airstrike14 hours ago
          Tauri&#x2F;Dioxus aren&#x27;t necessarily the end state of Rust GUI
        • bastawhiz10 hours ago
          &gt; the current approach in GUI rendering in the Rust space (such as Tauri and Dioxus).<p>Tauri itself doesn&#x27;t render web views. It uses wry under the hood. Dioxus isn&#x27;t a web view at all and serves a fundamentally different purpose.<p>&gt; Can you name a few objective reasons against Blink&#x2F;WebKit (the technology) that does not involve just not liking Google&#x2F;Apple?<p>If you have a cross-platform application, it sucks having to worry about which features work or don&#x27;t work based on which engine is available and how old it is. You also don&#x27;t know if there are user scripts being injected that are affecting the experience. It&#x27;s impossible to debug, and many users don&#x27;t even know what browser engine is being used; they just know your app doesn&#x27;t work.<p>If you build for Servo, it works exactly the same on every platform. You could use wry and test that Edge is good on Windows, WebKit works on the past few versions of macOS, GTK WebKit works, etc. etc., or you can just use Servo.<p>Not to mention, Servo is probably much lighter than whatever flavor of Chromium the user has installed under the hood.
    • swiftcoder15 hours ago
      No particular reason Servo couldn&#x27;t one day become the system web view on Linux distros...
      • chrismorgan14 hours ago
        Linux (GNU&#x2F;Linux or whatever) doesn’t even have the <i>concept</i> of a system web view. The closest you might get to the notion is probably WebKitGTK which is perhaps the <i>GNOME</i> idea of a system web view, but it’s nothing like WebKit on macOS or WebView2 (or MSHTML in the past) on Windows for popularity or availability.<p>As a user of a desktop environment other than gnome-shell, I only have webkitgtk-6.0 installed because I chose to install Epiphany—it’s a good proxy for testing on Safari, which Apple makes ridiculously expensive.
      • mort9614 hours ago
        Yeah, the closest thing you get today is arguably WebKitGTK, which is known for being not exactly great.
    • charcircuit13 hours ago
      That is not the meta. The meta is to ship Blink so you only have to support a single version of a single web engine instead of many versions of many different web engines.