14 comments

  • franciscop2 hours ago
    > PEP 658 (2022) put package metadata directly in the Simple Repository API, so resolvers could fetch dependency information without downloading wheels at all.

    > Fortunately RubyGems.org already provides the same information about gems.

    > [...]

    > After we unpack the gem, we can discover whether the gem is a native extension or not.

    Why not add the metadata about whether a gem is a native extension directly to rubygems.org? You could then fully parallelize installation of whole dependency trees.
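    For what it's worth, the signal the registry would need already exists in every gemspec's `extensions` field; the server would only have to surface it. A minimal Ruby sketch (gem names made up for illustration):

```ruby
require 'rubygems'

# A gem needs compilation exactly when its spec lists extension build
# scripts (extconf.rb etc.) — no unpacking of the archive required once
# the spec is available server-side.
def native_extension?(spec)
  spec.extensions.any?
end

pure = Gem::Specification.new do |s|
  s.name    = 'pure_gem'
  s.version = '1.0.0'
end

native = Gem::Specification.new do |s|
  s.name       = 'native_gem'
  s.version    = '1.0.0'
  s.extensions = ['ext/native_gem/extconf.rb']
end

puts native_extension?(pure)   # false
puts native_extension?(native) # true
```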
  • dbalatero13 hours ago
    I appreciate that Aaron is focusing on the practical algorithm/design improvements that could be made to Bundler, vs. prematurely going all in on "rewrite in Rust".
    • AlphaSite13 hours ago
      Speed would be nice, but more than that I want it to also manage Ruby installs. I’m infuriated at the mess of Rubys and version managers.
      • stouset12 hours ago
        Mise is the answer to this. I no longer use chruby&#x2F;rbenv&#x2F;rvm. And it manages multiple languages, project-local environment, etc.
        • shevy-java6 hours ago
          Well. GoboLinux solved that already. Back in 2005.

          I never understood the real need for chruby, rvm, etc. - I manage everything, all programs, in a versioned AppDir manner. (Note: I am not using GoboLinux; I use a modified variant. GoboLinux gave the correct idea though.)
          • merelysounds3 hours ago
            Context for anyone unfamiliar with GoboLinux:

            > each program in a GoboLinux system has its own subdirectory tree, where all of its files (including settings specific for that program) may be found. Thus, a program "Foo" has all of its specific files and libraries in /Programs/Foo, under the corresponding version of this program at hand. For example, the commonly known GCC compiler suite version 8.1.0, would reside under the directory /Programs/GCC/8.1.0

            https://en.wikipedia.org/wiki/GoboLinux
          • pasc18781 hour ago
            And the bank where I worked in the mid-1990s did the same: each program in its own versioned directory.
          • johnisgood6 hours ago
            Or GNU Stow.
        • NSPG91111 hours ago
          mise is pretty nice, though I don't use it for Python; Python is handled by uv, with poethepoet as the task runner.
          • dirtbag__dad10 hours ago
            Can you help me understand what the value or use case of poethepoet is?
            • jacobtomlinson3 hours ago
              It allows you to define common tasks such as linting, running tests, building docs, etc. under an alias.

              So you can run

                  uv run poe docs

              instead of

                  uv run sphinx-build -W -b dirhtml docs/source docs/build

              Many languages have a task runner baked into their package manager, but many others don't. In Ruby it's roughly the equivalent of Rake.
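              The Rake equivalent mentioned above might look like this — a hedged sketch where the task name and command are illustrative, not from a real project:

```ruby
require 'rake'
include Rake::DSL # implicit inside a real Rakefile

# Hide a long build command behind a short task name, so `rake docs`
# plays the same role as `uv run poe docs`.
desc 'Build the documentation site'
task :docs do
  puts 'sphinx-build would run here'
end

Rake::Task['docs'].invoke
```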
        • cortesoft10 hours ago
          How does it compare to asdf?
          • pilaf9 hours ago
            <a href="https:&#x2F;&#x2F;mise.jdx.dev&#x2F;dev-tools&#x2F;comparison-to-asdf.html" rel="nofollow">https:&#x2F;&#x2F;mise.jdx.dev&#x2F;dev-tools&#x2F;comparison-to-asdf.html</a><p>&gt; mise can be used as a drop-in replacement for asdf. It supports the same .tool-versions files that you may have used with asdf and can use asdf plugins through the asdf backend.<p>&gt; It will not, however, reuse existing asdf directories (so you&#x27;ll need to either reinstall them or move them), and 100% compatibility is not a design goal. That said, if you&#x27;re coming from asdf-bash (0.15 and below), mise actually has fewer breaking changes than asdf-go (0.16 and above) despite 100% compatibility not being a design goal of mise.<p>&gt; Casual users coming from asdf have generally found mise to just be a faster, easier to use asdf.
      • shevy-java6 hours ago
        I manage all my programs via Ruby in a manner similar to GoboLinux, e.g. versioned AppDirs. So I don't need uv or anything else for this - not that I have anything against uv, it is just not something I need myself. But I agree with you that there are multiple objectives or goals to be had here; you mention one, Aaron mentioned speed. I think there are many more (I mentioned documentation before too; people in Ruby seem to think documentation is not important, or at least really many think that way - look at the documentation of rack, it is virtually nonexistent. I am not going to sift through low-quality Ruby code for hours to try to figure out what madness drove this or that author to write some horrible-to-read chunk of code. A project without good documentation is worthless. Why do Ruby developers not learn this lesson?).

        I think all of Ruby, including the ecosystem as well as how Ruby is governed, needs to be considered. In particular with Ruby constantly declining and losing user share. That is not good. That should change.
      • angristan1 hour ago
        Maybe https://github.com/spinel-coop/rv will be the answer when it's ready
      • chao-10 hours ago
        I'm always surprised to hear this, and I want to be clear that I'm not trying to be dismissive in my comment. However, I've not encountered issues while juggling dozens of Ruby projects since around 2011, despite seeing many people's complaints over the years. Ten years ago I was using rvm, and I saw people sharing their issues with it, and listing reasons why rbenv and chruby are better. So I tried those, and my resulting workflow felt basically the same once I got used to the differences.

        At this point I've used rbenv, rvm, asdf, mise, and one other whose name isn't coming to mind. Not to mention docker containers, with or without any of those tools.

        I don't mean to project any particular complaint onto you, and I'm curious what part of it is infuriating? Each of the version managers I've used has functioned as advertised, and I'm able to get back to work pretty smoothly.
        • plorkyeran8 hours ago
          My experience with the Ruby ecosystem has been that if you get everything set up correctly, all of the environment management tools work wonderfully. When you don't have everything set up correctly, they break in ways that are hard to understand for someone not intimately familiar with the ecosystem. It's not at all a problem for someone using Ruby as their primary language, and a major source of pain for dabblers and people who just want to run something written in Ruby.
          • chao-8 hours ago
            That's a fair point. That's why I'm interested to know what is at the core of AlphaSite's complaint.

            One challenge, as I see it, is that there are three kinds of Ruby projects that need to take different approaches to the matter, in increasing level of complexity:

            (1) Developing a longer-lived, deployed or distributed project. Here you should definitely use both the Gemfile Ruby version and a .ruby-version file. You're normally only targeting one version at a time, but contributors and/or users are very unlikely to somehow accidentally end up using the wrong Ruby version without getting a very obvious notification that they are. That's annoying to encounter, but not difficult to solve once you know that the concept of a "version manager" exists.

            (2) Hacking on your own small project or just banging out a script. You just want to run some Ruby version and get to it. You probably should default to the latest, unless there's some dependency requiring a lower version, and you might not know that until after you've gotten started. The inverse issue might also occur, e.g. you installed Ruby 3.1 a few years ago, you start hacking, and now you want to pull in a gem version that requires Ruby 3.4. You can manage this by putting the Ruby version in your Gemfile, or using a .ruby-version file, or both, but if you're relatively green and just diving in, this might not be on your radar.

            (3) Developing a gem. You probably want to test/validate your gem across multiple Ruby versions, and possibly even different versions of your dependencies. You obviously don't want to lock yourself into a single Ruby version, and use of a .ruby-version file is inappropriate. There is tooling to do this, but it takes some learning to be able to utilize.

            My belief is that it is worth it for install documentation in category (1) to be a little more explicit about how to get up and running. For category (2), I don't know what the right answer is, but I understand the potential pain points.

            What I was most curious about is whether AlphaSite's complaint was with a specific version manager, or the fact that multiple options for version managers exist, or even the need for version managers at all?
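            For category (1), the two pins described above can be sketched like this (versions illustrative):

```ruby
# Gemfile — the interpreter pin lives next to the dependency pins
source 'https://rubygems.org'

ruby '3.4.1' # Bundler aborts with a clear error under a mismatched interpreter

gem 'rails', '~> 8.0'
```

            plus a `.ruby-version` file containing just `3.4.1`, which version managers such as rbenv, chruby, and mise all read.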
      • prh87 hours ago
        What do you use to manage other languages? Asking because asdf is basically a multi-language version of the Ruby version manager rbenv.
      • riffraff12 hours ago
        What exactly is your issue? I've been using rvm for a decade(?) without any major pain. Cross-language tools such as mise or asdf also seem to work OK.

        I can relate to "I wish we didn't need a second tool", but it doesn't seem like much of a mess.
        • black3r2 hours ago
          I've been using pyenv for a decade before uv and it wasn't a "major pain" either. But compared to uv it was infinitely more complex, because uv manages Python versions seamlessly.

          If the Python version changes in a uv-managed project, you don't have to do any extra step, just run "uv sync" as you normally do when you want to install updated dependencies. uv automatically detects it needs a new Python, downloads it, re-creates the virtual environment with it and installs the deps, all in one command.

          And since that's the command everyone runs anytime a dependency update is required, no dev is going to panic about why the app is not working after we merge in some new code that requires a newer Python because they missed the Python update memo.
        • ClikeX12 hours ago
          Of all the languages I've touched, managing multiple Ruby versions has been one of the easiest.
    • shevy-java6 hours ago
      Hmmm. Aaron is cool, but also works at Shopify. Neither DHH nor Aaron mentioned anyone at the gem.coop project. I can't help but have mixed feelings here.

      I think the underlying issue is much bigger than speed alone. It also has to do with control - both for developers as well as users.

      This is also why I think the label "prematurely" is weird, because it attempts to reduce everything down to the speed label. I think the issue is much wider than what is faster. In fact, speed doesn't interest me that much; whether it takes 0.35 seconds or 0.45 seconds or even 1.5 seconds to install a gem is just so irrelevant to me. But, for instance, high-quality documentation - now that is an area where Ruby has failed in epic ways. Not all projects, but many. And then Ruby wonders why it is dying? And nobody wants to focus on these issues really. So the issue should really be viewed as much larger in scope than merely "speed is the issue". I mean... matz focused on speed in the last 10 years. Ok. Ruby declined. Well, perhaps it is time to focus on other things more - not that people mind a faster Ruby, but if a language is dying, then you need to think really hard, come up with a plan, multiple ideas, and push these hard. Rather than, say, purge away old developers and claim nothing happened here and all is well...
      • 8n4vidtmkvmk6 hours ago
        I don't know a lot about Ruby, but I'd wager what it's missing is a hero app or framework. Ruby on Rails got folks interested for a while, but I guess other frameworks won out. What does it have left? What domains does it excel at?

        Python has ML. JS has the web. C/C++ has performance. Rust is stealing a slice of that thanks to safety.

        That probably covers like 99% of things, at least from my world view. There are arguably other better languages, but it doesn't much matter if the community all flocks to the well-established ones.
        • Kwpolska4 hours ago
          There is no single established framework/language for backend Web development. There are many options, all valid, differing in popularity based on their qualities (or sometimes just hype).

          Ruby used to be cool around 2010, but it lost to better options. Ruby has strange syntax, and Rails abuses magic, so I guess the viability of TypeScript for development made Ruby less popular.
          • pantulis3 hours ago
            > Ruby used to be cool around 2010, but it lost to better options.

            I'd argue that it lost the cool kidz mindshare, but not to better options. People jumped to Node.js because of async, but in the end the relevant industry change was the switch to SPA-based architectures in the web space. Rails never embraced that approach and hence lost the popularity.

            Jump 15 years ahead, and now the Enterprise world is built with React and Angular apps, not with JSPs or Spring MVC apps. Can Rails do a comeback? Who knows, but it's still a bona fide web development stack with terrific productivity gains for those who want to optimize that metric.
  • lamontcg11 hours ago
    The biggest thing that gems could do to make RubyGems faster is to have a registry/database of files for each gem, so that RubyGems didn't have to scan the filesystem on every `require` looking for which gem had which file in it.

    That would mean that if you edited your gems directly, things would break. Add a file, and it wouldn't get found until the metadata got rehashed. The gem install, uninstall, etc. commands would need to be modified to maintain that metadata. But really, you shouldn't be hacking up your gem library like that with shell commands anyway (and if you are doing manual surgery, having to regen the metadata isn't really that burdensome).
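    A minimal sketch of that registry idea, with a hypothetical helper named `build_require_index`: walk each gem's lib directory once (e.g. at install time), after which resolving a require is a single hash lookup instead of a filesystem scan.

```ruby
require 'tmpdir'
require 'fileutils'

# Hypothetical index builder: map "feature name" -> absolute file path once,
# so a `require` becomes a hash hit instead of a stat per $LOAD_PATH entry.
def build_require_index(load_paths)
  index = {}
  load_paths.each do |dir|
    Dir.glob(File.join(dir, '**', '*.rb')).each do |file|
      feature = file.delete_prefix(dir + File::SEPARATOR).delete_suffix('.rb')
      index[feature] ||= file # first hit wins, mirroring $LOAD_PATH order
    end
  end
  index
end

Dir.mktmpdir do |root|
  lib = File.join(root, 'somegem', 'lib')
  FileUtils.mkdir_p(File.join(lib, 'somegem'))
  File.write(File.join(lib, 'somegem', 'helper.rb'), '# stub')

  index = build_require_index([lib])
  puts index['somegem/helper'] # one lookup, zero filesystem scans
end
```

    As the parent notes, the hard part isn't the lookup — it's keeping the index honest when files change behind its back.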
    • vidarh3 hours ago
      100%. Optimizing "bundle install" etc. is optimizing the wrong thing. You don't even need this to work for gems in general. It'd have solved a lot of problems just to have it work for "bundle install" in standalone mode, where all the files are installed to a directory anyway.

      But in general, one of the biggest problems with Ruby for me is how $LOAD_PATH causes combinatoric explosion when you add gems, because every gem is added, due to the lack of any scoping of requires to packages.

      The existence of *multiple* projects to cache this is an illustration that this is a real issue. I've had projects in the past where starting the app took minutes purely due to requires, and where we *shaved minutes off* by crude manipulation of the load path, as most of that time was pointless stat calls.
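      To make the stat-call blowup concrete, here is a toy model (not RubyGems' actual lookup code) of what a single bare `require` costs: one existence check per $LOAD_PATH entry until a hit, so misses and late hits scale with the number of gems.

```ruby
require 'tmpdir'
require 'fileutils'

# Toy model of require resolution: scan load paths in order, counting
# how many filesystem checks (stats) it takes to find the feature.
def resolve_require(feature, load_paths)
  attempts = 0
  load_paths.each do |dir|
    attempts += 1
    candidate = File.join(dir, "#{feature}.rb")
    return [candidate, attempts] if File.exist?(candidate)
  end
  [nil, attempts] # a miss still paid for every entry
end

Dir.mktmpdir do |root|
  # 100 gems on the load path; the wanted file lives in the last one.
  paths = (1..100).map { |i| File.join(root, "gem#{i}", 'lib') }
  paths.each { |p| FileUtils.mkdir_p(p) }
  File.write(File.join(paths.last, 'needle.rb'), '# stub')

  _, attempts = resolve_require('needle', paths)
  puts attempts # 100 checks for one require
end
```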
    • pmahoney11 hours ago
      I wrote some code to do almost this many years ago (if I recall correctly, it doesn't cache anything to disk, but builds the hash fresh each time, which can still result in a massive speed up).

      Probably obsolete and broken by now, but one of my favorite mini projects.

      (And I just realized the graph is all but impossible to read in dark mode.)

      https://github.com/pmahoney/fastup
    • byroot3 hours ago
      You are describing bootsnap.<p>And yes I proposed to integrate bootsnap into bundler ages ago, but got told to go away.
  • dang13 hours ago
    Recent and related:

    *How uv got so fast* - https://news.ycombinator.com/item?id=46393992 - Dec 2025 (457 comments)
  • nightpool14 hours ago
    Really interesting post, but this part from the beginning stuck out to me:

        Ruby Gems are tar files, and one of the files in the tar file is a
        YAML representation of the GemSpec. This YAML file declares all
        dependencies for the Gem, so RubyGems can know, without evaling
        anything, what dependencies it needs to install before it can install
        any particular Gem. Additionally, RubyGems.org provides an API for
        asking about dependency information, which is actually the normal way
        of getting dependency info (again, no eval required).

    It would be interesting to compare and contrast the parsing speed for a large representative set of Python dependencies against a large representative set of Ruby dependencies. YAML is famously not the most efficient format to parse. We might have been better than `pip`, but I would be surprised if there isn't any room left on the table to parse dependency information in a more efficient format (JSON, protobufs, whatever).

    That said, the points at the end about not needing to parse gemspecs to install "most" dependencies would make this pretty moot (if the information is already returned from the gemserver).
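    One cheap way to probe that claim locally — a hedged micro-benchmark with a synthetic payload, not a real gemspec; numbers will vary by machine and by psych/json versions:

```ruby
require 'yaml'
require 'json'
require 'benchmark'

# Synthetic dependency metadata, round-tripped through both formats.
data = {
  'name' => 'example', 'version' => '1.0.0',
  'dependencies' => Array.new(50) { |i| { 'name' => "dep#{i}", 'req' => '>= 0' } }
}

yaml_doc = data.to_yaml
json_doc = data.to_json

# Same structure survives both encodings...
raise unless YAML.safe_load(yaml_doc) == JSON.parse(json_doc)

# ...so any difference is purely parser throughput.
Benchmark.bm(6) do |b|
  b.report('yaml') { 1_000.times { YAML.safe_load(yaml_doc) } }
  b.report('json') { 1_000.times { JSON.parse(json_doc) } }
end
```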
    • solatic5 hours ago
      For a "YAML" lockfile, you could probably write a much simpler and much more performant parser that throws out much of what makes YAML complicated, in particular: anchors, data type tags, all the ways of doing multi-line strings, all the weird unexpected type conversions (like yes/no converting to a boolean)... If the lockfile is never meant to be edited by human hands, only reviewed by human eyes, you can build a much simpler parser for something like:

          version: "1"
          dependencies:
            foo:
              version: "1.0"
              lock: "sha-blabla"
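      A sketch of how small such a parser could be, supporting only two-space nesting and double-quoted scalar values (the restricted format in the comment above; no claim of YAML compliance):

```ruby
# Indentation-based parser for a restricted YAML-ish lockfile: no anchors,
# no tags, no multi-line strings, no type coercion — just nested maps.
def parse_lock(text)
  root = {}
  stack = [[-1, root]] # [indent, container] pairs
  text.each_line do |line|
    next if line.strip.empty?
    indent = line[/\A */].size
    key, _, value = line.strip.partition(':')
    stack.pop while stack.last[0] >= indent # climb out of deeper scopes
    parent = stack.last[1]
    if value.strip.empty?
      child = {}
      parent[key] = child
      stack << [indent, child]
    else
      parent[key] = value.strip.delete_prefix('"').delete_suffix('"')
    end
  end
  root
end

lock = <<~LOCK
  version: "1"
  dependencies:
    foo:
      version: "1.0"
      lock: "sha-blabla"
LOCK

p parse_lock(lock)
```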
    • masklinn13 hours ago
      Although YAML is a dreadful thing, given the context and the size of a normal gemspec, I would be very surprised if it showed up in any significant capacity, even though psych's throughput is probably in the low single-digit MB/s.
    • tedivm10 hours ago
      It mostly doesn't matter, because these metadata files are pulled into their respective package registries. When you publish to RubyGems the file is read into their database and made available via their API, just like when you publish a Python package the pyproject.toml is parsed into the PyPI database and made available.

      This is a major reason why uv is faster than older Python package managers: it was able to take advantage of the change in the PyPI registry that enabled this. Now these package managers can run their dependency calculations without needing to download the entire package, decompress the package files, and then parse them.
  • raggi12 hours ago
    It’s definitely possible, I wrote a prototype many many years ago: https://ra66i.org/tmp/how_gems_should_be.mov
  • prescriptivist12 hours ago
    I think fibers (or, rather, the Async library) in Ruby tend to be fetishized by junior Rails engineers who don't realize that higher-level thread coordination issues (connection pools, etc.) equally apply to fibers. That said, this could be a pretty good use case for fibers -- the code base I use every day has ~230 gems, and if you can peel off the actual IO-bound installation of all those into non-blocking calls, you would see a meaningful performance difference vs. spinning up threads and context switching between them.
    • raggi12 hours ago
      What I would do to really squeeze the rest out in pure Ruby (bear in mind I've been away about a decade, so there _might be_ new bits, but nothing meaningful as far as I know):

      - Use a cheaper-to-parse index format (the gists I wrote years ago cover this: https://gist.github.com/raggi/4957402)

      - Use threads for the initial archive downloads (this is just IO, and you want to reuse some caches like the index)

      - Use a few forks for the unpacking and post-install steps (because these have unpredictable concurrency behaviors)
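      The "threads for the download IO" part could look something like this minimal worker pool, where `fetch` is a placeholder for the real HTTP call (which would release the GVL while waiting on the network):

```ruby
# Placeholder for the real archive download — would be an HTTP GET in practice.
def fetch(gem_name)
  "#{gem_name}.gem"
end

# Small fixed pool of threads draining a work queue of gem names.
def download_all(gem_names, workers: 4)
  queue   = Queue.new
  results = Queue.new
  gem_names.each { |g| queue << g }

  workers.times.map do
    Thread.new do
      loop do
        gem = begin
          queue.pop(true) # non-blocking; raises ThreadError when drained
        rescue ThreadError
          break
        end
        results << fetch(gem)
      end
    end
  end.each(&:join)

  Array.new(results.size) { results.pop }
end

p download_all(%w[rack rake rails]).sort
```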
      • prescriptivist12 hours ago
        > there _might be_ new bits but nothing meaningful as far as I know

        If you didn't need backwards compatibility with older Rubies, you could use Ractors in lieu of forks, not have to IPC between the two, and have cleaner communication channels. I can peg all the cores on my machine with a simple Ractor pool doing simple computation, which feels like a miracle as a Ruby old head. Bundler could get away with creating its own Ractor-safe installer pool, which would be cool as it'd be the first large-scale use of Ractors that I know of.
  • kimos13 hours ago
    I’ve been squinting at the “global cache for all bundler instances” issue[1] and I’m trying to figure out if it’s a minefield of hidden complication or if it’s actually relatively straightforward.

    It’s interesting as a target because it pays off more the longer it has been implemented, as the cache would only be shared from versions going forward.

    [1] https://github.com/ruby/rubygems/issues/7249
    • adithyareddy11 hours ago
      It's definitely not super straightforward, but there's plenty of recent prior art to steal from. Ruby was probably not the best place to solve this for the first time given the constraints (similar to pip), but there's no reason the Ruby ecosystem shouldn't now benefit from the work other ecosystems have done to solve it.
  • rossjudson8 hours ago
    Optimization: Doing nothing is faster than doing something.
    • mkoubaa7 hours ago
      Do smart thing make code go brr? No. Stop do dumb thing make code go brr.
  • mberning11 hours ago
    I never found Bundler to be all that slow compared to other package managers.
    • faizshah11 hours ago
      Rails 8.1 and Ruby 3 are also surprisingly fast, and coming back to an “omakase” framework is honestly a breath of fresh air, especially now that with AI tools you can implement a lot of stuff from scratch instead of using deps.
    • plorkyeran8 hours ago
      I wouldn't say it stands out as being unusually slow, but it certainly isn't snappy.
  • dmix10 hours ago
    tenderlove never misses
  • quotemstr13 hours ago
    Well, now my opinion of uv has been damaged. It...

    > Ignoring requires-python upper bounds. When a package says it requires python<4.0, uv ignores the upper bound and only checks the lower. This reduces resolver backtracking dramatically since upper bounds are almost always wrong. Packages declare python<4.0 because they haven't tested on Python 4, not because they'll actually break. The constraint is defensive, not predictive.

    Man, it's easy to be fast when you're wrong. But of course it is fast "because Rust", not because it just skips the hard parts of dependency constraint solving and hopes people don't notice.

    > When multiple package indexes are configured, pip checks all of them. uv picks from the first index that has the package, stopping there. This prevents dependency confusion attacks and avoids extra network requests.

    Ambiguity detection is important.

    > uv ignores pip's configuration files entirely. No parsing, no environment variable lookups, no inheritance from system-wide and per-user locations.

    Stuff like this seems unlikely to contribute to overall runtime, but it does decrease flexibility.

    > No bytecode compilation by default. pip compiles .py files to .pyc during installation. uv skips this step, shaving time off every install.

    ... thus shifting the bytecode compilation burden to first startup after install. You're still paying for the bytecode compilation (and it's serialized, so you're actually spending more time), but you don't associate the time with your package manager.

    I mean, sure, avoiding tons of Python subprocesses helps, but in our bold new free-threaded world, we don't have to spawn so many subprocesses.
    • pc48612 hours ago
      > > Ignoring requires-python upper bounds. When a package says it requires python<4.0, uv ignores the upper bound and only checks the lower. This reduces resolver backtracking dramatically since upper bounds are almost always wrong. Packages declare python<4.0 because they haven't tested on Python 4, not because they'll actually break. The constraint is defensive, not predictive.

      > Man, it's easy to be fast when you're wrong. But of course it is fast "because Rust", not because it just skips the hard parts of dependency constraint solving and hopes people don't notice.

      Version bound checking is NP-complete but becomes tractable by dropping the upper bound constraint. Russ Cox researched version selection in 2016 and described the problem in his "Version SAT" blog post (https://research.swtch.com/version-sat). This research is what informed Go's Minimal Version Selection (https://research.swtch.com/vgo-mvs) for modules.

      It appears to me that uv is walking the same path. If most developers don't care about upper bounds and we can avoid expensive algorithms that may never converge, then dropping upper-bound support is reasonable. And if uv becomes popular, then it'll be a sign that perhaps Python's ecosystem as a whole will drop package version upper bounds.
      • quotemstr12 hours ago
        Perhaps so, although I'm more algorithmically optimistic. If ignoring upper bounds makes the problem more tractable, you can

        1. solve dependency constraints *as if* upper bounds were absent,

        2. check that your solution actually satisfies the full constraints (O(N), quick, and passes almost all the time), and then

        3. only if the upper bound check fails, fall back to the slower, reliable solver.

        This approach would be clever, efficient, and correct. What you don't get to do is just ignore the fucking rules to which another system studiously adheres, then claim you're faster than that system.

        That's called cheating.
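        Those steps can be sketched with RubyGems' own requirement objects (using Ruby-style version constraints purely for illustration; `strip_upper_bounds` is a name made up here, and uv itself operates on Python specifiers):

```ruby
require 'rubygems'

# Step 1: build a relaxed requirement that pretends '<' / '<=' bounds
# do not exist, and resolve against that.
def strip_upper_bounds(requirement)
  kept = requirement.requirements.reject { |op, _| ['<', '<='].include?(op) }
  return Gem::Requirement.default if kept.empty? # ">= 0"
  Gem::Requirement.new(kept.map { |op, v| "#{op} #{v}" })
end

full      = Gem::Requirement.new('>= 3.0', '< 4.0')
relaxed   = strip_upper_bounds(full)
candidate = Gem::Version.new('3.12.0')

# Step 2: cheap post-check of the optimistic pick against the real bounds.
# Step 3 (not shown): only if this fails, rerun a slow bound-aware resolve.
puts relaxed.satisfied_by?(candidate) # true — the fast path accepted it
puts full.satisfied_by?(candidate)    # true — and it really was valid
```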
        • chuckadams12 hours ago
          If ignoring the rules makes it faster, then it's still faster. uv has never claimed to be 100% compatible. How often is it actually incorrect?
          • quotemstr12 hours ago
            My groceries are cheaper if I walk out of the store without paying for them too. Who's going to stop me?

            While I agree that an optimistic optimization for the upper-bound-passes case makes sense, just ignoring the bounds isn't correct either.

            A common pattern in insurgent software is to violate a specification, demonstrate speedups, and then compare yourself favorably to older software that implements the spec (however stupid) faithfully.
            • unrealhoang8 hours ago
              > My groceries are cheaper if I walk out of the store without paying for them too. Who's going to stop me?

              If you can consistently do that, then it IS the correct thing to do.

              uv made that choice and users use it; is there an objective truth of what is "correct" in version resolution?
        • tyre12 hours ago
          It’s not cheating if it works
    • asa40011 hours ago
      > Man, it's easy to be fast when you're wrong. But of course it is fast "because Rust", not because it just skips the hard parts of dependency constraint solving and hopes people don't notice.

      What's underhanded about this? What are the observable effects of this choice that make it wrong? They reformulated the problem into a different problem that they could solve faster, then solved that, and got away with it. Sounds like creative problem solving to me.
    • naedish13 hours ago
      > uv ignores pip's configuration files entirely. No parsing, no environment variable lookups, no inheritance from system-wide and per-user locations.
      > Stuff like this seems unlikely to contribute to overall runtime, but it does decrease flexibility.

      Astral have been very clear that they have no intention of replicating all of pip. uv pip install was a way to smooth the transition from using pip to using uv. The point of uv wasn't to rewrite pip in Rust - and thankfully so. For all of the good that pip did, it has shortcomings which only a new package manager turned out capable of solving.

      > No bytecode compilation by default. pip compiles .py files to .pyc during installation. uv skips this step, shaving time off every install.
      > ... thus shifting the bytecode compilation burden to first startup after install. You're still paying for the bytecode compilation (and it's serialized, so you're actually spending more time), but you don't associate the time with your package manager.

      In most cases this will have no noticeable impact (so a sane default) - but when it does count, you simply turn on --compile-bytecode.
      • quotemstr12 hours ago
        I agree that bytecode compilation (and caching to .pyc files) seldom has a meaningful impact, but it's nevertheless unfair to tout it as an advantage of uv over pip, because by doing so you've constructed an apples-to-oranges comparison.

        You could argue that uv has a better default behavior than pip, but that's not an engineering advantage: it's just a different choice of default setting. If you turned off eager bytecode compilation in pip you'd get the same result.
        • oblio12 hours ago
          > You could argue that uv has a better default behavior than pip, but that's not an engineering advantage: it's just a different choice of default setting. If you turned off eager bytecode compilation in pip you'd get the same result.

          Until pip does make the change, this is an engineering advantage for uv. Engineers working on code are part of the product. If I build a car with square wheels and don't change them when I notice the issue, my car still has a bumpy ride; that's a fact.
    • IshKebab13 hours ago
      > Man, it's easy to be fast when you're wrong.

      There's never going to be a Python 4, so I don't think they are wrong. Even if lightning strikes thrice, there's no way they could migrate people to Python 4 before uv could be updated to "fix" that.

      > Ambiguity detection is important.

      I'm not sure what you mean here. pip doesn't detect any ambiguities. In fact pip's behaviour is a gaping security hole that they've refused to fix, and as far as I know the only way to avoid it is to use `uv` (or register all of your internal company package names on PyPI, which nobody wants to do).

      > thus shifting the bytecode compilation burden to first startup after install

      Which is a much better option.
      • jjgreen13 hours ago
        *There's never going to be a Python 4*

        There will, but called Pyku ...
        • riffraff4 hours ago
          Took me way too long to get this joke, but when I did, I giggled. Thank you.
      • quotemstr12 hours ago
        > Pip doesn't detect any ambiguities. In fact Pip's behaviour is a gaping security hole that they've refused to fix, and as far as I know the only way to avoid it is to use `uv`

        Agreed the current behavior is stupid, FWIW. I hope PEPs 708 and 752 get implemented soon. I'm just pointing out that there's an important qualitative difference between

        1. we do the same job, but much faster; and

        2. we decided your job is stupid and so don't do it, realizing speedups.

        uv presents itself as #1 but is actually #2, and that's a shame.
        • stouset11 hours ago
          If it turns out nobody is actually relying on, using, or benefiting from those behaviors #1 and #2 are the same thing.<p>“If a tree falls in the forest…”
  • dboreham14 hours ago
    I&#x27;ve been doing software of all kinds for a long long time. I&#x27;ve never, ever, been in a position where I was concerned about the speed of my package manager.<p>Compiler, yes. Linker, sure. Package downloader. No.
    • m00x13 hours ago
      When you&#x27;re in a big company, the problem really starts showing. A service can have 100+ dependencies, many in private repos, and once you start adding and modifying dependencies, it has to figure out how to version it to create the lock file across all of these and it can be really slow.<p>Cloud dev environments can also take several minutes to set up.
    • travisd14 hours ago
      Many of these package managers get invoked countless times per day (e.g., in CI to prepare an environment and run tests, while spinning up new dev&#x2F;AI agent environments, etc).
      • zingar13 hours ago
        Is the package manager a significant amount of time compared to setting up containers, running tests etc? (Genuine question, I’m on holiday and can’t look up real stats for myself right now)
        • maxbond13 hours ago
          Anecdotally, unless I&#x27;m doing something really dumb in my Dockerfile (recently I found a recursive `chown` that was taking 20m+ to finish, grr) installing dependencies is the longest step of the build. It&#x27;s also the most failure prone (due to transient network issues).
      • byroot13 hours ago
        Yes, but if your CI isn&#x27;t terrible, you have the dependencies cached, so that subsequent runs are almost instant, and more importantly, you don&#x27;t have a hard dependency on a third party service.<p>The reason for speeding up bundler isn&#x27;t CI, it&#x27;s newcomer experience. `bundle install` is the overwhelming majority of the duration of `rails new`.
        • maccard13 hours ago
          &gt; Yes, but if your CI isn&#x27;t terrible, you have the dependencies cached, so that subsequent runs are almost instant, and more importantly, you don&#x27;t have a hard dependency on a third party service.<p>I’d wager the majority of CI usage fits your bill of “terrible”. No provider offers OOTB caching in my experience, and I’ve worked with multiple in-house providers, Jenkins, TeamCity, GHA, Buildkite.
          • byroot13 hours ago
            GHA with the `setup-ruby` action will cache gems.<p>Buildkite can be used in tons of different ways, but it&#x27;s common to use it with docker and build a docker image with a layer dedicated to the gems (e.g. COPY Gemfile Gemfile.lock; RUN bundle install), effectively caching dependencies.
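The layer trick described above can be sketched as a minimal Dockerfile; the base image tag, app layout, and install flags here are illustrative assumptions, not from the thread:

```dockerfile
FROM ruby:3.3-slim

WORKDIR /app

# Copy only the dependency manifests first: as long as Gemfile and
# Gemfile.lock are unchanged, Docker reuses the cached layer produced
# by the RUN below, so `bundle install` is skipped on rebuilds.
COPY Gemfile Gemfile.lock ./
RUN bundle install --jobs 4

# Application code changes frequently, so it goes in a later layer
# and does not invalidate the gem layer above.
COPY . .
```

The ordering is the whole point: putting `COPY . .` before `bundle install` would invalidate the gem layer on every source change.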
            • maccard12 hours ago
              &gt; GHA with the `setup-ruby` action will cache gems.<p>Caching is a great word - it only means what we want it to mean. My experience with GHA default caches is that it’s absolutely dog slow.<p>&gt; Buildkite can be used in tons of different ways, but it&#x27;s common to use it with docker and build a docker image with a layer dedicated to the gems (e.g. COPY Gemfile Gemfile.lock; RUN bundle install), effectively caching dependencies.<p>The only way Docker caching works is if you have a persistent host, and that’s certainly not most setups. It can be done, but if you have that, running in Docker doesn’t gain you much at all; you’d see the same caching speed-up if you just ran it on the host machine directly.
              • ayuhito5 hours ago
                &gt; My experience with GHA default caches is that it’s absolutely dog slow.<p>For reference, oven-sh&#x2F;setup-bun opted to install dependencies from scratch over using GHA caching since the latter was somehow slower.<p><a href="https:&#x2F;&#x2F;github.com&#x2F;oven-sh&#x2F;setup-bun&#x2F;issues&#x2F;14#issuecomment-1714116221" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;oven-sh&#x2F;setup-bun&#x2F;issues&#x2F;14#issuecomment-...</a>
              • byroot12 hours ago
                &gt; My experience with GHA default caches is that it’s absolutely dog slow.<p>GHA is definitely far from the best, but it works, e.g. 1.4 seconds to restore 27 dependencies <a href="https:&#x2F;&#x2F;github.com&#x2F;redis-rb&#x2F;redis-client&#x2F;actions&#x2F;runs&#x2F;20519158916&#x2F;job&#x2F;58951585454#step:3:123" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;redis-rb&#x2F;redis-client&#x2F;actions&#x2F;runs&#x2F;205191...</a><p>&gt; The only way docker caching works is if you have a persistent host.<p>You can pull the cache when the build host spawns, but yes, if you want to build efficiently, you can&#x27;t use ephemeral builders.<p>But overall that discussion isn&#x27;t very interesting because Buildkite is more a kit to build a CI than a CI, so it&#x27;s on you to figure out caching.<p>So I&#x27;ll just reiterate my main point: a CI system must provide a workable caching mechanism if it wants to be both snappy and reliable.<p>I&#x27;ve worked for over a decade on one of the biggest Rails applications in existence, and restoring the 800ish gems from cache was a matter of a handful of seconds. And when rubygems.org had to yank a critical gem for copyright reasons [0], we continued building and shipping without disruption while other companies with bad CIs were all sitting ducks for multiple days.<p>[0] <a href="https:&#x2F;&#x2F;github.com&#x2F;rails&#x2F;marcel&#x2F;issues&#x2F;23" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;rails&#x2F;marcel&#x2F;issues&#x2F;23</a>
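For reference, the `setup-ruby` gem caching being discussed here is a one-line opt-in; this workflow is a minimal sketch, and the job layout and Ruby version are assumptions:

```yaml
name: CI
on: push
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          ruby-version: "3.3"
          bundler-cache: true  # runs bundle install and caches installed gems
      - run: bundle exec rake test
```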
                • maccard11 hours ago
                  &gt; So I&#x27;ll just reiterate my main point: a CI system must provide a workable caching mechanism if it want to be both snappy and reliable.<p>The problem is that none of the providers really do this out of the box. GHA kind of does it, but unless you run the runners yourself you’re still pulling it from somewhere remotely.<p>&gt; I&#x27;ve worked for over a decade on one of the biggest Rails application in existence, and restoring the 800ish gems from cache was a matter of a handful of seconds.<p>I kind of suspected - the vast majority of orgs don’t have a team of people who can run this kind of a system. Most places with 10-20 devs (which was roughly the size of the team that ran the builds at our last org) have some sort of script, running on cheap as hell runners and they’re not running mirrors and baking base images on dependency changes.
                  • byroot3 hours ago
                    &gt; none of the providers really do this out of the box<p>CircleCI does. And I&#x27;m sure many others.
            • firesteelrain12 hours ago
              This is what I came to say. We pre cache dependencies into an approved baseline image. And we cache approved and scanned dependencies locally with Nexus and Lifecycle.
    • maccard13 hours ago
      There is no situation where toolchain improvements or workflow improvements should be scoffed at.
      • mikepurvis13 hours ago
        It can be harder to justify in private tooling where you might only have a few dozen or hundred devs saving those seconds per invocation.<p>But in public tooling, where the benefit is across tens of thousands or more? It&#x27;s basically always worth it.
        • maccard12 hours ago
          Obviously effort vs reward comes in here, but if you have 20 devs and you save 5 seconds per run, you possibly save a context switch on every tool invocation.
          • mikepurvis8 hours ago
            This is true, but I think the other side of it is that in most shops there is lower hanging fruit than 5 seconds per tool run, especially if it&#x27;s not the tool that&#x27;s in the build &#x2F; debug &#x2F; test loop but rather the workspace setup &#x2F; packaging &#x2F; lockfile management tool.<p>Like, I switched my team&#x27;s docker builds to Depot and we immediately halved our CI costs and shed like 60% of the build time because it&#x27;s a persistent worker node that doesn&#x27;t have to re-download everything every time. I have no association with them, just a happy customer; I mention it only to illustrate how many more gains are typically on the table before a few seconds here and there are the next thing to seriously put effort into.
    • IshKebab13 hours ago
      Must be nice not to use Python!
    • skinnymuch12 hours ago
      I agree for my own projects where the code and libraries are not enormous. The speed and size gains aren’t going to be something that matters enough.
    • blibble12 hours ago
      try conda<p>took an hour to install 30 dependencies
  • amazingman13 hours ago
    If any Bundler replacement isn&#x27;t using Minimal Version Selection, I&#x27;m not interested.
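For context, Minimal Version Selection (the algorithm Go modules use) picks, for each dependency, the highest of the *minimum* versions requested anywhere in the graph, with no backtracking search. A simplified, hypothetical sketch in Ruby (real MVS keys requirements by package *version*, not just by name as this toy `mvs_resolve` does):

```ruby
# Toy Minimal Version Selection: walk the requirement graph, and for
# each gem keep the highest minimum version any requester declared.
#   root_deps:    { "a" => "1.0", ... }        minimums the app declares
#   requirements: { "a" => { "c" => "1.2" } }  minimums each gem declares
def mvs_resolve(root_deps, requirements)
  selected = {}
  queue = root_deps.to_a
  until queue.empty?
    name, min_version = queue.shift
    min = Gem::Version.new(min_version)
    # Skip if we already selected a version at least this new.
    next if selected[name] && selected[name] >= min
    selected[name] = min
    # Enqueue the minimums this gem declares for its own dependencies.
    requirements.fetch(name, {}).each { |dep, v| queue << [dep, v] }
  end
  selected.transform_values(&:to_s)
end
```

If two roots both need `"c"` (one at `"1.2"`, one at `"1.5"`), the higher minimum `"1.5"` wins deterministically, which is what makes MVS-based lockfiles reproducible without a solver.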