5 comments

  • binaryturtle 2 hours ago
    I remember visiting a computer exhibition (CeBIT) in the very early 90s. One booth had some of the big Amiga systems (2000, I think), and at some point one of the booth's staff did the three-finger salute (pressing three specific keys on the keyboard to force a reboot) on one of the machines. The machine was back up in what felt like an instant. I was amazed by that. They had probably set up the whole boot process to run from RAM (see the "RAD" disk on the Amiga), but I had no idea about that back in the day.

    To this day I think this is how it should be: you switch ON your computer and it should be ready for use.

    But what do we get? What feels like minutes of random waiting. My Raspberry Pi running Linux, which probably eats 10 of those Amiga 2000s for breakfast, churns through a few thousand lines of initialisation output… my Mac, which probably eats 50 of them for lunch, shows a slowly growing bar doing who knows what… Why hasn't this improved at all in the last 30 years?
    • embedding-shape 15 minutes ago
      > They had probably set up the whole boot process to run from RAM (see the "RAD" disk on the Amiga), but I had no idea about that back in the day.

      > To this day I think this is how it should be: you switch ON your computer and it should be ready for use.

      Don't we already kind of have this? It's dynamic and optional, and we ended up calling it "sleep", but it basically does what you're describing: chuck the entire state into RAM (or to disk, for "hibernate"), then resume from that when you want to continue.

      Personally I've avoided it for the longest time because something always breaks or ends up wonky on resume, at least on my desktop. The PS5 and the Steam Deck handle this seamlessly, even with games running, so it's clearly possible, and I know others who use it; maybe the Linux desktop is just lagging behind a bit. So I continue to properly shut down my computer every night.
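      On a systemd-based Linux desktop, the states described above map to standard commands (a sketch; behavior depends on firmware and swap configuration):

      ```shell
      # Suspend-to-RAM ("sleep"): state stays in memory, resume is near-instant
      systemctl suspend

      # Suspend-to-disk ("hibernate"): state is written to swap, survives power loss
      systemctl hibernate

      # List the sleep states the kernel supports on this machine
      cat /sys/power/state
      ```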
    • skeezyjefferson 1 hour ago
      > But what do we get? What feels like minutes of random waiting.

      Largely, nerds don't care about people's experience, partly due to autism.
    • davemp 1 hour ago
      Windows prioritizes phoning home and data collection over UX. If you have a corporate install, you'll also have negligent EDR software killing your boot times.

      You can get fast boot times on Linux if you care to tweak things.
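      For anyone who wants to chase this down, systemd ships a boot profiler for exactly this (standard `systemd-analyze` subcommands; the output varies per machine):

      ```shell
      # Total boot time, split into firmware / loader / kernel / userspace
      systemd-analyze time

      # Units ranked by how long they took to start
      systemd-analyze blame | head -n 20

      # The chain of units on the boot critical path
      systemd-analyze critical-chain
      ```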
    • vachina 2 hours ago
      Because we still carry data over copper wires.
  • darthShadow 2 hours ago
    Just curious: was something like https://github.com/containerd/stargz-snapshotter considered/evaluated before designing your own lazily-loaded container FS, and if so, were there any pros/cons?
    • hhthrowaway1230 1 hour ago
      Also curious! I was also wondering if CRIU-frozen containers would help here, i.e. load the notebooks, snapshot them, and then restore them.
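      For reference, Docker exposes CRIU through its experimental checkpoint feature; a rough sketch of the snapshot/restore flow (the image and names here are made up, and the daemon needs `--experimental` plus CRIU installed):

      ```shell
      # Start the notebook container and let it warm up
      docker run -d --name nb my-notebook-image   # hypothetical image

      # Freeze the running process tree to disk with CRIU
      docker checkpoint create nb warm-state

      # Later: resume the container from the saved state instead of cold-booting
      docker start --checkpoint warm-state nb
      ```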
  • zkmon 5 hours ago
    When did boot time become a problem to solve?
    • mcemilg 5 hours ago
      From my (just a user) perspective, GPUs are expensive; they shouldn't be left standing idle if they're not being used.
      • embedding-shape 13 minutes ago
        > From my (just a user) perspective, GPUs are expensive; they shouldn't be left standing idle if they're not being used.

        How much power does an idling GPU actually draw when there is no monitor attached and no activity on it? My monitor turns off after 10 minutes of inactivity or so, and at that point I feel like the power draw should be really small (but I haven't verified it myself).
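        If it's an NVIDIA card, `nvidia-smi` can answer this directly; these are standard query fields, and for most desktop cards the idle draw is a small fraction of the board's power limit:

        ```shell
        # Current draw vs. the board's power limit, plus utilization
        nvidia-smi --query-gpu=name,power.draw,power.limit,utilization.gpu \
                   --format=csv

        # Or watch it live, refreshing every 5 seconds
        nvidia-smi --query-gpu=power.draw --format=csv,noheader --loop=5
        ```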
      • jack_tripper 5 hours ago
        I was there, Gandalf, 3000 years ago, when people used Folding@home to donate idle CPUs.
        • krige 24 minutes ago
          And SETI@home too!
    • ukblewis 3 hours ago
      I honestly can't believe that the only top-level comment right now is this kind of "I can't be assed to read the linked article, I just came to shit on it" comment.
  • ukblewis 3 hours ago
    This looks awesome!
  • mongolchakma 7 hours ago
    [flagged]
    • muragekibicho 6 hours ago
      Lol, is this the Hacker News version of "I am Groot"?

      Sidenote: I attempted a startup in the "Google Colab for GPUs" space.

      Your approach won't work unless you offer a generous free tier and bankroll content creators to make CUDA/Mojo tutorials for the noobs.

      The professional GPU devs mostly have ChatGPT write CUDA kernels for them, which they debug on their personal computers.

      Your best bet at growth-hacking would be forking VS Code and building an IDE solely for GPU programming. Again, I'm super familiar with the space, and if you'd like to collaborate, reach out on Twitter and I'll share all the growth-hacking tricks I learnt while building a "Google Colab for GPUs" startup.
      • KeplerBoy 4 hours ago
        Isn't Google Colab already "Google Colab for GPUs"?

        Also, VS Code for GPU programming is VS Code with Nvidia's Nsight plugins (look up the new Nsight Copilot); there is no gap to be filled.
        • brazukadev 3 hours ago
          > Isn't Google Colab already "Google Colab for GPUs"?

          Yes, it is. The same way Llama is also "open". A considerable % of devs don't want to use anything that comes from them.