9 comments

  • lambdanodecore 0 minutes ago
    Basically any open-source project nowadays runs its software stack in containers, often requiring Docker Compose. Unfortunately, Smol machines do not support Docker inside the microVMs, and they also do not support nested VMs for tools that use Vagrant. I think this is a big drawback.
  • gavinray 7 minutes ago
    The feature that lets you create self-contained binaries seems like a potentially simpler way to package JVM apps than GraalVM Native.

    Probably a lot of other neat use cases for this, too:

        smolvm pack create --image python:3.12-alpine -o ./python312
        ./python312 run -- python3 --version
        # Python 3.12.x — isolated, no pyenv/venv/conda needed
  • binsquare 1 hour ago
    Hello, I'm building a replacement for Docker containers: a virtual machine with the ergonomics of containers plus subsecond start times.

    I previously worked at AWS in the container space and with Firecracker. I realized that the container is an unnecessary layer that slows things down, and that Firecracker was a technology designed around AWS's org structure and use case.

    So I ended up building a hybrid that takes the best of containers and the best of Firecracker.

    Let me know your thoughts, thanks!
    • harshdoesdev 1 hour ago
      +1. I built something similar called shuru.run because I wanted an easy way to set up microVM sandboxes to run some of my AI apps, and Firecracker wasn't available for macOS (and, as you said, it is just too heavy for normal user-level workloads).
      • sahil-shubham 35 minutes ago
        Nice work on Shuru — I remember looking at it when I was researching this space. You went with a Rust wrapper on Apple's Virtualization framework, right?

        I have been working on something similar, but on top of Firecracker; I called it bhatti (https://github.com/sahil-shubham/bhatti).

        I believe anyone with a spare Linux box should be able to carve it into isolated programmable machines, without having to worry about provisioning them or their lifecycle.

        The documentation's still early, but I have been using it for orchestrating parallel work (with deploy previews), offloading browser automation for my agents, etc. An auction-bought Hetzner server is serving me quite well :)
      • fqiao 1 hour ago
        Yes, having a lightweight solution for local devices is one primary goal of the design. Another is to make it easy to host, whether self-hosted or managed.
    • sdrinf 45 minutes ago
      Hi, great project! Windows support is sorely lacking, though. As someone working a lot with sandboxed LLMs right now, the option space for sandboxing on Windows is _extremely limited_. Any plans to support it?
      • fqiao 36 minutes ago
        Hey, thanks so much! Yeah, we will definitely add Windows support later. We are exploring how to get this working with WSL and will release it as soon as possible. Stay tuned, and thanks!
      • binsquare 38 minutes ago
        Yeah, it's on my mind.

        WSL2 runs a Linux virtual machine. It will take some time and care to wire that up, but it is definitely feasible.
    • thm 1 hour ago
      You could add OrbStack to the comparison table.
      • fqiao 50 minutes ago
        Will do. Thanks for the suggestion!
  • bch 35 minutes ago
    See also [0][1] for projects in a similar* vein, including a historical account.

    *Yes, FreeBSD is specifically developed against Firecracker, which is specifically avoided with "Smol machines", but it is interesting nonetheless.

    [0] https://github.com/NetBSDfr/smolBSD

    [1] https://www.usenix.org/publications/loginonline/freebsd-firecracker
  • 0cf8612b2e1e 40 minutes ago
    This looks very cool. Does the VM machinery still work if I run it inside bubblewrap? Can it talk to a GPU?

    Can you pipe into one? It would be cute if I could wget in machine 1 and send the result to offline machine 2 for processing.
    • binsquare 33 minutes ago
      Haven't tried it with bubblewrap, but it should work.

      Yes! GPU passthrough is being actively worked on and will land in the next major release: https://github.com/smol-machines/smolvm/pull/96

      Yeah, just tried piping, and it works:

          smolvm machine exec --name m1 -- wget -qO- https://example.com/data.csv \
            | smolvm machine exec --name m2 -i -- python3 process.py
  • cr125rider 1 hour ago
    Great job with the comparison table. My immediate reaction was "neat, sounds like Firecracker", and then I saw your table showing where it is similar and where it differs. Easy!

    Nice job! This looks really cool.
    • fqiao 1 hour ago
      Thanks so much!
  • fqiao 1 hour ago
    Give it a try, folks. Would really love to hear all your feedback!

    Cheers!
    • leetrout 1 hour ago
      Why did you seemingly create two HN accounts?

      Edit: I see this account appears to belong to a contributor to the project as well. It was not obvious to me.
      • fqiao 54 minutes ago
        This is me: https://github.com/phooq

        @binsquare is this one: https://github.com/BinSquare
  • harshdoesdev 1 hour ago
    It's a really innovative idea! I'm very interested in the subsecond cold-start claim; how does it achieve that?
    • fqiao 1 hour ago
      @binsquare basically brute-force trimmed the unnecessary Linux kernel modules and tried to get the VM started with just the bare minimum. There is more room for improvement for sure. We will keep trying!
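      (To make the trimming concrete: a guest kernel for a virtio-only microVM can drop most driver subsystems entirely and build the few remaining drivers in, so nothing has to be probed or loaded at boot. The fragment below is an illustrative sketch of that style of kernel config, not smolvm's actual configuration.)

          # Illustrative microVM kernel config fragment (not smolvm's real config):
          # keep only virtio essentials, built in rather than as modules.
          CONFIG_VIRTIO=y
          CONFIG_VIRTIO_PCI=y
          CONFIG_VIRTIO_BLK=y
          CONFIG_VIRTIO_NET=y
          CONFIG_VIRTIO_CONSOLE=y
          CONFIG_EXT4_FS=y
          # Drop slow-to-probe subsystems entirely, and skip module loading at boot.
          # CONFIG_USB_SUPPORT is not set
          # CONFIG_SOUND is not set
          # CONFIG_DRM is not set
          # CONFIG_MODULES is not set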
      • deivid 36 minutes ago
        With this approach I managed to get to a sub-10ms start (to PID 1); if you can accept a few constraints, there's plenty of room!

        Though my version was only tested on Linux hosts.
        • binsquare 7 minutes ago
          Would be interested to see how you do it. How can I connect with you - emotionally?
      • harshdoesdev 1 hour ago
        Nice! For most local workloads that is actually sufficient. So, do you ship a complete disk snapshot of the machines?
        • fqiao 57 minutes ago
          Yes. Files on the disks are kept across stop and restart. We also have a pack command that compresses the machine into a single file so that it can be shipped and rehydrated elsewhere.
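          (Going by the pack syntax shown earlier in the thread, the ship-and-rehydrate flow might look like the sketch below. Only `pack create` and `run` appear in the thread; the scp step and host name are illustrative, not confirmed CLI behavior.)

              # Pack a machine into one self-contained file (syntax from the thread)
              smolvm pack create --image python:3.12-alpine -o ./python312

              # Ship it to another host like any ordinary file
              scp ./python312 user@other-host:

              # Rehydrate and run it there; disk contents travel inside the file
              ./python312 run -- python3 --version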