Basically every open source project nowadays runs its software stack in containers, often requiring Docker Compose. Unfortunately, Smol machines do not support Docker inside the microVMs, and they also do not support nested VMs for things that use Vagrant. I think this is a big drawback.
The feature that lets you create self-contained binaries seems like a potentially simpler way to package JVM apps than GraalVM Native Image.<p>Probably a lot of other neat use cases for this, too:<p><pre><code> smolvm pack create --image python:3.12-alpine -o ./python312
./python312 run -- python3 --version
# Python 3.12.x — isolated, no pyenv/venv/conda needed</code></pre>
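For the JVM case, the same pattern should work. This is just a sketch reusing the subcommands shown above; the image tag is an assumption (any JRE-only base image should do):<p><pre><code> # hypothetical: pack a JRE base image into a self-contained binary
 smolvm pack create --image eclipse-temurin:21-jre-alpine -o ./jre21
 ./jre21 run -- java -version</code></pre>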
Hello, I'm building a replacement for Docker containers: a virtual machine with the ergonomics of containers plus sub-second start times.<p>I previously worked at AWS in the container space and with Firecracker. I realized the container is an unnecessary layer that slowed things down, and that Firecracker was designed around AWS's org structure and use case.<p>So I ended up building a hybrid that takes the best of containers and the best of Firecracker.<p>Let me know your thoughts, thanks!
+1. I built something similar called shuru.run because I wanted an easy way to set up microVM sandboxes to run some of my AI apps, and Firecracker wasn't available for macOS (and, as you said, it is just too heavy for normal user-level workloads).
Nice work on Shuru; I remember looking at it when I was researching this space. You went with a Rust wrapper on Apple’s Virtualization framework, right?<p>I have been working on something similar but on top of Firecracker, called bhatti (<a href="https://github.com/sahil-shubham/bhatti" rel="nofollow">https://github.com/sahil-shubham/bhatti</a>).<p>I believe anyone with a spare Linux box should be able to carve it into isolated programmable machines, without having to worry about provisioning them or their lifecycle.<p>The documentation is still early, but I have been using it for orchestrating parallel work (with deploy previews), offloading browser automation for my agents, etc. An auction-bought Hetzner server is serving me quite well :)
Yes, having a lightweight solution for local devices is one primary goal of the design. Another is to make it easy to host, whether self-hosted or managed.
Hi, great project! Windows support is sorely lacking, though. As someone working a lot with sandboxed LLMs right now, the option space on Windows for sandboxing is _extremely limited_. Any plans to support it?
Hey, thanks so much! Yeah, we will definitely add Windows support later. We are exploring how to get this working with WSL and will release it asap.
Stay tuned and thanks!
Yeah, it's on my mind.<p>WSL2 runs a Linux virtual machine. It will take some time and care to wire that up, but it's definitely feasible.
You could add OrbStack to the comparison table.
See also [0] and [1] for projects in a similar* vein, including a historical account.<p>*Yes, FreeBSD is specifically developed against Firecracker, which is specifically avoided with Smol machines, but interesting nonetheless.<p>[0] <a href="https://github.com/NetBSDfr/smolBSD" rel="nofollow">https://github.com/NetBSDfr/smolBSD</a><p>[1] <a href="https://www.usenix.org/publications/loginonline/freebsd-firecracker" rel="nofollow">https://www.usenix.org/publications/loginonline/freebsd-fire...</a>
This looks very cool. Does the VM machinery still work if I run it in a bubblewrap? Can it talk to a GPU?<p>Can you pipe into one? It would be cute if I could wget in machine 1 and send that result to offline machine 2 for processing.
Haven't tried it with bubblewrap, but it should work.<p>Yes! GPU passthrough is being actively worked on and will land in the next major release: <a href="https://github.com/smol-machines/smolvm/pull/96" rel="nofollow">https://github.com/smol-machines/smolvm/pull/96</a><p>Yeah, just tried piping, and it works:<p><pre><code> smolvm machine exec --name m1 -- wget -qO- https://example.com/data.csv \
   | smolvm machine exec --name m2 -i -- python3 process.py</code></pre>
Great job with the comparison table. Immediately I was like “neat, sounds like Firecracker,” then saw your table showing exactly where it is similar and different. Easy!<p>This looks really cool.
Give it a try, folks. Would really love to hear all your feedback!<p>Cheers!
It's a really innovative idea! I'm very interested in the sub-second cold-start claim: how does it achieve that?