> It brings the power of edge computing to your own infrastructure.<p>I like the idea of self-hosting, but it seems fairly strongly opposed to the concept of edge computing. The edge is only made possible by big ass vendors like Cloudflare. Your own infrastructure is very unlikely to have 300+ points of presence on the global web. You can replicate this with a heterogeneous fleet of smaller and more "ethical" vendors, but also with a lot more effort and downside risk.
But do you need 300 pops to benefit from the edge model? Or would 10 pops in your primary territory be enough?
For most applications 1 location is probably good enough. I assume HN is single-location and I am a long way from CA but have no speed issues.<p>Caveat for high-scale sites and game servers. Maybe for image-heavy sites too (but self-hosting then adding a CDN seems like a low-lock-in and low-cost option)
Honestly, for <i>my own</i> stuff I only need one PoP to be close to my users. And I've avoided using Cloudflare because they're too far away.<p>More seriously, I think there's a distinction between "edge-style" and actual <i>edge</i> that's important here. Most of the services I've been involved in wouldn't benefit from any kind of edge placement: that's not the lowest hanging fruit for performance improvements. But that doesn't mean that the "workers" model wouldn't fit, and indeed I <i>suspect</i> that using a workers model would help folk architect their stuff in a form that is not only more performant, but also more amenable to edge placement.
I agree, latency is very important and 300 pops is great, but that seems more like marketing; the majority of applications would see diminishing returns well before that.
many apps are fine on a single server
> But do you need 300 pops to benefit from the edge model? Or would 10 pops in your primary territory be enough?<p>I don't think the number of PoPs is the key factor. The key factor is being able to route requests based on edge-friendly criteria (latency, geographical proximity, etc.) and automatically deploy changes in a way that the system ensures consistency.<p>Projects like this do not and cannot address those concerns.<p>Targeting the SDK and interface is a good hackathon exercise, but unless you want to put together a toy runtime for some local testing, it completely misses the whole reason this sort of technology is used.
Is some sort of decentralised network of hosts somehow working together to challenge the Cloudflare hegemony even plausible? Would it be too difficult to coordinate in a safe and reliable way?
If you have a central database, what benefits are you getting from edge compute? This is a serious question. As far as I understand edge computing is good for reducing latency. If you have to communicate with a non-edge database anyway, is there any advantage from being on the edge?
I've decided to ditch CF because Wrangler is deployed via NPM and I cannot bear NodeJS and Microsoft NPM anymore.<p>I get the impression this can't be run without NodeJS right now?
The problem with sandboxing solutions is that they have to provide <i>very</i> solid guarantees that code can't escape the sandbox, which is really difficult to do.<p>Any time I'm evaluating a sandbox that's what I want to see: evidence that it's been robustly tested against all manner of potential attacks, accompanied by detailed documentation to help me understand how it protects against them.<p>This level of documentation is rare! I'm not sure I can point to an example that feels good to me.<p>So the next thing I look for is evidence that the solution is being used in production by a company large enough to have a dedicated security team maintaining it, and with real money on the line for if the system breaks.
I agree, and as much as I think AI helps productivity, for a high security solution,<p>> Recently, with Claude's help, I rewrote everything on top of rusty_v8 directly.<p>worries me
Yes, exactly. The other reason Cloudflare workers runtime is secure is that they are incredibly active at keeping it patched and up to date with V8 main. It's often ahead of Chrome in adopting V8 releases.
I didn’t know this, but there are also security downsides to being ahead of chrome — namely, all chrome releases take dependencies on “known good” v8 release versions which have at least passed normal tests and minimal fuzzing, but also v8 releases go through much more public review and fuzzing by the time they reach chrome stable channel. I expect if you want to be as secure as possible, you’d want to stay aligned with “whatever v8 is in chrome stable.”
Cloudflare Workers often rolls out <i>V8 security patches</i> to production before Chrome itself does. That's different from beta vs. stable channel. When there is a security patch, generally all branches receive the patch at about the same time.<p>As for beta vs. stable, Cloudflare Workers is generally somewhere in between. Every 6 weeks, Chrome and V8's dev branch is promoted to beta, beta branch to stable, and stable becomes obsolete. Somewhere during the six weeks between versions, Cloudflare Workers moves from stable to beta. This has to happen before the stable version becomes obsolete, otherwise Workers would stop receiving security updates. Generally there is some work involved in doing the upgrade, so it's not good to leave it to the last moment. Typically Workers will update from stable to beta somewhere mid-to-late in the cycle, and that beta version subsequently becomes stable shortly thereafter.<p>(I'm the lead engineer for Cloudflare Workers.)
Thanks for the clarification on CF's V8 patching strategy, that 24h turnaround is impressive and exactly why I point people to Cloudflare when they need production-grade multi-tenant security.<p>OpenWorkers is really aimed at a different use case: running your own code on your own infra, where the threat model is simpler. Think internal tools, compliance-constrained environments, or developers who just want the Workers DX without the vendor dependency.<p>Appreciate the work you and the team have done on Workers, it's been the inspiration for this project for years.
Fair point. The V8 isolate provides memory isolation, and we enforce CPU limits (100ms) and memory caps (128MB). Workers run in separate isolates, not separate processes, so it's similar to Cloudflare's model. That said, for truly untrusted third-party code, I'd recommend running the whole thing in a container/VM as an extra layer. The sandboxing is more about resource isolation than security-grade multi-tenancy.
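To illustrate what that CPU cap means in practice, a worker like the following (a minimal sketch, assuming the limits behave as described) should get terminated by the runtime rather than hold the isolate hostage:<p><pre><code>// Minimal sketch: pure CPU work with no awaits can't yield,
// so the runtime has to kill the isolate once the cap is hit.
export default {
  async fetch(_request: Request): Promise<Response> {
    const start = Date.now();
    while (Date.now() - start < 1000) {
      // spin for ~1s of CPU time, well past a 100ms cap
    }
    return new Response("unreachable if the CPU limit is enforced");
  },
};
</code></pre>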
I think you should consider adjusting the marketing to reflect this. "untrusted JavaScript" -> "JavaScript", "Secure sandboxing with CPU (100ms) and memory (128MB) limits per worker" -> "Sandboxing with CPU (100ms) and memory (128MB) limits per worker", overhauling <a href="https://openworkers.com/docs/architecture/security" rel="nofollow">https://openworkers.com/docs/architecture/security</a>.<p>Over promising on security hurts the credibility of the entire project - and the main use case for this project is probably executing trusted code in a self hosted environment not "execut[ing] untrusted code in a multi-tenant environment".
I don't think what you want is even possible. What would such guarantees even look like? "Hello, we are a serious cybersec firm and we have evaluated the code and it's pretty sound, trust us!"?<p>"Hello, we are a serious cybersec firm, we have evaluated the code, and here are our tests with results proving we didn't find anything; the code is sound. Have we been thorough? We have, trust us!"
In terms of a one-off product without active support, the only thing I can really imagine is significant use of formal methods to prove correctness of the entire runtime. Which is of course entirely impractical given the state of the technology today.<p>Realistically, security these days is an ongoing process, not a one-off; compare to Cloudflare's security page: <a href="https://developers.cloudflare.com/workers/reference/security-model/" rel="nofollow">https://developers.cloudflare.com/workers/reference/security...</a> (to be clear, when I use the pronoun "we" I'm paraphrasing and not personally employed by Cloudflare/part of this at all)<p>- Implicit/from other pieces of marketing: we're a reputable company, and these other big reputable companies, who care about security and are juicy targets for attacks, use this product.<p>- We update V8 within 24 hours of a security update, compared to weeks for the big juicy target of Google Chrome.<p>- We use various additional sandboxing techniques on top of V8, including the complete lack of high-precision timers and various OS-level sandboxing techniques.<p>- We detect code doing strange things and move it out of the multi-tenant environment into an isolated one, just in case.<p>- We detect code using APIs that increase the attack surface (like debuggers) and move it out of the multi-tenant environment into an isolated one, just in case.<p>- We will keep investing in security going forward.<p>Running secure multi-tenant environments is not an easy problem. It seems unlikely that a typical open source project (typical in terms of limited staffing, usually including a complete lack of on-call staff) could release software to do so today.
Agreed. Cloudflare has dedicated security teams, 24h V8 patches, and years of hardening – I can't compete with that. The realistic use case for OpenWorkers is running your own code on your own infra, not multi-tenant SaaS. I will update the docs to reflect this.
Something like "all code is run with no permissions to the filesystem or external IO by default, you have to do this to add fine-grained permissions for IO, the code is run within an unprivileged process that's sandboxed using standard APIs to defend in depth against possible v8 vulnerabilities, here's how this system protects against obvious possible attacks..." would be pretty good. Obviously it's not proof it's all implemented perfectly, but it would be a quick sign that the project is miles ahead of a naive implementation, and it would give someone interested some good pointers on what parts to start reviewing.
Other responses address how you could go about this, but I'd just like to note that you touch on the core problem of security as a domain: at the end of the day, it's a problem of figuring out who to trust, how much to trust them, and when those assessments need to change.<p>To use your example: any cybersecurity firm or practitioner worth their salt should be *very* explicit about the scope of their assessment.<p>- That scope should exhaustively detail what was and wasn't tested.<p>- There should be proof of the work product, and an intelligible summary of why, how, and when an assessment was done.<p>- They should give you what you need to have confidence in *your understanding of* your security posture, as well as evidence that you *have* a security posture you can prove with facts and data.<p>Anybody who tells you not to worry and take their word for something should be viewed with extreme skepticism. It is a completely unacceptable frame of mind when you're legally and ethically responsible for things you're stewarding for other people.
That's the problem! It's really hard to find trustworthy sandboxing solutions, I've been looking for a long time. It's kind of my white whale.
As I understand it separate isolates in a single process are inherently less secure than separate processes (e.g. Chrome's site isolation) which is again less secure than virtualization based solutions.<p>As a TinyKVM / KVM Server contributor I'm obviously hopeful our approach will work out, but we still have some way to go to get to a level of polish that makes it easy to get going with and have the confidence of production level experience.<p>TinyKVM has the advantage of a much smaller surface area to secure as a KVM based solution and the ability to offer fast per-request isolation as we can reset the VM state a couple of orders of magnitude faster than v8 can create a new isolate from a snapshot.<p><a href="https://github.com/libriscv/kvmserver" rel="nofollow">https://github.com/libriscv/kvmserver</a>
I imagine you messed about with Sandstorm back in the day?
Cloudflare needs to worry about their sandbox, because they are running your code and you might be malicious. You have less reason to worry: if you want to do something malicious to the box your worker code is running on, you already have access (because you're self-hosting) and don't need a sandbox escape.
Automatically running LLM-written code (where the LLM might be naively picking a malicious library to use, is poisoned by malicious context from the internet, or wrongly thinks it should reconfigure the host system it's executing code on) is an increasingly popular use-case where sandboxing is important.
Since it’s self hosted the sandboxing aspect at the language/runtime level probably matters just a little bit less.
I think this is "sandboxed so your debugging doesn't need to consider interactions", not "sandboxed so you can run untrusted code".
Not if you're self-hosting and running your own trusted code, you don't. I care about resource isolation, not security isolation, between my own services.
Cool project, great work!<p>Forgive the uninformed questions, but given that `workerd` (<a href="https://github.com/cloudflare/workerd" rel="nofollow">https://github.com/cloudflare/workerd</a>) is "open-source" (in terms of the runtime itself, less so the deployment model), is the main distinction here that OpenWorkers provides a complete environment? Any notable differences between the respective runtimes themselves? Is the intention to ever provide a managed offering for scalability/enterprise features, or primarily focus on enabling self-hosting for DIYers?
Thanks! Main differences:
1. Complete stack: workerd is just the runtime. OpenWorkers includes the full platform – dashboard, API, scheduler, logs, and self-hostable bindings (KV, S3/R2, Postgres) – see the sketch after this list.
2. Runtime: workerd uses Cloudflare's C++ codebase, OpenWorkers is Rust + rusty_v8. Simpler, easier to hack on.
3. Managed offering: Yes, there's already one at dash.openworkers.com – free tier available. But self-hosting is a first-class citizen.
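To make point 1 concrete, consuming a KV binding from worker code looks roughly like this – a minimal sketch where `MY_KV` is a placeholder binding name and KVLike trims the interface to the two calls used:<p><pre><code>// Hypothetical hit counter against a self-hosted KV binding.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

export default {
  async fetch(_request: Request, env: { MY_KV: KVLike }): Promise<Response> {
    const hits = Number((await env.MY_KV.get("hits")) ?? "0") + 1;
    await env.MY_KV.put("hits", String(hits));
    return new Response(`hit #${hits}`);
  },
};
</code></pre>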
Amazing work!<p>I have been thinking exactly about this. CF Workers are nice but the vendor lock-in is a massive issue mid to long term.
Bringing D1 makes a lot of sense for web apps via libSQL (SQLite with read/write replicas).<p>Do you intend to work with the current wrangler file format?
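For context, this is the D1-style query surface I'd hope to see preserved on a libSQL backend – a sketch where `DB` is a hypothetical binding name and D1Like is a trimmed structural stand-in for the real D1Database type:<p><pre><code>// Sketch of the prepare -> bind -> all call chain D1 uses.
interface D1Like {
  prepare(sql: string): {
    bind(...values: unknown[]): { all(): Promise<{ results: unknown[] }> };
  };
}

export default {
  async fetch(_request: Request, env: { DB: D1Like }): Promise<Response> {
    const { results } = await env.DB
      .prepare("SELECT id, title FROM posts WHERE published = ?")
      .bind(1)
      .all();
    return Response.json(results);
  },
};
</code></pre>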
Does this currently work with Hono.js with the Cloudflare connector?
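(For reference, this is the kind of minimal Hono app I mean. It only relies on the standard Workers module entrypoint, so if OpenWorkers accepts `export default { fetch }`, a sketch like this should just work:)<p><pre><code>// Hono's app object satisfies the Workers module interface directly.
import { Hono } from "hono";

const app = new Hono();
app.get("/", (c) => c.text("Hello from Hono"));

export default app;
</code></pre>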
Anything that reduces reliance on vendor lock-in gets my upvote. Hopefully cloud services see a mass exodus so they have to have reasonable pricing that actually reflects their costs instead of charging more than free for basic services like NAT.<p>Cloud services are actually really nice and convenient if you were to ignore the eye-watering cost versus DIY.
Probably worth pointing out that the Cloudflare Workers runtime is already open source: <a href="https://github.com/cloudflare/workerd" rel="nofollow">https://github.com/cloudflare/workerd</a>
True, workerd is open source. But the bindings (KV, R2, D1, Queues, etc.) aren't – they're Cloudflare's proprietary services. OpenWorkers includes open source bindings you can self-host.
I worry that rising RAM prices will drive more people away from local hosting and toward cloud services: if the big companies are buying up all the resources, it might not be feasible to self-host in a few years.
> so they have to have reasonable pricing that actually reflects their costs instead of charging more than free for basic services like NAT<p>How is the cost of NAT free?<p>> Cloud services are actually really nice and convenient if you were to ignore the eye watering cost versus DIY.<p>I don't doubt clouds are expensive, but in many countries it'd cost more to DIY for a proper business. Running a service isn't just running the install command. Having a team to maintain and monitor services is already expensive.
Presumably they're talking about the egregious price of NAT on AWS.<p>It's next to free when self-hosting, considering even the crappiest consumer router has hardware-accelerated NAT and draws a tiny amount of power. You likely already have the hardware and the power budget, since you need routing and potentially other network services anyway.
Salesforce had their hosting bill jump orders of magnitude after ditching their colocation; it did not save anything, and colocation staff were replaced with AWS engineers.<p>NAT is nearly free to provide because the infrastructure for it is already there, and nothing ever maxes out a switch cluster (most switches sit at ~1% usage since they're overspecced $1,000,000 switches), so the only cost is host CPU time managing interrupts (which is unlikely since most network cards offload this).<p>Sure, you could argue that regional NAT should perhaps be priced, but these companies have so much fiber between their datacenters that all NAT usage is probably a rounding error.
They said “charging more than free” - i.e., more than $0, i.e., they’re not free. It was awkwardly worded.
It might be helpful to some to also lay out the things that don't work today (or, say, a roadmap of what's being worked on that doesn't currently work). Anyway, looks very cool!
Good idea! Main things not yet implemented: Durable Objects, WebSockets, HTMLRewriter, and cache API. Next priority is execution recording/replay for debugging. I'll add a roadmap section to the docs.
Self-hosted workers are becoming critical infrastructure for AI agent workloads. When you're running agents that need to interact with external services - web scraping, API orchestration, browser automation - you hit Cloudflare's execution limits fast. The 30s CPU time on the free tier and even the 15min on paid plans don't work for long-running agent tasks.<p>The isolation model here is interesting. For agents that need to handle untrusted input (processing user URLs, parsing arbitrary documents), V8 isolates give you a security boundary that's much lighter than full container isolation. But you trade off the ability to do things like spawn subprocesses or access the filesystem.<p>Curious about the persistence story. Most agent workflows need some form of state between invocations - conversation history, task progress, cached auth tokens. Is there a built-in KV store or does this expect external storage?
To the author: the ASCII-art architecture diagram is very broken, at least on my Pixel phone with Firefox.<p>These kinds of text-based diagrams are appealing to us techies, but in the end I learned that they are less practical. My suggestion is to use an image, and think of the text-based version as the "source code" which you keep, meanwhile what gets published is the output of "compiling" it into something that is for sure always viewable <i>without mistake</i> (which is where ASCII art tends to fail).
Nice project.<p>One thing Cloudflare Workers gets right is strong execution isolation.
When self-hosting, what’s the failure model if user code misbehaves?
Is there any runtime-level guardrail or tracing for side-effects?<p>Asking because execution is usually where things go sideways.
I did a huge chunk of work to split deno_core from deno a few years back and TBH I don't blame you for moving to raw rusty_v8. There was a _lot_ of legacy code in deno_core that was challenging to remove, because touching a lot of the code would constantly break random downstream tests in deno.
Thanks for that work! deno_core is a beautiful piece of work and is still an option for OpenWorkers: <a href="https://github.com/openworkers/openworkers-runtime-deno" rel="nofollow">https://github.com/openworkers/openworkers-runtime-deno</a><p>We maintained it until we introduced bindings — at that point, we wanted more fine-grained control over the runtime internals, so we moved to raw rusty_v8 to iterate faster. We'll probably circle back and add the missing pieces to the Deno runtime at some point.
What if we hosted the cloud... on our own computers?<p>I see we have entered that phase in the ebb and flow of cloud vs. self-hosting. I'm seeing lots of echoes of this everywhere, epitomised by talks like this:<p><a href="https://youtu.be/tWz4Eqh9USc" rel="nofollow">https://youtu.be/tWz4Eqh9USc</a>
It won't be a... cloud?<p>To me, the principal differentiator is the elasticity. I start and retire instances according to my needs, and only pay for the resources I've actually consumed. This is only possible on a very large shared pool of resources, where spikes of use even out somehow.<p>If I host everything myself, the cloud-like deployment tools simplify my life, but I still pay the full price for my rented / colocated server. This makes sense when my load is reasonably even and predictable. This also makes sense when it's my home NAS or media server anyway.<p>(It is similar to using a bus vs owning a van.)
> What if we hosted the cloud... on our own computers?<p>The value proposition of function-as-a-service offerings is not "cloud" buzzwords, but providing an event-handling framework where developers can focus on implementing event handlers that are triggered by specific events.<p>FaaS frameworks are the high-level counterpart of low-level message brokers plus web services/background tasks.<p>Once you include queues in the list of primitives, durable executions are another step in that direction.<p>If you have any experience developing and maintaining web services, you'll understand that API work largely consists of writing boilerplate: controller actions and background tasks. FaaS frameworks abstract away that boilerplate work.
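Concretely, the event-handler model in Workers-style FaaS looks like this (a minimal sketch; the `scheduled` hook is the cron-triggered counterpart of the HTTP handler):<p><pre><code>// You write handlers; the platform owns the server loop,
// routing, and scheduling boilerplate.
export default {
  // Runs per HTTP request.
  async fetch(request: Request): Promise<Response> {
    return new Response(`handled ${new URL(request.url).pathname}`);
  },
  // Runs on a cron schedule configured on the platform side.
  async scheduled(_event: { cron: string }): Promise<void> {
    // background/batch work goes here
  },
};
</code></pre>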
Technically, and architecturally this is excellent. It’s also an excellent product idea. And I’m particularly a fan of the big-ass-vendor-inversion-model where instead of the big ass vendor ripping off an open source project and monetizing it, you look at one of their projects and you rip it off inversely and open source it — this is the way.
Cool. I always liked CF Workers but haven't shipped anything serious with them due to not wanting vendor lock-in. This is perfect for knowing you've always got an escape hatch.
Isn't the whole point of Cloudflare's Workers to pay per invocation? If it's self-hosted, you must dedicate hardware in advance, even if it's rented in the cloud.
Good to see this! Cloudflare's cool, but those locked-in things (KV, D1, etc.) always made it hard to switch.
Offering open-source alternatives is always good, but maintaining them falls on the community. Even without super-secure multi-tenancy, being able to run the same code on your own hardware or a small VPS without changing the storage layer is a huge dev-experience boost.
Any reason to abandon Deno?<p>edit: if the idea was to have compatibility with cloudflare workers, workers can run deno <a href="https://docs.deno.com/examples/cloudflare_workers_tutorial/" rel="nofollow">https://docs.deno.com/examples/cloudflare_workers_tutorial/</a>
Deno core is great and I didn't really abandon Deno – we support 5 runtimes actually, and Deno is the second most advanced one (<a href="https://github.com/openworkers/openworkers-runtime-deno" rel="nofollow">https://github.com/openworkers/openworkers-runtime-deno</a>). It broke a few weeks ago when I added the new bindings system and I haven't had time to fix it yet. Focused on shipping bindings fast with the V8 runtime. Will get back to Deno support soon.
Cool project, but I never found the Cloudflare DX desirable compared to self-hosted alternatives. A plain old Node server in a Docker container was much easier to manage and use, and it scales. Cloudflare's system was just a hoop you needed to jump through to get to the other nice-to-haves in their cloud.
Does this actually use the cloudflare worker runtime or is this just a way to run code in v8 isolates?
It's a custom V8 runtime built with rusty_v8, not the actual Cloudflare runtime (github.com/openworkers/openworkers-runtime-v8). The goal is API compatibility – same Worker syntax (fetch handler, Request/Response, etc.) so you can migrate code easily. Under the hood it's completely independent.
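For example, a worker like this should run unchanged on either runtime (a minimal sketch using the standard module syntax):<p><pre><code>// Standard Workers module syntax; the compatibility goal is that
// this exact file deploys to Cloudflare or OpenWorkers.
export default {
  async fetch(request: Request): Promise<Response> {
    return new Response(`Hello from ${new URL(request.url).hostname}`);
  },
};
</code></pre>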
This is super nice! Thank you for working on this!<p>Recently really enjoying CloudFlare Workflows (used it in <a href="https://mafia-arena.com" rel="nofollow">https://mafia-arena.com</a>) and would be nice to build Workflows on top of this too.
This is similar to what rivet (1) does, perhaps focusing more on stateless than rivet does<p>(1) <a href="https://www.rivet.dev/docs/actors/">https://www.rivet.dev/docs/actors/</a>
I wonder why V8 is considered superior to WASM for sandboxing.
Is WASM's story for side effects solved yet? E.g. network calls seem too complicated (<a href="https://github.com/vasilev/HTTP-request-from-inside-WASM" rel="nofollow">https://github.com/vasilev/HTTP-request-from-inside-WASM</a> etc.)
On V8, you can run both JavaScript and WASM.
This is very nice! Do you plan to hook this up to GitHub, so that a push of worker code (and maybe a yaml describing the environment & resources) will result in a redeploy?
Not yet, but it's one of the next big features. I'm currently working on the CLI (WIP), and GitHub integration with auto-deploy on push will come after that. A yaml config for bindings/cron is definitely on the roadmap too.
Why would I want this over just sticking Node / Deno / Bun in a Docker container?
Could you add a kubernetes deployment quick-start? Just a simple deployment.yaml is enough.
DX?<p>I'm quite ignorant on the topic (as I never saw the appeal of Cloudflare workers, not due to technical problems but solely because of centralization) but what does DX in "goal has always been the same: run JavaScript on your own servers, with the same DX as Cloudflare Workers but without vendor lock-in." mean? Looks like a runtime or environment but looking at <a href="https://github.com/drzo/workerd" rel="nofollow">https://github.com/drzo/workerd</a> I also don't see it.<p>Anyway if the "DX" is a kind of runtime, in which actual contexts is it better than the incumbents, e.g. Node, or the newer ones e.g. Deno or Zig or even more broadly WASI?
DX means Developer Experience, they're saying it lets you use the same tooling and commands to build the workers as you would if they were on CloudFlare.
> Anyway if the "DX" is a kind of runtime, in which actual contexts is it better than the incumbents, e.g. Node, or the newer ones e.g. Deno or Zig or even more broadly WASI?<p>I'm not the blogger, I'm just a developer who works professionally with Cloudflare Workers. To me the main value proposition is avoiding vendor lock-in, and even then the logic doesn't seem to be there.<p>The main value proposition of Cloudflare Workers is being able to deploy workers at the edge and use them to implement edge use cases: custom cache logic, perhaps some authorization work, request transformation and aggregation, etc. If you remove the global edge network and cache, you do not have any compelling reason to look at this.<p>It's also perplexing how the sales pitch is Rust+WASM. This completely defeats the whole purpose of Cloudflare Workers. The whole point of using workers is to have very fast isolates handling IO-heavy workloads, where they sit idle the majority of the time so that the same isolate instance can handle a high volume of requests. WASM is notorious for eliminating the ability to yield on awaits from fetch calls, and is only compelling if your argument is a lift-and-shift use case. Which this ain't.
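To make "custom cache logic" concrete, this is the canonical edge pattern – a sketch using the Workers Cache API, which only pays off when the runtime actually sits near users:<p><pre><code>// Serve from the colo-local cache, fall back to origin, and
// repopulate in the background. The cast is needed because
// `caches.default` is a Workers-specific extension of CacheStorage.
export default {
  async fetch(
    request: Request,
    _env: unknown,
    ctx: { waitUntil(p: Promise<unknown>): void },
  ): Promise<Response> {
    const cache = (caches as unknown as { default: Cache }).default;
    let response = await cache.match(request);
    if (!response) {
      response = await fetch(request); // miss: go to origin
      ctx.waitUntil(cache.put(request, response.clone()));
    }
    return response;
  },
};
</code></pre>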
Interesting option to consider next to openfaas
<a href="https://imgflip.com/i/agah04" rel="nofollow">https://imgflip.com/i/agah04</a>