This vulnerability is basically the worst-case version of what people have been warning about since RSC/server actions were introduced.<p>The server was deserializing untrusted input from the client directly into module+export name lookups, and then invoking whatever the client asked for (without verifying that metadata.name was an own property).<p><pre><code> return moduleExports[metadata.name]
</code></pre>
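A minimal sketch of why that lookup is dangerous (names are illustrative, not the actual React internals): a plain bracket lookup also resolves keys inherited from Object.prototype, so a client-supplied name is not limited to the module's real exports.

```javascript
// Sketch only, not React's code: an unguarded property lookup on an
// exports object also resolves inherited keys from Object.prototype.
const moduleExports = { myAction: () => "intended" };

function lookup(name) {
  return moduleExports[name]; // the pre-patch pattern
}

console.log(typeof lookup("myAction"));       // "function" (a real export)
console.log(typeof lookup("constructor"));    // "function" (inherited!)
console.log(typeof lookup("hasOwnProperty")); // "function" (inherited!)
```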
We can patch hasOwnProperty and tighten the deserializer, but there is a deeper issue. React never really acknowledged that it was building an RPC layer. If you look at actual RPC frameworks like gRPC or even old-school SOAP, they all start with schemas, explicit service definitions, and a bunch of tooling to prevent boundary confusion. React went the opposite way: the API surface is whatever your bundler can see, and the endpoint is whatever the client asks for.<p>My guess is this won't be the last time we see security fallout from that design choice. Not because React is sloppy, but because it’s trying to solve a problem category that traditionally requires explicitness, not magic.
To me it just looks like unacceptable carelessness, not an indictment of the alleged "lack of explicitness" versus something like gRPC. Explicit schemas aren't going to help you if you're so careless that, right at the last moment, you allow untrusted user input to reference <i>anything whatsoever</i> in the server's name space.
But once that particular design decision is made, it is only a question of time before that happens. The one enables the other.<p>The fact that React embodies an RPC scheme in disguise is quite obvious if you look at the kind of functionality that is implemented; some of it simply cannot be done any other way. But then you should own that decision and add all of the safeguards that such a mechanism requires; you can't bolt those on after the fact.
All mistakes can be blamed on "carelessness". This doesn't change the fact that some designs are more error-prone and more unsafe.
The endpoint is not whatever the client asks for. It's marked specifically as exposed to the user with "use server". Of course the people who designed this recognize that this is designing an RPC system.<p>A similar bug could be introduced in the implementation of other RPC systems too. It's not entirely specific to this design.<p>(I contribute to React but not really on RSC.)
”use server” is not required for this vulnerability to be exploitable.
so any package could declare some modules as “use server” and they’d be callable, whether the RSC server owner wanted them to or not? That seems less than ideal.
They were warned. I don't see how this can be characterized as anything but sloppy.
You can call anything, anytime, anywhere without restrictions or protection.<p>Imagine these dozens of people, working at Meta.<p>They sit at the table, they agree to call eval() and not think "what could go wrong"
For the layperson, does this mean this approach and everything that doesn't use it is not secure?<p>Building a private, out of date repo doesn't seem great either.
From Facebook/Meta: <a href="https://www.facebook.com/security/advisories/cve-2025-55182" rel="nofollow">https://www.facebook.com/security/advisories/cve-2025-55182</a><p>> A pre-authentication remote code execution vulnerability exists in React Server Components versions 19.0.0, 19.1.0, 19.1.1, and 19.2.0 including the following packages: react-server-dom-parcel, react-server-dom-turbopack, and react-server-dom-webpack. The vulnerable code unsafely deserializes payloads from HTTP requests to Server Function endpoints.<p>React's own words: <a href="https://react.dev/blog/2025/12/03/critical-security-vulnerability-in-react-server-components" rel="nofollow">https://react.dev/blog/2025/12/03/critical-security-vulnerab...</a><p>> React Server Functions allow a client to call a function on a server. React provides integration points and tools that frameworks and bundlers use to help React code run on both the client and the server. React translates requests on the client into HTTP requests which are forwarded to a server. On the server, React translates the HTTP request into a function call and returns the needed data to the client.<p>> An unauthenticated attacker could craft a malicious HTTP request to any Server Function endpoint that, when deserialized by React, achieves remote code execution on the server. Further details of the vulnerability will be provided after the rollout of the fix is complete.
Given that the fix appears to be to look for own properties, the attack was likely to reference prototype-level module properties or the gift that keeps on giving that is __proto__.
I see this type of vulnerability all the time. I've seen it in Java, Lua, JavaScript, Python, and so on.<p>I think deserialization that relies on blacklists of properties is a dangerous game.<p>I think rolling your own object deserialization in a library that isn’t fully dedicated to deserialization is about as dangerous as writing your own encryption code.
not `__proto__` but likely `constructor`, if you access `({}).constructor` you'd get the Object constructor, then if you access `.constructor` on that you'd get the Function constructor<p>the one problem I haven't understood is how it manages to perform a second call afterwards, as only being able to call Function constructor doesn't really amount to much (still a serious problem though)
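The chain described above can be seen in a few lines (a sketch of the gadget, not the actual exploit payload):

```javascript
// From any plain object you can reach the Function constructor via
// inherited "constructor" properties, without ever naming __proto__.
const obj = {};
const ObjectCtor = obj["constructor"];          // the Object constructor
const FunctionCtor = ObjectCtor["constructor"]; // the Function constructor

console.log(ObjectCtor === Object);     // true
console.log(FunctionCtor === Function); // true

// The Function constructor compiles arbitrary source into a callable:
const fn = FunctionCtor("return 6 * 7");
console.log(fn()); // 42
```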
This comment from a dupe thread is worth considering: <a href="https://news.ycombinator.com/item?id=46137352">https://news.ycombinator.com/item?id=46137352</a>
"React Server Functions allow a client to call a function on a server"<p>Intentionally? That's a scary feature
used to wire up form submission in a type-safe way, so that part makes sense at least<p>whatever monstrosity hides underneath these nice high-level TypeScript frameworks to make all of it happen in JS, usually that's the worrying part
Why does the React development team keep investing their time in confusing features that only reinvent the wheel and cause more problems than they solve?<p>What do Server Components do so much better than SSR? What minute performance gain is achieved over client-side rendering?<p>Why won’t they invest more in the developer experience, which took a nosedive when hooks were introduced? They finally added a compiler, but instead of going the Svelte route of handling the entire state, it only adds memoization?<p>If I could send a direct message to the React team, it would be to abandon all their current plans and work on allowing users to write native JS control flows in their component logic.<p>Sorry for the rant.
Server Components is not really related to SSR.<p>I like to think of Server Components as a componentized BFF ("backend for frontend") layer. Each piece of UI has some associated "API" with it (whether REST endpoints, GraphQL, RPC, or what have you). Server Components let you express the dependency between the "backend piece" and the "frontend piece" as an import, instead of as a `fetch` (client calling server) or a <script> (server calling client). You can still have an API layer of course, but this gives you a syntactical way to express that there's a piece of backend that prepares data for this piece of frontend.<p>This resolves tensions between evolving both sides: each piece of backend always prepares the exact data the corresponding piece of frontend needs because they're literally bound by a function call (or rather JSX). This also lets you load data as granularly as you want without blocking (very nice when you have a low-latency data layer).<p>Of course you can still have a traditional REST API if you want. But you can also have UI-specific server computation in the middle. There's inherent tension between the data needed to display the UI (a view model) and the way the data is stored (database model); RSC gives you a place to put UI-specific logic that should execute on the backend while keeping the composability benefits of components.
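A rough sketch of that idea in plain JavaScript (all names here are hypothetical, and this is not the React API; it only illustrates "bound by a function call instead of a fetch"):

```javascript
// Hypothetical backend piece: prepares exactly the view model
// its corresponding frontend piece needs.
async function loadNoteViewModel(id) {
  // stand-in for a low-latency data layer / database call
  const row = { id, title: "Note " + id, body: "hello world" };
  return { title: row.title, preview: row.body.slice(0, 50) };
}

// Hypothetical "server component": bound to the data code by a direct
// call, so there is no REST contract to keep in sync by hand.
async function NoteCard(id) {
  const note = await loadNoteViewModel(id);
  return { tag: "article", children: [note.title, note.preview] };
}

NoteCard(1).then((view) => console.log(view.children[0])); // "Note 1"
```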
Thanks for the comment Dan, I always appreciate you commenting and explaining with civility, and I’m sorry if I came off a bit harsh.<p>I understand the logic, but there are several issues I can think of.<p>1 - as I said, SSR and API layers are good enough, so investing heavily in RSC when the hooks development experience is still so lacking seems weird to me. React always hailed itself as the “just JS” framework, but you can’t actually write regular JS in components since hooks have so many rules that bind the developer to a very specific way of writing code.<p>2 - as React was always celebrated as an unopinionated framework, RSC creates a deep coupling between 2 layers which were classically very far apart.<p>Here is a list of things I would rather have React provide:<p>- advanced form functionality that binds to a model and supports validation<p>- i18n; Angular has translations compiled into the application, so fetching a massive JSON with translations is not needed<p>- signals, for proper reactive state<p>- better templating ability for control flows<p>- a native animation library<p>All of these things are important so I wouldn’t have to learn each new project’s permutation of the libraries du jour.
I wish React wasn’t the “default” framework.<p>I agree that the developer experience provided by the compiler model used in Svelte and React is much nicer to work with
React is good enough, so it's very hard to come up with a strong case to use anything else.
IMO angular provides such a great experience developing.
They had minimal API changes in the last 10 years, and every project looks almost the same since it’s so opinionated.<p>And what they DO add? Only things that improve dev exp
> They had minimal API changes in the last 10 years<p>The 1 to 2 transition was one hell of a burn though; people are probably still smarting...
It was one hell of a ride, but I would say the Angular team did one hell of a job too, supporting the glue code until v18 (not sure if the latest version still does).<p>Having both old and new Angular running in one project is super weird, but everything worked out in the end.
Well, the official statement is that 1 and 2 are 2 different frameworks. That’s why they were later named to angular JS and angular, to avoid confusion.<p>The migration path between angular 1 and 2 is the same as react and angular, it’s just glue holding 2 frameworks together<p>And that change happened 10 years ago
> That’s why they were later named to angular JS and angular, to avoid confusion.<p>Angular.js and angular. That's not confusing at all :-)
Easy migration was promised but never delivered. Angular 2 was still full of boilerplate. “Migrating” an AngularJS project to Angular 2 is as much work as porting it to React or anything else.<p>So yes, people got burnt (when we were told that there will be a migration path), and I will never rely on another Google-backed UI framework.
You aren’t wrong. I basically stopped using any OSS code backed by Google as a result.<p>I’d pushed Angular over React[0] for a massive project, and it worked well, but the migration to Angular 2 when it came created a huge amount of non-value-adding work.<p>Never again.<p>I don’t even really want to build anything against Gemini, despite how good it is, because I don’t trust Google not to do another rug pull.<p><i>[0] I’ve never enjoyed JSX/TSX syntax, nor appreciated the mix of markup with code, but I’ve subsequently learned to live with it.</i>
I'll second that Angular provides a great experience these days, but they have definitely had substantial API changes within the last few years: standalone components, swapping WebPack for esbuild, the new control-flow syntax, the new unit-test runner, etc...
I tried it once, and it was like, you have to edit 5 files to add 1 button.
I agree. Incoming hot take.<p>IMO, a big part of it is the lack of competition (in approach) exacerbated by the inability to provide alternatives due to technical/syntactical limitations of JavaScript itself.<p>Vue, Svelte, Angular, Ripple - anything other than React-y JSX based frameworks require custom compilers, custom file-types and custom LSPs/extensions to work with.<p>React/JSX frameworks have preferential treatment with pre-processors essentially baking in a crude compile time macro for JSX transformations.<p>Rust solved this by having a macro system that facilitated language expansion without external pre-processors - e.g. Yew and Leptos implement Vue-like and React-like patterns, including support for JSX and HTML templating natively inside standard .rs files, with standard testing tools and standard LSP support;<p><a href="https://github.com/leptos-rs/leptos/blob/main/examples/counter/src/lib.rs#L15" rel="nofollow">https://github.com/leptos-rs/leptos/blob/main/examples/count...</a><p><a href="https://github.com/yewstack/yew/blob/master/examples/counter/src/main.rs#L39" rel="nofollow">https://github.com/yewstack/yew/blob/master/examples/counter...</a><p>So either the ECMAScript folks figure out a way to have standardized runtime & compilable userland language extensions (e.g. macros) or WASM paves the way for languages better suited to the task to take over.<p>Neither of these cases are likely, however, so the web world is likely destined to remain unergonomic, overly complex and slow - at least for the next 5 - 10 years.
OK I got my own extremely hot take.<p>In my opinion, the core functionality of React (view rendering) is actually good and is why it cannot be unseated.<p>I remember looking for a DOM library:<p>- dojo: not for me<p>- prototype.js: not for me<p>- MooTools: not for me<p>- jQuery: something I liked finally<p>Well, guess what library won. After I adopted jQuery, I completely stopped looking for other DOM libraries.<p>But I still needed a template rendering library:<p>- Mustache.js: not for me<p>- Handlebars.js: not for me<p>- Embedded JavaScript Templates: not for me<p>- XML with XSLT: not for me<p>- AngularJS: really disliked it SOO much*<p>- Knockout.js: not for me<p>- Backbone.js with template engine: not for me and actually it was getting popular and I really wished it would just go away at the time**<p>- React: something I actually liked<p>You must remember that when React came out, you needed a JSX transpiler too, at a time when few people even used transpilers. This was a far bigger obstacle than these days IMO.<p>Which leads to my hot take: core React is just really good. I really like writing core React/JSX code and I think most people do too. If someone wrote a better React, I don’t think the problem you mentioned would hamper adoption.<p>The problems come when you leave React’s core competency. Its state management has never been great. Although not a React project itself, I hated Redux (from just reading its docs). I think RSC at the current moment is a disaster — so many pain points.<p>I think that’s where we are going to see the next innovation. I don’t think anyone is going to unseat React or JSX itself for rendering templates. 
No one unseated jQuery for DOM manipulation — rather we just moved entirely away from DOM manipulation.<p>*I spent 30 minutes learning AngularJS and then decided “I’m never going to want to see this library again.” Lo and behold they abandoned their entire approach and rewrote Angular for v2 so I guess I was right.<p>**It went away and thankfully I avoided having to ever learn Backbone.js.
Does transpilation not cover this? That's how they did JSX.
Transpilation of anything other than jsx requires a complex toolchain with layers of things like LSPs, compilers, IDE plugins, bundler plugins, etc.<p>Frameworks that go that route typically activate this toolchain by defining a dedicated file extension (.vue, .svelte).<p>This custom toolchain (LSP, IDE plugins) presents a lot of overhead to project maintainers and makes it difficult to actually create a viable alternative to the JSX based ecosystem.<p>For instance both Vue and Svelte took years to support TypeScript, and their integrations were brittle and often incompatible with test tooling.<p>Angular used decorators in a very similar way to what I am describing here. It's a source code annotation in "valid" ecmascript that is compiled away by their custom compiler. Though decorators are now abandoned and Angular still requires a lot of custom tooling to work (e.g, try to build an Angular project with a custom rspack configuration).<p>JSX/TSX has preferential treatment in this regard as it's a macro that's built into tsc - no other framework has this advantage.
Because Facebook has a budget for R&D, which works out to several salaries, and React is one of the biggest technical assets they have, so it's someone's full-time job to develop features and new versions of React to increase the moat and stock value of Meta.<p>It works out because it keeps a workforce of React developers on their feet, learning about the new features, rather than doing other stuff. It's like SaaS for developers, only instead of paying a monthly subscription in cash, you pay a monthly subscription in man-hours.
I like the hooks :(
> What does server components do so much better than SSR? What minute performance gain is achieved more than client side rendering?<p>RSC is their solution to not being able to figure out how to make SSR faster and an attempt to reduce client-side bloat (which also failed)
They are taking care of the customers. The customers are front-end devs with little experience in servers, back-ends, and networking. So they want to run some code that changes state without having to deal with all of that infra and complexity, preferably while remaining in the "React state". That is the attraction of Next.js and RSC.
I couldn't agree more. I'll probably switch from React to something like ArrowJS in my personal work:<p><a href="https://www.arrow-js.com/docs/" rel="nofollow">https://www.arrow-js.com/docs/</a><p>It makes it easy to have a central JSON-like state object representing what's on the page, then have components watch that for changes and re-render. That avoids the opaqueness of Redux and promise chains, which can be difficult to examine and debug (unless we add browser extensions for that stuff, which feels like a code smell).<p>I've also heard good things about Astro, which can wrap components written in other frameworks (like React) so that a total rewrite can be avoided:<p><a href="https://docs.astro.build/en/guides/imports/" rel="nofollow">https://docs.astro.build/en/guides/imports/</a><p>I'm way outside my wheelhouse on this as a backend developer, so if anyone knows the actual names of the frameworks I'm trying to remember (hah), please let us know.<p>IMHO React creates far more problems than it solves:<p><pre><code> - Virtual DOM: just use Facebook's vast budget to fix the browser's DOM so it renders 1000 fps using the GPU, memoization, caching, etc and then add the HTML parsing cruft over that
- Redux: doesn't actually solve state transfer between backend and frontend like, say, Firebase
- JSX: do we really need this when Javascript has template literals now?
- Routing: so much work to make permalinks when file-based URLs already worked fine 30 years ago and the browser was the V in MVC
- Components: steep learning curve (but why?) and they didn't even bother to implement hooks for class components, instead putting that work onto users, and don't tell us that's hard when packages like react-universal-hooks and react-hookable-component do it
- Endless browser console warnings about render changing state and other errata: just design a unidirectional data flow that detects infinite loops so that this scenario isn't possible
</code></pre>
I'll just stop there. The more I learn about React, the less I like it. That's one of the primary ways that I know that there's no there there when learning new tools. I also had the same experience with the magic convention over configuration in Ruby.<p>What's really going on here, and what I would like to work on if I ever win the internet lottery (unlikely now with the arrival of AI since app sales will soon plummet along with website traffic) is a distributed logic flow. In other words, a framework where developers write a single thread of execution that doesn't care if it's running on backend or frontend, that handles all state synchronization, preferably favoring a deterministic fork/join runtime like Go over async behavior with promise chains. It would work a bit like a conflict-free replicated data type (CRDT) or software transactional memory (STM) but with full atomicity/consistency/isolation/durability (ACID) compliance. So we could finally get back to writing what looks like backend code in Node.js, PHP/Laravel, whatever, but have it run in the browser too so that users can lose their internet connection and merge conflicts "just work" when they go back online.<p>Somewhat ironically, I thought that was how Node.js worked before I learned it, where maybe we could wrap portions of the code to have @backend {} or @frontend {} annotations that told it where to run. I never dreamed that it would go through so much handwaving to even allow module imports in the browser!<p>But instead, it seems that framework maintainers that reached any level of success just pulled up the ladder behind them, doing little or nothing to advance the status quo. Never donating to groups working from first principles. Never rocking the boat by criticizing established norms. 
Just joining all of the other yes men to spread that gospel of "I've got mine" to the highest financial and political levels.<p>So much of this feels like having to send developers to the end of the earth to cater to the runtime that I question if it's even programming anymore. It would be like having people write the low-level RTF codewords in MS word rather than just typing documents via WYSIWYG. We seem to have all lost our collective minds ..the emperor has no clothes.
For a single page of HTML, ArrowJS's site loads really slow. I sat for almost a full second on just the header showing.
> I also had the same experience with the magic convention over configuration in Ruby.<p>I'm not sure what this is a reference to? Is it actually about Rails?
I suspect the commit to fix is:<p><a href="https://github.com/facebook/react/commit/bbed0b0ee64b89353a40d6313037bbc80221bc3d" rel="nofollow">https://github.com/facebook/react/commit/bbed0b0ee64b89353a4...</a><p>and it looks like it's been squashed with some other stuff to hide it, or maybe there are other problems as well.<p>This pattern appears 4 times and looks like it is reducing the functions that are exposed to the 'whitelist'. I presume the modules have dangerous functions in the prototype chain and clients were able to invoke them.<p><pre><code> - return moduleExports[metadata.name];
+ if (hasOwnProperty.call(moduleExports, metadata.name)) {
+ return moduleExports[metadata.name];
+ }
+ return (undefined: any);</code></pre>
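In isolation, the guarded pattern from the diff looks roughly like this (a sketch, not the actual React source):

```javascript
// Only own properties of the exports object are reachable; inherited
// keys such as "constructor" or "__proto__" resolve to undefined.
const hasOwnProperty = Object.prototype.hasOwnProperty;

function safeLookup(moduleExports, name) {
  if (hasOwnProperty.call(moduleExports, name)) {
    return moduleExports[name];
  }
  return undefined;
}

const exportsObj = { sendMessage: () => "ok" };
console.log(typeof safeLookup(exportsObj, "sendMessage")); // "function"
console.log(safeLookup(exportsObj, "constructor"));        // undefined
console.log(safeLookup(exportsObj, "__proto__"));          // undefined
```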
> Projects hosted on Vercel benefit from platform-level protections that already block malicious request patterns associated with this issue.<p><a href="https://vercel.com/changelog/cve-2025-55182" rel="nofollow">https://vercel.com/changelog/cve-2025-55182</a><p>> Cloudflare WAF proactively protects against React vulnerability<p><a href="https://blog.cloudflare.com/waf-rules-react-vulnerability/" rel="nofollow">https://blog.cloudflare.com/waf-rules-react-vulnerability/</a>
We collaborated with many industry partners to proactively deploy mitigations due to the severity of the issue.<p>We still strongly recommend that everyone upgrade their Next, React, and other React meta-framework (peer) dependencies immediately.
Same for Netlify: <a href="https://www.netlify.com/changelog/2025-12-03-react-security-vulnerability-response/" rel="nofollow">https://www.netlify.com/changelog/2025-12-03-react-security-...</a><p>and Deno Deploy/Subhosting: <a href="https://deno.com/blog/react-server-functions-rce" rel="nofollow">https://deno.com/blog/react-server-functions-rce</a>
I patched and rebuilt what I could and added custom Crowdsec WAF rules for this, in case I missed something.
I fumbled around a bit and got it working, but I'm not entirely sure if this is how it really works: have a look at <a href="https://github.com/ejpir/CVE-2025-55182-poc" rel="nofollow">https://github.com/ejpir/CVE-2025-55182-poc</a>
I ran your exploit-rce-v4.js with and without the patched react-server-dom-webpack, and both of them executed the RCE.<p>So I don't think this mechanism is exactly correct, can you demo it with an actual nextjs project, instead of your mock server?
I've updated the code; try it now with server-realistic.js:<p>1. npm start
2. npm run exploit
I'm trying that, nextjs is a little different because it uses a Proxy object before it passes through, which blocks the rce.<p>I'm debugging it currently, maybe I'm not on the right path after all.
Your lump of AI-generated slop has detracted from the response to an important vulnerability. Congratulations. Your PoC is invalid and you should delete it.
Thanks for the writeup, it's incredible!
there can be no React RCE. if it is on the frontend, it is a browser RCE. if it is on the backend, then, as in this case it is a Next.js RCE.
CVE 10.0 is bonkers for a project this widely used
The packages affected, like [1], literally say:<p>> <i>Experimental React Flight bindings for DOM using Webpack.</i><p>> <i>Use it at your own risk.</i><p>311,955 weekly downloads though :-|<p>[1]: <a href="https://www.npmjs.com/package/react-server-dom-webpack" rel="nofollow">https://www.npmjs.com/package/react-server-dom-webpack</a>
That number is misleadingly low, because it doesn't include Next.js which bundles the dependency. Almost all usage in the wild will be Next.js, plus a few using the experimental React Router support.
As far as I'm aware, transitive dependencies are counted in this number. So when you npm install next.js, the download count for everything in its dependency tree gets incremented.<p>Beyond that, I think there is good reason to believe that the number is inflated due to automated downloads from things like CI pipelines, where hundreds or thousands of downloads might only represent a single instance in the wild.
It's not a transitive dependency, it's just literally bundled into nextjs, I'm guessing to avoid issues with fragile builds.
why is it not normal for CI pipelines to cache these things? its a huge waste of compute and network.
These often do get cached at CDNs inside of the consuming data centers. Even the ISP will cache these kind of things too.
It's certainly not uncommon to cache deps in CI. But at least at some point CircleCI was so slow at saving+restoring cache that it was actually faster to just download all the deps. Generally speaking for small/medium projects installing all deps is very fast and bandwidth is basically free, so it's natural many projects don't cache any of it.
The subjects of these types of posts should report the CVSS severity as 10.0 so the PR speak can't simply deflect from what needs to be done.
React is widely used, react server components not so much.
You can't have Vercel without RCE.
More detail in the React Blog post here <a href="https://react.dev/blog/2025/12/03/critical-security-vulnerability-in-react-server-components" rel="nofollow">https://react.dev/blog/2025/12/03/critical-security-vulnerab...</a>
Next.js/RSC has become the new PHP :)<p>I guess now we'll see more bots scanning websites for "/_next" path rather than "/wp-content".
JavaScript is meant to be run in a browser. Not on a backend server [1].<p>Those who are choosing JS for the backend are irresponsible stewards of their customers' data.<p>1- <a href="https://ashishb.net/tech/javascript/" rel="nofollow">https://ashishb.net/tech/javascript/</a>
Link should go to: <a href="https://react.dev/blog/2025/12/03/critical-security-vulnerability-in-react-server-components" rel="nofollow">https://react.dev/blog/2025/12/03/critical-security-vulnerab...</a>
To this day, I don't know the substantial benefits of React Server Components over, say, classically rendered HTML pages + htmx.<p>Mind you, React in 2017 paid my rent. Now, because of the complexity, I refuse to work with React.
>now cz of the complexity I refuse to work with react.<p>What do you like to work with now?
They give you optionality over when and where you want your code to run. Plus it enables you to define the server/client network boundary where you see fit and cross that boundary seamlessly.<p>It's totally fine to say you don't understand <i>why</i> they have benefits, but it really irks me when people exclaim they have no value or exist just for complexity's sake. There's no system for web development that provides <i>the developer</i> with more grounded flexibility than RSCs. I wrote a blog post about this[0].<p>To answer your question, htmx solves this by leaning on the server immensely. It doesn't provide a complete client-side framework <i>when you need it</i>. RSCs allow both the server and the client to co-exist, simply composing between the two while maintaining the full power of each.<p>[0] <a href="https://saewitz.com/server-components-give-you-optionality" rel="nofollow">https://saewitz.com/server-components-give-you-optionality</a>
But is it a good idea to make it seamless when every crossing of the boundary has significant implications for security and performance? Maybe the seam should be made as simple and clear as possible instead.
Just because something is made possible and you can do it doesn't mean you should!<p>The criticism is that by allowing you to do something you shouldn't, there isn't any benefit to be had, even if that system allows you to do something you couldn't before.
You can optionally enhance it and use React on the client. Doing that with HTMX is doable with "islands" but a bit more of a pain in the ass - and you'll struggle hard if you attempt to share client state across pages. Actually there are just a lot of little gotchas with the htmx approach<p>I mean it's a lot of complexity but ideally you shouldn't bring it in unless you actually need it. These solutions do solve real problems. The only issue is people try to use it everywhere. I don't use RSC, standard SPAs are fine for my projects and simpler
easier/more reactivity, doesnt require your api responses to be text parsable to html
React lets you inflate your salary.
Next is only good for its static build; once it drops support for that, I'm out.
Anyone know how Tanstack Start isn't affected?
TanStack Start has its own implementation of Server Functions: <a href="https://tanstack.com/start/latest/docs/framework/solid/guide/server-functions" rel="nofollow">https://tanstack.com/start/latest/docs/framework/solid/guide...</a>. It doesn't use React Server Functions, in part because it intends to be agnostic of the rendering framework (it currently supports React and Solid).<p>To be fair, they also haven't released (even experimental) RSC support yet, so maybe they lucked out on timing here.
They haven't implemented RSC yet.
This is genuinely embarrassing for the Next.js and React teams. They were warned for years that their approach to server-client communication had risks, derided and ignored everyone who didn't provide unconditional praise, and now this.<p>I think their time as Javascript thought leaders is past due.
It's almost like trying to magically wire up your frontend to the backend through magical functions is a bad idea.
This reminds me of the recent SvelteKit <i>Remote Functions</i> GH discussion:<p>> Even in systems that prevent server functions from being declared in client code (such as "use server" in React Server Components), experienced developers can be caught out. We prefer a design that emphasises the public nature of remote functions rather than the fact that they run on the server, and avoids any confusion around lexical scope. [0]<p>[0] <a href="https://github.com/sveltejs/kit/discussions/13897" rel="nofollow">https://github.com/sveltejs/kit/discussions/13897</a>
One could get the impression that the only really really important non-functional requirement for such a thing is to absolutely ensure that you can only call the "good" functions with the "good" payload.
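A minimal sketch of what that requirement looks like in practice, with invented action names: the server keeps an explicit registry of callable functions and refuses anything that isn't an own property of that registry, rather than resolving whatever name the client supplies.

```javascript
// Hypothetical sketch: an explicit allowlist of server actions,
// instead of resolving arbitrary module/export names from the request.
// The action names and handlers here are made up for illustration.
const actions = {
  addToCart: (userId, itemId) => ({ ok: true, userId, itemId }),
  checkout: (userId) => ({ ok: true, userId }),
};

function invoke(name, args) {
  // Object.hasOwn rejects prototype-chain lookups like "constructor",
  // which a bare property access (actions[name]) would happily resolve.
  if (!Object.hasOwn(actions, name)) {
    throw new Error(`Unknown action: ${name}`);
  }
  return actions[name](...args);
}
```

This is the gRPC/SOAP posture in miniature: the set of callable things is declared up front, and the deserializer's job is only to pick from that set, never to reach into a namespace.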
ikr, no way this could have been predicted and warned about for months and months before now.
CV driven development needs new ideas for resume padding regardless of whether the idea is good or bad. Then you get this
Look at the money they’ve made to see if it was a bad idea or not.
I am betting it will be exploited in the wild in the next few days. Buckle up!
I'm not a JavaScript person, so I was trying to understand this. If I get it right, this is basically a way to avoid writing backend APIs and manually calling them with fetch or axios as someone traditionally would do. The closest comparison my basic Java backend brain can make is dynamically generating APIs at runtime using reflection, which is something I would never do... I'm lazy but not dumb
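To make the reflection analogy concrete: the flaw boiled down to the server resolving a client-supplied export name with a plain property lookup, so names inherited from the prototype chain resolved too. A simplified sketch of that pattern (the export and function names here are invented; this is not the actual React code):

```javascript
// Illustrative sketch (simplified; not the actual React/Next.js code).
// The dangerous pattern: a dynamic lookup keyed by client-supplied data.
const moduleExports = {
  updateProfile: (name) => `updated ${name}`,
};

// metadata.name comes from the client. A plain property access also walks
// the prototype chain, so "constructor" resolves to Object's constructor --
// a function the developer never exported.
function resolveUnsafe(metadata) {
  return moduleExports[metadata.name];
}
```

In Java reflection terms, this is roughly `Class.getMethod(request.getParameter("name"))` followed by `invoke` - the endpoint is whatever the client can name.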
It's an RPC. They're half a century old. Java had RMI within a year of existence. [0]<p>> In remote procedure call systems, client-side stub code must be generated and linked into a client before a remote procedure call can be done. This code may be either statically linked into the client or linked in at run-time via dynamic linking with libraries available locally or over a network file system. In either the case of static or dynamic linking, the specific code to handle an RPC must be available to the client machine in compiled form... Dynamic stub loading is used only when code for a needed stub is not already available. The argument and return types specified in the remote interfaces are made available using the same mechanism. Loading arbitrary classes into clients or servers presents a potential security problem;<p><a href="https://pdos.csail.mit.edu/archive/6.824-2009/papers/waldo-rmi.pdf" rel="nofollow">https://pdos.csail.mit.edu/archive/6.824-2009/papers/waldo-r...</a>
There is a certain category of developers (a category that multiplied in size many times over around the same time as the boom in coding bootcamps, take that for what you will) who believe that there's virtue in running the same code on the client and the server, despite them being totally different paradigms with different needs. This kind of thing is the predictable result.
"You can run JavaScript on the frontend <i>and</i> the backend!" always struck me as the weakest marketing ever. I've been around the block, and which language the web application uses is hardly any sort of limiting factor in ease of development. (And ideally, your frontend has as little JavaScript as possible anyway.) There is very little that can't be programmed in a more web-friendly way, like POSTing forms and rendering HTML templates. Sure, I guess Google Maps can just be a fat application, but like... every eCommerce site doesn't need to be some big ball of React mud, I promise.
I think the main problem was that 'the standard' wasn't evolving fast enough to solve the bulk of the issues around input validation and UI building, so we got this crap language that is powerful enough to hide a thousand footguns in a few lines of code. It will never be perfect, so it will generate this kind of issue for the next century or so.<p>If instead we had gradually expanded the HTML standard without adding a fully functional programming language into the content stream, we would have had better consistency between websites and applications, and we would have treated the browser like what it is: a content delivery mechanism, not an application programming platform. That's the core mistake as far as I'm concerned.
To be fair to bootcamp developers, I don't think they ever did "believe that there's virtue" in the setup; they were just told this is what you use and how you use it.
It's just the latest take on what we had 20 years ago with .NET's WebForms and Java's JSF. Both of which tried to hide the network separation between client and server and were not fun to work with.<p>Those who don't learn history are bound to repeat it, and all that.
Not even remotely similar.
Do you really need React Server Components or even Server Side Rendering?
Before SSR (unless you were using PHP, I guess) you had to ship a shell of a site, with all the conditionals decided only AFTER the browser had pulled down all the HTML + JS. If you need to make any API calls, you've delayed rendering by hundreds of milliseconds or worse (round trip to your server)<p>With SSR, those round trips to the server can be down to single-digit milliseconds, assuming your frontend server is in the same datacenter as your backend. Plus you send HTML that has actual content to be rendered right away.<p>A truly functional pageload can go from seconds to milliseconds, and you're transferring less data over the wire. Better all around, at the expense of running a React server instead of a static file host.
It's very use-case dependent.<p>SSR can be a game-changer in domains like e-commerce, but completely useless for other use cases.<p>RSC's advantages are a bit more complex to explain, because even a simple portfolio website would benefit from it. Contrary to the common belief created by long-term ReactJS devs, RSC simplifies a lot of the logic. Adapting existing code to RSC can be quite a mess, though, and RSC is a big change of mindset for anybody used to ReactJS.
Yes. Web applications were impossible before these libraries.
If you truly believe that, then we must really be moving backwards
No, they were not. They required a lot more round-trips to the server though, and rendering the results was a lot harder. But if you think of a browser as an intelligent terminal, there is no reason why you couldn't run the application server side and display the UI locally; that's just a matter of defining some extra primitives. Graphical terminals date back to the '60s or so.
Of course you do; in certain cases making fewer round-trips to the server is just straight more efficient
I'm a big fan of React, but all the server stuff was a cold, hard mistake. It's only a matter of time before the (entire) React team realises it, assuming their Next.js overlords permit it.
static builds save the day.
The CVE says that the flaw is in React Server Components, which strongly implies that this is an RCE on the backend (!!), not the client.
dupe: <a href="https://news.ycombinator.com/item?id=46136067">https://news.ycombinator.com/item?id=46136067</a>
AHAHAHAHAHA, I'm sorry but we all knew this would happen.<p>I'm just laughing because I called it when they were in the "random idea x posts" about use server.<p>They'll fix it, but this was what we were warning about.<p>edit: downvote if you want, but I'm sorry React thinking they could shoehorn "use server" in and not create huge vulnerabilities was a pipe dream at best. I vote gross negligence because EVERYONE knew this was going to happen.