Having the ability to do real-time video generation on a single workstation GPU is mind-blowing.<p>I'm currently hosting a video generation website, also on a single GPU (with a queue), which is something I didn't think was possible even a few years ago (my Show HN from earlier today, coincidentally: <a href="https://news.ycombinator.com/item?id=46388819">https://news.ycombinator.com/item?id=46388819</a>). Interesting times.
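For anyone curious what "single GPU with a queue" can look like in practice, here's a minimal sketch (everything below is hypothetical; generate_video is a stand-in for whatever pipeline actually runs, not my site's code): one async worker drains a FIFO, so the GPU only ever handles one job at a time while the web side just polls job status.

    # Minimal single-GPU job queue sketch (hypothetical names throughout).
    import asyncio
    import uuid

    jobs: dict[str, dict] = {}            # job_id -> {"prompt", "status", "result"}
    queue: asyncio.Queue = asyncio.Queue()

    async def generate_video(prompt: str) -> str:
        # Placeholder for the real model call; the sleep stands in for GPU work.
        await asyncio.sleep(5)
        return f"/videos/{uuid.uuid4().hex}.mp4"

    async def worker() -> None:
        # Single worker = the GPU is never oversubscribed.
        while True:
            job_id = await queue.get()
            jobs[job_id]["status"] = "running"
            jobs[job_id]["result"] = await generate_video(jobs[job_id]["prompt"])
            jobs[job_id]["status"] = "done"
            queue.task_done()

    async def submit(prompt: str) -> str:
        job_id = uuid.uuid4().hex
        jobs[job_id] = {"prompt": prompt, "status": "queued", "result": None}
        await queue.put(job_id)
        return job_id

    async def main() -> None:
        worker_task = asyncio.create_task(worker())
        job_id = await submit("a cat surfing a wave at sunset")
        while jobs[job_id]["status"] != "done":
            await asyncio.sleep(1)     # the website would poll this the same way
        print(jobs[job_id]["result"])
        worker_task.cancel()

    asyncio.run(main())

A real deployment would also persist jobs and cap queue length, but the core idea is just serializing access to the one GPU.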
Looks like there is some quality reduction, but nonetheless 2s to generate a 5s video on a 5090 for WAN 2.1 is absolutely crazy. Excited to see more optimizations like this moving into 2026.
Efficient realtime video diffusion will revolutionize the way people use computers even more than LLMs have.<p>I actually think we are already there on quality, but nobody is going to wait 10 minutes for a video to do a task that takes 2 seconds with text.<p>If Sora/Kling/whatever ran cool locally 24/7 at 60 FPS, would anyone ever build a UI? Or a (traditional) OS?<p>I think it's worth watching the scaling graph.
> If Sora/Kling/whatever ran cool locally 24/7 at 60FPS, would anyone ever build a UI?<p>I like my buttons to stay where I left them.
Please no, please no<p>That will be Windows 12, and perhaps iOS two generations from now :)
That’s not the actual time if you run it; encoding and decoding are extra
This is probably the best tool for this stuff now:
<a href="https://github.com/deepbeepmeep/Wan2GP" rel="nofollow">https://github.com/deepbeepmeep/Wan2GP</a><p>It has FastWan and will probably have this soon; it's a request in multiple tickets: <a href="https://github.com/deepbeepmeep/Wan2GP/issues" rel="nofollow">https://github.com/deepbeepmeep/Wan2GP/issues</a>
Video AI acceleration is tricky: many of the acceleration LoRAs and cache-level accelerations currently in use have an impact on the generated video that is subtle at first but renders them poison for video work. The models become dumber to the point that they can't follow camera directions, character performances suffer, lip sync becomes lip flap, and body motions drop in quality and become repetitive.<p>Now, I've not tested TurboDiffusion yet, but I am very actively generating AI video; I probably produced half an hour of finished video clips yesterday. There is no test for this issue yet, and for the majority it has yet to be recognized as an issue.
Out of curiosity, what do you do with the footage? Personally, I found it fun for the occasional funny situational video or for some small background animations, but not so useful overall. I understand it's nice for things like making sketches from scripts and quick prototyping, but I am genuinely curious what the use is :)
I'm creating a college / corporate-seminar-level class with a 3D animated host and instructor. Each lesson begins with a brief animated intro, after which the student reads the lesson, and then a chatbot that understands that lesson engages with the student. The course will have about an hour of 1-2 minute videos in the end, and because the "professor" is animated it is easier than it would otherwise be to create versions for other ethnicities and other languages. And for the curious, that hour of final in-use video will be sourced from somewhere around 8 hours of generated video; these video AIs are fine and dandy for short content, but when working on longer-form media the consistency problems grow to Godzilla size and really become the biggest issue: keeping the likenesses of the characters and the environment from drifting over time.
Also interested, since that's my impression as well.
We are scarily close to realtime personalization of video, which, if you agree with this NeurIPS paper [1], may lead to someone inadvertently creating “digital heroin”<p>[1] <a href="https://neurips.cc/virtual/2025/loc/san-diego/poster/121952" rel="nofollow">https://neurips.cc/virtual/2025/loc/san-diego/poster/121952</a>
> We further urge the machine learning community to act proactively by establishing robust design guidelines, collaborating with public health experts, and supporting targeted policy measures to ensure responsible and ethical deployment<p>We’ve seen this play out before, when social media first came to prominence. I’m too old and cynical to believe anything will happen. But I really don’t know what to do about it at a personal level. Even if I refuse to engage with this content, and am able to identify it, and keep my family away from it… it feels like a critical mass of people in my community/city/country are going to be engaging with it. It feels hopeless.
I tend to think that it leads to censorship, and then to censorship at a broader level in the name of protecting our kids. See social networks, where you now have to hand over your ID card to protect kids.<p>The better approach in that case is educating kids / people, automatically flagging potentially harmful / disgusting content, and letting the owner of the device set up the level of filtering they want.<p>As with LLMs, these systems should be somewhat neutral in default mode, but they should never refuse a request if the user asks.<p>Otherwise the line between technology provider and content moderator becomes too blurry, and tomorrow SV people are going to abuse that power (or be coerced by money or politics).<p>At a person / parent level, time limits (like you can do with a web-filtering device for TikTok) and content policies would help, along with spending as much time as possible with the kids and talking to them so they don’t become dumber and dumber due to short videos.<p>But I'm totally opposed to doing this at the public policy level: “now you have the right to watch pornography, but only after you give ID to prove you are an adult” (this is already the case in France, for example).<p>It can quickly become: “now to watch / generate controversial content, you have to ID”
That doesn't work when the Chinese produce uncensored open-weight models, or ones that can easily be adapted to create uncensored content.<p>Censorship for generative AI simply doesn't work the way we are used to, unless we make it illegal to possess a model that might generate illegal content, or that might have been trained on illegal data.
> Censorship for generative AI simply doesn't work the way we are used to, unless we make it illegal to possess a model that might generate illegal content, or that might have been trained on illegal data.<p>Censorship doesn't work even for stuff that is currently illegal. See pirated movies.
You just fell for outrage bait, so I doubt you will be able to identify AI.
Potentially interesting that the authors are primarily affiliated with NatWest - a British bank. I had to Google their names to find that out, though.<p>They highlight reduced workplace productivity as a risk, among other things.
It saddens me to think that the efforts so far haven't been it. Maybe I should try my hand at "closing the loop" for image generation models.<p>Could it destroy society? Humanity has lived through a bunch of actual such substances, and always got bored of them in a matter of decades... those risk talks feel a bit overblown to me.
Infinite Jest predicted this.
Fun fact: if you say the right prayers to the Myelin Gods it will fuse straight through sage3 at D/DQ like it's seen it before, which of course it has.<p><a href="https://gist.github.com/b7r6/94f738f4e5d1a67d4632a8fbd18d3473" rel="nofollow">https://gist.github.com/b7r6/94f738f4e5d1a67d4632a8fbd18d347...</a><p>Faster than Turbo with no pre-distill.
Now if someone could release an optimization like this for the M4 Max I would be so happy. Last time I tried generating a video it was something like an hour for a 480p 5-second clip.
I mean, the baselines were deliberately worse and not how anyone (except maybe noobs) would be using these to begin with, and the quoted number is only for the DiT steps, not the other encoding and decoding steps, which are actually still quite costly. No actual use of FA4/CUTLASS-based kernels nor TRT at any point.
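To make the encode/decode point concrete, here's a toy timing breakdown (the stage functions and sleep durations below are made up for illustration, not measurements of any real pipeline): the headline figure corresponds only to the middle stage, while a user sees the sum of all three.

    # Illustrative only: why "time for the DiT steps" understates end-to-end latency.
    import time

    def timed(label, fn, *args):
        t0 = time.perf_counter()
        out = fn(*args)
        print(f"{label:>12}: {time.perf_counter() - t0:.2f}s")
        return out

    def encode_text(prompt):
        time.sleep(0.8)          # text encoder pass isn't free (placeholder duration)
        return "text_embeddings"

    def run_dit_steps(embeddings):
        time.sleep(2.0)          # the only part the headline number covers (placeholder)
        return "latent_video"

    def vae_decode(latents):
        time.sleep(1.5)          # latents -> pixels; grows with frames/resolution (placeholder)
        return "frames"

    t0 = time.perf_counter()
    emb = timed("text encode", encode_text, "a cat surfing a wave")
    lat = timed("DiT steps", run_dit_steps, emb)
    vid = timed("VAE decode", vae_decode, lat)
    print(f"{'end to end':>12}: {time.perf_counter() - t0:.2f}s")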
I want to use this on a website!