24 comments

  • vunderba 1 day ago
    I've done some preliminary testing with Z-Image Turbo in the past week.

    Thoughts:

    - It's fast (~3 seconds on my RTX 4090)
    - Surprisingly capable of maintaining image integrity even at high resolutions (1536x1024, sometimes 2048x2048)
    - Prompt adherence is impressive for a 6B parameter model

    Some tests (2 / 4 passed):

    https://imgpb.com/exMoQ

    Personally I find it works better as a refiner model downstream of Qwen-Image 20b, which has significantly better prompt understanding but an unnatural "smoothness" to its generated images.
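    (If you want to reproduce the speed claim, here is a minimal sketch of local generation in Python. It assumes the Hugging Face repo exposes a diffusers-compatible pipeline via trust_remote_code, so the exact pipeline class and accepted arguments may differ from the official quick start.)

      import torch
      from diffusers import DiffusionPipeline

      # Load the turbo checkpoint; bfloat16 keeps the 6B model well within 24 GB of VRAM.
      pipe = DiffusionPipeline.from_pretrained(
          "Tongyi-MAI/Z-Image-Turbo",
          torch_dtype=torch.bfloat16,
          trust_remote_code=True,  # assumption: the repo ships custom pipeline code
      ).to("cuda")

      # Turbo checkpoints are distilled for few-step sampling; ~9 steps is the default.
      image = pipe(
          prompt="a porcupine made of pine cones, studio lighting",
          num_inference_steps=9,
          height=1024,
          width=1024,
      ).images[0]
      image.save("z_image_test.png")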
    • nialv7 21 hours ago
      China really is keeping the open-weight/open-source AI scene alive. If a consumer GPU market still exists in five years, it will be because of them.
      • p-e-w 20 hours ago
        Pretty sure the consumer GPU market mostly exists because of games, which have nothing to do with China or AI.
        • samus 16 hours ago
          The consumer GPU market is not treated as a primary market by GPU makers anymore. Similar to how Micron went B2B-only.
          • adventured 7 hours ago
            The parent comment of course understands that. Nvidia views the gaming market as an entry threat, a vector from which a competitor can come after their AI GPU market. That's the reason Nvidia won't be looking to exit the gaming scene no matter how large their AI business gets. If done correctly, staying in the gaming GPU market helps to suppress competition.

            Exiting the consumer market is likely a mistake by Micron. If China takes that market segment, they'll eventually take the rest, eliminating most of Micron's value. Holding consumer is about keeping entry attacks covered.
            • CamperBob2 3 hours ago
              > Exiting the consumer market is likely a mistake by Micron.

              I actually think their move to shut down the Crucial channel will prove to be a good one. Why? Because we're heading toward a bimodal distribution of outcomes: either the AI bubble won't pop, and it will pay to prioritize the data center customers, or it will pop. In the latter case a consumer/business-facing RAM manufacturer will have to compete with its own surplus/unused product on scales never seen before.

              Worst case scenario for Micron/Crucial, all those warehouses full of wafers that Altman has reserved are going to end up back in the normal RAM marketplace anyway. So why not let him foot the bill for fabbing and storing them in the meantime? It seems the RAM manufacturers are just trying to make the best of a perilous situation.
              • gunalx 55 minutes ago
                But why not just keep the consumer brand until stockpiles empty, blaming supply issues until things cool down or people have forgotten the brand altogether?
    • tarruda 22 hours ago
      > It's fast (~3 seconds on my RTX 4090)

      It is amazing how far behind Apple Silicon is when it comes to running non-language models.

      Using the reference code from Z-Image on my M1 Ultra, it takes 8 seconds per step. That's over a minute for the default of 9 steps.
      • liuliu 1 hour ago
        Not saying the M1 Ultra is great. But you should only see a ~8x slowdown with a proper implementation (such as Draw Things' upcoming implementation for Z-Image). It should be 2~3 sec per step. On an M5 iPad, it is ~6s per step.
      • tails4e 10 hours ago
        I heard last year that the potential future of gaming is not rendering but fully AI-generated frames. At 3 seconds per 'frame' now, it's not hard to believe it could hit 60fps in a few short years, which makes it seem more likely such a game could exist. I'm not sure I like the idea, but it seems like it could happen.
        • snek_case 8 hours ago
          The problem is going to be how to control those models to produce a universe that's temporally and spatially consistent. Also think of other issues such as networked games: how would you even begin to approach that in this new paradigm? You need multiple models to have a shared representation that includes other players. You need to be able to sync data efficiently across the network.

          I get that it's tempting to say "we no longer have to program game engines, hurray", but at the same time, we've already done the work; we already have game engines that are relatively very computationally efficient and predictable. We understand graphics and simulation quite well.

          Personally, I think there's an obvious future in using AI tools to generate game content. 3D modelling and animation can be very time-consuming. If you could get an AI model to generate animated characters, you could save a lot of time. You could also empower a lot of indie devs who don't have 3D modelers to help them. AI tools to generate large maps are also super valuable. Replacing the game engine itself is a taller order than people realize, and maybe not actually desirable.
          • adventured 7 hours ago
            20 years out, what will everybody be using routine 10gbps pipes in our homes for?

            I'm paying $43/month for 500mbps at present, and there's nothing special about that at all (in the US or globally). What might we finally use 1gbps+ for? Pulling down massive AI-built worlds of entertainment. Movie & TV streaming sure isn't going to challenge our future bandwidth capabilities.

            The worlds are built and shared so quickly in the background that, with some slight limitations, you never notice the world building going on behind the scenes.

            The world building doesn't happen locally. Multiple players connect to the same built world that is remote. There will be smaller hobbyist segments that will still world-build locally for numerous reasons (privacy for one).

            The worlds can be constructed entirely before they're downloaded. There are good arguments for both approaches (build the entire world then allow it to be accessed, or attempt to world-build as you play). Both will likely be used over the coming decades, for different reasons and at different times (changes in capabilities will unlock new arguments for either as time goes on, with a likely back and forth where one pulls ahead, then the other).
        • wcoenen 9 hours ago
          Increasing the framerate by rendering at a lower resolution + upscaling, or outright generating extra frames, has already been a thing for a few years now. NVidia calls it Deep Learning Super Sampling (DLSS)[1]. AMD's equivalent is called FSR[2].

          [1] https://en.wikipedia.org/wiki/Deep_Learning_Super_Sampling

          [2] https://en.wikipedia.org/wiki/GPUOpen#FidelityFX_Super_Resolution
      • p-e-w 20 hours ago
        The diffusion process is usually compute-bound, while transformer inference is memory-bound.

        Apple Silicon is comparable in memory bandwidth to mid-range GPUs, but it's light years behind on compute.
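        (A back-of-envelope illustration of that distinction; the numbers below are round illustrative assumptions, not measurements.)

          # Roofline-style comparison: FLOPs of work per byte of weights moved.

          # Single-user LLM decode: every bf16 weight (2 bytes) is read once per
          # token and contributes ~2 FLOPs (one multiply, one add).
          llm_intensity = 2 / 2  # ~1 FLOP per byte -> bandwidth-bound

          # Diffusion step: each weight is reused across every latent position
          # in the image, so compute dominates memory traffic.
          positions = 64 * 64  # e.g. 1024x1024 pixels at 16x latent downscaling
          diffusion_intensity = (2 * positions) / 2  # ~4096 FLOPs per byte

          print(f"LLM decode:     ~{llm_intensity:.0f} FLOP/byte (memory-bound)")
          print(f"Diffusion step: ~{diffusion_intensity:.0f} FLOP/byte (compute-bound)")
          # A chip with decent bandwidth but weak FLOPs (Apple Silicon) therefore
          # fares far worse on diffusion than on single-user LLM decoding.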
        • tarruda 19 hours ago
          > but it's light years behind on compute.

          Is that the only factor though? I wonder if PyTorch is lacking optimization for the MPS backend.
          • rfoo 8 hours ago
            This is the only factor. People sometimes perceive Apple's NPU as "fast" and "amazing", which is simply false.

            It's just that NVIDIA GPUs suck (relatively) at *single-user* LLM inference, and that makes people feel like Apple is not so bad.
    • amrrs 1 day ago
      On fal, it often takes less than a second.

      https://fal.ai/models/fal-ai/z-image/turbo/api

      Couple that with a LoRA, and in about 3 seconds you can generate completely personalized images.

      The speed alone is a big factor, but if you put the model side by side with Seedream and Nano Banana and other models, it's definitely in the top 5, and that's a killer combo imho.
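      (A minimal sketch of calling that endpoint with fal's Python client; the response shape is an assumption based on fal's other image endpoints.)

        import fal_client  # pip install fal-client; expects FAL_KEY in the environment

        result = fal_client.subscribe(
            "fal-ai/z-image/turbo",
            arguments={"prompt": "portrait photo of a hiker at golden hour"},
        )
        # Assumption: the result contains an "images" list like fal's other image APIs.
        print(result["images"][0]["url"])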
      • venusenvy47 21 hours ago
        I don't know anything about paying for these services, and as a beginner, I worry about running up a huge bill. Do they let you set a limit on how much you pay? I see their pricing examples, but I've never tried one of these.

        https://fal.ai/pricing
        • Bombthecat 12 hours ago
          For images I like runware: https://runware.ai/ is super cheap and super fast, they also support LoRAs, and you can upload your own models.

          And you work with credits.
          • Bombthecat 3 hours ago
            Why the downvote? Are they a scam?
        • tethys 21 hours ago
          It works with prepaid credits, so there should be no risk. The minimum credit amount is $10, though.
          • vunderba 20 hours ago
            This. You can also run most (if not all) of the models that Fal.ai hosts directly from the playground tab, including Z-Image Turbo.

            https://fal.ai/models/fal-ai/z-image/turbo
    • rendaw 17 hours ago
      That's 2/4? The KitKat bars look nothing like KitKat bars for the most part (logo? splits? white cream filling?). The DNA armor is made from normal metal links.
      • vunderba 16 hours ago
        Fair. Nobody said it was going to surpass Flux.1 Dev (a 12B parameter model) or Qwen-Image (a 20B parameter model) where prompt adherence is strictly concerned.

        It's the reason I'm holding off until the Z-Image Base version is released before adding it to the official GenAI model comparisons.

        But for a 6B model that can generate an image in under 5 seconds, it punches far above its weight class.

        As for the passing images, there are white chocolate KitKats (*I know, blasphemy, right?*).
    • soontimes 20 hours ago
      If that's your website, please check the GitHub link - it has a typo (gitub) and goes to a malicious site.
      • vunderba 20 hours ago
        Thanks for the heads up. I just checked the site through several browsers and proxying through a VPN. There's no typo and it properly links to:

        https://github.com/Tongyi-MAI/Z-Image

        *Screenshot of site with network tools open to indicate the link:*

        https://imgur.com/a/FZDz0K2

        *EDIT: It's possible that this issue might have existed in an old cached version. I'll purge the cache just to make sure.*
        • rprwhite 20 hours ago
          The link with the typo is in the footer.
          • vunderba 20 hours ago
            Well holy crap - that's been there practically forever! I need a "domain name" spellchecker built into my Gulp CI/CD flow.

            EDIT: Fixed! Thanks soontimes and rprwhite!
    • echelon 1 day ago
      So does this finally replace SDXL?

      Is Flux 1/2/Kontext left in the dust by the Z-Image and Qwen combo?
      • vunderba 1 day ago
        Yeah, I've definitely switched largely away from Flux. Much as I do like Flux (for prompt adherence), BFL's baffling licensing structure along with its excessive censorship makes it a non-starter.

        For reference, the porcupine-cone creature that ZiT couldn't handle by itself in my aforementioned test was easily handled using a Qwen 20b + ZiT refiner workflow, and even with two separate models it *STILL* runs faster than Flux2 [dev].

        https://imgur.com/a/5qYP0Vc
      • mythz 18 hours ago
        SDXL has long been surpassed; its primary redeeming feature is its fine-tuned variants for different focuses and image styles.

        IMO HiDream had the best-quality OSS generations; Flux Schnell is decent as well. Will try out Z-Image soon.
      • tripplyons 1 day ago
        SDXL has been outclassed for a while, especially since Flux came out.
        • aeon_ai 1 day ago
          Subjective. Most in the creative industries still regularly use SDXL.

          Once Z-Image Base comes out and some real tuning can be done, I think it has a chance of replacing SDXL in that role.
          • CuriouslyC 16 hours ago
            I don't think that's fair. SDXL is crap at composition. It's really good with LoRAs for stylizing/inpainting, though.
          • Scrapemist 1 day ago
            Source?
            • echelon 21 hours ago
              Most of the people I know doing local AI prefer SDXL to Flux. Lots of people are still using SDXL, even today.

              Flux has largely been met with a collective yawn.

              The only things Flux had going for it were photorealism and prompt adherence. But the skin and jaws of the humans it generated looked weird, it was difficult to fine-tune, and the licensing was weird. Furthermore, Flux never had good aesthetics. It always felt plain.

              Nobody doing anime or cartoons used Flux. SDXL continues to shine here. People doing photoreal kept using Midjourney.
              • kouteiheika 17 hours ago
                > it was difficult to fine tune

                Yep. It's pretty difficult to fine-tune, mostly because it's a distilled model. You *can* fine-tune it a little bit, but it will quickly collapse and start producing garbage, even though fundamentally it should have been an *easier* architecture to fine-tune than SDXL (since it uses the much more modern flow-matching paradigm).

                I think that's probably the reason why we never really got any good anime Flux models (at least not as good as they were for SDXL). You just don't have enough leeway to train the model for long enough to make it great for a domain it's currently suboptimal for without completely collapsing it.
                • magicalhippo 11 hours ago
                  > It's pretty difficult to fine tune, mostly because it's a distilled model.

                  What about being distilled makes it harder to fine-tune?
  • danielbln 1 day ago
    We've come a long way with these image models, and the things you can do with a paltry 6B are super impressive. The community has adopted this model wholesale and left Flux(2) by the wayside. It helps that Z-Image isn't censored, whereas BFL (makers of Flux 2) dedicated something like a fifth of their press release to talking about how "safe" (read: censored and lobotomized) their model is.
    • pferdone 8 hours ago
      It's mainly due to system requirements that Flux.2-dev doesn't get the same usage as Z-Image. A 5090 needs about a minute to generate an image with a basic Flux.2-dev workflow. But its prompt adherence and scene/character consistency in edit mode are (way) ahead of Qwen-Edit-2509 if you ask me.
    • AuryGlenz 23 hours ago
      To be fair, a lot of that was about their online service and not the model itself. It can definitely generate breasts.

      That said, I do find the focus on "safety" tiring.
    • rfoo 1 day ago
      But this is a CCP model - would it refuse to generate Xi?
      • vunderba 23 hours ago
        You tell me.

        https://imgur.com/a/7FR3uT1
      • CamperBob2 21 hours ago
        It will generate anything. Xi/Pooh porn, Taylor Swift getting squashed by a tank at Tiananmen Square, whatever - no censorship at all.

        With simplistic prompts, you quickly conclude that the small model size is the only limitation. Once you realize how good it is with detailed prompts, though, you find that you can get a lot more diversity out of it than you initially thought you could.

        Absolute game-changer of a model IMO. It is competitive with Nano Banana Pro in some respects, and that's saying something.
        • cubefox 21 hours ago
          I could imagine the Chinese government is not terribly interested in enforcing its censorship laws when doing so would conflict with boosting Chinese AI. Overregulation can be a significant inhibitor of innovation and competitiveness, as we often see in Europe.
          • CamperBob2 3 hours ago
            I'm sure they're also aware that few of their own citizens are in a position to run the model themselves, and that it's easy enough to use the system prompt to censor hosted copies for domestic consumption.

            Censoring open-source models really doesn't make a lot of sense for China. Which could also be why local DeepSeek instances are relatively easy to jailbreak.
    • SV_BubbleTime 15 hours ago
      > whereas BFL (makers of Flux 2) dedicated like a fifth of their press release talking about how "safe" (read: censored and lobotomized) their model is.

      Agreed, but let's not confuse what it is. Talking about safety is just "WE WON'T EMBARRASS YOU IF YOU INVEST IN US".
    • ForOldHack 20 hours ago
      Explain "lobotomizing" an image generator? Modern problems require modern terms.
  • xnx 1 day ago
    Z-Image seems to be the first successor to Stable Diffusion 1.5 that delivers better quality, capability, and extensibility across the board in an open model that can feasibly run locally. Excitement is high and an ecosystem is forming fast.
    • SV_BubbleTime 15 hours ago
      Did you forget about SDXL?

      Clearly you have, but while on the topic, it is amazing to me that it only came out 2.5 years ago.
  • muglug 1 day ago
    The demo PDF (https://github.com/Tongyi-MAI/Z-Image/blob/main/assets/Z-Image-Gallery.pdf) has ~50 photos of attractive young women sitting/standing alone, and exactly two photos featuring attractive young men on their own.

    It's incredibly clear who the devs assume the target market is.
    • abbycurtis33 1 day ago
      They're correct. This tech, like much before it, is being driven by the base desires of extremely smart young men.
      • cma 1 day ago
        They maybe have an RLHF phase, but there is also just the shape of the distribution of images on the internet to consider - and, since this is from Alibaba, their part of the internet/social media (Weibo).
      • IncreasePosts 1 day ago
        [flagged]
        • abbycurtis33 23 hours ago
          With today's remote social validation for women and the all-time-low value of men due to lower death rates and the disconnect from where food and shelter come from, lonely men make up a huge portion of the population.
        • Manuel_D 23 hours ago
          Something like >80% of men consume sexually explicit media. It's hardly limited to involuntarily celibate men.
          • IncreasePosts 21 hours ago
            It's not about consumption, it's about having the vast majority of your demo be sexy women instead of a balance.
            • Manuel_D 19 hours ago
              I'm still not following. Ads for a pickup truck are probably more likely to feature towing a boat than ads for a hatchback, even if they're both capable of towing boats, because buyers of the former are more likely to use the vehicle for that purpose.

              If a disproportionate share of users are using image generation to generate attractive women, why is it out of place to put commensurate focus on that use case in demos and other promotional material?
              • IncreasePosts 4 hours ago
                I think you would really need to show that that's the case. I'm sure Nano Banana has a huge number of users not generating sexy women.
        • pixl97 1 day ago
          I mean, spending all that time on dates, and wives, and kids gives you much less time to build AI models.

          The people with the time and desire to do something are the ones most likely to do it; this is no brilliant observation.
          • IncreasePosts 21 hours ago
            You could say that about any field, and yet we don't see the same behavior in most other fields.

            Spending all your time on dates and wives and kids means you're not spending all your time building houses.
            • pixl97 19 hours ago
              I mean, things that take hard physical labor are typically self-limiting...

              I do nerdy computer things and I actually build things too; for example, I busted up the limestone in my backyard and put in a patio and raised garden. Working 16 hours a day coding or otherwise computering isn't that hard, even if your brain is melted at the end of the day. After 8-10 hours of physically hard labor, your body starts taking damage if you keep it up too long.

              And really, building houses is a terrible example! In the US we've been chronically behind on building millions of units of housing. People complain the processes are terribly slow and there is tons of downtime.

              So yeah, I don't think your analogy works at all.
        • decremental 23 hours ago
          [dead]
      • weregiraffe 14 hours ago
        Gooners are base all right, but smart? Seriously? They can't even use their imagination to jerk off.
    • kouteiheika 17 hours ago
      > It's incredibly clear who the devs assume the target market is.

      Not "assume". That *is* the target market. Take a look at civitai and see what kind of images people generate and what LoRAs they train (just be sure to be logged in and to disable all of the NSFW filters in the options).
      • iamflimflam1 36 minutes ago
        Yeah - that was a bit of a shock! I'll just unblur these pictures - how hardcore could they be...
    • mhb 21 hours ago
      Maybe both women and men prefer looking at attractive women.
    • killingtime74 1 day ago
      It's interesting that the handsome guy is literally Tony Leung Chiu-wai (https://www.imdb.com/name/nm0504897/), not even modified.
    • iamflimflam1 1 day ago
      The model is uncensored, so it will probably suit that target market admirably.
    • AuryGlenz 23 hours ago
      Considering how gaga r/stablediffusion is about it, they weren't wrong. Apparently Flux 2 is dead in the water, even though the knowledge contained in the model is way, way higher than Z-Image's (unsurprisingly).
      • BoorishBears 21 hours ago
        Flux 2 [dev] is awful.

        Z-Image is getting traction because it fits on their tiny GPUs and does porn, sure, but even with more compute Flux 2 [dev] has no place.

        Weak world knowledge, worse licensing, and it ruins the #1 benefit of a larger LLM backbone with post-training for JSON prompts.

        LLMs already understand JSON, so additional training for JSON feels like a cheaper way to juice prompt adherence than more robust post-training.

        And honestly, even "full fat" Flux 2 has no great spot: Nano Banana Pro is better if you need strong editing, Seedream 4.5 is better if you need strong generation.
        • GaggiX 13 hours ago
          I didn't even know Seedream 4.5 had been released - things move fast. I have used Seedream 4 a lot through their API.
    • bobsmooth 1 day ago
      The ratio of naked female LoRAs to naked male LoRAs, or even non-porn LoRAs, on civitai is at least 20 to 1. This shouldn't be surprising.
    • CGamesPlay 16 hours ago
      I get the implication, but this is also the common configuration for fashion / beauty marketing.
    • cess11 22 hours ago
      "The Internet is really, really great..."

      https://www.youtube.com/watch?v=LTJvdGcb7Fs
    • Zopieux 17 hours ago
      Don't forget the expensive sports cars.
    • nubg 19 hours ago
      [flagged]
    • thih9 1 day ago
      Please write what you mean instead of making veiled implications. What is the point of beating around the bush here?

      It's not clear to me what you mean either, especially since female models are overwhelmingly more popular in general[1].

      [1]: "Female models make up about 70% of the modeling industry workforce worldwide" https://zipdo.co/modeling-industry-statistics/
      • muglug 23 hours ago
        > Female models make up about 70% of the modeling industry workforce worldwide

        OK, so a ~2:1 ratio. Those examples have a 25:1 ratio.
        • cwillu 10 hours ago
          No prize for guessing what the output for an empty prompt is.
  • khimaros 1 day ago
    I have been testing this on my Framework Desktop. ComfyUI generally causes an amdgpu kernel fault after about 40 steps (across multiple prompts), so I spent a few hours building a workaround here: https://github.com/comfyanonymous/ComfyUI/pull/11143

    Overall it's fun and impressive, with decent results using LoRA. You can achieve good-looking results with as few as 8 inference steps, which takes 15-20 seconds on a Strix Halo. I also created a llama.cpp inference custom node for prompt enhancement, which has been helping with overall output quality.
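    (A rough sketch of that prompt-enhancement idea, assuming a local llama.cpp server (llama-server) on its default port 8080; the instruction wording here is made up.)

      import requests

      def enhance_prompt(short_prompt: str) -> str:
          # Ask a local llama.cpp server to expand a terse prompt into a
          # detailed image-generation prompt before handing it to Z-Image.
          resp = requests.post(
              "http://localhost:8080/completion",
              json={
                  "prompt": "Rewrite as a detailed image prompt: " + short_prompt,
                  "n_predict": 128,
              },
              timeout=60,
          )
          return resp.json()["content"]

      print(enhance_prompt("a fox in the snow"))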
  • nine_k 1 day ago
    It's amazing how much knowledge about the world fits into the 16 GiB of the distilled model.
    • echelon 1 day ago
      These are early days, too. We're probably going to get better at this across more domains.

      Local AI will eventually be booming. It'll be more configurable, adaptable, hackable. "Free". And private.

      Crude APIs can only get you so far.

      I'm in favor of intelligent models like Nano Banana over ComfyUI messes (the future is the model, not the node graph).

      I still think we need the ability to inject control layers and have full access to the model, because we lose too much utility by not having it.

      I think we'll eventually get Nano Banana Pro smarts slimmed down and running on a local machine.
      • bobsmooth 1 day ago
        > Local AI will eventually be booming.

        With how expensive RAM currently is, I doubt it.
        • gpm 4 hours ago
          That's a short-term effect. Long term, Wright's law will kick in and RAM will end up cheaper as a result of all the demand. It's not like we're running into a fundamental bottleneck on how much RAM we could produce, just on how much we're currently set up to produce.
        • echelon 17 hours ago
          It's temporary. Sam Altman booked all the supply for a year. Give it time to unwind.
        • api 19 hours ago
          I'm old enough to remember many memory price spikes.
          • lomase 7 hours ago
            Do you also remember when everybody was waiting for crypto to cool off to buy a GPU?
          • SV_BubbleTime 15 hours ago
            I remember saving up for my first 128MB stick, and the next week it was like triple the price.
      • bogwog 1 day ago
        [flagged]
        • echelon 21 hours ago
          Is this a joke?

          Image and video models are some of the most useful tools of the last few decades.
  • ArcaneMoose 5 hours ago
    This model is awesome. I am building an infinite CYOA game, and this was a drop-in replacement for my scene image generation. Faster, cheaper, and higher quality than what I was using before!
  • icyfox 15 hours ago
    We talked about this model in some depth on the last Pretrained episode: https://youtu.be/5weFerGhO84?si=Eh_92_9PPKyiTU_h&t=1743

    Some interesting takeaways imo:

    - Uses existing model backbones for text encoding & semantic tokens (why reinvent the wheel if you don't need to?)
    - Trains on a whole lot of synthetic captions of different lengths, ostensibly generated using some existing vision LLM
    - Solid text-generation support is facilitated by training on all OCR'd text from the ground-truth image. This seems to match how Nano Banana Pro got so good as well; I've seen its thinking tokens sketch out exactly what text to say in the image before it renders.
  • thih9 1 day ago
    As an AI outsider with a recent 24GB MacBook, can I follow the quick-start[1] steps from the repo and expect decent results? How much time would it take to generate a single medium-quality image?

    [1]: https://github.com/Tongyi-MAI/Z-Image?tab=readme-ov-file#-quick-start
    • aleyan 22 hours ago
      I have a 24GB M5 MacBook Pro. In ComfyUI, using the default z-image workflow, generating a single image just took me 399 seconds, during which the computer froze and my AirPods lost audio.

      On replicate.com a single image takes 1.5s at a price of 1000 images per $1. It would be interesting to see how quick it is on ComfyUI Cloud.

      Overall, running generative models locally on Macs seems a very poor time investment.
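      (For reference, the same diffusers-style sketch from earlier in the thread, pointed at Metal instead of CUDA. It assumes the repo's remote pipeline code runs on the MPS backend at all and supports diffusers' attention-slicing helper; the timings above suggest it runs, slowly.)

        import torch
        from diffusers import DiffusionPipeline

        pipe = DiffusionPipeline.from_pretrained(
            "Tongyi-MAI/Z-Image-Turbo",
            torch_dtype=torch.bfloat16,
            trust_remote_code=True,  # assumption: the repo ships custom pipeline code
        ).to("mps")  # Metal backend on Apple Silicon

        # Attention slicing trades some speed for lower peak memory on 24 GB machines.
        pipe.enable_attention_slicing()

        image = pipe(prompt="a lighthouse at dusk", num_inference_steps=9).images[0]
        image.save("out.png")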
    • altmanaltman 23 hours ago
      If you don't know anything about how these models are run, ComfyUI's macOS version is probably the easiest to use. There is already a Z-Image workflow that you can get, and ComfyUI will fetch all the models you need and get them working together. You can expect decent speed.
      • egeozcan 23 hours ago
        I have a 48GB M4 Pro, and every inference step takes about 10 seconds on a 1024x1024 image. So six steps and you need a minute. Not terrible, not great.
      • thih9 23 hours ago
        I'm fine with the quick-start steps, and I prefer CLI to GUI anyway. But if I try it and find it too complex, I now know what to try instead - thanks.

        I'm still curious whether this would run on a MacBook and how long it would take to generate an image. What machine are you using?
    • Eisenstein 19 hours ago
      Try koboldcpp with the kcppt config file. The easiest way by far.

      Download the release here:

      * https://github.com/LostRuins/koboldcpp/releases/tag/v1.103

      Download the config file here:

      * https://huggingface.co/koboldcpp/kcppt/resolve/main/z-image-turbo.kcppt

      Set +x on the koboldcpp executable and launch it, select 'Load config' and point it at the config file, then hit 'Launch'.

      Wait until the model weights are downloaded and loaded, then open a browser and go to:

      * http://localhost:5001/sdui

      EDIT: This works on Linux, Windows, and Mac.
  • xfalcox 1 day ago
    We have vLLM for running text LLMs in production. What is the equivalent for this model?
    • mh- 1 day ago
      I would say there isn't an equivalent. Some people will probably tell you ComfyUI - you can expose workflows via API endpoints and parameterize them. This is how, e.g., Krita AI Diffusion uses a ComfyUI backend.

      For various reasons, I doubt there are any large-scale SaaS-style providers operating this in production today.
      • salty_frog 8 hours ago
        I'm intrigued - what are the various reasons you think there aren't any large-scale SaaS providers operating this in production?
        • threeebo 1 hour ago
          I don't believe there is a viable use case for large-scale AI-generated images as there is for text... except for porn, but many orgs with SaaS capabilities wouldn't touch that.
  • zkmon 1 day ago
    Just want to learn - who actually needs or buys generated images?
    • wongarsu 1 day ago
      I follow an author who publishes online in places like Scribblehub and has a modestly successful Patreon. Over the years he has spent probably tens of thousands of dollars on commissioned art for his stories, and he's still spending heavily on that. But as image models have gotten better, this has increasingly been supplemented with AI images for things that are worth a couple of dollars to get right with AI, but not a couple hundred to have a human artist do them.

      Roughly speaking, the art seems to have three main functions:

      1. Promote the story to outsiders: this only works with human-made art.

      2. Enhance the story for existing readers: AI helps here, but it's contentious.

      3. Motivate and inspire the author: works great with AI. The ease of exploration and the pseudo-random permutations in the results are very useful properties here that you don't get from regular art.

      By now the author even has an agreement with an artist he frequently commissions: he can use the artist's style in AI art in return for a small "royalty" payment for every such image that gets published in one of his stories. A solution driven both by the author's conscience and by the demands of the readers.
    • nine_k 1 day ago
      Some ideas for your consideration:

      - Illustrating blog posts, articles, etc.

      - A creativity tool for kids (and adults; consider memes).

      - Generating ads. (Consider artisan production and specialized venues.)

      - Generating assets for games and the like, such as backdrops and textures.

      Like any tool, it takes a certain skill to use, and the ability to understand the results.
      • Zopieux 17 hours ago
        > A creativity tool for kids (and adults; consider memes).

        Fixed that for you: (and adults; consider porn).

        I don't think you realize the extent of the "underground" NSFW genAI community, which *has* to rely on open-weight models since API models all have prude filters.
      • zkmon 1 day ago
        Except for gaming, that doesn't sound like a market big enough to be worth pouring millions into training these high-quality models. And there is a lot of competition too. I suspect there are some other deep-pocketed customers for these images. Animations? Movies? TV ads?
        • nine_k 20 hours ago
          I'd say the picture-ad market alone would suffice.

          OTOH, these are open-weight models released to the public. We don't get to use more advanced models for free; the free models are likely a byproduct of producing more advanced models anyway. These models can be the freemium tier, or gateway drugs, or a way of torpedoing the competition, if you don't want to believe in the goodwill of their producers.
        • pixl97 1 day ago
          Propaganda?
    • leobg 1 day ago
      Dying businesses like newspapers and local banks, who use it to save the money they used to spend on Shutterstock images? That's where I've seen it, at least. Replacing one useless filler with another.
    • Youden 23 hours ago
      During the holiday season I've been noticing AI-generated assets in tons of meatspace ads and on cheap themed products.
    • lomase 7 hours ago
      Scammers do.
  • GuestFAUniverse 15 hours ago
    All the examples I tried were garbage. They looked decent -- no horrors -- but didn't do the job.

    Anything involving "most cultures" came out as manga-influenced comic strips with kanji. Useless.
    • GaggiX 4 hours ago
      > manga-influenced comic strips with kanji. Useless.

      Are you sure it was Japanese? The model is Chinese, so it's likely to output Chinese (that happened in my testing).
      • GuestFAUniverse 3 hours ago
        Honestly, I don't know if it was (Simplified) Chinese or Japanese kanji (that is, symbols derived from Chinese).

        And it isn't even relevant: "most cultures" cannot read any of it. So what's the nitpicking about?
        • GaggiX 2 hours ago
          > So what's the nitpicking about?

          Idk, I just thought it was funny to read an ignorant comment calling the Chinese model useless because it rendered Chinese text, while calling that text Japanese. The model is trained to render English or Chinese text.
  • thot_experiment 19 hours ago
    I've messed with this a bit, and the distill is incredibly overbaked. Curious to see the capabilities of the full model, but I suspect even the base model is quite collapsed.
  • Copenjin 1 day ago
    Very good. Not always perfect with text or with following the prompt exactly, but at 6B... impressive.
    • accrual 18 hours ago
      I have had good textual results with the Turbo version so far. Sometimes it drops a letter in the output, but most of the time it adheres well to both the text requested and the style.

      I tried this prompt on my username: "A painted UFO abducts the graffiti text 'Accrual' painted on the side of a rusty bridge."

      Results: https://imgur.com/a/z-image-test-hL1ACLd
  • Tepix 12 hours ago
    I'm wondering: is it faster or slower when spread across two GPUs (RTX 3090)?
  • reactordev 22 hours ago
    My issue with this model is that it keeps producing Chinese people and Chinese text. I have to very specifically go out of my way to say what race they are.

    If I say "a man", it's fine. A black man, no problem. It's when I add context and instructions that it just seems to want to go with some Chinese man. Which is fine, but I would like to see it trained on a wider variety of people to create more diverse images. For non-people it's amazingly good.
    • orbital-decay 20 hours ago
      All modern models have their default looks. Meaningful variety of outputs for the same inputs in fine-tuned models is still an open technical problem. It's not impossible, but not solved either.
    • SV_BubbleTime 15 hours ago
      I'm not sure how this is anything but a plus.

      It means the model respects nationality choices; if you don't mention one, that's your bad prompting, not a failure of the model for not defaulting to the nationality you would prefer.
  • bilsbie 22 hours ago
    What kind of rig is required to run this?
    • b0ner_t0ner 16 hours ago
      A CPU can be used:

      https://github.com/rupeshs/fastsdcpu/pull/346
    • CamperBob2 21 hours ago
      The simple Python example program runs great on almost any GPU with 8 GB or more of memory. It takes about 1.5 seconds per iteration on a 4090.

      The bang:buck ratio of Z-Image Turbo is just bonkers.
  • phantomathkg 16 hours ago
    Unfortunately, another China-censored model. Simply ask it to generate "Tank Man" or "Lady Liberty Hong Kong" and the model returns a blackboard with text saying "Maybe Not Safe".
    • user34283 8 hours ago
      This is an issue with your provider. You need to download the model.

      It generates an image of a tank and the Statue of Liberty for those prompts.
  • idontwantthis 1 day ago
    Does it run on Apple Silicon?
    • sheepscreek 1 day ago
      Apparently: https://github.com/ivanfioravanti/z-image-mps

      It supports MPS (Metal Performance Shaders). Using something that skips Python entirely, along with an MLX- or GGUF-converted model file (if one exists), will likely be even faster.
      • opensandwich 20 hours ago
        (Not tested) though apparently it already exists: https://github.com/leejet/stable-diffusion.cpp/wiki/How-to-Use-Z%E2%80%90Image-on-a-GPU-with-Only-4GB-VRAM
    • iamflimflam1 1 day ago
      It's working for me - it does max out my 64GB though.
      • sheepscreek 1 day ago
        Wow. I always forget how much heavier on resources diffusion models are than autoregressive models (for the same number of parameters).
  • pawelduda 1 day ago
    Did anyone test it on a 5090? I saw some 30xx reports and it seemed very fast.
    • egeres 21 hours ago
      Incredibly fast. On my 5090 with CUDA 13 (and the latest diffusers, xformers, transformers, etc.), 9 sampling steps, and the "Tongyi-MAI/Z-Image-Turbo" model, I get:

      - 1.5s to generate an image at 512x512
      - 3.5s to generate an image at 1024x1024
      - ~26s to generate an image at 2048x2048

      It uses almost all of the 32 GB of VRAM, with near-full GPU usage. I'm using the script from the HF post: https://huggingface.co/Tongyi-MAI/Z-Image-Turbo
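      (The timing loop behind numbers like these is roughly the following sketch; pipe is a loaded pipeline as in the HF script, and the synchronize calls matter because CUDA kernel launches are asynchronous.)

        import time
        import torch

        def bench(pipe, size: int, steps: int = 9, warmup: int = 1, runs: int = 3) -> float:
            prompt = "a red bicycle leaning against a brick wall"
            # Warmup runs absorb one-time compilation and allocator costs.
            for _ in range(warmup):
                pipe(prompt=prompt, num_inference_steps=steps, height=size, width=size)
            torch.cuda.synchronize()
            t0 = time.perf_counter()
            for _ in range(runs):
                pipe(prompt=prompt, num_inference_steps=steps, height=size, width=size)
            torch.cuda.synchronize()
            return (time.perf_counter() - t0) / runs

        for size in (512, 1024, 2048):
            print(f"{size}x{size}: {bench(pipe, size):.1f}s per image")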
      • SV_BubbleTime 15 hours ago
        Weird - even at 2048 I don't think it should be using all your 32GB of VRAM.
        • egeres 9 hours ago
          It stays around 26 GB at 512x512. I still haven't profiled the execution or looked much into the details of the architecture, but I would assume it trades off memory for speed by creating caches for each inference step.
          • SV_BubbleTime 4 hours ago
            IDK, seems odd. It's an 11GB model; I don't know what it could be caching in RAM.
    • Wowfunhappy 1 day ago
      Even on my 4080 it's extremely fast; it takes ~15 seconds per image.
      • accrual 20 hours ago
        Did you use PyTorch Native or Diffusers Inference? I couldn't get the former working yet, so I used Diffusers, but it's terribly slow on my 4080 (4 min/image). Trying again with PyTorch now; it seems Diffusers is expected to be slow.
        • Wowfunhappy 20 hours ago
          Uh, not sure? I downloaded the portable build of ComfyUI and ran the CUDA-specific batch file it comes with.

          (I'm not used to using Windows and I don't know how to do anything complicated on that OS. Unfortunately, the computer with the big GPU also runs Windows.)
          • accrual 19 hours ago
            Haha, I know how it goes. Thanks, I'll give that a try!

            Update: works great and much faster via ComfyUI + the provided workflow file.
  • cubefox 22 hours ago
    I'm particularly impressed by the fact that they seem to aim for photorealism rather than the semi-realistic AI look that is common in many text-to-image models.
    • CamperBob2 21 hours ago
      Exactly. And at the same time, if you *want* an affected style, all you have to do is ask for it.
  • ForOldHack 20 hours ago
    It would be more useful to have some standards for what one could expect in terms of hardware requirements and expected performance.
  • BoredPositron 1 day ago
    I wish they had used the WAN VAE.
  • gatane 18 hours ago
    Dude, please give money to artists instead of using genAI.