I have been using Lemonade for nearly a year. On Strix Halo I use nothing else, although kyuz0's toolboxes are also nice (<a href="https://kyuz0.github.io/amd-strix-halo-toolboxes/" rel="nofollow">https://kyuz0.github.io/amd-strix-halo-toolboxes/</a>)<p>Nowadays you get TTS, STT, text and image generation, and image editing should also be possible. Besides that, it can run via ROCm, Vulkan, or on CPU, GPU, and NPU. Quite a lot of options. Development moves at a good, pragmatic pace. Really recommend this for AMD hardware!<p>Edit: The OpenAI-compatible (and, I think, nowadays Ollama-compatible) endpoints let me use it in VSCode Copilot as well as e.g. Open Web UI. More options are shown in their docs.
How much of a speedup might I get for, say, Qwen3.5-122B, if I were to run it with Lemonade on my Strix Halo vs. running it with llama.cpp on Vulkan?
You would get similar performance. Lemonade is designed as a turnkey solution (optimized for AMD hardware) for local AI models. It helps you manage backends (llama.cpp, FLM, whisper.cpp, stable‑diffusion.cpp, etc.) for different GenAI modalities from a single utility.<p>On the performance side, Lemonade comes bundled with ROCm and Vulkan. These are sourced from <a href="https://github.com/lemonade-sdk/llamacpp-rocm" rel="nofollow">https://github.com/lemonade-sdk/llamacpp-rocm</a> and <a href="https://github.com/ggml-org/llama.cpp/releases" rel="nofollow">https://github.com/ggml-org/llama.cpp/releases</a> respectively.
Have you used it with any agents or claw? If so, which model do you run?
I have two Strix Halo devices at hand: privately a Framework Desktop with 128GB, and at work a 64GB HP notebook. The 64GB machine can load Qwen3.5 30B-A3B; with VSCode it needs a bit of initial prompt processing to initialize all those tools, I guess. But the model fights with the other resources that I need, so I am not really using it much these days. I want to experiment with it on my home machine, I just don't work on it much right now.<p>Lemonade has a Web UI to set the context size and llama.cpp args. You need to set the context to a proper number, or just to 0 so that it uses the default. If it's too low, it won't work with agentic coding.<p>I will try some Claw app, but first I need to research the field a bit. But I am using different models on Open Web UI. GPT 120B is fast, but Qwen3.5 27B is also fine.
As another data point.<p>Running Qwen3.5 122B at 35t/s as a daily driver using Vulkan llama.cpp on kernel 7.0.0rc5 on a Framework Desktop board (Strix Halo 128).<p>Also a pair of AMD AI Pro R9700 cards as my workhorses for zimageturbo, Qwen TTS/ASR, and other accessory functions and experiments.<p>Finally, I have a Radeon 6900 XT running Qwen3.5 32B at 60+t/s as a fast all-arounder.<p>If I buy anything Nvidia it will be only for compatibility testing. AMD hardware is 100% the best option now for cost, freedom, and security for home users.
Been running local LLMs on my 7900 XTX for months and the ROCm experience has been... rough. The fact that AMD is backing an official inference server that handles the driver/dependency maze is huge. My biggest question is NPU support - has anyone actually gotten meaningful throughput from the Ryzen AI NPU vs just using the dGPU? In my testing the NPU was mostly a bottleneck for anything beyond tiny models.
Aren't NPUs only designed to run small models? From what I've seen, most NPUs don't have the architecture to share workloads with a GPU or CPU any better than a GPU or CPU can share workloads with each other. (One exception being NPU instructions that are executed by the CPU, e.g. RISC-V cores with IME instructions being called NPUs, which speed up operations already happening on the CPU.)<p>You can share workloads between a GPU, CPU, and NPU, but it needs to be proportionally parceled out ahead of time; it's not the kind of thing that's easy to automate. Also, the GPU is generally orders of magnitude faster than the CPU or NPU, so the gains would be minimal, or completely nullified by the overhead of moving data around.<p>The largest advantage of splitting workloads is often taking advantage of dedicated RAM: e.g. stable diffusion workloads on a system with low VRAM but plenty of system RAM may move the latent image from VRAM to system RAM and perform VAE there, instead of on the GPU. With unified memory, that isn't needed.
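The "proportionally parceled out ahead of time" part can be sketched in a few lines: split transformer layers across devices by relative throughput. A toy sketch (device names and throughput numbers are made up):

```python
def split_layers(n_layers, throughputs):
    """Assign each device a share of layers proportional to its throughput."""
    total = sum(throughputs.values())
    shares = {dev: int(n_layers * t / total) for dev, t in throughputs.items()}
    # Hand rounding leftovers to the fastest device.
    leftover = n_layers - sum(shares.values())
    fastest = max(throughputs, key=throughputs.get)
    shares[fastest] += leftover
    return shares

# A GPU an order of magnitude faster than the CPU and NPU gets nearly everything:
print(split_layers(32, {"gpu": 100.0, "cpu": 5.0, "npu": 8.0}))
```

When one device dominates the throughput it ends up with nearly all the layers, which is exactly why the gains from splitting are usually minimal.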
> Been running local LLMs on my 7900 XTX for months and the ROCm experience has been... rough.<p>Just out of curiosity... how so?<p>I only ask because I've been running local models (using Ollama) on my RX 7900 XTX for the last year and a half or so and haven't had a single problem that was ROCm specific that I can think of. Actually, I've barely had any problems at all, other than the card being limited to 24GB of VRAM. :-(<p>I'm halfway tempted to splurge on a Radeon Pro board to get more VRAM, but ... haven't bitten the bullet yet.
Did you have complete hardware lockups when VRAM is exceeded? I had quite a few on my 7900XTX with llama.cpp (Arch Linux, various driver versions). Once I dialed in a quant and context size that never exceed VRAM, it's been stable; before that I swore a lot and kept pressing the hardware reset button.
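Dialing that in by trial and error is the painful part; a rough pre-flight estimate helps. This sketch only counts quantized weights plus an fp16 KV cache and ignores GQA, activations, and runtime overhead, so its numbers are optimistic:

```python
def fits_in_vram(params_b, bits_per_weight, n_layers, d_model, n_ctx, vram_gb):
    """Rough check: quantized weights + fp16 KV cache vs. available VRAM."""
    weight_bytes = params_b * 1e9 * bits_per_weight / 8
    # KV cache: K and V, one d_model-wide fp16 vector per layer per context slot.
    kv_cache_bytes = 2 * n_layers * n_ctx * d_model * 2
    return weight_bytes + kv_cache_bytes < vram_gb * 1024**3

# A 7B model at ~4.5 bpw with 4k context fits easily in 24 GB;
# a 32B model at the same quant with 8k context (no GQA) does not:
print(fits_in_vram(7, 4.5, 32, 4096, 4096, 24))   # True
print(fits_in_vram(32, 4.5, 64, 5120, 8192, 24))  # False
```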
Yes, it completely crashes the machine. I didn't even think it was unexpected until I read your comment. I guess this is what I've come to expect when using anything except Firefox or Neovim.
Nope. I've exceeded available VRAM a few times, and never had to do anything other than maybe restart Ollama. To be fair though, that's "exceed available VRAM" in terms of the initial model load (eg, using a model that would never load in 24GB). I don't know that I've ever started working with a successfully loaded model and then pushed past available VRAM by pushing stuff into the context.<p>I've had a few of those "model psychosis" incidents where the context gets so big that the model just loses all coherence and starts spewing gibberish though. Those are always fun.
I have had way better perf with Vulkan than ROCm on kernel 7.0.0. They made some major improvements: 20%+ speedups for me.
The NPU is more for power efficiency when on battery. I don't think it's a replacement for the GPU.
Is... is this named because they have a lemon they're trying to make the most of?
I think saying "L-L-M" sounds kind of like "lemon," so this is an LLM-aid (sounds like lemonade).
Wonder why they didn't call it LLMonade, which would be unique.
so obvious and yet I didn't connect the dots. thank you
If life keeps giving them lemons, they should instead invent a combustible lemon.
I exclusively buy AMD hardware for local inference. For open drivers, power efficiency, and cost AMD beats Nvidia easily for consumers.
You have got to be joking.<p>My three NVIDIA cards are more power efficient than my one AMD card, both at idle and during usage.<p>Official ROCm is like pulling teeth, with poor support for desktop cards. Debian, a volunteer-led project, has better ROCm CI than AMD and supports more cards.<p>Look at any benchmarks. NV midrange cards are faster than AMD and at least a generation in front. Owning a 7900XTX is an embarrassing disappointment.<p>I like AMD and want them to succeed, but they are way behind NV in this area.
> Official ROCm is like pulling teeth with poor support for desktop cards...<p>I agree with most of your post and fled the AMD ecosystem some time ago because of the machine learning situation, but their problem seemed to be more the firmware bugs and memory management of compute shaders than the higher level libraries.<p>The obvious solution to this one would be not to use ROCm. ROCm has always been a bit of a train wreck for small users and it doesn't seem to do anything special anyway. The way forward would be something more like Vulkan which the server that today's link points to seems to be using. The existence of a badly managed software package doesn't really imply that users have to use it, they can use an alternative.<p>It would be nice if AMD sorts themselves out though. The NVidia driver situation on linux is painful and if AMD can reliably run LLMs without the hardware locking then I'd much rather move back to using their products.
Yes, AMD themselves even use Vulkan token-generation (tg) numbers in their marketing material, because it's faster than ROCm on everything from RDNA2 onwards (which seems a bit embarrassing).<p>However, for prompt processing (pp), Vulkan is still nowhere near ROCm. That matters for long context and/or quick responses. A lot of people really care about that time-to-first-token.
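The time-to-first-token arithmetic is simple: TTFT is roughly prompt length divided by pp speed. A toy illustration (the tok/s figures are made up for the example, not benchmarks):

```python
def ttft_seconds(prompt_tokens, pp_tok_s):
    """Time to first token is roughly prompt length over prefill speed."""
    return prompt_tokens / pp_tok_s

# A 16k-token agentic-coding prompt: 4x the pp speed is the difference
# between waiting ~80s and ~20s before anything appears.
print(ttft_seconds(16384, 200))  # slower prefill
print(ttft_seconds(16384, 800))  # faster prefill
```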
Have a Strix Halo 128 running Qwen 3.5 122b at 35t/s using Vulkan and kernel 7.0.0 on a 400w PSU. Pretty hard to beat for the price and power consumption IMO. But to be fair I compile everything myself so proprietary drivers required by nvidia are a non starter for me.
Any recommendations in the current market? Love how plug-and-play AMD is on Linux from the driver side of things.
Lemonsqueeze was considered too violent
Feels like this is sitting somewhere between Ollama and something like LM Studio, but with a stronger focus on being a unified “runtime” rather than just model serving.<p>The interesting part to me isn’t just local inference, but how much orchestration it’s trying to handle (text, image, audio, etc). That’s usually where things get messy when running models locally.<p>Curious how much of this is actually abstraction vs just bundling multiple tools together. Also wondering if the AMD/NPU optimizations end up making it less portable compared to something like Ollama in practice.
It bundles tools, model selection, and overall management.<p>It’s portable in the sense that it will install on any supported OS using the CPU or Vulkan backends, but it only supports out-of-the-box ROCm builds and AMD NPUs. There is a way to override which llama.cpp version it uses if you want to run it on CUDA, but that adds more overhead to manage.<p>If you have an AMD machine and want to run local models with minimal headache…it’s really the easiest method.<p>This runs on my NAS and handles my home assistant setup.<p>I have a Strix Halo, and another server running various CUDA cards that I manage manually by updating to bleeding-edge versions of llama.cpp or vllm.
Note that the NPU models/kernels this uses are proprietary and not available as open source. It would be nice to develop more open support for this hardware.
Are they? The docs say "You can also register any Hugging Face model into your Lemonade Server with the advanced pull command options"
That won't give you NPU support, which relies on <a href="https://github.com/FastFlowLM/FastFlowLM" rel="nofollow">https://github.com/FastFlowLM/FastFlowLM</a> . And that says "NPU-accelerated kernels are proprietary binaries", not open source.
I bought one of their machines to play around with under the expectation that I may never be able to use the NPU for models. But I am still angry to read this anyway.
AMD/Xilinx's software support for the NPU is fully open, it's only FFLM's models that are proprietary. See <a href="https://github.com/amd/iron" rel="nofollow">https://github.com/amd/iron</a> <a href="https://github.com/Xilinx/mlir-aie" rel="nofollow">https://github.com/Xilinx/mlir-aie</a> <a href="https://github.com/amd/RyzenAI-SW/" rel="nofollow">https://github.com/amd/RyzenAI-SW/</a> . It would be nice to explore whether one can simply develop kernels for these NPU's using Vulkan Compute and drive them that way; that would provide the closest unification with the existing cross-platform support for GPU's.
The multi-modal bundling is the part that stands out more than the raw inference speed. If you are building an app that needs text generation, image generation, and speech recognition, right now the local setup is three separate services with three different APIs and three different model management stories. Having one server handle all of that behind OpenAI-compatible endpoints is a real quality of life improvement for anyone prototyping locally.<p>The NPU angle is interesting but probably overstated for most use cases. The discussion in the thread confirms what I would expect: NPUs shine for small always-on models and prefill offloading, not for the chatbot workloads most people care about.<p>Where this gets genuinely compelling is if AMD can make the combined GPU plus NPU scheduling transparent enough that developers do not need to think about which hardware is running which part of the pipeline. That is not a solved problem on any platform yet, and if Lemonade gets it right for even a subset of workloads, it becomes the default choice on AMD hardware regardless of how it benchmarks against Ollama on pure text generation.
Surprising that the Linux setup instructions for the server component don't include Docker/Podman as an option; it's Snap/PPA for Ubuntu and RPM for Fedora.<p>Maybe the assumption is that container-oriented users can build their own if given native packages?
I’ve read the website and the news announcement, and I still don’t understand what it is. An alternative to LM Studio? Does it support MLX or metal on Macs? I’m assuming it will optimize things for AMD, but are you at a disadvantage using other GPUs?
>Does it support MLX or metal on Macs?<p>This is answered from their Project Roadmap over on Github[0]:<p>Recently Completed: macOS (beta)<p>Under Development: MLX support<p>[0] <a href="https://github.com/lemonade-sdk/lemonade?tab=readme-ov-file#project-roadmap" rel="nofollow">https://github.com/lemonade-sdk/lemonade?tab=readme-ov-file#...</a>
It’s an easy way to get started and maintain a local AI stack that concentrates on AMD optimization. It is a one stop install for endpoints for sst, tts, image generation, and normal LLM. It has its own webui for management and interacting with the endpoints.<p>It also has endpoints that are compatible with OpenAI, Ollama, and Anthropic so you can throw any tool that is compatible with those and it will just run.
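For example, any OpenAI-style client code should work when pointed at the local server. A minimal stdlib-only sketch; the port, path, and model name here are assumptions, so check your own install's settings:

```python
import json
import urllib.request

# Hypothetical values; the actual port, path, and model name depend on your setup.
BASE_URL = "http://localhost:8000/api/v1"

def build_chat_request(model, prompt):
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def chat(model, prompt):
    """POST the payload to the local server and return the reply text."""
    data = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Requires the server to be running:
    # print(chat("Qwen3-0.6B-GGUF", "Say hello"))
    print(build_chat_request("Qwen3-0.6B-GGUF", "Say hello"))
```

Because the wire format is the standard chat-completions shape, the same snippet works against any of the compatible servers by swapping BASE_URL.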
It's an alternative to LM Studio in that it's an abstraction over multiple runtimes. The AMD part is that it supports the FastFlowLM runtime, which is the only way to utilize the NPU on Ryzen AI CPUs on Linux.
I think LM Studio itself uses other software to actually run LLMs. If that other software does not support your NPU, then you are not going to get much performance out of it. This Lemonade thing, I am guessing, is one such piece of software that LM Studio could be using.
Been running Lemonade for some time on my Strix Halo box. It dispatches out to other backends that they include, like diffusion and llama. I actually don't like their combined server, and what I use instead is their llama.cpp build for ROCm.<p><a href="https://github.com/lemonade-sdk/llamacpp-rocm" rel="nofollow">https://github.com/lemonade-sdk/llamacpp-rocm</a><p>But I'm not doing anything with images or audio. I get about 50 tokens a second with GPT OSS 120B. As others have pointed out, the NPU is used for low-powered, small models that are "always on", so it's not a huge win for the standard chatbot use case.
Even small NPUs can offload some compute from prefill which can be quite expensive with longer contexts. It's less clear whether they can help directly during decode; that depends on whether they can access memory with good throughput and do dequant+compute internally, like GPUs can. Apple Neural Engine only does INT8 or FP16 MADD ops, so that mostly doesn't help.
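The memory-throughput caveat can be made concrete with a roofline-style bound: during decode every weight is read once per token, so memory bandwidth divided by model size caps tokens/s regardless of compute. A sketch with illustrative numbers (not measurements):

```python
def decode_tps_bound(params_b, bits_per_weight, mem_bw_gb_s):
    """Upper bound on decode tokens/s: every weight is read once per token."""
    bytes_per_token = params_b * 1e9 * bits_per_weight / 8
    return mem_bw_gb_s * 1e9 / bytes_per_token

# ~30B active parameters at ~4.5 bits/weight on ~256 GB/s unified memory
# caps out around 15 tok/s, no matter how much compute is attached:
print(decode_tps_bound(30, 4.5, 256))
```

This is why dequant-on-device matters: if the NPU can only consume FP16, the effective bytes per token double compared to a backend that dequantizes internally.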
Anyone compare to ollama? I had good success with latest ollama with ROCm 7.4 on 9070 XT a few days ago
It is optimized for compatibility across different APIs and has specific hardware builds for AMD GPUs and NPUs. It’s run by AMD.<p>Under the hood they are both running llama.cpp, but this has specific builds for different GPUs. Not sure if the 9070 is one; I am running it on a 370 and a 395 APU.
I just compared this on my MacBook M1 Max (64GB RAM) with the following:<p>Model: qwen3.5 9b<p>Prompt: "Hey, tell me a story about going to space"<p>Ollama completed in about 1:44; Lemonade completed in about 1:14. So it seems faster in this very limited test.
I'm also curious about this one, also I want to compare this to vLLM.
Seconded. Currently on ollama for local inference, but I am curious how it compares.
Lemonade uses llama.cpp for text and vision with a nightly ROCm build. It can also load and serve multiple LLMs at the same time. It can also create images, use whisper.cpp, use TTS models, use the NPU (e.g. the Strix Halo's amdxdna2), and more!
better than Vulkan?
In my experience using llama.cpp (which ollama uses internally) on a Strix Halo, whether ROCm or Vulkan performs better really depends on the model and it's usually within 10%. I have access to an RX 7900 XT I should compare to though.
For me Vulkan performs better on integrated cards, but ROCm (MIGraphX) on 7900 XTX.
Wrong layer. Vulkan is a graphics and compute API, while Lemonade is an LLM server, so comparing them makes about as much sense as comparing sockets to nginx. If your goal is to run local models without writing half the stack yourself, compare Lemonade to Ollama or vLLM.
Just in case anyone isn't aware. NPUs are low power, slow, and meant for small models.
Neat, they have rpm, deb, and a companion AppImage desktop app[1]! Surprised I wasn't aware of this project before. Definitely going to give it a try.<p>[1]: <a href="https://github.com/lemonade-sdk/lemonade/releases/tag/v10.0.1" rel="nofollow">https://github.com/lemonade-sdk/lemonade/releases/tag/v10.0....</a>
Maybe it's a language barrier problem, but "by AMD" makes me think it's a project distributed by AMD. Is that actually the case? I'm not seeing any reason to believe it is.
It’s a community project supported and sponsored by AMD according to their GitHub; <a href="https://github.com/lemonade-sdk/lemonade" rel="nofollow">https://github.com/lemonade-sdk/lemonade</a><p>AMD employees work on it/have been making blog posts about it for a bit.
Check the copyright notice at the bottom of the front page; it says (c) 2026 AMD.
> You can reach us by filing an issue, emailing lemonade@amd.com<p>Found this on the github readme.
It is mostly developed by AMD and used to be hosted on the AMD github iirc
A fun observation: pulling models sends ~200 Mbit/s of progress updates to your browser.
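The usual fix is to throttle progress events before they hit the wire. A generic sketch of rate-limiting a progress callback (the names are invented here, not Lemonade's actual API):

```python
import time

def throttle(report, min_interval=0.25):
    """Wrap a progress callback so it fires at most once per interval,
    but always fires on the final update."""
    last = [float("-inf")]  # timestamp of the last forwarded update
    def wrapped(done, total):
        now = time.monotonic()
        if done >= total or now - last[0] >= min_interval:
            last[0] = now
            report(done, total)
    return wrapped

# e.g. from a download loop:
#   progress = throttle(lambda d, t: send_to_browser(d, t))
#   for chunk in stream: ...; progress(bytes_done, bytes_total)
```

With a 250 ms floor, a multi-gigabyte pull generates at most four updates a second instead of one per chunk.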
Wow, this is super interesting. This creates a local "Gemini" front end and all. It's more or less a generative AI aggregator that installs multiple services for different generation modes. I’m excited to try this out on my Strix Halo. The biggest issue I had was image and audio gen, so this seems like a great option.
I’m currently optimizing FLUX to run on a cluster of consumer 8GB VRAM cards (RTX 4060s). I noticed Lemonade emphasizes NPU and GPU orchestration. Have you found that offloading the 'aesthetic scoring' or 'text encoding' to the NPU significantly frees up VRAM for the main diffusion process, or is the overhead of moving tensors back and forth too high on consumer hardware?
I’m looking forward to trying this. Currently Strix Halo’s NPU isn’t accessible if you’re running Linux, and previously I don’t think Lemonade was either. If this opens up the NPU, that would be great! Resolute Raccoon is adding NPU support as well.
Maybe you have seen NPU support via FLM already: <a href="https://lemonade-server.ai/flm_npu_linux.html" rel="nofollow">https://lemonade-server.ai/flm_npu_linux.html</a><p>"FastFlowLM (FLM) support in Lemonade is in Early Access. FLM is free for non-commercial use, however note that commercial licensing terms apply. "
The NPU works on Linux (Arch at least) on Strix Halo using FastFlowLM [1]. Their NPU kernels are proprietary though (free up to a reasonable amount of commercial revenue). It's neat you can run some models basically for free (using NPU instead of CPU/GPU), but the performance is underwhelming. The target for NPUs is really low power devices, and not useful if you have an APU/GPU like Strix Halo.<p>[1]: <a href="https://github.com/FastFlowLM/FastFlowLM" rel="nofollow">https://github.com/FastFlowLM/FastFlowLM</a>
I thought the NPU has been available since something like 6.12?
It's pretty annoying that you need vendor specific APIs and a large vendor specific stack to do anything with those NPUs.<p>This way software adoption will be very limited.
Cool but is there a reason they can't just make PRs for vLLM and llama.cpp? Or have their own forks if they take too long to merge?
What is the lowest process I can implement this on?
Which specific NPU’s?
Forget all the vibe coded slop or Ollama. Lemonade is the real deal and very good; I've been using it for about a year now.<p>AMD are doing God's work here.
My most powerful system is Ryzen+Radeon, so if there are tools that do all the hard work of making AI tools work well on my hardware, I'm all for it. I find it very frustrating to get LLMs, diffusion, etc. working fast on AMD. It's way too much work.
so... what does it do? i dont get it Lol
The unified API is interesting, but I've found that "OpenAI compatible" can be leaky. When I switched a RAG agent from OpenAI to a local server, my function calling broke even though
This is funny, I’m working on building an AI project called Lemonade right now.