I created a CLI wrapper for Kitten TTS: <a href="https://github.com/newptcai/purr" rel="nofollow">https://github.com/newptcai/purr</a><p>BTW, it seems that kitten (the Python package) has the following chain of dependencies: kittentts → misaki[en] → spacy-curated-transformers<p>So if you install it directly via uv, it will pull torch and NVIDIA CUDA packages (several GB), which are not needed to run kitten.
Thanks, your install script worked for me.<p>In case it helps anyone else, the first time I tried to run purr I got "OSError: PortAudio library not found". Installing libportaudio (apt install libportaudio2) got it running.
Thank you so much, that fixes an enormous pain point I was hitting. It's not just the size, that dependency chain was actually breaking on my machine and failing to install. Are we losing something by dropping the extra dependencies?
What I love about OpenClaw is that I was able to send it a message on Discord with just this github URL and it started sending me voice messages using it within a few minutes. It also gave me a bunch of different benchmarks and sample audio.<p>I'm impressed with the quality given the size. I don't love the voices, but it's not bad. Running on an Intel 9700 CPU, it's about 1.5x realtime using the 80M model. It wasn't any faster running on a 3080 GPU, though.
yeah we'll add some more professional-sounding voices and also support for diy custom voices. we tried to add more anime/cartoon-ish voices to showcase the expressivity.<p>Regarding running on the 3080 gpu, can you share more details on github issues, discord or email? it should be blazing fast on that. i'll add an example to run the model on gpu too.
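until then, the general onnxruntime pattern looks roughly like this (untested sketch -- the model filename below is a placeholder, not our exact release artifact, and you need the onnxruntime-gpu package rather than plain onnxruntime):<p><pre><code> import onnxruntime as ort

 # Ask for the CUDA execution provider first; onnxruntime falls back
 # to CPU if the GPU libraries aren't available at load time.
 sess = ort.InferenceSession(
     "kitten_tts.onnx",  # placeholder path
     providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
 )
 print(sess.get_providers())  # verify CUDA was actually picked up
</code></pre>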
Oh that is a good use case. Don't connect to email and all that insecure stuff. But as a sandbox for "try this out and deploy a demo". Got me thinking!
I'm jealous. It took me <i>far</i> longer and much more frustration to get it to run.<p>Had to get the right Python version and make sure it didn't break anything with the previous Python version. A friend suggested using Docker, so I started down that path until I realized I'd probably have to set the whole thing up there myself. Eventually got it to run and I <i>think</i> I didn't break anything else.<p>I hate Python so much.
Nowadays these frustrations shouldn't be a thing any more. If the author used uv, the script would be able to install its own dependencies and just work.
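For example, with PEP 723 inline metadata at the top of the script, `uv run demo.py` builds a throwaway environment on the fly (the import path and version pins below are illustrative, not taken from the project):<p><pre><code> # /// script
 # requires-python = ">=3.10,<3.14"
 # dependencies = ["kittentts"]
 # ///
 # `uv run demo.py` reads the header above and creates a disposable
 # environment with those dependencies -- no manual venv juggling.
 from kittentts import KittenTTS  # illustrative; check the README

 print("dependencies resolved; kittentts is importable")
</code></pre>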
Why don't you use some kind of environment, Conda or something like that?
I used uv, which should have generated a stable environment. No dice. There's a bug in spaCy.<p>I suspect success is highly variable on macOS vs. Linux; the spaCy bug only appears on newer Pythons (3.14 or later), which Linux will have.
Even the built-in venv would've solved most of his issues too. But I agree with him that Python documentation could be better, or could have a more unified system in place. I feel like every other how-to doc I read on setting up something in Python uses a different environment-containment product.
Conda was fantastic up to some point last year and since then I've had quite a few unresolvable version issues with it. It is really annoying, especially when you're tying multiple things together and each requires its own set of mutually exclusive specific versions of libraries. The latest like that was gnu radio and some out-of-tree stuff at the same time as a bluetooth library. High drama. I eventually gave up, rewrote the whole thing in a different language and it took less time than I had spent on trying to get the python solution duct-taped together.<p>I should learn to give up quicker.
Because I need a new version of Python very rarely (years go by), I don't remember all the arcane incantations to set everything up.<p>I did eventually do that, though, and I'm pretty sure I had to mess about with installing and uninstalling torch.<p>I dread using anything made in Python because of this. It's always annoying and never just works (at least when the version of Python is incompatible; otherwise it's fine).
damnn, really sorry for the inconv, looks like some folks are having bad env issues. we're working on fixing this.
Two words: Nix Flakes.
The size/quality tradeoff here is interesting. 25MB for a TTS model that's usable is a real achievement, but the practical bottleneck for most edge deployments isn't model size -- it's the inference latency on low-power hardware and the audio streaming architecture around it. Curious how this performs on something like a Raspberry Pi 4 for real-time synthesis. The voice quality tradeoff at that size usually shows up most in prosody and sentence-final intonation rather than phoneme accuracy.
Very cool :)
Looking forward to trying it out.<p>Maybe a dumb and slightly tangential question,
(I don't mean this as a criticism!)
but why not release a command line executable?<p>Even the API looks like what you'd see in a manpage.<p>I get it wouldn't be too much work for a user to actually make something like that,
I'm just curious what the thought process is.
Good on-device TTS is an amazing accessibility tool. Thank you for building this. Way too many devices that use it rely on online services; this is much preferred.
A very clear improvement from the first set of models you released some time ago. I'm really impressed. Thanks for sharing it all.
I'd love to see a monolingual Japanese model sometime in the future. Qwen3-TTS works for Japanese in general, but from time to time it will mix in some Mandarin, making it unusable.
You could try a preprocessing step where you convert to hiragana, but I guess that would lose pitch accent information (e.g. 飴 vs 雨)
Exactly. Qwen only has one pitch accent for pure hiragana words. Even though that approach does work (it removes the mixed-in Mandarin), it takes considerable effort to normalize the text to disambiguate heteronyms, and the result (if you use voice cloning) is your favorite CV speaking in some weird, unknown accent :)
our next model (eta 3-ish weeks) will support Japanese. would love to get your feedback then on how the quality is. can you share what use case you want? would love to support it.
Was playing around a bit and for its size it's very impressive. Just has issues pronounciating numbers. I tried to let it generate "Startup finished in 135 ms."<p>I didn't expect it to pronounciate 'ms' correctly, but the number sounded just like noise. Eventually I got an acceptable result for the string "Startup finished in one hundred and thirty five seconds."
yeah we're fixing this at the model level too. but in the meantime, there is a way to add text preprocessing for you, and if you have a special use case, claude code should be able to one-shot custom preprocessing. it's the way that most existing tts models (including sota cloud ones) deal w/ numbers and units: they just convert them into strings.
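a minimal sketch of that preprocessing step (num2words is already in the dependency tree; unit expansion like "ms" -> "milliseconds" would be a small lookup table on top):<p><pre><code> import re
 from num2words import num2words  # already a kittentts dependency

 def expand_numbers(text: str) -> str:
     # Spell out each run of digits so the model sees plain words,
     # e.g. "finished in 135 ms" ->
     #      "finished in one hundred and thirty-five ms".
     return re.sub(r"\d+", lambda m: num2words(int(m.group())), text)

 print(expand_numbers("Startup finished in 135 ms."))
</code></pre>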
thanks a lot for trying it and giving feedback. custom preprocessing will fix this for 95% of use-cases. and as i mentioned, this will be fixed at the model level in the next release.
I tried it with some "hard mode" text:<p><i>The above SECDED check-bit encoding can be implemented in a similar way, but since it uses only three-bit patterns, mapping syndromes to correction masks can be done with three-input AND gates.</i><p>It sounded quite good indeed for the normal English stuff, but I guess predictably was quite bad at the domain-specific words. It misspoke "SECDED", had wrong emphasis on "syndromes", and pronounced "AND gates" like "and gates".<p>Could you give some example of what kind of preprocessing would help in this case? I tried some local LLMs, but they didn't do a good job (maybe my prompts sucked).
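My best guess at that kind of preprocessing is a respelling table applied before synthesis, something like the sketch below (the respellings on the right are guesses to be tuned by ear, not canonical fixes):<p><pre><code> import re

 # Naive pronunciation overrides for domain terms; each right-hand
 # side is a phonetic respelling the model is more likely to get right.
 OVERRIDES = {
     "SECDED": "sek dead",
     "AND gates": "AND-gates",
 }

 def apply_overrides(text: str) -> str:
     for term, respelling in OVERRIDES.items():
         text = re.sub(rf"\b{re.escape(term)}\b", respelling, text)
     return text
</code></pre>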
> <i>pronounciating</i><p>I'm not sure if you're misspelling it deliberately or not, but the word you're looking for is "pronounce" and its verb form "pronouncing", as in "It just has issues pronouncing numbers" and "I didn't expect it to pronounce 'ms' correctly."
You should put examples comparing the 4 models you released - same text spoken by each.
great idea, let me add this. meanwhile, you can try the models on our huggingface spaces demo here:
<a href="https://huggingface.co/spaces/KittenML/KittenTTS-Demo" rel="nofollow">https://huggingface.co/spaces/KittenML/KittenTTS-Demo</a>
They sound like cartoon voices... but I really like them. I could listen to a book with those.
Found they struggle with numbers. Like, give it a random four-digit number in a sentence and it fumbles.
The <25MB figure is what stands out. Been wanting to add TTS to a few Next.js projects for offline/edge scenarios but model sizes have always made it impractical to ship.<p>At 25MB you can actually bundle it with the app. Going to test whether this works in a Vercel Edge Function context -- if latency is acceptable there it opens up a lot of use cases that currently require a round-trip to a hosted API.
Great stuff. Is your team interested in the STT problem?
I ran the install instructions and it took 7.1GB of deps; tf you mean "tiny"?
damnn, lemme fix it, sorry for that. we may have forgotten to remove the redundant dependencies. i'll comment here once i push the change. thanks a lot for trying it and giving feedback.
It's mostly torch, I think. It pulls in NVIDIA libs (which … makes sense, I guess), and NVIDIA is just not at all judicious when it comes to disk space. I literally ran out of disk trying to install this on Linux.<p>On macOS, it's a markedly different experience: it's only ~700 MiB there; I'm assuming b/c no NVIDIA libs get pulled in, b/c why would they.<p>For anyone who might want to play around with this: I can get down to ~3 GiB (& about 1.3 GiB if you wipe your uv cache afterwards) on Linux if I add the following to the end of `pyproject.toml`:<p><pre><code> [tool.uv.sources]
# This tells uv to use the specific index for torch, torchvision, and torchaudio
torch = [
{index = "pytorch-cpu"}
]
torchvision = [
{index = "pytorch-cpu"}
]
torchaudio = [
{index = "pytorch-cpu"}
]
[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
</code></pre>
& add "torch" to the direct dependencies, b/c otherwise it seems like uv is ignoring the source? (… which of course downloads a CPU-only torch.)<p>This is an example of what one sees under Linux:<p><pre><code> nvidia-nvjitlink-cu12 ------------------------------ 23.83 MiB/37.44 MiB
nvidia-curand-cu12 ------------------------------ 23.79 MiB/60.67 MiB
nvidia-cuda-nvrtc-cu12 ------------------------------ 23.87 MiB/83.96 MiB
nvidia-nvshmem-cu12 ------------------------------ 23.62 MiB/132.66 MiB
triton ------------------------------ 23.82 MiB/179.55 MiB
nvidia-cufft-cu12 ------------------------------ 23.76 MiB/184.17 MiB
nvidia-cusolver-cu12 ------------------------------ 23.84 MiB/255.11 MiB
nvidia-cusparselt-cu12 ------------------------------ 23.99 MiB/273.89 MiB
nvidia-cusparse-cu12 ------------------------------ 23.96 MiB/274.86 MiB
nvidia-nccl-cu12 ------------------------------ 23.79 MiB/307.42 MiB
nvidia-cublas-cu12 ------------------------------ 23.73 MiB/566.81 MiB
nvidia-cudnn-cu12 ------------------------------ 23.56 MiB/674.02 MiB
torch ------------------------------ 23.75 MiB/873.22 MiB
</code></pre>
That's not all the libraries, either, but you can see NVIDIA here is easily over 1 GiB.<p>It also then crashes for me, with:<p><pre><code> File "KittenTTS/.venv/lib/python3.14/site-packages/pydantic/v1/fields.py", line 576, in _set_default_and_type
raise errors_.ConfigError(f'unable to infer type for attribute "{self.name}"')
pydantic.v1.errors.ConfigError: unable to infer type for attribute "REGEX"
</code></pre>
Which seems to be this bug in spaCy (<a href="https://github.com/explosion/spaCy/issues/13895" rel="nofollow">https://github.com/explosion/spaCy/issues/13895</a>), so I'm going to have to try adding `<3.14` to `requires-python` in `pyproject.toml` too, I think. That is, for anyone wanting to try this out:<p><pre><code> -requires-python = ">=3.8"
+requires-python = ">=3.8,<3.14"
</code></pre>
(This isn't really something KittenTTS should have to do, since this is a bug in spaCy … and ideally, at some point, spaCy will fix it.)<p>Also:<p><pre><code> + curated-tokenizers==0.0.9
</code></pre>
This version is so utterly ancient that there aren't wheels for it anymore, so that means a loooong wait while this builds. It's pulled in via misaki, and my editor says your one import of misaki is unused.<p>Hilariously, removing it breaks, but only on my macOS machine. I think you're using it solely for the side-effect that it tweaks phonemizer to use espeak-ng, but you can just do that tweak yourself, & then I think that dependency can be dropped. That drops a good number of dependencies & <i>really</i> speeds up the installation, since we're not compiling a bunch of stuff.<p>You need to add `phonemizer-fork` to your dependencies. (If you remove misaki, you'll find this missing.)
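The tweak itself is small. Going from what espeakng_loader & phonemizer-fork expose, it's roughly this (untested beyond my machine; the set_data_path call is from phonemizer-fork, vanilla phonemizer only has set_library, IIRC):<p><pre><code> import espeakng_loader
 from phonemizer import phonemize
 from phonemizer.backend.espeak.wrapper import EspeakWrapper

 # Point phonemizer at the espeak-ng bundled by espeakng_loader --
 # which I believe is the only side-effect misaki was imported for.
 EspeakWrapper.set_library(espeakng_loader.get_library_path())
 EspeakWrapper.set_data_path(espeakng_loader.get_data_path())

 print(phonemize("hello world", language="en-us", backend="espeak"))
</code></pre>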
a classic "how to draw an owl" lol :)
Only American voices? For some reason I'm only interested in Irish, British or Welsh accents. American is a no
minor nit to pick: Welsh accents are British accents as Wales is in Britain. In fact by some definitions it's the most British part.<p>People from outside the UK often use British as synonymous with English, and in the context of accents, often a South East English accent or some sort of Received Pronunciation (RP) accent. Technically a "British" accent could be from anywhere in England, Scotland, or Wales, and therefore by extension might not even be the English language.<p>While I'm here, since it's generally confusing, the UK is Great Britain and Northern Ireland. Great Britain is England, Scotland, and Wales.
This is awesome, well done. Been doing a lot of work with voice assistants; if you can replicate Qwen3-TTS voice cloning in this small form factor, you will be absolute legends!
A lot of good small TTS models in recent times. Most seem to struggle hard on prosody though.<p>Kokoro TTS for example has a very good Norwegian voice, but the rhythm and emphasis are often so out of whack the generated speech is almost incomprehensible.<p>Haven't had time to check this model out yet; how does it fare here? What's needed to improve the models in this area now that the voice part is more or less solved?
small models struggle with prosody due to limited capacity. this version does much better than the previous one and is the best among other <25MB models. Kokoro is a really good model for its size; it's competitive on artificial analysis too.
i think by the next release we should have something of kokoro quality at a fifth of the size. Adding control for rhythm seems to be quite important too, and we should start looking at that for other languages.
Listened to the video examples; they sounded very good, though it wasn't terribly challenging text.<p>If only I could have that in Norwegian, my SO would be pleased.<p>Also, I totally misremembered regarding Kokoro TTS. It's good, but not what was butchering Norwegian. I forgot which one I was thinking of; maybe it was the old VITS stuff Rhasspy uses. Points stand: the voice was good but I could barely understand what was said.
That, and also using English words in the middle of a phrase in another language confuses them a lot.
The GitHub readme doesn't list this: what data was this trained on? Was it the creators' own voices, or data scraped from the internet or other archives?
Would an Android app of this be able to replace the built in tts?
How did you make a very small AI model (14M) sound more natural and expressive than even bigger models?
glad you liked it, thank you so much for the kind words. our team is really good at squeezing performance out of small models. we are working on a new launch and hope to release a technical report along with that which includes details. fyi, our current 14M model is better than our previous 80M model. and we expect this trend to continue.
Fingers crossed for a normal-sounding voice this time around. The cute Kitten voices are nice, but I want something I can take seriously when I'm listening to an audiobook.
One of the core features I look for is expressive control.<p>Either in the form of API pitch/speed/volume parameters, for more deterministic control.<p>Or in expressive tags such as [coughs], [urgently], or [laughs in melodic ascending and descending arpeggiated gibberish babbles].<p>The 25MB model is amazingly good for being 25MB. How does it handle expressive tags?
Is this open-source or open-weights ML?
How long until I can buy this as a chip for my Arduino projects?
Did they train this on @lauriewired's voice? The demo video sounds exactly like her at 0:18
There are a number of recent, good-quality, small TTS models.<p>If the author doesn't describe some detail about the data, training, a novel architecture, etc., I can only assume they took another one, did a little finetuning, and repackaged it as a new product.
Any recommendations?
I thought they were going to make kitten sounds instead of speech
The example.py file says "it will run blazing fast on any GPU. But this example will run on CPU."<p>I couldn't locate how to run it on a GPU anywhere in the repo.
25MB is impressive. What's the tradeoff vs the 80M model — is it mainly voice quality or does it also affect pronunciation accuracy on less common words?
80M model is the highest quality while also being quite efficient. it is superior in pronunciation accuracy for less common words, and is also more stable in terms of speed. it's my fav model. i think the 40M is quite similar to the 80M for most use cases. the 15M is for resource-constrained CPUs, loading in a browser, etc.<p>The new 15M is way better than the previous 80M model (v0.1). So we're able to predictably improve the quality, which is very encouraging.
This would be great as a js package - 25mb is small enough that I think it'd be worth it (in-browser tts is still pretty bad and varies by browser)
It is based on ONNX, so can I use it with transformers.js in the browser?
Yes, someone already made a web demo for it: <a href="https://github.com/clowerweb/kitten-tts-web-demo" rel="nofollow">https://github.com/clowerweb/kitten-tts-web-demo</a> (7 months ago). WebGPU support was marked experimental there, but transformers.js v4 (released last month) seems more stable now, with some runtime/perf improvements: <a href="https://huggingface.co/blog/transformersjs-v4#performance--runtime-improvements" rel="nofollow">https://huggingface.co/blog/transformersjs-v4#performance--r...</a>
A lot of these models struggle with small text strings, like "next button" that screen readers are going to speak a lot.
I think I tried everything I could on my Android, and: 1. outside of webpage reading, there are not many options; 2. as browser extensions, also not many (I don't like having to copy URLs into an app); 3. they all insist on reading every little shit, not only buttons but also "wave arrow pointing directly right", which some people use in their texts. So basically reading text aloud is a bunch of shitty options. Anyone jumping on this market opening?
Thanks for working on this!<p>Is there any way to get this running on an iPhone? I would love for it to read articles to me like a podcast.
yes, we're releasing an official mobile sdk and inference engine very soon. if you want to use something until then, some folks from the oss community have built ways to run kitten on ios. if you search kittentts ios on github you should find a few.
if you cant find it, feel free to ping me and i can help you set it up. thanks a lot for your support and feedback!
How much work would it be to use the C++ ONNX runtime with this instead of Python? Is it a Claudeable amount of work?<p>The iOS version is Swift-based.
What's the training data for this?
I'm still looking for the "perfect" setup to clone my voice and use it locally to send voice replies in Telegram via openclaw. Does anyone have such a setup?<p>I want to be my own personal assistant...<p>EDIT: I can provide it an RTX 3080 Ti.
You need to provide info on your hardware. Pocket-TTS does cloning on CPU, but for me it randomly outputs something pretty weird-sounding mixed in with ~90% good outputs, so it hasn't been quite stable enough to run without checking the output. But maybe it depends on your voice sample.<p>Qwen 3 TTS is good for voice cloning but requires a GPU of some sort.
Try training a model on Piper; you will need to record a lot of utterances, but the results are pretty great and the output is a fast TTS model.
Isn't it just a matter of training a model on your voice recordings and using that to generate audio clips from text?
Why not just send text replies? You can already do that
Thanks for open sourcing this.<p>Is there any way to do a custom voice as a DIY, or do we need to go through you? If the latter, would you consider making a pricing page for purchasing a license/alternative voice? All but one of the voices are unusable in a business context.
thanks a lot for the feedback. yes, we're working on a diy way to add custom voices and will also be releasing a model with more professional voices in the next 2-3 weeks. as of now, we're providing commercial support for custom voices, languages and deployment through the support form on our github. can you share more about your business use-case? if possible, i'd like to ensure the next release can serve that.
Right now it's outgoing calls for a small business client that checks information. Although if they call back they don't mind an automated system, on outgoing calls the person answering will often hang up if they detect AI right away, so we use a realistic custom voice with an accent.<p>This is a mind numbing task that requires workers to make hundreds of calls each day with only minor variations, sometimes navigating phone trees, half the time leaving almost the exact same message.<p>Anyway, I believe almost all such businesses will be automated within months. Human labour just cannot compete on cost.
Really cool to see innovation in terms of quality of tiny models. Great work!
What's the actual install size for a working example? Like similar "tiny" projects, do these models actually require installing 1GB+ of dependencies?
Running the example is 3 MiB for the repo, +667 MiB of Python dependencies, +86 MiB of models that will get downloaded from HuggingFace. =756 MiB.<p>(That's using the example as-is. If you switch it to the smaller model, modify the above with +57 MiB of models from HuggingFace, or =727 MiB.)<p>So I toyed with this a bit + the Rust library "ort", and ort is only 224M in release (non-debug) mode, and it was pretty simple to run this model with it. (I did not know ort before just now.) I didn't replicate the preprocessing the Python does before running the model, though. (You have to turn the text into an array of floats, essentially; the library is doing text -> phonemes -> tokens; the latter step is straightforward.)
So, that was on macOS. It's actually huge on Linux, and I've run out of disk space trying to pull dependencies. It's nvidia, who always shows great judgement in their use of disk.
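(Back to the preprocessing bit: the phoneme -> token step I mentioned really is just a dictionary lookup, something like the sketch below. The miniature vocab here is made up; the real mapping ships with the model's config.)<p><pre><code> # Made-up miniature vocab; tokenization is a per-character lookup.
 VOCAB = {" ": 4, "ð": 62, "ɪ": 51, "s": 43, "z": 47}

 def tokenize(phonemes: str) -> list[int]:
     return [VOCAB[p] for p in phonemes if p in VOCAB]

 print(tokenize("ðɪs ɪz"))  # -> [62, 51, 43, 4, 51, 47]
</code></pre>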
My quick test showed 670 MB of Python libraries required on top of the model.
are there plans to output text alignment?
So, one thing I noticed, and this could easily be user error, is that if I set the text & voice in the example to:<p><pre><code> text ="""
Hello world. This is Kitten TTS.
Look, it's working!
"""
voice = 'Luna'
</code></pre>
On macOS, I get "Kitten TTS", but on Linux, I get "Kit… TTS". Both OSes generate the same phonemes of,<p><pre><code> Phonemes: ðɪs ɪz kˈɪʔn ̩ tˌiːtˌiːˈɛs ,
</code></pre>
which makes me really confused as to where it's going off the rails on Linux, since from there it should just be invoking the model.<p>edit: it really helps to use the same model <i>facepalm</i>. It's the 80M model, and it happens on both OSes. Wildly, the nano gets it better? I'm going to join the Discord lol.
This is great. Demo looks awesome.
I'm thinking of giving "voice" to my virtual pets (think Pokemon, but fewer than a dozen). The pets are made-up animals based on real ones, like Mouseier from Mouse (something like that). Is this possible?<p>Tldr: generate a human-like voice based on an animal sound. Anyway, maybe it doesn't make sense.
Is it English only?
Wow, what an amazing feat. Congratulations!
sounds amazing! does it stream? or is it so fast you don't need to?
This is something I've been looking for (the <50MB models in particular). Unfortunately my feedback is as follows:<p><pre><code> Downloading https://github.com/KittenML/KittenTTS/releases/download/0.8.1/kittentts-0.8.1-py3-none-any.whl (22 kB)
Collecting num2words (from kittentts==0.8.1)
Using cached num2words-0.5.14-py3-none-any.whl.metadata (13 kB)
Collecting spacy (from kittentts==0.8.1)
Using cached spacy-3.8.11-cp314-cp314-win_amd64.whl.metadata (28 kB)
Collecting espeakng_loader (from kittentts==0.8.1)
Using cached espeakng_loader-0.2.4-py3-none-win_amd64.whl.metadata (1.3 kB)
INFO: pip is looking at multiple versions of kittentts to determine which version is compatible with other requirements. This could take a while.
ERROR: Ignored the following versions that require a different python version: 0.7.10 Requires-Python >=3.8,<3.13; 0.7.11 Requires-Python >=3.8,<3.13; 0.7.12 Requires-Python >=3.8,<3.13; 0.7.13 Requires-Python >=3.8,<3.13; 0.7.14 Requires-Python >=3.8,<3.13; 0.7.15 Requires-Python >=3.8,<3.13; 0.7.16 Requires-Python >=3.8,<3.13; 0.7.17 Requires-Python >=3.8,<3.13; 0.7.5 Requires-Python >=3.8,<3.13; 0.7.6 Requires-Python >=3.8,<3.13; 0.7.7 Requires-Python >=3.8,<3.13; 0.7.8 Requires-Python >=3.8,<3.13; 0.7.9 Requires-Python >=3.8,<3.13; 0.8.0 Requires-Python >=3.8,<3.13; 0.8.1 Requires-Python >=3.8,<3.13; 0.8.2 Requires-Python >=3.8,<3.13; 0.8.3 Requires-Python >=3.8,<3.13; 0.8.4 Requires-Python >=3.8,<3.13; 0.9.0 Requires-Python >=3.8,<3.13; 0.9.2 Requires-Python >=3.8,<3.13; 0.9.3 Requires-Python >=3.8,<3.13; 0.9.4 Requires-Python >=3.8,<3.13; 3.8.3 Requires-Python >=3.9,<3.13; 3.8.5 Requires-Python >=3.9,<3.13; 3.8.6 Requires-Python >=3.9,<3.13; 3.8.7 Requires-Python >=3.9,<3.14; 3.8.8 Requires-Python >=3.9,<3.14; 3.8.9 Requires-Python >=3.9,<3.14
ERROR: Could not find a version that satisfies the requirement misaki>=0.9.4 (from kittentts) (from versions: 0.1.0, 0.3.0, 0.3.5, 0.3.9, 0.4.0, 0.4.4, 0.4.5, 0.4.6, 0.4.7, 0.4.8, 0.4.9, 0.5.0, 0.5.1, 0.5.2, 0.5.3, 0.5.4, 0.5.5, 0.5.6, 0.5.7, 0.5.8, 0.5.9, 0.6.0, 0.6.1, 0.6.2, 0.6.3, 0.6.4, 0.6.5, 0.6.6, 0.6.7, 0.7.0, 0.7.1, 0.7.2, 0.7.3, 0.7.4)
ERROR: No matching distribution found for misaki>=0.9.4
</code></pre>
I realize that I can run multiple versions of Python on my system and use venv to manage them (or whatever equivalent is now trendy), but as I near retirement age, all those deep dependency nets required by modern software really depress me. Have you ever tried to build a node app that hasn't been updated in 18 months? It can't be done. Old man yelling at cloud, I guess <i>shrugs</i>.