This is exactly what I needed. I've been thinking about making this tool. For running and experimenting with local models this is invaluable.
This is a great idea, but the models seem pretty outdated - it's recommending things like Qwen 2.5 and StarCoder 2 as perfect matches for my M4 MacBook Pro with 128GB of memory.
In the screenshots, each model has a use case of General, Chat, or Coding. What might be the difference between General and Chat?
I wish there were more support for AMD GPUs on Intel Macs. I saw some people on GitHub getting llama.cpp working with them; could that be added in the future if the backend supports it?
Claude is pretty good at making recommendations if you input your system specs.
Personally I would have found a website where you enter your hardware specs more useful.
Same; I opened HN on my phone and was hoping to get an idea before I booted my computer up.
I was hoping for the same thing.