10 comments

  • eek2121 1 minute ago
    I gotta say, the local models are catching up quick. Claude is definitely still ahead, but things are moving right along.
  • alexhans 1 hour ago
    Useful tip.

    From a strategic standpoint of privacy, cost and control, I immediately went for local models, because that allowed me to baseline tradeoffs. It also made it easier to understand where vendor lock-in could happen, and kept my perspective from getting too narrow (e.g. llama.cpp/OpenRouter depending on local/cloud [1]).

    With the explosion in popularity of CLI tools (claude/continue/codex/kiro/etc.) it still makes sense to be able to do the same, even if you can use several strategies to subsidize your cloud costs (being aware of the privacy tradeoffs).

    I would absolutely pitch that, plus evals, as one small practice that will have compounding value for any "automation" you want to design in the future, because at some point you'll care about cost, risks, accuracy and regressions.

    [1] - https://alexhans.github.io/posts/aider-with-open-router.html

    [2] - https://www.reddit.com/r/LocalLLaMA
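    To make the evals pitch concrete, a minimal regression loop can be this small. This is only a sketch: the endpoint, model name and test cases are placeholders, assuming any OpenAI-compatible local server such as llama.cpp's.

      # Minimal eval sketch: replay fixed prompts against a local
      # OpenAI-compatible endpoint and count passes via a crude
      # substring check. Endpoint, model name and cases are placeholders.
      import json
      import urllib.request

      ENDPOINT = "http://localhost:8080/v1/chat/completions"  # assumed local server
      CASES = [
          {"prompt": "Which git flag abandons an in-progress rebase?", "expect": "--abort"},
          {"prompt": "Name the Python keyword that defines a function.", "expect": "def"},
      ]

      def ask(prompt):
          body = json.dumps({
              "model": "local-model",  # placeholder; many local servers ignore this
              "messages": [{"role": "user", "content": prompt}],
              "temperature": 0,  # keeps runs comparable when you swap models
          }).encode()
          req = urllib.request.Request(ENDPOINT, data=body,
                                       headers={"Content-Type": "application/json"})
          with urllib.request.urlopen(req) as resp:
              return json.load(resp)["choices"][0]["message"]["content"]

      passed = sum(1 for c in CASES if c["expect"].lower() in ask(c["prompt"]).lower())
      print(f"{passed}/{len(CASES)} cases passed")

    Run it whenever you change the model or the harness and you get a cheap signal on regressions.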
    • cyanydeez 15 minutes ago
      I think control should be top of the list here. You're talking about building workflows, products and long-term practices around something that's inherently non-deterministic.

      And the probability that any given model you use today is the same as what you use tomorrow is doubly doubtful:

      1. The model itself will change as they try to improve cost-per-test. This will necessarily make your expectations non-deterministic.

      2. The "harness" around that model will change as business costs are tightened and the amount of context around the model is adjusted to improve whichever business case generates the most money.

      Then there's the "cataclysmic" lockout cost, where you accidentally use the wrong tool, get locked out of the entire ecosystem, and are blacklisted, like a gambler in Vegas who figures out how to count cards and it works until the house's accountant identifies you as a non-negligible customer cost.

      It's akin to anti-union arguments, where everyone "buying" into the cloud AI circus thinks they're going to strike gold and completely ignores the fact that very few will, and that if they really wanted a better world and more control, they'd unionize and limit their delusions of grandeur. It should be an easy argument to make, but we're seeing that about 1/3 of the population is extremely susceptible to greed-based illusions.
    • mogoman 59 minutes ago
      Can you recommend a setup with Ollama and a CLI tool? Do you know if I need a licence for Claude if I only use my own local LLM?
      • alexhans 43 minutes ago
        What are your needs/constraints (hardware constraints definitely a big one)?

        The one I mentioned, continue.dev [1], is easy to try out and see if it meets your needs.

        Hitting local models with it should be very easy (it calls APIs at a specific port).

        [1] - https://github.com/continuedev/continue
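        If you want to sanity-check the local-model side before wiring up any CLI tool, a few lines against Ollama's OpenAI-compatible endpoint (default port 11434) are enough. A rough sketch; the model name is a placeholder for whatever you've pulled:

          # Smoke test against a local Ollama server. Assumes you've done
          # something like `ollama pull qwen2.5-coder` beforehand.
          import json
          import urllib.request

          req = urllib.request.Request(
              "http://localhost:11434/v1/chat/completions",  # Ollama's OpenAI-compatible route
              data=json.dumps({
                  "model": "qwen2.5-coder",  # placeholder: any locally pulled model
                  "messages": [{"role": "user", "content": "One-line hello world in Go?"}],
              }).encode(),
              headers={"Content-Type": "application/json"},
          )
          with urllib.request.urlopen(req) as resp:
              print(json.load(resp)["choices"][0]["message"]["content"])

        If that works, pointing a tool like continue.dev at the same port is the easy part.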
  • btbuildem 3 minutes ago
    I'm confused, wasn't this already available via env vars? ANTHROPIC_BASE_URL and so on, and yes, you may have to write a thin proxy to wrap the calls to fit whatever backend you're using (a rough sketch of such a proxy follows below).

    I've been running CC with Qwen3-Coder-30B (FP8) and I find it just as fast, but not nearly as clever.
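    To make the thin-proxy idea concrete, here is a rough, non-streaming sketch: it accepts Anthropic-style /v1/messages requests (what the client sends when ANTHROPIC_BASE_URL points at it) and forwards them to an OpenAI-compatible backend. The backend URL and port are placeholders, and streaming, tool calls and content-block lists are deliberately ignored, so treat it as a starting point rather than a drop-in:

      # Thin Anthropic -> OpenAI-compatible proxy sketch (stdlib only).
      # Handles plain-text payloads; no streaming, no tool use,
      # no content-block lists.
      import json
      import urllib.request
      from http.server import BaseHTTPRequestHandler, HTTPServer

      BACKEND = "http://localhost:11434/v1/chat/completions"  # assumed local server

      class Proxy(BaseHTTPRequestHandler):
          def do_POST(self):  # ignores the path; the client posts to /v1/messages
              req = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
              messages = req.get("messages", [])
              if isinstance(req.get("system"), str):  # Anthropic keeps system separate
                  messages = [{"role": "system", "content": req["system"]}] + messages
              body = json.dumps({
                  "model": req.get("model", "local-model"),
                  "messages": messages,
                  "max_tokens": req.get("max_tokens", 1024),
              }).encode()
              upstream = urllib.request.Request(
                  BACKEND, data=body, headers={"Content-Type": "application/json"})
              with urllib.request.urlopen(upstream) as resp:
                  out = json.load(resp)
              usage = out.get("usage", {})
              reply = json.dumps({  # reshape into Anthropic's message format
                  "id": "msg_local", "type": "message", "role": "assistant",
                  "model": req.get("model", "local-model"),
                  "content": [{"type": "text",
                               "text": out["choices"][0]["message"]["content"]}],
                  "stop_reason": "end_turn",
                  "usage": {"input_tokens": usage.get("prompt_tokens", 0),
                            "output_tokens": usage.get("completion_tokens", 0)},
              }).encode()
              self.send_response(200)
              self.send_header("Content-Type", "application/json")
              self.send_header("Content-Length", str(len(reply)))
              self.end_headers()
              self.wfile.write(reply)

      HTTPServer(("localhost", 8082), Proxy).serve_forever()

    With that running, launching the client against it should just be ANTHROPIC_BASE_URL=http://localhost:8082 (most likely alongside a dummy API key); in practice the client also sends content as lists of blocks, so expect to handle more shapes than this.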
  • wkirby 9 minutes ago
    My experience thus far is that the local models are a) pretty slow and b) prone to making broken tool calls. Because of (a) the iteration loop slows down enough that I wander off to do other tasks, meaning that (b) is way more problematic because I don't see it for who knows how long.

    This is, however, a major improvement from ~6 months ago, when even a single-token `hi` from an agentic CLI could take >3 minutes to generate a response. I suspect the parallel processing of LM Studio 0.4.x and some better tuning of the initial context payload is responsible.

    6 months from now, who knows?
  • hkpatel33 9 minutes ago
    OpenRouter can also be used with Claude Code: https://openrouter.ai/docs/guides/claude-code-integration
  • zingar 54 minutes ago
    I guess I should be able to use this config to point Claude at the GitHub Copilot licensed models (including Anthropic models). That's pretty great. About 2/3 of the way through every day I'm forced to switch from Claude (Pro license) to Amp Free, and the different ergonomics are quite jarring. Open-source folks get Copilot tokens for free, so that's another pro license I don't have to worry about.
  • esafak 16 minutes ago
    Or they could just let people use their own harnesses again...
    • usef- 11 minutes ago
      They do? That's what the API is.

      To me, the subscription always seemed clearly advertised for client usage, not general API usage. I don't know why people are surprised after hacking the auth out of the client. (Note: in their own clients they can control prompting patterns for caching etc., so it can be cheaper.)
      • esafak 4 minutes ago
        End users -- people who use harnesses -- have subscriptions, so that makes no sense. General API usage is for production.
  • baalimago 1 hour ago
    Or better yet: Connect to some trendy AI (or web3) company's chatbot. It almost always outputs good coding tips.
  • swyx 56 minutes ago
    I mean, the other obvious answer is to plug into the Claude Code proxies that other model companies have made for you:

    https://docs.z.ai/devpack/tool/claude

    https://www.cerebras.ai/blog/introducing-cerebras-code

    Or, I guess, one of the hosted GPU providers.

    If you're basically a homelabber and wanted an excuse to run quantized models on your own device, go for it, but don't lie and mutter under your own tin foil hat that it's a realistic replacement.
  • raw_anon_1111 18 minutes ago
    Or just don't use Claude Code and use Codex CLI. I have yet to hit a quota with Codex working all day; I hit the Claude limits within an hour or less.

    This is with my regular $20/month ChatGPT subscription and my $200 a year (company reimbursed) Claude subscription.