3 comments

  • divan 3 hours ago
    One of the main reasons I stick with Claude Code (also for non-coding tasks; I think the name is a misnomer) is the fixed-price plan. Pretty much any other open-source alternative requires an API key, which means that as soon as I start using it _for real_, I'll start overpaying and/or hitting limits too fast. At least that was my initial experience with the APIs from OpenAI/Claude/Gemini.

    Am I biased/wrong here?
    • giancarlostoro 5 minutes ago
      You're not wrong, though I suspect the AI "bubble burst" begins to happen when companies like Anthropic stop giving us so much compute for 'free'. The only hope is that, as things get better, their cheaper models get as good as their best models today, so it costs drastically less to use them.
    • segmenta 3 hours ago
      Yep, this is a fair take. Token usage shoots up fast when you do agentic stuff for coding. I end up doing the same thing too.

      But for most background automations you might actually run, the token usage is way lower and probably an order of magnitude cheaper than agentic coding. And a lot of these tasks run well on cheaper models or even open-source ones.

      So I don't think you are wrong at all. It is just that I believe the expensive token pattern mostly comes from coding-style workloads.
      • kej 2 hours ago
        I don't doubt you, but it would be interesting to see some token usage measurements for various tasks like you describe.
        • segmenta 2 hours ago
          For example, the NotebookLM-style podcast generator workflow in our demo uses around 3k tokens end to end. Using Claude Sonnet 4.5's blended rate (about $4.50 per million tokens for a typical input/output mix), you can run this every day for roughly eight months for a bit over three dollars. Most non-coding automations end up in this same low range.
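
          The rough math, as a quick Python sketch (the 3k tokens and ~$4.50/M blended rate are the figures above; 240 days for "roughly eight months" is an assumption):

            # Back-of-envelope cost for running the workflow daily.
            # Assumed inputs: ~3k tokens per run, ~$4.50 per 1M tokens blended,
            # and 240 days as a stand-in for "roughly eight months".
            tokens_per_run = 3_000
            blended_rate_per_million = 4.50  # USD per 1M tokens
            days = 240

            cost_per_run = tokens_per_run / 1_000_000 * blended_rate_per_million
            total_cost = cost_per_run * days
            print(f"${cost_per_run:.4f} per run, ${total_cost:.2f} over {days} days")
            # -> $0.0135 per run, $3.24 over 240 days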
  • nl 3 hours ago
    I'm increasingly seeing code-adjacent people use coding agents for non-coding things, because the tooling supports it better and the agents work really well.

    It's an interesting area, and I'm glad to see someone working on this.

    The other program in the space that I'm aware of is Block's Goose.
    • segmenta 3 hours ago
      Yep, totally agree. We actually had an earlier web version, and the big learning was that without access to code-related tools the agent feels pretty limited. That pushed us toward a CLI where it can use the full shell and behave more like a real worker.

      Really appreciate the support and the Goose pointer. Would love to hear what you think of RowboatX once you try it.
  • jckahn 6 hours ago
    Can this use local LLMs?
    • segmenta 6 hours ago
      Yes - you can use local LLMs through LiteLLM and Ollama. Would you like us to support anything else?
      • thedangler 5 hours ago
        LM Studio?
        • ramnique 5 hours ago
          Yes, because LM Studio is OpenAI-compatible. When you run rowboatx the first time, it creates a ~/.rowboat/config/models.json. You can then configure LM Studio there. Here is an example: https://gist.github.com/ramnique/9e4b783f41cecf0fcc8d92b277d4d308
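
          A minimal sketch of that "OpenAI-compatible" point, in case it helps anyone verify their LM Studio server before wiring it into models.json (assumes LM Studio's default local endpoint http://localhost:1234/v1; the model id is a placeholder for whatever you have loaded):

            # Quick check against LM Studio's OpenAI-compatible local server.
            # Assumptions: default endpoint http://localhost:1234/v1 and a model
            # already loaded in LM Studio; "local-model" is a placeholder id.
            from openai import OpenAI

            client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")  # key is ignored locally

            resp = client.chat.completions.create(
                model="local-model",  # replace with the model id shown in LM Studio
                messages=[{"role": "user", "content": "Say hello in one sentence."}],
            )
            print(resp.choices[0].message.content)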