4 comments

  • ebbi 13 hours ago
    The 'multi-model consensus' feature actually looks very useful! I'm going to give this a go.

    A question on OpenRouter - is it just a place to consolidate the various AI models through one billing platform, or does it do more than that? And are the costs slightly more as they take a cut in between?
    • joshstrange 13 hours ago
      > is it just a place to consolidate the various AI models through one billing platform, or does it do more than that

      You can easily switch models, use the cheapest provider (especially for open models), and not have to reach certain "tiers" to get access to limits like you might on OpenAI/Anthropic's direct offerings.

      > And are the costs slightly more as they take a cut in between?

      Roughly 5% more: you buy credits upfront and pay a 5% fee on top. Aside from that you pay the normal listed prices (which have always matched the direct providers, AFAIK).
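      To make that concrete, here's a rough sketch of switching models against their OpenAI-compatible endpoint (the model IDs are just examples; check their catalog for current ones):

        // Sketch only: OpenRouter exposes an OpenAI-compatible chat completions API,
        // so changing providers is mostly a matter of changing the model string.
        async function ask(model: string, prompt: string): Promise<string> {
          const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
            method: "POST",
            headers: {
              Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
              "Content-Type": "application/json",
            },
            body: JSON.stringify({
              model, // e.g. "anthropic/claude-3.5-sonnet" or "openai/gpt-4o"
              messages: [{ role: "user", content: prompt }],
            }),
          });
          const data: any = await res.json();
          return data.choices[0].message.content;
        }

        // Same call, different provider; no separate accounts or usage tiers.
        const a = await ask("openai/gpt-4o", "Summarize this diff");
        const b = await ask("meta-llama/llama-3.1-70b-instruct", "Summarize this diff");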
      • KronisLV 10 hours ago
        Note that you also might need to think a little bit about caching: https://openrouter.ai/docs/guides/best-practices/prompt-caching

        Depending on how the context grows, it can matter quite a bit!
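        For Anthropic models, for instance, you mark the cache breakpoints yourself, so the request body ends up looking roughly like this (a sketch based on my reading of the docs; double-check the field placement there):

          // Rough sketch of a cached prompt sent through OpenRouter to an Anthropic
          // model: the large, stable prefix is marked cacheable, and the per-turn
          // question comes after the breakpoint.
          const stableContext = "<coding guidelines, file tree, etc.>"; // placeholder
          const body = {
            model: "anthropic/claude-3.5-sonnet", // example model ID
            messages: [
              {
                role: "system",
                content: [
                  {
                    type: "text",
                    text: stableContext,
                    cache_control: { type: "ephemeral" }, // cache breakpoint
                  },
                ],
              },
              { role: "user", content: "What changed in src/app.ts?" },
            ],
          };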
        • hivetechs 9 hours ago
          Great callout! Yes, I've tried to follow these to make Consensus compliant with OpenRouter's prompt caching best practices.
      • ebbi 13 hours ago
        Appreciate the reply mate, thank you.
    • hivetechs 10 hours ago
      What's great about OpenRouter is that you have access to all providers and models, and they do the work of standardizing the interface. Our new HiveTechs Consensus IDE configures 8 profiles for your AI conversations, each using its own LLM from OpenRouter, plus unlimited custom profiles: you pick the provider and LLM from a list and name the profile. We also have our own built-in HiveTechs CLI that lets you use any LLM from OpenRouter, with the model list updated daily. So the moment a new model drops, you can test it out without waiting for it to show up in your other favorite apps.
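      Conceptually a profile is just a name bound to one OpenRouter model, something like this (a simplified sketch, not the exact config format):

        // Simplified sketch of a profile: a user-chosen label tied to one model.
        interface Profile {
          name: string;  // e.g. "Backend reviewer"
          model: string; // OpenRouter model ID, e.g. "anthropic/claude-3.5-sonnet"
        }

        const profiles: Profile[] = [
          { name: "Frontend", model: "google/gemini-2.0-flash-001" },
          { name: "Backend", model: "anthropic/claude-3.5-sonnet" },
        ];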
  • infinet 14 hours ago
    My apologies for the digression, but it reminds me of a post I saw a long time ago where a guy installed all the antivirus/antimalware software he could find on a Windows machine. It started an antivirus civil war and Windows fell into a coma within seconds.
    • exe34 14 hours ago
      I haven't admined Windows for 16 years, but back in the day I had this conspiracy theory that when you installed antivirus A you'd find some viruses, then you'd install antivirus B and find some more, but when you went back to antivirus A you'd find a couple more - that the free versions were installing their own viruses. It might just have been because I was using bootleg copies.
  • xnx 13 hours ago
    I started using Gemini and see no need for other models.
    • hivetechs 10 hours ago
      Hey, I fully understand. A model or CLI like Gemini releases a new version and it seems like a new place to call home. However, in this period of rapid AI growth, each provider's new advancements are a reason to switch: today it's Gemini for you, perhaps next week it's Claude or OpenAI. With HiveTechs Consensus you have access to every leading provider at all times, so use the one you love and compare the others any time. You may discover that Gemini excels at frontend but Claude's latest model excels at backend, reducing your development time.
  • vivzkestrel 15 hours ago
    Is this a VSCode fork? How compatible are existing VSCode extensions with this? What is your tech stack?
    • hivetechs 15 hours ago
      No, this is not a fork; I built it from scratch. It is not intended to be used with VSCode extensions. It's an Electron app.

      Desktop Framework
      - Electron - Desktop app with main/renderer process architecture
      - TypeScript - Primary language (strict mode)

      Frontend/UI
      - Monaco Editor - VS Code-style code editing
      - HTML/CSS - UI rendering
      - WebSockets - Real-time communication with backend

      Backend Services
      - Node.js - Runtime
      - Express - Memory Service API server
      - SQLite - Local database for memory persistence
      - Cloudflare D1 - Remote sync for memory backup
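      For a feel of the backend piece, here's a minimal sketch of a local memory service along those lines, assuming Express plus better-sqlite3 (endpoint names and schema are simplified for illustration, not the app's actual API):

        import express from "express";
        import Database from "better-sqlite3";

        // Local SQLite database for memory persistence; remote sync (e.g. to
        // Cloudflare D1) would happen in a separate step.
        const db = new Database("memory.db");
        db.exec(`CREATE TABLE IF NOT EXISTS memories (
          id INTEGER PRIMARY KEY AUTOINCREMENT,
          content TEXT NOT NULL,
          created_at TEXT DEFAULT CURRENT_TIMESTAMP
        )`);

        const app = express();
        app.use(express.json());

        // Persist a conversation snippet.
        app.post("/memory", (req, res) => {
          const info = db.prepare("INSERT INTO memories (content) VALUES (?)").run(req.body.content);
          res.json({ id: info.lastInsertRowid });
        });

        // Return recent memories for the renderer to display.
        app.get("/memory", (_req, res) => {
          res.json(db.prepare("SELECT * FROM memories ORDER BY id DESC LIMIT 50").all());
        });

        app.listen(3777); // port is arbitrary for the sketch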
      • vivzkestrel 7 hours ago
        Interesting, did you ever consider building it with Tauri?