25 comments

  • viraptor 43 days ago
    I've played with this a bit and it's OK. I'd place it somewhere around Sonnet 4.5 level, probably below. But with this aggressive pricing you can just run three copies of the same task, choose the one that succeeded, and still come out way ahead on cost. It's not as good at following instructions as the Claude models and can get lost, but it's still "good enough".

    I'm very happy using it to just "do things". When in-depth debugging or a massive plan is needed, I'd go with something better, but for later going through the motions? It works.
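    That run-several-copies approach is easy to script. A minimal sketch, assuming an OpenAI-compatible endpoint and a verify() check you supply yourself; the endpoint URL and model name below are placeholders, not MiniMax specifics:

        # Best-of-N sketch: fire off N attempts in parallel and keep the
        # first one (in submission order) that passes verification.
        # Endpoint, model name, and verify() are placeholder assumptions.
        from concurrent.futures import ThreadPoolExecutor
        from openai import OpenAI

        client = OpenAI(base_url="https://api.example-host.com/v1", api_key="...")

        def attempt(task: str) -> str:
            resp = client.chat.completions.create(
                model="MiniMax-M2.1",
                messages=[{"role": "user", "content": task}],
            )
            return resp.choices[0].message.content

        def verify(candidate: str) -> bool:
            # Stand-in for a real check: run the tests, lint, review the diff...
            return "def " in candidate

        def best_of_n(task: str, n: int = 3) -> str | None:
            with ThreadPoolExecutor(max_workers=n) as pool:
                for result in pool.map(attempt, [task] * n):
                    if verify(result):
                        return result
            return None

    Even at three attempts per task, a model priced well below Sonnet can stay cheaper per success.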
  • gcanyon 43 days ago
    Would it kill them to use the words "AI coding agent" somewhere prominent?

    "MiniMax M2.1: Significantly Enhanced Multi-Language Programming, Built for Real-World Complex Tasks" could be an IDE, a UI framework, a performance library, or, or...
    • spoaceman7777 43 days ago
      It's not an AI coding agent. It's an LLM that can be used for whatever you'd like, including powering coding agents.
      • pdyc 43 days ago
        That reinforces OP's point that it isn't clear from their wording. I initially thought it was a speech model; then I saw Python, etc., and it took a bit more reading to understand what it actually is.
      • gcanyon 43 days ago
        HA! I *almost* added a disclaimer to the original message that I wasn't certain of my identification, hence the request/complaint that they didn't make it clear. But I figured the message would be more effective if I "confidently got it wrong" rather than asking, so I went with it.
        • martin-t 43 days ago
          Some sad irony: just as saying the wrong thing is more likely to get you a reply, using a poor title gets them more engagement.
    • tw1984 43 days ago
      Its main Chinese competitor, GLM, has made something like 50 cents USD per user over the past 6 months from its 40 million "developer users". Calling your flagship model an "AI coding agent" is like telling investors "we are doing this for fun, not for money".
  • Tepix 43 days ago
    The weights have now been released on Hugging Face.

    https://huggingface.co/MiniMaxAI/MiniMax-M2.1
  • kachapopopow 43 days ago
    I think people should stop comparing to Sonnet and compare to Opus instead, since it's so far ahead at producing code I would actually want to use (Gemini 3 Pro tends to be lacking in generalization and wants things to follow its own style rather than adapting).

    Whatever benchmarks Opus is ahead in should be treated as a very important metric of proper generalization in models.
    • azuanrb 43 days ago
      I generally prefer Sonnet as the comparison too. Opus, as good as it is, is just too expensive. The "best" model is the one I can use, not the one I can't afford.

      These days I just use Sonnet/Haiku by default. In most cases it's more than good enough for me. It's plenty on the $20 plan.

      With MiniMax, or GLM-4.7, some people like me are just looking for Sonnet-level capability at a much cheaper price.
      • mjburgess 43 days ago
        Are you using GLM-4.7? I've just spent a fortune on Opus, and I heard GLM was close -- but after integrating it into Cursor, it seems to spin forever, lose track of tool use, and generate partial plans. I did look into using it with the Claude CLI tool, so it could be Cursor-specific -- but I haven't had the best experience despite going for the Pro plan with them. Any advice on how you're using GLM effectively? If at all.

        At the moment Opus is the only model I can trust: when it generates "refactoring work", it can actually do the refactoring.
        • azuanrb 43 days ago
          I'm on the Lite plan. For coding, I still prefer Claude because the models are simply better. I mainly use CLI tools like Claude Code and OpenCode.

          I'm also managing a few projects and teams. One way I'm getting value from my GLM subscription is by building a daily GitHub PR summary bot using a GitHub Action. It's good enough for me to keep up with the team and to monitor higher-risk PRs.

          Right now I'm using GLM more as an agent/API rather than as a coding tool. Claude works best for agentic coding for me.

          I'm on the Claude $20 plan and I usually start with Haiku, then switch to Sonnet or Opus for harder or longer tasks.
        • sumedh 43 days ago
          > I did look into using it with the Claude CLI tool, so it could be Cursor-specific

          Claude Code with GLM seems OK to me. I just use it as a backup LLM in case I hit usage limits, but for some light refactoring it did the job well.

          Are you also facing issues with Claude Code and GLM?
      • kachapopopow 42 days ago
        No matter the price, they're far cheaper than a developer, and Opus / Gemini 3 Pro are both at a level where they're really useful pair programmers. Opus at times can be given a spec to implement, and it will do it after 30 minutes with no input from me.
      • baq 43 days ago
        Are you counting price per token or price per successful task? I'm pretty sure Opus 4.5 is cheaper per task than Sonnet in some use cases.
        • azuanrb 43 days ago
          Per successful task. The results are mixed. Like you mentioned, it can be cheaper, but only in some use cases. I'm only on the $20 plan. If I use Opus and it's not as efficient for my current tasks, I'll burn through my limit pretty fast and end up unable to use anything for the next few hours.

          Whereas with Sonnet/Haiku, I'm much more guaranteed to have 100% AI assistance throughout my coding session. That matters more to me right now. Just a tradeoff I'm willing to make.
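          To make the per-task framing concrete, a toy calculation (every number below is invented for illustration): a cheap model that fails half the time and burns more tokens flailing can cost more per successful task than a pricier model that usually lands it on the first try.

              # Toy cost-per-successful-task comparison. Prices, token
              # counts, and success rates are all made up for illustration.
              def cost_per_success(price_per_mtok, tokens_per_attempt, p_success):
                  expected_attempts = 1 / p_success  # mean of a geometric distribution
                  return price_per_mtok * tokens_per_attempt / 1e6 * expected_attempts

              cheap = cost_per_success(3.0, 400_000, 0.5)    # flails, retries often
              pricey = cost_per_success(15.0, 120_000, 0.9)  # usually one clean pass
              print(f"cheap:  ${cheap:.2f} per successful task")   # $2.40
              print(f"pricey: ${pricey:.2f} per successful task")  # $2.00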
      • andai 43 days ago
        Opus is 3x cheaper now.

        I think it's still not on the $20 plan though, which is sad.
        • azuanrb 43 days ago
          It's been available for a few weeks now.

          > Claude Opus 4.5, our frontier coding model, is now available in Claude Code for Pro users. Pro users can select Opus 4.5 using the /model command in their terminal.

          > Opus 4.5 will consume rate limits faster than Sonnet 4.5. We recommend using Opus for your most complex tasks and using Sonnet for simpler tasks.
        • sheepscreek 42 days ago
          Use Claude Opus in Antigravity. Google is very generous with the limits. The best part is, if you hit your limit, you can switch to Gemini Pro High.

          I think Google is able to do this because they host Claude on their own TPUs in their datacenters (probably for Vertex AI customers), so they can undercut just about anyone, including Anthropic, on costs!

          No matter which model you start with, having the other frontier model as a backup is fantastic. Essentially you're getting 2x the limit.
        • WiSaGaN 43 days ago
          It is now. But the limit on the $20 plan is quite low and easy to use up.
  • jondwillis 43 days ago
    > MiniMax has been continuously transforming itself in a more AI-native way. The core driving forces of this process are models, Agent scaffolding, and organization. Throughout the exploration process, we have gained increasingly deeper understanding of these three aspects. Today we are releasing updates to the model component, namely MiniMax M2.1, hoping to help more enterprises and individuals find more AI-native ways of working (and living) sooner.

    This compresses to: "We are updating our model, MiniMax, to 2.1. Agent harnesses exist and agents are getting more capable."

    A good model and agent harness, pointed at the task of writing this post, might suggest less verbosity and complexity. It comes off as fake and hype-chasing to me, even if your model is actually good. I disengage there.

    I saw y'all give a lightning talk recently and it was similarly hype-y. Perhaps this is a translation or cultural thing.
    • tw1984 43 days ago
      So when MiniMax released a pretty capable model, you chose to ignore the model itself, focus on a single sentence in the release notes, and start bad-mouthing it.

      Is that a cultural thing?
      • pembrook 43 days ago
        It's called bikeshedding, and yes, it's a cultural thing on HN. [1]

        Most people here are big-company worker bees who take zero risks and do very little of substance.

        In these organizations, it's common for large groups of people to get together in "meetings" and endlessly nitpick surface-level details of unimportant things while completely missing the big picture, because the big picture is far too complex to allow for easy opinions or smart-sounding critique.

        [1] https://en.wikipedia.org/wiki/Law_of_triviality
      • jondwillis 43 days ago
        It's the first thing in this press release. If it starts with garbage, I'm going to assume it's all garbage.
        • tw1984 42 days ago
          You are more than welcome to call it garbage or whatever else you like. They will just catch up fast and eat your lunch in 6-12 months' time.

          BTW, full weights are now available for download.
          • jondwillis 42 days ago
            I'm not even sitting at the table; I'm a spectator. What's your argument/allegiance here? The original article is hyped-up drivel. The model could be amazing, and that would still be the case.
      • simlevesque 43 days ago
        If I use software, I need to trust it.
        • tw1984 43 days ago
          A model is not software; it is a bunch of weights.

          You are more than welcome to pick whatever model or software you choose to trust; that is totally fine. However, that is vastly different from bad-mouthing a model or piece of software just because its release notes contain a single sentence you don't like.
          • LoganDark 43 days ago
            The API is software. You don't get the weights.
            • logicprog 43 days ago
              The weights are open.
              • homarp 43 days ago
                Here: https://huggingface.co/MiniMaxAI/MiniMax-M2.1

                GGUF: https://huggingface.co/unsloth/MiniMax-M2.1-GGUF
              • LoganDark 42 days ago
                Huh, I couldn't find that in the article when I posted my comment. I checked again now and it's there.
    • zaptrem 43 days ago
      Not sure it's a cultural thing, since most of the copy coming out of DeepSeek has been pretty straightforward.
  • tomcam 43 days ago
    I still can't figure out what it does.
    • esafak 43 days ago
      It's an LLM for coding.
    • yinuoli 43 days ago
      It's a neural network model; it generates text that continues a given text.
    • prmph 43 days ago
      You are not alone.
    • tucnak 43 days ago
      You should ask ChatGPT.
    • dist-epoch 43 days ago
      Money, it does money.
      • tomcam 41 days ago
        NOW I UNDERSTAND
  • gempir 43 days ago
    Very anecdotal, but for me this model has very weak prompt adherence. I compared it a little to Gemini Flash 3.0, and simple instructions like "don't use markdown tables in the output" were very hard to get M2.1 to follow.

    It took me about 5 prompt iterations until it finally listened.

    But it's very good: better than Flash 3.0 in terms of code output and reasoning, while being cheaper.
  • p5v 43 days ago
    Has anyone used this in earnest with something like OpenCode? Over the past few months I've tested a dozen models that were claimed to be nearly as good as Claude Code or Codex, but the overall experience when using them with OpenCode was close to abysmal. Not even a single one was able to do a decent code-editing job on a real-world codebase.
    • t1amat 43 days ago
      With M2, yes. I've used it in Claude Code (native tool calling), Roo/Cline (custom tool parsing), etc. It's quite good, and it was for some time the best model to self-host. At 4-bit it can fit on 2x RTX 6000 Pro (~200GB VRAM) with about 400k context using an fp8 KV cache. It's very fast due to its low active-parameter count, stable at long context, and quite capable in any agent harness (its training specialty). M2.1 should be a good bump beyond M2, which was undertrained relative to even much smaller models.
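      The fit is easy to sanity-check with back-of-the-envelope arithmetic. The numbers below are rough assumptions (a ~230B total parameter count and ~4.5 effective bits per weight after quantization overhead), not official figures:

          # Rough VRAM budget for a ~230B-total-parameter MoE at ~4-bit.
          # All figures are approximations for illustration only.
          total_params = 230e9        # assumed total parameter count
          bits_per_weight = 4.5       # 4-bit quant plus scales/zero-points
          weights_gb = total_params * bits_per_weight / 8 / 1e9   # ~129 GB
          vram_gb = 2 * 96            # two 96 GB RTX 6000 Pro cards
          kv_budget_gb = vram_gb - weights_gb - 10  # reserve ~10 GB headroom
          print(f"weights ~{weights_gb:.0f} GB, ~{kv_budget_gb:.0f} GB left for KV cache")

      An fp8 KV cache roughly halves per-token cache memory versus fp16, which is how a few hundred thousand tokens of context fit in what's left.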
  • Invictus0 43 days ago
    How is everyone monitoring the skill/utility of all these different models? I am overwhelmed by how many there are, and by the challenge of tracking their capability across so many different modalities.
    • redman25 43 days ago
      https://www.swebench.com
      https://swe-rebench.com
      https://livebench.ai/#/
      https://eqbench.com/#
      https://contextarena.ai/?needles=8
      https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/
      https://artificialanalysis.ai/leaderboards/models
      https://gorilla.cs.berkeley.edu/leaderboard.html
      https://github.com/lechmazur/confabulations
      https://dubesor.de/benchtable
      https://help.kagi.com/kagi/ai/llm-benchmark.html
      https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard
      • Alifatisk 43 days ago
        I'd stick to Artificial Analysis.
        • pylotlight 43 days ago
          That has many of its own problems as well.
    • spoaceman7777 43 days ago
      This is the best summary, in my opinion. You can also see the individual scores on the benchmarks they use to compute their overall scores.

      It's nice and simple in the overview mode, though: it breaks things down into an intelligence ranking, a coding ranking, and an agentic ranking.

      https://artificialanalysis.ai/
      • Invictus0 42 days ago
        Unfortunately it's completely unusable on mobile.
        • spoaceman7777 41 days ago
          Works fine for me, but you could also just turn on desktop view in your mobile browser if it isn't big enough on your screen.

          I use Firefox Mobile, so perhaps there is a difference on Chromium-based browsers?
  • esafak 43 days ago
    > It exhibits consistent and stable results in tools such as Claude Code, Droid (Factory AI), Cline, Kilo Code, Roo Code, and BlackBox, while providing reliable support for Context Management mechanisms including Skill.md, Claude.md/agent.md/cursorrule, and Slash Commands.

    One of the demos shows them using Claude Code, which is interesting. And the next sections are titled 'Digital Employee' and 'End-to-End Office Automation'. Their ambitions obviously go beyond coding. A sign of things to come...
    • atombender 43 days ago
      Claude Code doesn't officially support using other, non-Anthropic models, right? So did they patch the code, fake the Claude API, or use some other hack to get around that?
      • homarp 43 days ago
        There are a few 'Claude' proxies on GitHub.

        llama.cpp recently added Anthropic API support: https://github.com/ggml-org/llama.cpp/pull/17570
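        The basic shape of such a proxy is small. A minimal sketch (hypothetical, and ignoring the streaming, tool-call, and image handling real proxies need) that accepts Anthropic-style /v1/messages requests and forwards them to an OpenAI-compatible backend:

            # Hypothetical Anthropic-to-OpenAI shim; real projects do much more.
            from fastapi import FastAPI, Request
            import httpx

            UPSTREAM = "http://localhost:8000/v1/chat/completions"  # assumed local server
            app = FastAPI()

            @app.post("/v1/messages")
            async def messages(request: Request):
                body = await request.json()
                msgs = []
                if "system" in body:  # Anthropic keeps the system prompt out of messages
                    msgs.append({"role": "system", "content": body["system"]})
                for m in body["messages"]:
                    content = m["content"]
                    if isinstance(content, list):  # flatten Anthropic content blocks
                        content = "".join(b.get("text", "") for b in content)
                    msgs.append({"role": m["role"], "content": content})
                async with httpx.AsyncClient() as client:
                    r = await client.post(UPSTREAM, json={
                        "model": body.get("model", "local"),
                        "messages": msgs,
                        "max_tokens": body.get("max_tokens", 1024),
                    })
                text = r.json()["choices"][0]["message"]["content"]
                return {  # answer in Anthropic's response shape
                    "id": "msg_local", "type": "message", "role": "assistant",
                    "content": [{"type": "text", "text": text}],
                    "stop_reason": "end_turn",
                }

        A real proxy also has to translate streaming events and tool-call schemas, which is where most of the work is.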
    • jimmydoe 43 days ago
      They are going to IPO on the HKEX in a few weeks, so some hype is necessary. Not too far-fetched IMO; it's pretty much the same as the Anthropic playbook.
      • tw1984 43 days ago
        The Anthropic playbook includes the false claim, made publicly by its CEO, that "in six months AI would be writing 90 percent of code". He made that claim 10 months ago. Intentionally misleading investors is a criminal offence in many countries.

        MiniMax is like 100x more honest.
        • fluoridation 43 days ago
          Does it count as misleading if you honestly believe what you're saying but are simply mistaken?
        • sumedh 43 days ago
          > in six months AI would be writing 90 percent of code

          Are you still writing code by hand?
  • m00dy 43 days ago
    I used gemini-3-pro-preview on Deepwalker [0]. It was good; then I switched to gemini-3-flash, which is OK. It gets the job done. I'm looking for some alternatives such as GLM and MiniMax, and I'm very curious about their agentic performance on long-running tasks with reasoning.

    [0]: https://deepwalker.xyz
  • sosodev 43 days ago
    I've spent a little bit of time testing MiniMax M2. It's quite good given the small size, but it did make some odd mistakes and struggled with precise instructions.
    • viraptor 43 days ago
      This is an announcement for M2.1, not M2. It got a decent bump in agent capabilities.
  • stpedgwdgfhgdd 43 days ago
    Internal Server Error
  • big-chungus4 42 days ago
    Can you please fix the login? When I try to log in, it says: "Unable to process request due to missing initial state. This may happen if browser sessionStorage is inaccessible or accidentally cleared. Some specific scenarios are - 1) Using IDP-Initiated SAML SSO. 2) Using signInWithRedirect in a storage-partitioned browser environment."
  • jdright 43 days ago
    https://www.minimax.io/news/minimax-m21
  • mr_o47 43 days ago
    I won't say it's on the same level as the Claude models, but it's definitely good at coming up with frontend designs.
  • integricho 43 days ago
    Their site crashes my phone browser while scrolling. Is that the expected quality of output of their product?
    • Tepix 43 days ago
      Should a website be able to crash a browser?
    • jedisct1 43 days ago
      If a website can crash your browser, the problem is your browser...
  • sillyboi 42 days ago
    Internal server error..
  • p-e-w 43 days ago
    One of the cited reviews goes:

    "We're excited for powerful open-source models like M2.1 […]"

    Yet as far as I can tell, this model isn't open at all. Not even open weights, never mind open source.
    • viraptor 43 days ago
      It's scheduled for release; they jumped the gun with the news. But as far as we know, it's still coming out, just like M2.
      • p-e-w 43 days ago
        I don't get it. What's the holdup? Uploading a model to Hugging Face isn't exactly difficult.
    • NitpickLawyer 43 days ago
      The repo was made public a few minutes ago:

      https://huggingface.co/MiniMaxAI/MiniMax-M2.1
    • bearjaws 43 days ago
      Yeah, I don't see any way to download this; Ollama has it as cloud-only.
  • boredemployee 43 days ago
    Internal Server Error
  • erdemo 42 days ago
    The intro video is as cringe as their AI agent's name.
  • Yash16 43 days ago
    [dead]
  • GavinNewsom 42 days ago
    [flagged]
  • maximgeorge 43 days ago
    [dead]
  • monster_truck 43 days ago
    That they are still training models on Objective-C is all the proof you need that it will outlive Swift.

    When is someone going to vibe-code Objective-C 3.0? Borrowing all of the actually good things that have happened since 2.0 is closer than you'd think, thanks to LLVM and friends.
    • viraptor 43 days ago
      Why would they not? Existing Objective-C apps will still need updates and various work. Models are still trained on assembler for architectures that don't meaningfully exist today, as well.
    • victorbjorklund 43 days ago
      I'm sure you can find some COBOL code in many of the training sets. Not sure I would build my next startup using COBOL.