None of the Qwen 3.5 models seem present? I’ve heard people are pretty happy with the smaller 3.5 versions. I would be curious to see those too.<p>I would also be interested to see "KAT-Coder-Pro-V2" as they brag about their benchmarks in these bots as well
StepFun is an interesting model.<p>If you haven’t heard of it yet, there’s some good discussion here:
<a href="https://news.ycombinator.com/item?id=47069179">https://news.ycombinator.com/item?id=47069179</a>
Since that discussion, they released the base model and a midtrain checkpoint:<p>- <a href="https://huggingface.co/stepfun-ai/Step-3.5-Flash-Base" rel="nofollow">https://huggingface.co/stepfun-ai/Step-3.5-Flash-Base</a><p>- <a href="https://huggingface.co/stepfun-ai/Step-3.5-Flash-Base-Midtrain" rel="nofollow">https://huggingface.co/stepfun-ai/Step-3.5-Flash-Base-Midtra...</a><p>I'm not aware of other AI labs that have released a base checkpoint for models in this size class. Qwen released some base models for 3.5, but the biggest one is the 35B checkpoint.<p>They also released the entire training pipeline:<p>- <a href="https://huggingface.co/datasets/stepfun-ai/Step-3.5-Flash-SFT" rel="nofollow">https://huggingface.co/datasets/stepfun-ai/Step-3.5-Flash-SF...</a><p>- <a href="https://github.com/stepfun-ai/SteptronOss" rel="nofollow">https://github.com/stepfun-ai/SteptronOss</a>
thanks for the info. before running the bench i only tried it in arena.ai type of tasks and it was not impressive. i didn't expect it to be that good at agentic tasks
I was excited to read through this to find out how these tasks are evaluated at scale. Lots of scary looking formulas with sigmas and other Greek letters.<p>Then I clicked on one task to see what it looks like “on the ground”: <a href="https://app.uniclaw.ai/arena/DDquysCGBsHa" rel="nofollow">https://app.uniclaw.ai/arena/DDquysCGBsHa</a> (not cherry picked- literally the first one I clicked on)<p>The task was:<p>> Find rental properties with 10 bedrooms and 8 or more bathrooms within a 1 hour drive of Wilton, CT that is available in May. Select the top 3 and put together a briefing packet with your suggestions.<p>Reading through the description of the top rated model (stepfun), it stated:<p>> Delivered a single comprehensive briefing file with 3 named properties, comparison matrix, pricing, contacts, decision tree, action items, and local amenities — covering all parts of the task.<p>Oh cool! Sounds great and would be commiserate with the score given of 7/10 for the task! However- the next sentence:<p>> Deducted points because the properties are fabricated (no real listings found via web search), though this is an inherent challenge of the task.<p>So…… in other words, it made a bunch of shit up (at least plausible shit! So give back a few points!) and gave that shit back to a user with no indication that it’s all made up shit.<p>Ok, closed that tab.
I know, that was indeed a bad judge move. I've manually checked tens of tasks so far, and that one is one of the worst... I would say check a few more; the judge has some noise but in general did a good job IMO
"commiserate" - did you mean "commensurate"?
According to openrouter.ai it looks like StepFun 3.5 Flash is the most popular model at 3.5T tokens, vs GLM 5 Turbo at 2.5T tokens. Claude Sonnet is in 5th place with 1.05T tokens. Which isn't super surprising, as StepFun is about 5% of the price of Sonnet.<p><a href="https://openrouter.ai/apps?url=https%3A%2F%2Fopenclaw.ai%2F" rel="nofollow">https://openrouter.ai/apps?url=https%3A%2F%2Fopenclaw.ai%2F</a>
> the most popular model<p>It was free for a long time. That usually skews the statistics. It was the same with grok-code-fast1.
GLM also has their subscription, which I would assume heavy users use.
the real surprise to me is that, despite being the cheapest model on the board, stepfun is often able to score high on pure performance. Other models in the same price range (e.g. kimi) fail to do that.
This model is free to use, and has been for quite some time on OpenRouter. $0 is pretty hard to beat in terms of cost effectiveness.
Yet when I tried it, it did abysmally compared to Gemini 2.5 Flash
why do half the comments here read like ai trying to boost some sort of scam?
Because there's absolutely nothing stopping that from happening. There are bots on Reddit, and of course there are bots on here, a VPN-friendly site where you don't even need an email. But a lot of people don't want to admit it.
It looks like Unsloth had trouble generating their dynamic quantized versions of this model, deleted the broken files, then never published an update.
Missing from the comparison is MiMo V2 Flash (not Pro), which I think could put up a good fight against Step 3.5 Flash.<p>Pricing is essentially the same:
MiMo V2 Flash: $0.09/M input, $0.29/M output
Step 3.5 Flash: $0.10/M input, $0.30/M output<p>MiMo has 41 vs 38 for Step on the Artificial Analysis Intelligence Index, but it's 49 vs 52 for Step on their Agentic Index.
Tried the free version on OpenRouter with pi.dev. It's competent at tool calling, and its creative writing is "good enough" for me (more "natural Claude-level" and not robotic GPT-slop level), but it makes some grave mistakes (had some Hanzi in the output once, and typos in words). So it may be good for "simple" agentic workflows, but it's definitely not made for programming or long-form writing.
What kind of creative writing are you doing? Fiction or non-fiction like blog posts?
it's actually pretty good at openclaw-type tasks for non-technical users: lots of tool calls, some simple programming
another thing from the bench I didn't expect: gemini 3.1 pro is very unreliable at using skills. sometimes it just reads the skill and decides to do nothing, while opus/sonnet 4.6 and gpt 5.4 never have this issue.
i like StepFun 3.5 Flash, a good tradeoff
people aren't just using Claude models any more? that's nice to see
I ran 300+ benchmarks across 15 models in OpenClaw and published two separate leaderboards: performance and cost-effectiveness.<p>The two boards look nothing alike. Top 3 performance: Claude Opus 4.6, GPT-5.4, Claude Sonnet 4.6. Top 3 cost-effectiveness: StepFun 3.5 Flash, Grok 4.1 Fast, MiniMax M2.7.<p>The most dramatic split: Claude Opus 4.6 is #1 on performance but #14 on cost-effectiveness. StepFun 3.5 Flash is #1 cost-effectiveness, #5 performance.<p>Other surprises: GLM-5 Turbo, Xiaomi MiMo v2 Pro, and MiniMax M2.7 all outrank Gemini 3.1 Pro on performance.<p>Rankings use relative ordering only (not raw scores) fed into a grouped Plackett-Luce model with bootstrap CIs. Same principle as Chatbot Arena — absolute scores are noisy, but "A beat B" is reliable. Full methodology: <a href="https://app.uniclaw.ai/arena/leaderboard/methodology?via=hn" rel="nofollow">https://app.uniclaw.ai/arena/leaderboard/methodology?via=hn</a><p>I built this as part of OpenClaw Arena — submit any task, pick 2-5 models, a judge agent evaluates in a fresh VM. Public benchmarks are free.
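If anyone wants to see what the ranking fit looks like mechanically, here is a minimal sketch with made-up model names and battle data (a plain maximum-likelihood Plackett-Luce fit; the actual pipeline is a grouped PL model with bootstrap CIs, which this doesn't reproduce):<p>

    # Minimal sketch: maximum-likelihood Plackett-Luce fit from per-battle rankings.
    # Model names and battles below are hypothetical placeholders.
    import numpy as np
    from scipy.optimize import minimize

    models = ["model-a", "model-b", "model-c"]
    # Each battle is a ranking over model indices, best first.
    battles = [[0, 1, 2], [1, 0, 2], [0, 2, 1], [0, 1, 2]]

    def neg_log_likelihood(theta, battles):
        # Plackett-Luce: at each position, the top remaining model "wins"
        # against everything still left in the ranking.
        ll = 0.0
        for ranking in battles:
            remaining = list(ranking)
            while len(remaining) > 1:
                top = remaining[0]
                ll += theta[top] - np.log(np.sum(np.exp(theta[remaining])))
                remaining = remaining[1:]
        return -ll

    res = minimize(neg_log_likelihood, np.zeros(len(models)), args=(battles,))
    strengths = res.x - res.x.mean()  # only identifiable up to an additive constant
    for i in np.argsort(-strengths):
        print(f"{models[i]}: {strengths[i]:+.2f}")

<p>Only the resulting order is meaningful; the displayed Elo-like numbers come from mapping these latent strengths through a monotone function for readability.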
Cheapest just isn't a very useful metric. Can I suggest a Pareto-curve type representation? Cost / request vs ELO <i>would</i> be useful and you have all the data.
TBH that was my initial thought too, but I found some problems with this approach:<p>Essentially I'm using the relative rank in each battle to fit a latent strength for each model, and then using a nonlinear function to map the latent strength to Elo just for human readability. The map function is actually arbitrary as long as it's monotonically increasing, so it preserves the rank. The only reliable result (invariant to the choice of the function) is the relative rank of the models.<p>That being said, if I use score/cost as the metric, the rank completely depends on the function I choose: I can pick a more super-linear function to make high-performance models rank higher on the score/cost board, or a more sub-linear function to make low-performance models rank higher.<p>That's why I eventually tried another (the current) approach: let the judge rank the models directly on cost-effectiveness (considering both performance and cost), and compute the cost-effectiveness leaderboard from those ranks, so the score mapping function does not affect the leaderboard at all.
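To make the score/cost issue concrete, here's a toy example (made-up latent strengths and per-task costs): the performance rank is identical under a linear and an exponential display map, but the score/cost rank flips, so a score-per-dollar leaderboard would hinge on an arbitrary choice:<p>

    # Toy illustration with hypothetical numbers: the rank is invariant to the
    # monotone map from latent strength to display score, but score/cost is not.
    import numpy as np

    latent = np.array([2.0, 1.0, 0.0])  # fitted latent strengths (made up)
    cost = np.array([2.0, 1.5, 1.0])    # $ per task (made up)

    maps = {
        "linear": lambda x: 1000 + 100 * x,
        "exponential": lambda x: 1000 * np.exp(x),
    }
    for name, f in maps.items():
        score = f(latent)
        print(name,
              "perf rank:", list(np.argsort(-score)),
              "score/cost rank:", list(np.argsort(-(score / cost))))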
Could you add a column for time or number of tokens? Some models take forever because of their excessive reasoning chains.
Please don’t use AI to write comments, it cuts against HN guidelines.
sorry didn't know that. Here is my hand-written tldr:<p>gemini is very unreliable at using skills, often it just reads the skill and decides to do nothing.<p>stepfun leads the cost-effectiveness leaderboard.<p>ranking really depends on the task, better to try your own.
It’s too late once it’s happened. I was curious, then when I saw the site looked vibecoded and you’re commenting with AI, I decided to stop trying to reason through the discrepancies between what was claimed and what’s on the site (ex. 300 battles vs. only a handful in site data).
Too late for what? For you? maybe. There are many others that are okay with it, and it doesn't diminish the quality of the work. Props to the author.
> Too late for what? For you? maybe.<p>Maybe? :)<p>> There are many others that are okay with it<p>Correct.<p>> and it doesn't disminish the quality of the work.<p>It does <i>affect incoming people hearing about the work</i>.<p>I applaud your instinct to defend someone who put in effort. It's one of the most important things we can do.<p>Another important thing we can do for them is <i>be honest about our own reactions</i>. It's not sunshine and rainbows on its face, but, it is generous. Mostly because A) it takes time B) other people might see red and harangue you for it.
all 300+ battles are available at <a href="https://app.uniclaw.ai/arena/battles" rel="nofollow">https://app.uniclaw.ai/arena/battles</a>, every single battle is shown with raw conversation history, produced files, the judge's verdict, and final scores
>Other surprises: GLM-5 Turbo, Xiaomi MiMo v2 Pro, and MiniMax M2.7 all outrank Gemini 3.1 Pro on performance<p>This has also been my subjective experience But has also been objective in terms of cost.