GLM-5V-Turbo is a model I wanted to like for its speed and API reliability, but it didn't perform well in our coding and reasoning testing. More recent open-source models have made it obsolete. GLM 5.1 is light years ahead of it on everything except speed, so I'm not sure why it's still being served.

Comprehensive evaluation results at https://gertlabs.com/rankings
> but it didn't perform well in our coding and reasoning testing

> Comprehensive evaluation results at https://gertlabs.com/rankings

But if you go to the linked site, it seems like the only thing that's part of the evaluation is how well the models play various games? I suppose that counts as "reasoning", but I don't see how coding ability is tested.
"Games" is loosely defined here, as we run the bench across hundreds of unique environments. For some, the models write code to play a game, either one-shot or via a harness where they can iterate and use tools. Some they play directly, making a decision on each game tick. Some are real-time, giving the models a harness where they can write code handlers or submit decisions to interact with the environment directly.

Coding is what we test for most heavily. Testing this via a game format (instead of correct/incorrect answers) lets us score code objectively, scale to smarter models, and directly compare performance across models. When we built the first iteration last year, I was surprised by how well it mapped to subjective experience with using models for coding. Games really are great for measuring intelligence.
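To make the "decide each game tick" setup concrete, here is a minimal sketch of that kind of harness loop. The toy environment and the `decide` callable are illustrative stand-ins, not our actual benchmark code; in practice `decide` would prompt the model with the JSON state and parse an action out of its reply.

```python
from typing import Callable

class CountdownEnv:
    """Toy environment: reach zero in as few ticks as possible."""
    def __init__(self, start: int = 10):
        self.value = start
        self.ticks = 0

    def observe(self) -> dict:
        return {"value": self.value, "ticks": self.ticks}

    def step(self, action: int) -> bool:
        self.value -= max(1, min(action, 3))  # legal moves: subtract 1..3
        self.ticks += 1
        return self.value > 0  # True while the game is still running

def play(env: CountdownEnv, decide: Callable[[dict], int]) -> int:
    """Run one episode, asking the model (via decide) once per tick."""
    running = True
    while running:
        state = env.observe()
        running = env.step(decide(state))
    return env.ticks  # lower is better

# Usage: swap the lambda for a real model call that receives the JSON state.
print(play(CountdownEnv(), lambda state: 3))
```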
GLM-5.1 does not support image input.
This may be a strange request, but is it at all possible to include Cursor's Composer models in your tests?
I think the point is to use them both, with GLM 5.1 delegating vision tasks to GLM-5V-Turbo.
Click coordinates. Agentic GUI use is really annoying when the multimodal agent cannot click on x,y coordinates.

I tested Qwen3.6, Gemma4, and Nemotron3-nano-omni. They fully hallucinate x,y coords.
(I did not try GLM-5V yet.)

GPT-5.5 can easily do it. But also Vocaela, a tiny 500M model, is quite good at it. Hope they improve the training for x,y clicking soon on the smallish multimodals.

Recently slopped an HTTP service together just so my local models can click, instead of relying on all the wild ways agents currently hack into the browser (browser-use, browser-harness, agent-browser, dev-browser, etc.): https://github.com/julius/vocaela-click-coords-http
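The core of such a service is tiny. This is a hedged sketch of the idea rather than the linked repo's actual code: the endpoint, model name, and reply format here are assumptions. It just forwards a screenshot plus a target description to a local OpenAI-compatible vision model and parses an x,y pair back out.

```python
import base64
import re

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
LOCAL_VLM = "http://localhost:8080/v1/chat/completions"  # e.g. llama.cpp or vLLM server

@app.post("/click-coords")
def click_coords():
    image_b64 = base64.b64encode(request.files["screenshot"].read()).decode()
    target = request.form["target"]  # e.g. "the blue Submit button"

    resp = requests.post(LOCAL_VLM, json={
        "model": "local-vlm",  # placeholder model name
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Return the pixel coordinates of: {target}. Answer only as (x, y)."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    }).json()

    answer = resp["choices"][0]["message"]["content"]
    match = re.search(r"\(?\s*(\d+)\s*,\s*(\d+)\s*\)?", answer)
    if not match:
        return jsonify({"error": "model did not return coordinates"}), 502
    return jsonify({"x": int(match.group(1)), "y": int(match.group(2))})

if __name__ == "__main__":
    app.run(port=5005)
```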
I've had lots of success generating coordinates and answering questions using the UI-TARS model: https://github.com/bytedance/UI-TARS
Qwen3.5 is able to output click coordinates and bounding boxes just fine, as values normalized to 0..1000. I'd hope Qwen3.6 didn't lose this capability.
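If anyone hasn't used that output format before, mapping the 0..1000 values back to pixels is a one-liner (a sketch, assuming the model returns integers in that range):

```python
def denormalize(x_norm: int, y_norm: int, width: int, height: int) -> tuple[int, int]:
    """Map 0..1000 normalized coordinates to pixel coordinates."""
    return round(x_norm / 1000 * width), round(y_norm / 1000 * height)

# e.g. a click at (512, 250) on a 1920x1080 screenshot:
print(denormalize(512, 250, 1920, 1080))  # -> (983, 270)
```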
This sounds a lot like another Hacker News post from the last few days. It's the same problem image generators have with a prompt like "produce the numbers 1-50 in a spiral pattern": they can't count properly. But if you break it into a raster/vector split, where you have it first produce the visual content and then an SVG overlay, it's completely capable.

Have you tried doing a two-step: review the image, then render a vector?
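To illustrate the vector step: this is the kind of code a model can write reliably for the spiral example, laying out the numbers 1-50 along a spiral as an SVG overlay (a generic sketch, not tied to any particular image model's output).

```python
import math

def spiral_svg(n: int = 50, size: int = 500) -> str:
    cx = cy = size / 2
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">']
    for i in range(1, n + 1):
        angle = i * 0.5   # radians; controls how tightly the spiral winds
        radius = 4 * i    # grows outward each step
        x = cx + radius * math.cos(angle)
        y = cy + radius * math.sin(angle)
        parts.append(f'<text x="{x:.1f}" y="{y:.1f}" font-size="12">{i}</text>')
    parts.append("</svg>")
    return "\n".join(parts)

with open("spiral.svg", "w") as f:
    f.write(spiral_svg())
```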
Maybe there is a smart trick to get them to do the right thing, but the things I tried did not work.

At one point I had some smaller model draw bounding boxes around everything that looked interactable, with labels like "e3" ... then asked the model to tell me "click on e3". It did not work; in my tests it was pretty much as bad as raw x,y.
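For context, the overlay part itself is straightforward; this is roughly what I mean by labeled boxes (a sketch using Pillow, with the box list hardcoded here where a detector or the accessibility tree would supply it). The failure was in the model's answers, not in drawing the labels.

```python
from PIL import Image, ImageDraw

def overlay_labels(img: Image.Image, boxes: dict[str, tuple[int, int, int, int]]) -> Image.Image:
    """Draw labeled boxes (label -> (x0, y0, x1, y1)) onto a screenshot."""
    out = img.copy()
    draw = ImageDraw.Draw(out)
    for label, (x0, y0, x1, y1) in boxes.items():
        draw.rectangle((x0, y0, x1, y1), outline="red", width=2)
        draw.text((x0 + 3, y0 + 3), label, fill="red")
    return out

# The labeled image goes to the model, which is asked to answer with a label
# ("e3"); the click then uses that box's known center instead of raw x,y.
screenshot = Image.new("RGB", (400, 300), "white")  # stand-in for a real screenshot
labeled = overlay_labels(screenshot, {"e1": (10, 10, 120, 40), "e3": (200, 150, 320, 190)})
labeled.save("screen_labeled.png")
```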
We just migrated an AI agent from Kimi to GLM and frankly I am surprised by the results. It feels premium.

However, both Kimi and GLM can end up in doom loops, so be careful how you use them. Without a proper harness, the agent can easily get into some tricky situations with no escape.

We had to develop new heuristics in our cloud harness just because of this, but I am really grateful that we did, as the platform now feels more robust.

A small price to pay for model plug & play!
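One of the simpler heuristics of the kind I mean (names here are illustrative, not our actual harness): treat N identical tool calls in a row as a doom loop and force an escalation or a reset.

```python
from collections import deque

class DoomLoopGuard:
    """Flags when the agent repeats the exact same tool call several times in a row."""

    def __init__(self, window: int = 4):
        self.recent = deque(maxlen=window)

    def check(self, tool_name: str, args: dict) -> bool:
        """Record the call and return True if the agent looks stuck."""
        call = (tool_name, tuple(sorted(args.items())))
        self.recent.append(call)
        return len(self.recent) == self.recent.maxlen and len(set(self.recent)) == 1

guard = DoomLoopGuard()
for _ in range(5):
    if guard.check("read_file", {"path": "main.py"}):
        print("possible doom loop: escalate or reset the agent")
        break
```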
Looks like this was not an open release; the latest GLM-xV release was 4.6V, and the Turbo models were never open.
I've been using GLM pretty much exclusively for the last 6-8 months. I have access to Anthropic and OpenAI models and others, but I always keep returning to GLM. It isn't the best, and sometimes I go to Codex to help it, but overall, especially with Turbo, it is a good everyday model.

Turbo makes a huge difference in everyday use because it saves you time; you are not always in the mood to wait endlessly.
z.ai will use quantized models during off hours. Buyer beware.
I have a subscription and I have not seen any difference in performance during on/off hours. What exactly are you basing this on?
Do you have proof of this?
I hear a lot of people complaining, but I am on their Max plan, never hit limits, use it non-stop, and overall it has been a fantastic experience.