It’s great to see this pattern of people realising that agents can specify the desired behavior then write code to conform to the specs.<p>TDD, verification, whatever your tool; verification suites of all sorts accrue over time into a very detailed repository of documentation of <i>how things are supposed to work</i> that, being executable, puts zero tokens in the context when the code is correct.<p>It’s more powerful than reams upon reams of markdown specs. That’s because it encodes <i>details</i>, not intent. Your intent is helpful at the leading edge of the process, but the codified <i>result</i> needs shoring up to prevent regression. That’s the area software engineering has always ignored because we have gotten by on letting teams hold context in their heads and docs.<p>As software gets more complex we need better solutions than “go ask Jim about that, bloke’s been in the code for years”.
> That’s because it encodes details, not intent.<p>Be careful here - make sure you encode the right details. I've seen many cases where the tests encode the details of how something was implemented rather than what it is intended to do. This means you can't refactor anything, because your tests are enforcing a design. (Refactoring is changing code without deleting tests; the trick is making design changes without deleting tests, which means you should test, as much as possible, at a level where that part of the design can't change anyway.)
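To make the distinction concrete, here is a minimal Lean 4 sketch (dedup and the asserted outputs are invented for this example): the first test enforces a design, the second enforces behavior.<p><pre><code>def dedup (xs : List Nat) : List Nat := xs.eraseDups

-- Implementation-shaped: pins the exact output order, an accident of
-- eraseDups. A redesign that, say, sorts first would break this test
-- even though no caller could tell the difference.
example : dedup [3, 1, 3, 2] = [3, 1, 2] := by decide

-- Behavior-shaped: asserts only the caller-visible contract, leaving
-- the design underneath free to change.
example : (dedup [3, 1, 3, 2]).length = 3 ∧ 1 ∈ dedup [3, 1, 3, 2] := by
  decide</code></pre>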
I feel like the difference is minimal, if not entirely dismissible. Code in this sense is just a representation of the same information as someone would write in an .md file. The resolution changes, and that's where both detail and context are lost.<p>I'm not against TDD or verification-first development, but I don't think writing that as code is the end-goal. I'll concede that there are millions of lines of tests that already exist, so we should be using those as a foundation while everything else catches up.
Tests (and type-checkers, linters, formal specs, etc.) ground the model in reality: they show it that it got something wrong (without needing a human in the loop). It's empiricism, "nullius in verba": the scientific approach, which led to remarkable advances in a few hundred years that over a thousand years of ungrounded philosophy couldn't achieve.
The scientific approach is not only or primarily empiricism. We didn't test our way to understanding. The scientific approach starts with a theory that does its best to explain some phenomenon. Then the theory is criticized by experts. Finally, if it seems to be a promising theory, tests are constructed. The tests can help verify the theory, but it is the theory that provides the explanation, which is the important part. Once we have an explanation we have understanding, which allows us to play around with the model to come up with new things, diagnose problems, etc.<p>The scientific approach is theory driven, not test driven. Understanding (and the power that gives us) is the goal.
> The scientific approach starts with a theory that does it's best to explain some phenomenon<p>At the risk of stretching the analogy, the LLM's internal representation is that theory: gradient-descent has tried to "explain" its input corpus (+ RL fine-tuning), which will likely contain relevant source code, documentation, papers, etc. to our problem.<p>I'd also say that a piece of software is a theory too (quite literally, if we follow Curry-Howard). A piece of software generated by an LLM is a more-specific, more-explicit subset of its internal NN model.<p>Tests, and other real CLI interactions, allow the model to find out that it's wrong (~empiricism); compared to going round and round in chain-of-thought (~philosophy).<p>Of course, test failures don't tell us how to make it <i>actually pass</i>; the same way that unexpected experimental/observational results don't tell us what an appropriate explanation/theory should be (see: Dark matter, dark energy, etc.!)
The AI is just pattern matching. Vibing is not understanding, whether done by humans or machines. Vibe programmers (of which there are many) make a mess of the codebase, piling on patch after patch. But they get the tests to pass!<p>Vibing gives you something like the geocentric model of the solar system. It kind of works, but it's much more complicated and hard to work with.
The theory still emanated from actual observations, didn't it ?
It most certainly is not. All your tests are doing is seeding the context with tokens that increase the probability of tokens related to solving the problem being selected next. One small problem: if the dataset doesn't <i>have</i> sufficiently well-represented answers to the specific problem, no amount of finessing the probability of token selection is going to lead to LLMs solving the problem. The scientific method is grounded in the ability to <i>reason</i>, not probabilistically retrieve random words that are statistically highly correlated with appearing near other words.
This only holds if you understand what's in the tests, and the tests are realistic. The moment you let the LLM write the tests without understanding them, you may as well just let it write the code directly.
> The moment you let the LLM write the tests without understanding them, you may as well just let it write the code directly.<p>I disagree. Having tests (even if the LLM wrote them itself!) gives the model <i>some</i> grounding, and exposes <i>some</i> of its inconsistencies. LLMs are not logically-omniscient; they can "change their minds" (next-token probabilities) when confronted with evidence (e.g. test failure messages). Chain-of-thought allows more computation to happen; but it doesn't give the model any extra <i>evidence</i> (i.e. Shannon information; outcomes that are surprising, given its prior probabilities).
I disagree to some degree. Tests have value even beyond whether they test the right thing. At the very least they show something worked and now doesn't work, or vice versa. That has value in itself.
This assumes that tests are realistic, which for the most part they are not.
Say you describe your kitchen as “I want a kitchen” - where are the knives? Where’s the stove? Answer: you abdicated control over those details, so it’s wherever the stochastic parrot decided to put them, which may or may not be where they ended up last time you pulled your LLM generate-me-a-kitchen lever. And it may not be where you want.<p>Don’t like the layout? Let’s reroll! Back to the generative kitchen agent for a new one! ($$$)<p>The big labs will gladly let you reroll until you’re happy. But software - and kitchens - should not be generated in a casino.<p>A finished software product - like a working kitchen - is a fractal collection of tiny details. Keeping your finished software from falling apart under its own weight means upholding as many of those details as possible.<p>Like a good kitchen a few differences are all that stands between software that works and software that’s hell. In software the probability that an agent will get 100% of the details right is very very small.<p>Details matter.
If it is fast enough, and cheap enough, people would very happily reroll <i>specific subsets</i> of decisions until happy, and <i>then lock that down</i>. And specify in more details the corner cases that it doesn't get just how you want it.<p>People metaphorically <i>do that all the time</i> when designing rooms, in the form of endless browsing of magazines or Tik Tok or similar to find something they like instead of starting from first principles and designing exactly what they want, because usually they don't know exactly what they want.<p>A lot of the time we'd be happier with a spec <i>at the end</i> of the process than at the beginning. A spec that ensures the current understanding of what is intentional vs. what is an accident we haven't addressed yet is nailed down would be valuable. Locking it all down at the start, on the other hand, is often impossible and/or inadvisable.
AI is the reality that TDD never before had the opportunity to live up to
Not just TDD. Amazon, for instance, is heading towards something between TDD and lightweight formal methods.<p>They are embracing property-based specifications and testing à la Haskell's QuickCheck: <a href="https://kiro.dev" rel="nofollow">https://kiro.dev</a><p>Then, already in formal-methods territory, refinement types (e.g. Dafny, Liquid Haskell) are great and less complex than dependent types (e.g. Lean, Agda). A short sketch of the property-based workflow follows below.
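To make the property-based idea concrete, here is a minimal, library-free Lean 4 sketch of that workflow (Mathlib's plausible tactic, formerly slim_check, automates the random-testing half): check the property on small inputs first, then promote it to a theorem.<p><pre><code>def revTwice (xs : List Nat) : List Nat := xs.reverse.reverse

-- Cheap "property test": check the claim on every List.range prefix.
#eval (List.range 10).all fun n =>
  let xs := List.range n
  revTwice xs == xs              -- expect `true`

-- Promotion to a proof; simp closes it via List.reverse_reverse.
theorem revTwice_id (xs : List Nat) : revTwice xs = xs := by
  simp [revTwice]</code></pre>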
What about model-driven development? Spec to code was the name of the game for UML.
Setting aside that model means something different now … MDD never really worked because the tooling never really dealt with intent. You would get so far with your specifications (models), but the semantic rigidity of the tooling meant that at some point your specs and your solution would have to part ways. The LLM is the missing piece that finally makes this approach viable, where the intent can be inferred dynamically and guides the implementation specifics. Arguably the purpose of TDD/BDD was to shore up the gaps in communicating intent, and people came to understand that as its purpose, whereas the key intent in the original XP setting was to capture and preserve “known good” operation and guard against regression (in the XP mindset, perhaps fatefully, clear intent was assumed).
It makes sense to me as long as you're not vibe coding the PBTs.
The deluge of Amazon bugs I've been seeing recently makes me hesitant to follow Amazon's lead.
Kiro is such garbage though
I've seen this sentiment and am a big fan of it, but I was confused by the blog post, and based on your comment you might be able to help: how does <i>Lean</i> help me? FWIW, context is: code Dart/Flutter day to day.<p>I can think of some strawmen: for example, prove a state machine in Lean, then port the proven version to Dart? But I'm not familiar enough with Lean to know if that's like saying "prove moon made of cheese with JavaScript, then deploy to the US mainframe"
yesterday I had to tell a frontier model to translate my code to tla+ to find a tricky cache invalidation bug which nothing could find - gpt 5.4, gemini 3.1, opus 4.6 all failed. translation took maybe 5 mins, the bug was found in seconds, total time to fix from idea to commit - about 15 mins.<p><i>if</i> you can get a model to quickly translate a relevant subset of your code to lean to find tricky bugs and map lean fixes back to your codebase space, you've got yourself a huge unlock. (spoiler alert: you basically can, today)
This matches my experience too. The bugs that survive every test suite and code review are always in the state space, not in any single code path. Cache invalidation, distributed coordination, anything with concurrent writes. TLA+ or Lean as a targeted debugging tool for those specific pain points is genuinely practical now, especially with models handling the translation. You don't need to formally verify your whole codebase to get value from formal methods.
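To see why this works, here is a hedged Lean 4 sketch (every name is invented, not from any real codebase) of the same trick at toy scale: enumerate the state space of a one-slot cache and check a coherence invariant, which is essentially what TLC does for TLA+ specs.<p><pre><code>structure S where
  store : Nat
  cache : Option Nat

def read (s : S) : S :=
  match s.cache with
  | some _ => s                                  -- cache hit
  | none   => { s with cache := some s.store }   -- fill on miss

def writeOk  (v : Nat) (s : S) : S := { store := v, cache := none }  -- invalidates
def writeBug (v : Nat) (s : S) : S := { s with store := v }          -- forgets to

-- Invariant: a populated cache must agree with the store.
def coherent (s : S) : Bool :=
  match s.cache with
  | some c => c == s.store
  | none   => true

def actions : List (S → S) := [read, writeOk 1, writeBug 2]

-- Collect every state reachable by interleavings up to depth n.
def explore : Nat → List S → List S
  | 0,     ss => ss
  | n + 1, ss =>
    explore n (ss ++ ss.foldl (fun acc s => acc ++ actions.map (· s)) [])

#eval (explore 3 [⟨0, none⟩]).all coherent   -- false: a stale read exists</code></pre>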
I don't think he's referring to Lean specifically, but any sort of executable testing methodology. It removes the human in the loop in the confidence assurance story, or at least greatly reduces their labor. You cannot ever get such assurance just by saying, "Well this model seems really smart to me!" At best, you would wind up with AI-Jim.<p>(One way Lean or Rocq could help you directly, though, would be if you coded your program in it and then compiled it to C via their built-in support for it. Such is very difficult at the moment, however, and in the industry is mostly reserved for low-level, high-consequence systems.)
>Such is very difficult at the moment<p>What do you mean? It's a nice and simple language. Way easier to get started than OCaml or Haskell for example. And LLMs write programs in Lean4 with ease as well. Only issue is that there are not as many libraries (for software, for math proofs there is plenty).<p>But for example I worked with Claude Code and implemented a shell + most of unix coreutils in like a couple of hours. Claude did some simple proofs as well, but that part is obvs harder. But when the program is already in Lean4, you can start moving up the verification ladder up piece by piece.
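A minimal sketch of that ladder, assuming nothing beyond core Lean (countChar is invented for illustration): ordinary executable code first, one small proof bolted on afterwards.<p><pre><code>-- Rung 1: plain executable code, no verification yet.
def countChar (c : Char) : List Char → Nat
  | []       => 0
  | c' :: cs => (if c' == c then 1 else 0) + countChar c cs

#eval countChar 'l' "hello world".toList   -- 3

-- Rung 2: pin down one easy fact about it.
theorem countChar_le_length (c : Char) (cs : List Char) :
    countChar c cs ≤ cs.length := by
  induction cs with
  | nil => simp [countChar]
  | cons c' cs ih =>
    simp only [countChar, List.length_cons]
    split <;> omega</code></pre>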
Well, if you do not need to care about performance, everything can be extremely simple indeed. Let me show you a data structure in Coq/Rocq while switching off notations and displaying the low-level content:<p><pre><code>Require Import String.

Definition hello : string := "Hello world!".

Print hello.

hello =
String (Ascii.Ascii false false false true false false true false)
  (String (Ascii.Ascii true false true false false true true false)
  (String (Ascii.Ascii false false true true false true true false)
  (String (Ascii.Ascii false false true true false true true false)
  (String (Ascii.Ascii true true true true false true true false)
  (String (Ascii.Ascii false false false false false true false false)
  (String (Ascii.Ascii true true true false true true true false)
  (String (Ascii.Ascii true true true true false true true false)
  (String (Ascii.Ascii false true false false true true true false)
  (String (Ascii.Ascii false false true true false true true false)
  (String (Ascii.Ascii false false true false false true true false)
  (String (Ascii.Ascii true false false false false true false false)
  EmptyString)))))))))))
  : string</code></pre>
But isn't that tantamount to "his comment is a complete non sequitur"?
I don't think so? Lean is formal methods, so it makes sense to discuss the boons of formal and semiformal methods more generally.<p>I used to think that the only way we would be able to trust AI output would be by leaning heavily into proof-carrying code, but I've come to appreciate the other approaches as well.
The real world success they report reminds me of Simon Willison’s Red Green TDD: <a href="https://simonwillison.net/guides/agentic-engineering-patterns/red-green-tdd/" rel="nofollow">https://simonwillison.net/guides/agentic-engineering-pattern...</a><p>> Instead of taking a stab in the dark, Leanstral rolled up its sleeves. It successfully built test code to recreate the failing environment and diagnosed the underlying issue with definitional equality. The model correctly identified that because def creates a rigid definition requiring explicit unfolding, it was actively blocking the rw tactic from seeing the underlying structure it needed to match.
That article is literally a definition of TDD that has been around for years and years. There's nothing novel there at all. It's literally test driven development.
If Agent is writing the tests itself, does it offer better correctness guarantees than letting it write code and tests?
In my experience the agent regularly breaks some current features while adding a new one - much more often than a human would. Agents too often forget about the last feature when adding the next and so will break things. Thus I find Agent generated tests important as they stop the agent from making a lot of future mistakes.
It is definitely not foolproof but IMHO, to some extent, it is easier to describe what you expect to see than to implement it, so I don't find it unreasonable to think it might provide some advantages in terms of correctness.
Given the issues AWS has had with Kiro and GitHub, we already have a few high-profile examples of what happens when AI is used at scale, even when you let it generate tests, which is something you should absolutely not do.<p>Otherwise in some cases, you get this issue [0].<p>[0] <a href="https://sketch.dev/blog/our-first-outage-from-llm-written-code" rel="nofollow">https://sketch.dev/blog/our-first-outage-from-llm-written-co...</a>
Don't "let it" generate tests. Be intentional. Define them in a way that's slightly oblique to how the production code approaches the problem, so the seams don't match. Heck, that's why it's good to write them before even thinking about the prod side.
The linked article does not speak of tests, it speaks of a team that failed to properly review an LLM refactor then proceeds to blame the tooling.<p>LLMs are good at writing tests in my experience.
TDD == Prompt Engineering, for Agentic coding tasks.
Wild it’s taken people this long to realize this. Also lean tickets / tasks with all needed context to complete the task, including needed references / docs, places to look in source, acceptance criteria, other stuff.
AI agents will become a commodity.<p>Europeans don't want to be dependent, so they are giving away for free what US investors planned to charge a 90% margin for.<p>Amazing! What a blast. Thank you for your service (the first $100M burned on the GPT-1 proof of concept, and from here we are good to go).
The problem with the European independence story is that Mistral seems to run its own stuff on US CLOUD-Act-affected infrastructure too. This makes them a very weird value proposition: if I accept a level of "independence" whereby I run on AWS or Azure, I might as well pay for Anthropic or GPT and get SOTA performance.<p>If I don't accept that level of independence and want more, I need to buy what's on OVH, Scaleway, Ionos etc. or host my own, but that usually means even smaller, worse models or a lot of investment.<p>So the "band" that Mistral occupies for economic success is very narrow: basically people who need independence "on paper" but not really. Because if I'm after actual independence, there's no way giving them money for one of their current products makes sense; none of their plans are an actual independence improvement over, say, Amazon Bedrock.<p>I really, really want to support them, but it must make economic sense for my company too, and it doesn't.
I don’t care about the servers, they are a commodity already.<p>The key is to avoid blackmail. Remember Oracle with DBs: people learned not to build on top of irreplaceable stuff.
They are building their own infra - south of Paris and another one was announced in Sweden recently.
Then why does their list of subprocessors list Google and Microsoft "for cloud infrastructure", specifically for "Le Chat, La Plateforme, Mistral Code"? Sounds to me as if they're mainly running on Azure.<p>Also, they're listing CoreWeave as inference provider in "EEA" area, but CoreWeave is of course also an US company. Even if they have their data center physically in the EU, it must be considered open access for the USA due to the CLOUD act.<p><a href="https://trust.mistral.ai/subprocessors" rel="nofollow">https://trust.mistral.ai/subprocessors</a><p>If what you say is true, they have a communications problem and they need to fix that urgently. Right now, this is why they don't get my business. Others will have made the same decision based on their own subprocessor list.<p>Or did you mean, they're like, right now building it and plan to move there, but it's not up yet?
I really hope you're right. Sadly, though, I don't see any evidence of UK companies disinvesting from big US tech. There aren't good alternatives and what there is is too complex. As long as 'everyone else is still using MS', it seems like it's a brave CTO that switches to European providers. Unless that happens, the network effect of having AI+data is likely to mean US tech still has a big advantage in corp settings. But, HN - please tell me I'm wrong!
> There aren't good alternatives and what there is is too complex.<p>Sounds like a worthy challenge for this community. Mind giving actual examples and seeing what others can suggest?
I wonder what the biggest (non-AI) moats are for US tech against the alternatives?
They will, but the jagged frontier is fractal and each one will have different capabilities; you'll want to mix models, and to get the best results consistently you'll need to.
There have been a lot of conversations recently about how model alignment is relative and diversity of alignment is important - see the recent podcast episode between Jack Clark (co-founder of Anthropic) and Ezra Klein.<p>Many comments here point out that Mistral's models are not keeping up with other frontier models - this has been my personal experience as well. However, we need more diversity of model alignment techniques and companies training them - so any company taking this seriously is valuable.
Very cool but I haven’t been able to convince software developers in industry to write property based tests. I sometimes joke that we will start writing formal proofs until the tests improve. Just so that they will appreciate the difference a little more.<p>I can’t even convince most developers to use model checkers. Far more informal than a full proof in Lean. Still highly useful in many engineering tasks. People prefer boxes and arrows and waving their hands.<p>Anyway, I don’t know that I’d want to have a system vibe code a proof. These types of proofs, I suspect, aren’t going to be generated to be readable, elegant, and be well understood by people. Like programs they generate it will look plausible.<p>And besides, you will still need a human to review the proof and make sure it’s specifying the right things. This doesn’t solve that requirement.<p>Although I have thought that it would be useful to have a system that could prove trivial lemmas in the proof. That would be very neat.
Curious if anyone else had the same reaction as me.<p>This model is specifically trained on this task and significantly[1] underperforms Opus.<p>Opus costs about 6x more.<p>Which seems... totally worth it based on the task at hand.<p>[1]: based on the total spread of tested models
Agreed. The idea is nice and honorable. At the same time, if AI has been proving one thing, it's that quality usually reigns over control and trust (except for some sensitive sectors and applications). Of course it's less capital-intense, so it makes sense for a comparably little EU startup to focus on that niche. Likely won't move the top-line needle much, though, for the reasons stated.
> quality usually reigns over control and trust<p>Most Copilot customers use Copilot because Microsoft has been able to pinky promise some level of control for their sensitive data. That's why many don't get to use Claude or Codex or Mistral directly at work and instead are forced through their lobotomised Copilot flavours.<p>Remember, as of yet, companies haven't been able to actually measure the value of LLMs ... so it's all in the hands of Legal to choose which models you can use based on marketing and big words.
Ha, keep putting your prompts and workflows into cloud models. They are not okay with being a platform, they intend to cannibalize all businesses. Quality doesn't always reign over control and trust. Your data and original ideas are your edge and moat.
Treating "quality" as something you can reliably measure in AI proof tools sounds nice until you try auditing model drift after the 14th update and realize the "trust" angle stops being a niche preference and starts looking like the whole product. Brand is not a proof. Plenty of orgs will trade peak output for auditability, even if the market is bigger for YOLO feature churn.
The EU could help them very much by starting to enforce its laws, so that no US company can process European data while the Americans are unwilling to budge on the CLOUD Act.<p>That would also help reduce our dependency on American hyperscalers, which is much needed given how untrustworthy the US is right now. (And also hostile towards Europe, as their new security strategy lays out.)
The alignment tax eats directly into model quality, by double-digit percentages.
I'm never sure how much faith one can put into such benchmarks but in any case the optics seem to shift once you have pass@2 and pass@3.<p>Still, the more interesting comparison would be against something such as Codex.
But you can run this model for free on a common battery-powered laptop sitting on your lap without cooking your legs.
Sorry, but what are you talking about? This is a 120B-A6B model, which isn't runnable on any laptop except the most beefed-up MacBooks, and even then it will certainly drain the battery and cook your legs.
Yeah my bad, it requires an expensive MacBook.<p>I think it would still be fine for the legs and on battery for relatively short loads: <a href="https://www.notebookcheck.net/Apple-MacBook-Pro-M5-2025-review-The-fastest-single-core-performance-in-the-world.1144391.0.html#c14746160" rel="nofollow">https://www.notebookcheck.net/Apple-MacBook-Pro-M5-2025-revi...</a><p>But 40 degrees and 30W of heat is a bit more than comfortable if you run the agent continuously.
You can easily run a quant of this on a DGX Spark though. Seems like a small investment if it meaningfully improves Lean productivity.
Is it though?<p>Most people I know who use agents for building software and have tried switching to local models end up switching back to Claude/Codex every single time.<p>It's just not worth it. The models are that much better and keep getting released and improved.<p>And it's much cheaper unless you're doing like 24/7 stuff.<p>Even on the $200/m plan, that's cheaper than buying a $3k DGX or $5k M4 Max with enough RAM.<p>Not to mention you can no longer use your laptop as a laptop, since the power draw drains it; you'd need to host separately and connect.
A single DGX Spark can service a whole department of mathematicians (or programmers), and you can cluster up to 4 of them to fit very large models like GLM-5 and quants of Kimi K2.5. This is nearing frontier-level model size.<p>I understand the value proposition of the frontier cloud models, but we're not as far off from self-hosting as you think, and it's becoming more viable for domain-specific models.
The model is open source; you can run it locally. You don't think that's significant?
Can someone please explain... If I don't know any Lean (and I suspect most people don't), is it of any direct value? Trying to understand if there's something it can help me with (e.g. automatically write proofs for my Go programs somehow... I'm not sure) or should I just cheer solely for more open models out there, but this one isn't for me?
Trustworthy vibe coding. Much better than the other kind!<p>Not sure I really understand the comparisons though. They emphasize the cost savings relative to Haiku, but Haiku kinda sucks at this task, and Leanstral is worse? If you're optimizing for correctness, why would "yeah it sucks but it's 10 times cheaper" be relevant? Or am I misunderstanding something?<p>On the promising side, Opus doesn't look great at this benchmark either — maybe we can get better than Opus results by scaling this up. I guess that's the takeaway here.
I also don't understand the focus on vibe coding in the marketing. Vibe coding kind of has the image of being for non-devs, right?<p>I do like agents (like Claude Code), but I don't consider myself to be vibe coding when I use them. Either I'm using a language/framework I know and check every step. OR I'm learning, checking every step and asking for explanations.<p>I tried vibe coding, and really dislike the feeling I have when doing it. It feels like building a house, but without caring about it, and just using whatever tech. Sure I may have moisture problems later, but it's a throwaway house anyway. That's how I feel about it. Maybe I have a wrong definition.<p>Maybe it's good to not use "vibe coding" as a synonym for programming with agent assistance. Just to protect our profession. Like: "Ah you're vibing" (because you have Claude Code open), "No, I'm using CC to essentially type faster and prevent syntax errors and get better test coverage, maybe to get some smart solutions without deep research. But I understand and vouch for every loc here. 'We are not the same.'"
Yeah, the original meaning of Vibe Coding was "not looking at the code, just going on vibes", but a lot of people now use it to mean "AI was involved in some way".<p>I see a whole spectrum between those two. I typically alternate between "writing code manually and asking AI for code examples" (ChatGPT coding), and "giving AI specific instructions like, write a function blarg that does foo".<p>The latter I call Power Coding, in the sense of power armor, because you're still in control and mostly moving manually, but you're much stronger and faster.<p>I like this better than "tell agent to make a bunch of changes and come back later" because first of all it doesn't break flow (you can use a smaller model for such fine-grained changes so it goes very fast -- it's "realtime"), and second, you don't ever desync from the codebase and need to spend extra time figuring out what the AI did. Each change is sanity-checked as it comes in.<p>So you stay active, and the code stays slop-free.<p>I don't hear a lot of people doing this though? Maybe we just don't have good language for it.
> I tried vibe coding, and really dislike the feeling I have when doing it. It feels like building a house, but without caring about it, and just using whatever tech. Sure I may have moisture problems later, but it's a throwaway house anyway. That's how I feel about it. Maybe I have a wrong definition.<p>No, I feel the same. I vibe-coded a few projects and after a few weeks I just threw them away; ultimately I felt I had just wasted my time and wished I could get it back to do something useful.
> It feels like building a house, but without caring about it, and just using whatever tech.<p>So, most homebuilders (in the US) unfortunately.
I myself am now an expert at insulation and all the vapor-permeable and vapor-blocking membranes/foils/foams that come with it.<p>It came at great cost, though; I hated the process of learning and the execution. I was less than happy for some years. But I feel even more uncomfortable vibe-home-improving than I do vibe-coding. The place is starting to look nice now though.
They haven't made the chart very clear, but it seems it has configurable passes: at 2 passes it's better than Haiku and Sonnet, and at 16 passes it starts closing in on Opus, although it's not quite there, while consistently being less expensive than Sonnet.
pass@k means that you run the model k times and give it a pass if any of the answers is correct. I guess Lean is one of the few use cases where pass@k actually makes sense, since you can automatically validate correctness.
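For reference, the unbiased estimator from OpenAI's HumanEval paper, given n samples of which c verify, judged at pass@k; sketched here as a quick Lean #eval (function names are mine).<p><pre><code>-- pass@k = 1 - C(n-c, k) / C(n, k): the probability that at least one
-- of k draws (without replacement) from the n samples is correct.
def choose : Nat → Nat → Nat
  | _,     0     => 1
  | 0,     _ + 1 => 0
  | n + 1, k + 1 => choose n k + choose n (k + 1)

def passAtK (n c k : Nat) : Float :=
  1.0 - (choose (n - c) k).toFloat / (choose n k).toFloat

#eval passAtK 16 4 8   -- ≈ 0.96: 16 samples, 4 correct, judged at pass@8</code></pre>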
Oh my bad. I'm not sure how that works in practice. Do you just keep running it until the tests pass? I guess with formal verification you can run it as many times as you need, right?
It’s really not hard — just explicitly ask for <i>trustworthy outputs only</i> in your prompt, and Bob’s your uncle.
I absolutely called this a couple of weeks ago, nice to be vindicated!<p>> I'm interested to see what it is in the age of LLMs or similar future tools. I suspect a future phase change might be towards disregarding how easy it is for humans to work with the code and instead focus on provability, testing, perhaps combined with token efficiency.<p>> Maybe Lean combined with Rust shrunk down to something that is very compiler friendly. Imagine if you could specify what you need in high level language and instead of getting back "vibe code", you get back proven correct code, because that's the only kind of code that will successfully compile.<p><a href="https://news.ycombinator.com/item?id=47192116">https://news.ycombinator.com/item?id=47192116</a>
It's important to keep in mind that no proof system ensures your proof is the <i>correct</i> proof, only that it's a <i>valid</i> proof. Completely understanding what a proof proves is often nearly as difficult as understanding the program it's proving. Normally you benefit because the process of building a proof forces you to develop your understanding more fully.
Uhm, no?
Even with "simple" examples like Dijkstra's shortest path, the spec is easier than the implementation. Maybe not for you, but try it out on an <i>arbitrary</i> 5-yr old.
On the extreme end, you have results in maths, like Fermat's Last Theorem. Every teenager can understand the statement (certainly after 10 mins of explanation) but the proof is thousands of pages of super-specialized maths.
It is a spectrum. For cryptography, compression, error-correction, databases, etc, the spec is often much simpler than the implementation.
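A hedged Lean 4 illustration of that asymmetry for compression (run-length coding; all names invented): however hairy the encoder gets, the spec stays a one-line round-trip law.<p><pre><code>def encode : List Char → List (Nat × Char)
  | []      => []
  | c :: cs =>
    match encode cs with
    | (n, c') :: rest => if c == c' then (n + 1, c') :: rest
                         else (1, c) :: (n, c') :: rest
    | []              => [(1, c)]

def decode (runs : List (Nat × Char)) : List Char :=
  runs.foldr (fun r acc => List.replicate r.1 r.2 ++ acc) []

-- The entire spec, stated (proving it is where the real work lives):
--   theorem roundTrip (cs : List Char) : decode (encode cs) = cs
#eval decode (encode "aaabcc".toList) == "aaabcc".toList   -- true</code></pre>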
I don't know why you created a new account for this, but take the textbook example of a nontrivial formally verified system: seL4. That implementation was 8.7k lines of C code, which corresponded to 15k lines of Isabelle that ultimately needed 100k+ lines of proof to satisfy. And that was with the formal model excluding lots of important properties, like hardware failure, that actual systems deal with.
You are confusing the proof with the spec/theorem. A correct proof and a valid proof are the same thing. It doesn't really matter how long the proof is, and you don't even need to understand it for it to be correct; the machine can check that.<p>But indeed, if the spec includes 8.7k lines of C code, that is problematic. If you cannot look at the theorem and see that it is what you mean, that is a problem. That is why abstraction is so important; your ultimate spec should not include C code, that is just too low-level.
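The point is easy to make concrete in Lean 4 (a toy; names invented): a machine-checked, perfectly valid proof of a spec that fails to say what we mean.<p><pre><code>-- Obviously not a sort, yet the theorem below is provable.
def sort (xs : List Nat) : List Nat := xs

theorem sort_preserves_length (xs : List Nat) :
    (sort xs).length = xs.length := rfl

-- The proof is valid; the spec is just too weak. Saying what we mean
-- needs more: "the output is sorted" and "it permutes the input".</code></pre>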
> I don't know why you created a new account for this<p>What value does this add to the conversation? I’m not seeing it: am I missing something? It comes across as a kind of insult.<p>They made a good point in my opinion! (The “Uhm no” part got it off on the wrong foot, I will admit.) But even if you felt annoyed or didn’t agree with the point, it was substantive and moved the conversation forward. I’m here for the (genuine) questions and (constructive) debate and (civil) pushback.<p>I like to welcome new users before they take too much of a beating. That can come later when they are too invested to leave and/or when morale needs improving.<p>So welcome! Bring a helmet, and don’t stop disagreeing.
Maybe a naive question: given that they see better performance with more passes but the effect hits a limit after a few passes, would performance increase if they used different models per pass, i.e. Leanstral, Kimi, Qwen, and Leanstral again instead of 4x Leanstral?
This is called a "LLM alloy", you can even do it in agentic, where you simply swap the model on each llm invocation.<p>It does actually significantly boost performance. There was an article on here about it recently, I'll see if I can find it.<p>Edit: <a href="https://news.ycombinator.com/item?id=44630724">https://news.ycombinator.com/item?id=44630724</a><p>They found the more different the models were (the less overlap in correctly solved problems), the more it boosted the score.
Pleasant surprise: someone saying "open source" and <i>actually meaning Open Source</i>. It looks like the weights are Apache-2.0 licensed.
Is anyone using this approach with lean to ship production code? Writing lean spec as human, implementation and proof by agent? And then shipping lean or exporting to C? Would be great to understand how you are actually using this.
FYI The Lean 4 paper: <a href="https://dl.acm.org/doi/10.1007/978-3-030-79876-5_37" rel="nofollow">https://dl.acm.org/doi/10.1007/978-3-030-79876-5_37</a>
Naturally the Microsoft-owned language is getting the AI hype instead of the more mature options that could do this sort of work… Agda, ATS, Coq/Rocq, Dafny, Fstar, Idris, Isabelle, Why3 just to name a few.
You should check out the recent PRs to the Agda repo... the community is currently very divided about AI. For better or worse, the people driving the Lean project have been interested in AI for quite some time.
A bit uncharitable. I'm a diehard fan of Rocq, but it's nothing unusual to see the young new hotness that is Lean continue to get the spotlight. It's not a sign of Microsoft putting its thumb on the scales, and the hype for Lean has long predated LLMs.<p>It's certainly less mature when it comes to verified programming, but its appeal to mathematicians (rather than formal methods experts) has earned it much respect.
Am I missing something? Isn't that the language most are using currently when looking at research at OpenAI, Google, DeepSeek, etc.?
I don't understand how this can impact my JS (+yaml, css, etc) code writing in a complex app.
Automated theorem provers running on a $5k piece of hardware is a cool version of the future
What are these "passes" they reference here? Haven't seen that before in LLM evals<p>Could definitely be interesting for having another model run over the codebase when looking for improvements
I read it as Lanestra, and thought of that story :D
Does Mistral come close to Opus 4.6 with any of their models?
I use mistral-medium-3.1 for a lot of random daily tasks, along with the vibe cli. I'd state from my personal opinion that mistral is my preferred 'model vendor' by far at this point. They're extremely consistent between releases while each of them just feels better. I also have a strong personal preference to the output.<p>I actively use gemini-3.1-pro-preview, claude-4.6-opus-high, and gpt-5.3-codex as well. I prefer them all for different reasons, however I usually _start_ with mistral if it's an option.
Mistral hasn't been in the running for SOTA for quite a while now.
Not at the moment, but a release of Mistral 4 seems close which likely bridges the gap.
I don’t know a single person using Mistral models.
Isn't their latest speech to text model SOTA? When I tested it on jargon, it was amazing.<p><a href="https://news.ycombinator.com/item?id=46886735">https://news.ycombinator.com/item?id=46886735</a>
I'm using this model for my first python project, coding using opencode along with devstral and Mistral Large 3. I know it's not as capable as other, more expensive models, but working with it this way is teaching me python. More directly to your point though, the speech to text model is really good.<p>It's funny because I just took a break from it to read some hn and found this post.
I used Ministral for data cleaning.<p>I was surprised: even tho it was the cheapest option (against other small models from Anthropic) it performed the best in my benchmarks.
Pretty much all of my LLM usage has been using Mistral's open source models running on my PC. I do not do full agentic coding as when i tried it with Devstral Small 2 it was a bit too slow (though if i could get 2-3 times the speed of my PC from a second computer it'd be be a different story and AFAIK that is doable if i was willing to spend $2-3k on it). However i've used Mistral's models for spelling and grammar checks[0], translations[1][2], summaries[3] and trying to figure out if common email SPAM avoidance tricks are pointless in the LLM age :-P [4]. FWIW that tool you can see in the shots is a Tcl/Tk script calling a llama.cpp-based command-line utility i threw together some time ago when experimenting with llama.cpp.<p>I've also used Devstral Small to make a simple raytracer[5][6] (it was made using the "classic" chat by copy/pasting code, not any agentic approach and i did fix bits of it in the process) and a quick-and-dirty "games database" in Python+Flask+Sqlite for my own use (mainly a game backlog DB :-P).<p>I also use it to make various small snippets, have it generate some boilerplate stuff (e.g. i have an enum in C and want to write a function that prints names for each enum value or have it match a string i read from a json file with the appropriate enum value), "translate" between languages (i had it recently convert some matrix code that i had written in Pascal into C), etc.<p>[0] <a href="https://i.imgur.com/f4OrNI5.png" rel="nofollow">https://i.imgur.com/f4OrNI5.png</a><p>[1] <a href="https://i.imgur.com/Zac3P4t.png" rel="nofollow">https://i.imgur.com/Zac3P4t.png</a><p>[2] <a href="https://i.imgur.com/jPYYKCd.png" rel="nofollow">https://i.imgur.com/jPYYKCd.png</a><p>[3] <a href="https://i.imgur.com/WZGfCdq.png" rel="nofollow">https://i.imgur.com/WZGfCdq.png</a><p>[4] <a href="https://i.imgur.com/ytYkyQW.png" rel="nofollow">https://i.imgur.com/ytYkyQW.png</a><p>[5] <a href="https://i.imgur.com/FevOm0o.png" rel="nofollow">https://i.imgur.com/FevOm0o.png</a> (screenshot)<p>[6] <a href="https://app.filen.io/#/d/e05ae468-6741-453c-a18d-e83dcc3de926%23415872384d784541457577707773433048564d4a6c7a76503366617755666c34" rel="nofollow">https://app.filen.io/#/d/e05ae468-6741-453c-a18d-e83dcc3de92...</a> (C code)<p>[7] <a href="https://i.imgur.com/BzK8JtT.png" rel="nofollow">https://i.imgur.com/BzK8JtT.png</a>
That's likely because they're chasing enterprise - see deals with HSBC, ASML, AXA, BNP Paribas etc... Given swelling anti-US sentiment and their status as a French 'national champion', Mistral are probably in a strong position for now regardless of model performance, research quality or consumer uptake.
I'm building a knowledge graph on personal data (emails, files) with Ministral 3:3b. I've tried Qwen 3.5:4b as well, but mostly use Ministral.<p>Works really well. Extracts companies you have dealt with, people, topics, events, locations, financial transactions, bills, etc.
Me neither; they're not ready for prime time imo. I have a yearly sub and the product is just orders of magnitude behind Anthropic's offering. I use Code for real world stuff and I am happy with the result; Mistral is just not something I can trust right now.
I use them solely.
love the opensource push for agents, the fleet grows!
Public service announcement to hopefully reduce unnecessary knife fights*:<p>There are <i>two</i> compatible and important (but different) questions in play:<p>1. Is a program correct relative to a formal specification?<p>2. Is the formal specification what we mean/want?<p>*: Worth asking: “Was that other person necessarily wrong? Or perhaps they are discussing a different aspect or framing?” AKA: “be curious and charitable.” I’m not going to link to the specific threads, but they have happened / are happening. Le Sigh.
Curious if pass@2 was tested for Haiku and Sonnet?
The TDD foundation! We might need one of those. :)
"and continues to scale linearly"<p>it clearly and demonstrably does not. in fact, from eyeballing their chart Qwen, Kimi, and GLM scale linearly whereas Leanstral does not. But this is not surprising because the Alibaba, Moonshot, and Zhipu have hundreds of employees each and hundreds of millions of dollars of investment each.
From <a href="https://mistral.ai/news/leanstral" rel="nofollow">https://mistral.ai/news/leanstral</a> :<p><pre><code> Model Cost ($) Score
..
Claude Opus 1,650 39.6
..
Leanstral pass@8 145 31.0
Leanstral pass@16 290 31.9</code></pre>
This is great, congratulations to the Mistral team! I'm looking forward to the code arena benchmark results. Thanks for sharing.
Congratulations on the launch!<p>Mistral seems to focus on a different market than the others. Their best model is meh, their best ASR model locally is either rather slow compared to Parakeet on similar languages, or not as good for others (like qwen ASR).<p>Side note: Lean seems quite unreadable with tons of single letter variable names. Part of it is me being unaccustomed with it, but still.
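On the readability side note: the terse single-letter style is a community convention (especially in Mathlib) rather than a language requirement; the same trivial lemma reads either way, both proofs using core's Nat.le_succ_of_le.<p><pre><code>theorem t {a b : Nat} (h : a ≤ b) : a ≤ b + 1 :=
  Nat.le_succ_of_le h

theorem le_add_one_of_le {lower upper : Nat}
    (lower_le : lower ≤ upper) : lower ≤ upper + 1 :=
  Nat.le_succ_of_le lower_le</code></pre>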
Truly exciting
Here we go.
Is the Haiku comparison because they've distilled from that model?
lol, why does the paper abstract assume I know what Lean is and then go on to talk about Lean 4 improvements?