An LLM is optimized for its training data, not for newly built formats or abstractions. I don’t understand why we keep building so-called "LLM-optimized" X or Y. It’s the same story we’ve seen before with TOON.
If a new programming language doesn't need to be written by humans (though it should ideally still be readable for auditing), I hope people research languages that support formal methods and model-checking tools. Formal methods have a reputation for being too hard or not scaling, but now we have LLMs that can write that code.

https://martin.kleppmann.com/2025/12/08/ai-formal-verification.html
The Validation Locality piece is very interesting and really got my brain going. Would be cool to denote test conditions inline with definitions. Would get gross for a human, but could work for an LLM with consistent delimiters. Something like (pseudocode):

```
fn foo(name::"Bob"|genName(2)):
    if len(name) < 3
        Err("Name too short!")

    print("Hello ", name)
    return::"Hello Bob"|Err
```

Right off the bat I don't like that it relies on accurately remembering list indexes to keep track of tests (something you brought up), but it was fun to think about and I'll continue to do so. To avoid the counting issue you could provide tools like "runTest(number)", "getTotalTests", etc.

One issue: the Loom spec link is broken.
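For what it's worth, here's a rough sketch of that tool interface in Python. All the names and the registration scheme are made up for illustration, nothing from the post:

```
# Rough sketch of the "runTest(number)" / "getTotalTests" tool idea:
# inline tests get registered next to the definitions they validate,
# and the LLM addresses them by index through tools instead of having
# to count them itself.
from typing import Callable

_TESTS: list[tuple[str, Callable[[], bool]]] = []


def register_test(target: str, check: Callable[[], bool]) -> int:
    """Record an inline test for `target`; returns its index."""
    _TESTS.append((target, check))
    return len(_TESTS) - 1


def get_total_tests() -> int:
    return len(_TESTS)


def run_test(number: int) -> bool:
    """Run one inline test by index."""
    _, check = _TESTS[number]
    return check()


# The inline conditions from the foo() pseudocode above, pulled out
# into registered checks.
def foo(name: str) -> str:
    if len(name) < 3:
        raise ValueError("Name too short!")
    return f"Hello {name}"


def expect_error(fn: Callable[[], object]) -> bool:
    try:
        fn()
    except ValueError:
        return True
    return False


register_test("foo", lambda: foo("Bob") == "Hello Bob")
register_test("foo", lambda: expect_error(lambda: foo("Al")))
```

An agent could then call get_total_tests() once and run_test(i) for each index, rather than re-deriving the numbering from the source.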
I get that this is essentially vibe coding a language, but it still seems lazy to me. He just asked the language model, zero-shot, to design a language with no further guidance. You could at least use the Rosetta Code examples and ask it to identify design patterns for a new language.
I was thinking the same. Maybe if he had tried to think it through instead of just asking the model. The premise is interesting: "We optimize languages for humans, maybe we can do something similar for LLMs." But then he just asks the model to do the thing instead of thinking about the problem. Instead of prompting "Hey, make this," a more granular, guided approach could have been better.

For me this is just lost potential on the topic, and an interesting read that turned boring pretty fast.
A language is LLM-optimized if there's a huge amount of high-quality prior art, and if the language tooling itself can help the LLM iterate and catch errors.
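Something like the loop below, as a rough sketch: llm_complete is a made-up placeholder for whatever model call you use, and Python's built-in compile stands in for real language tooling.

```
def llm_complete(prompt: str) -> str:
    """Hypothetical model call; stub it with whatever API you actually use."""
    raise NotImplementedError


def generate_with_feedback(task: str, max_rounds: int = 5) -> str:
    """Ask the model for code, then let the language tooling (here just
    Python's own compiler) catch errors and feed them back for another try."""
    prompt = task
    for _ in range(max_rounds):
        code = llm_complete(prompt)
        try:
            compile(code, "<generated>", "exec")  # tooling pass: syntax check
            return code
        except SyntaxError as err:
            # The tighter and clearer this feedback is, the more
            # "LLM-optimized" the language arguably is.
            prompt = f"{task}\n\nPrevious attempt failed to compile:\n{err}\nPlease fix it."
    raise RuntimeError("no attempt passed the tooling check")
```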
> Humans don't have to read or write or understand it. The goal is to let an LLM express its intent as token-efficiently as possible.

Maybe in the future humans won't have to verify the spelling, logic, or ground truth of programs either, because we'll all have given up and assumed that the LLM knows everything. /s

Sometimes, when I read these blogs from vibe-coders who have become completely complacent with LLM slop, I have to keep reminding others why regulations exist.

Imagine if LLMs became fully autonomous pilots on commercial planes, or planes were optimized for AI control, and the humans just boarded and flew for the vibes. Maybe call it "Vibe Airlines".

Why didn't anyone think of that great idea? Why not completely remove the human from the loop as well?

Good idea, isn't it?
There are multiple layers and implicit perspectives here that I think most are purposefully omitting as a play for engagement or something else.

The reason LLMs are still restricted to higher-level programming languages is that there are no guarantees of correctness: any guarantee has to be provided by a human, and it is already difficult for humans to review other humans' code.

If there comes a time when LLMs can generate code, whether some would term it slop or not, that carries a guarantee of correctness, then it probably is the correct move to use a more token-efficient language, or at least a different abstraction from the programming abstractions built for humans.

Personally, I think in the coming years there will be a subset of programming that LLMs can perform while providing a guarantee of correctness, likely using other tools such as Lean.

I believe this capability can be stated as: LLMs should be able to obfuscate any program code, which is a pretty decent guarantee.
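To make the Lean angle concrete, here's a toy sketch of the general idea (an illustrative example of machine-checked correctness, not the obfuscation capability itself; the names are made up): the model emits both a definition and a proof, the Lean kernel checks the proof, and a reviewer only has to read the theorem statement rather than trust the generated code.

```
def double (n : Nat) : Nat := n + n

-- The spec lives in the theorem statement; Lean's kernel checks the proof,
-- so a human does not have to audit the generated definition itself.
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```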