Essentially, I tried to throw a task at Claude that I thought it wouldn't handle. It did, with minimal supervision. Some things had to be done in "adversarial" mode where Claude coded and Codex criticized/reviewed, but it is what it is. An LLM was able to implement generics and many other language features with very little supervision in less than a day o_O.

I've been thrilled to see it using GDB with inhuman speed and efficiency.
I am very impressed with the kind of things people pull out of Claude's жопа, but I can't see such opportunities in my own work. Is success mostly the result of it being able to test its output reliably, and of how easy it is to set up the environment for this testing?
> Is success mostly the result of it being able to test its output reliably, and of how easy it is to set up the environment for this testing?

I wouldn't say so. From my experience, the key to success is the ability to split big tasks into smaller ones and to help the model with solutions when it's stuck.

Reproducible environments (Nix) help a lot, yes, same for sound testing strategies. But the ability to plan is the key.
One other thing I've observed is that Claude fares much better in a well-engineered pre-existing codebase. It adapts to most of the style and has plenty of "positive" examples to follow. It also benefits from the existing test infrastructure. It will still tend to go into infinite loops or introduce bugs and then oscillate between them, but I've found it to be scarily efficient at implementing medium-sized features in complicated codebases.
Yes, that too, but this particular project was an ancient C++ codebase with extremely tight coupling, manual memory management and very little abstraction.
жопа -> jopa (zhopa) for those who don't spot the joke
Claude will also tend to go for the "test-passing" development style, where it gets super fixated on making the tests pass with no regard to how the features will work with whatever is intended to be built later.

I had to throw away a couple of days' worth of work because the code it built to pass the tests wasn't able to do the actual thing it was designed for, and the only workaround was to go back and build it correctly while, ironically, still keeping the same tests.

You kind of have to keep it on a short leash, but it'll get there in the end... hopefully.
> Some things had to be done in "adversarial" mode where Claude coded and Codex criticized/reviewed

How does one set up this kind of adversarial mode? What tools would you need to use? I generally use Cline or KiloCode - is this possible with those?
My own (very dirty) tool; there are some public ones, and I'll probably try to migrate to one of the more mature tools later. Example: https://github.com/ruvnet/claude-flow

> is this possible with those?

You can always write to stdin/read from stdout even if there is no SDK available, I guess. Or create your own agent on top of an LLM provider.
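For what it's worth, a rough sketch of that kind of stdin/stdout plumbing in Python. The non-interactive `claude -p` / `codex exec` invocations and the stop condition are assumptions on my part; substitute whatever flags your agent CLIs actually support:

    import subprocess

    def ask(cmd, prompt):
        # Feed the prompt on stdin, return whatever the agent prints on stdout.
        return subprocess.run(cmd, input=prompt, capture_output=True, text=True).stdout

    task = "Implement generics support in the parser, then run the test suite."
    for _ in range(3):  # a few coder/reviewer rounds
        report = ask(["claude", "-p"], task)
        review = ask(["codex", "exec"],
                     "Review this change critically and list concrete defects:\n" + report)
        if "no issues" in review.lower():  # crude stop condition
            break
        task = "Address this review feedback:\n" + review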
How did you get gdb working with Claude? There are a few MCP servers that look fine, curious what you used.
Well, I just told it to use gdb when necessary; MCP wasn't required at all! It also helps to tell it to integrate cpptrace and always look at the stacks.
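Concretely, what it fires off is just batch-mode gdb from the shell; here's a sketch of such a run (the gdb flags are standard, the crashing `./jopa Foo.java` invocation is a made-up stand-in):

    import subprocess

    # Batch-mode gdb: run the program, dump the full backtrace on a stop, never prompt.
    result = subprocess.run(
        ["gdb", "-batch",                 # exit after the commands finish
         "-ex", "run",                    # start the inferior immediately
         "-ex", "bt full",                # full stack with locals once it stops
         "--args", "./jopa", "Foo.java"], # hypothetical crashing invocation
        capture_output=True, text=True)
    print(result.stdout[-4000:])          # the tail of the log is usually enough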
MCP is more or less obsolete for code generation since agents can just run CLI tools directly.
Jopa means ass in Russian; this reminded me of Pidora.
I came here for this comment! TIL about Pidora :D
Don't forget the NPM packages Mocha and Chai (Pee and Tea)
There is also "mudyla" repo in the org, so
There's JEPA too
I'm genuinely curious how well this is working. Is there an independent Java test suite that covers major Java 5/6 features and can verify that the JOPA compiler works per the spec? I.e., I see that Claude has written a few tests in its commits, but it would be wonderful if there were a non-Clauded independent test suite (probably from other Java implementations?) that tracks progress.

I do feel that that is pretty much needed to claim that Claude is adding features to match the Java spec.
Well, it's complicated. The original JDK compliance tests are notoriously hard to deal with. Currently I parse nearly 100% of the positive test cases from the JDK 7 test suite (in one of the Java 7 branches), but I only have several dozen true end-to-end tests (build the .java with jopa, validate the classfile with javap, run the classfile with java).

So, I can't tell how good it actually is, but it definitely handles reasonably complex source files with generics (something the original compiler was unable to do).

The actual goal of the project is to be able to build at least Ant, to simplify a clean bootstrap of OpenJDK.
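For reference, each of those end-to-end tests boils down to roughly this (a sketch; the `jopa` invocation, the GenericsSmokeTest class, and the expected output are hypothetical stand-ins):

    import subprocess

    def check(cmd):
        # Run one step of the pipeline and fail loudly if it does not exit cleanly.
        out = subprocess.run(cmd, capture_output=True, text=True)
        assert out.returncode == 0, f"{cmd} failed:\n{out.stderr}"
        return out.stdout

    check(["jopa", "GenericsSmokeTest.java"])             # 1. build the .java with jopa
    disasm = check(["javap", "-v", "GenericsSmokeTest"])  # 2. validate the classfile with javap
    assert "major version: 50" in disasm                  #    Java 6 bytecode expected
    run_out = check(["java", "GenericsSmokeTest"])        # 3. run the classfile with java
    assert run_out.strip() == "OK"                        #    expected output baked into the test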
That is perilously close to the usual:

"AI DID EVERYTHING IN A DAY"

"How do you know it works?"

"... it just looks like it does"

Like when I ask an AI to port sed to Java, and it writes test cases ... running sed on a CLI and doesn't implement the full lang spec no matter how much prompting I give it.
Ah, by the way: I've tried to do the same with Codex (gpt-5.1-codex-max) and Gemini (2.5 Pro), and both failed spectacularly. This job was done mostly by Sonnet 4.5. Java 6 did not require intensive supervision. The Java 7 parts are done with Opus 4.5, and it constantly hits its limits; I have to intervene regularly.
Btw, I'm working on Java 7 support. At this moment I sorta have a working Java 7 compiler targeting Java 6 bytecode (Java 7 has StackMapTable, which is sort of annoying).

Also, I've tried to replace the parser with a modern one. Claude succeeds in generating Java 8 parsers with various parser generators/parser combinators but fails to resolve the extremely tight coupling.
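For context, StackMapTable frames only become mandatory at classfile major version 51 (Java 7); version 50 (Java 6) still goes through the old verifier without them, which is why targeting Java 6 bytecode sidesteps the problem. A quick way to check what a compiler actually emitted (a sketch; Output.class is a hypothetical name):

    import struct

    # Classfile header: 4-byte magic, 2-byte minor version, 2-byte major version.
    with open("Output.class", "rb") as f:
        magic, minor, major = struct.unpack(">IHH", f.read(8))

    assert magic == 0xCAFEBABE, "not a classfile"
    print(f"classfile version {major}.{minor}: "
          f"{'StackMapTable required' if major >= 51 else 'StackMapTable not required'}")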
What is the feasibility/craziness level of "LLM porting" the javac source code to C++?

Setting copyright issues aside, javac is a pretty clean textual-input-output program, and it can probably be reduced to a single-threaded variant.
Claude won't handle a project of that scale. Even with the Java 7 modernization project, which is much simpler than a full javac translation, I constantly hit context limits and Claude throws things like this at me:

    API Error: 400 {"type":"error","error":{"type":"invalid_request_error","message":"messages.3.content.76: `thinking` or `redacted_thinking` blocks in the latest assistant message cannot be modified. These blocks must remain as they were in the original response."},"request_id":"req_011CVWwBJpf3ZrmYGkYZLQVf"}
looking at the "att" branches (excuse my unhealthy curiosity) I can only say - "jesus fucking christ".<p>from the old parser ast -> to json -> to new ast representation (that is basically again copy of the old one) -> to some new incomplete bytecode generation<p>im sure there is some good explanation, but....why?! :)
I've been looking for a way to decouple the legacy parser from the rest of the compiler, plus create a way to dump parser output in a readable form. Unfortunately, the coupling is too tight there. In my own compilers, all the outputs of all the phases are serializable.

In the end I've just reanimated the original parser generator and progressed to full Java 7 syntactically (the -att5 branch), but there are some major obstacles with bytecode.
Related: I recently got javac working with OpenLDK, my JVM bytecode to Common Lisp transpiler. The `javacl` binary is a dumped SBCL image that behaves just like the OpenJDK javac program, but with CL under the hood (e.g. Java objects/methods are all CLOS).
I remember discovering and using jikes in the 90's. It was /so/ much faster than javac back then.

"Modernizing" to Java 6 is amusing.
[j|y]ikes
Tangential: isn't there a Java compiler written in Java from the same time period?
It's much older, but even now this is THE ONLY viable pathway to bootstrap a modern JDK from scratch. I'm trying to modernize it so the bootstrap path might be shortened.

See https://bootstrappable.org/projects/java.html
The Java-to-bytecode compiler (javac) has always been written in Java. There was a JVM written in Java, though: Jikes RVM.