According to this page, LLVM-MOS seems to be pretty soundly beaten by Oscar64 in the performance of its generated code.<p><a href="https://thred.github.io/c-bench-64/" rel="nofollow">https://thred.github.io/c-bench-64/</a><p>I think the ideal compiler for the 6502, and maybe for any of the memory-poor 8-bit systems, would be one that supported both native code generation where speed is needed and virtual machine code for compactness. Ideally it would also support inline assembly.<p>The LLVM-MOS approach of reserving part of the zero page as registers is a good start, but given how valuable zero page is, it would also be useful to be able to designate individual static/global variables as zero page or not.
I've implemented Atari 2600 library support for both LLVM-MOS and CC65, but there are too many compromises to make it suitable for writing a game.<p>The lack of RAM is a major factor; stack usage must be kept to a minimum, and you can forget any kind of heap. RAM can be extended with a special mapper, but due to the lack of a R/W pin on the cartridge, reads and writes use different address ranges, and C cannot handle this without a hacky macro solution.<p>Not to mention the timing constraints of 2600 display kernels, page-crossing limitations, bank switching, inefficient pointer chasing, etc. etc. My intuition is you'd need an SMT solver to write a language that compiles for this system without needing inline assembly.
AIUI, Oscar64 does not aim to implement a <i>standard</i> C/C++ compiler as LLVM does, so the LLVM-MOS approach is still very much worthwhile. You can help by figuring out which relevant optimizations LLVM-MOS seems to be missing compared to SOTA (compiled or human-written) 6502 code, and filing issues.
Aztec C had both native and interpreted code generation, back in the day.
With regard to code size in this comparison, someone associated with llvm-mos remarked that several factors are at play: their libc is written in C and tries to be multi-platform friendly, stdio takes up space, the division functions are large, and their float support is not asm-optimized.
I wasn't really thinking of the binary sizes presented in the benchmarks, but more in general. 6502 assembly is compact enough if you are manipulating bytes, but not if you are manipulating 16-bit pointers or doing things like array indexing, which is where a 16-bit virtual machine (with zero-page registers?) would help. Obviously there is a trade-off between speed and memory size, but on a 6502 target both are an issue, and it'd be useful to be able to choose - perhaps VM by default and native code for "fast" procedures or code sections.<p>A lot of the C library outside of math isn't going to be speed-critical - IO and the heap, for example - and there could also be dual versions to choose from if needed. Especially for retrocomputing, IO devices themselves were so slow that software overhead is less important.
This was a nice surprise when learning to code for NES, that I could write pretty much normal C and have it work on the 6502. A lot of tutorials warn you, "prepare for weird code" and this pretty much moots that.
I don't know this world well (I know what llvm is) but - does anyone know why this was made as a fork vs. contributing to llvm? I suppose it's harder to contribute code to the real llvm..?<p>Thanks
Hey, llvm-mos maintainer here. I actually work on LLVM in my dayjob too, and <i>I</i> don't particularly want llvm-mos upstream. It stretches LLVM's assumptions a lot, which is a good thing in the name of generality, but the <i>way</i> it stretches those assumptions isn't particularly relevant anymore. That is, it's difficult to find modern platforms that break the same assumptions.<p>Also, maintaining a fork is difficult, but doable. I work on LLVM a ton, so it's pretty easy for it to fold in to my work week-to-week.
LLVM has very high quality standards in my experience. Much higher than I've ever had even at work. It might be a challenge to get this upstreamed.<p>LLVM is also very modular which makes it easy to maintain forks for a specific backend that don't touch core functionality.
My experience is that while LLVM is very modular, it also has a pretty high amount of change in the boundaries, both in where they're drawn and in the interfaces between them. Maintaining a fork of LLVM with a new back-end is very hard.
Super interesting, thanks. I specifically thought that its modular aspect made it possible to just "load" architectures or parsers as ... "plugins"<p>But I'm sure it's more complicated than that. :-)<p>Thanks again
LLVM backends are indeed modular, and the LLVM project does allow for experimental backends. Some of the custom optimization passes introduced by this MOS backend are also of broader interest for the project, especially the automated static allocation for provably non-reentrant functions, which might turn out to be highly applicable to GPU-targeting backends.<p>It would be interesting to also have a viable backend for the Z80 architecture, which also seems to have a highly interested community of potential maintainers.
These processors were very, very different from what we have today.<p>They usually had only a single general-purpose register (plus some helpers). Registers were 8-bit, but addresses (pointers) were 16-bit.
Memory was highly non-uniform, with (fast) SRAM, DRAM and (slow) ROM all in one single address space.
Instructions often operated on RAM directly, and there was a plethora of complicated addressing modes.<p>This was partly because there was no big gap between processing speed and memory access, which makes it very unlikely that similar architectures will ever come back.<p>As interesting as experiments like LLVM-MOS are, they would not be a good fit for upstream LLVM.
> ... there was no big gap between processing speed and memory access, but this makes it very unlikely that similar architectures will ever come back. ...<p>Don't think "memory access" (i.e. RAM), think "accessing generic (addressable) scratchpad storage" as a viable alternative to <i>both</i> low-level cache and a conventional register file. This is not too different from how GPU low-level architectures might be said to work these days.
Pretty sure that the prospects of successfully pitching the LLVM upstream to include a 6502 (or any 8/16-bit arch) backend are only slightly better than a snowball’s chances in hell.
There is a similar project for the Game Boy (SM83 CPU), with a fork of LLVM.<p><a href="https://github.com/DaveDuck321/gb-llvm" rel="nofollow">https://github.com/DaveDuck321/gb-llvm</a><p><a href="https://github.com/DaveDuck321/libgbxx" rel="nofollow">https://github.com/DaveDuck321/libgbxx</a><p>It seems to be the first reasonably successful attempt (it can actually be used) among a handful of previous, abandoned LLVM Game Boy efforts.
A Rust fork that works on this LLVM fork, for the 6502, generating code that can be executed on a Commodore 64: <a href="https://github.com/mrk-its/rust-mos" rel="nofollow">https://github.com/mrk-its/rust-mos</a>
Slightly off-topic: if you want to learn low-level assembly programming in the 21st century, the 6502 is still an EXCELLENT choice!<p>Simple architecture, and really, really joyful to use even for casual programmers born a decade or two later :)
I'd argue that 68K is simpler to learn and use. You get a similar instruction set, but 32-bit registers, many of them. It's even got a relocatable stack so it can handle threading when you get to that point.
I agree; the 68k architecture was a dream for assembly programming. Each register is large enough to store useful values, there are lots of them, and there are instructions for multiply and divide. This lets you focus on the essence of what you want to accomplish, rather than getting side-tracked by how to represent the X-coordinate of some object because it's just over 8 bits wide, or how to multiply two integers. Both of these seemingly trivial things already require thought on the 6502.
And registers are actually pointer width, so you don't have to go through memory just to do arbitrary pointer arithmetic.
If 8-bit: the 6809. If 32-bit: the 68K. Those are miles ahead of the 6502. OTOH, if you want to see a fun, quirky chip, the 6502 is definitely great, and I'd recommend you use a (virtual) BBC Micro to start you off.
LLVM includes an equivalent to binutils, with a macro assembler, linker, objdump with disassembler, library tools, handling of formats like ELF and raw binary, etc.<p>LLVM-MOS includes all of that. It is the same process as using LLVM to cross-assemble for an embedded ARM board: the same GNU assembler syntax, just with 6502 opcodes. This incidentally makes LLVM-MOS one of the best available 6502 assembly development environments, if you like Unix-style cross-assemblers.
It's been amazing to see the progress on this project over the last 5 years. As someone who poked around looking at the feasibility of this myself, and gave up thinking it'd never be practical, I'm super happy to see how far they've gotten.<p>Maybe someday the 65816 target will get out there, a challenge in itself.
How does it compare to cc65 with regard to code size and speed?
Here's a benchmark of all modern 6502 C compilers: <a href="https://thred.github.io/c-bench-64/" rel="nofollow">https://thred.github.io/c-bench-64/</a> - do note that binary sizes also include the size of the standard libraries, which means it is not a full picture of the code generation density of the compilers themselves.