8 comments

  • HarHarVeryFunny 10 hours ago
    According to this page, LLVM-MOS seems to be pretty soundly beaten in performance of generated code by Oscar64.

    https://thred.github.io/c-bench-64/

    I think the ideal compiler for the 6502, and maybe for any of the memory-poor 8-bit systems, would be one that supported both native code generation where speed is needed and virtual-machine code for compactness. Ideally it would also support inline assembly.

    The LLVM-MOS approach of reserving some of zero page as registers is a good start, but given how valuable zero page is, it would also be useful to be able to designate static/global variables as zero page or not.
    • sehugg 6 hours ago
      I've implemented Atari 2600 library support for both LLVM-MOS and CC65, but there are too many compromises to make it suitable for writing a game.

      The lack of RAM is a major factor; stack usage must be kept to a minimum and you can forget any kind of heap. RAM can be extended with a special mapper, but due to the lack of a R/W pin on the cartridge, reads and writes use different address ranges, and C does not handle this without a hacky macro solution.

      Not to mention the timing constraints of 2600 display kernels, page-crossing limitations, bank switching, inefficient pointer chasing, etc. My intuition is you'd need an SMT solver to write a language that compiles for this system without needing inline assembly.
    • zozbot234 8 hours ago
      AIUI, Oscar64 does not aim to implement a *standard* C/C++ compiler as LLVM does, so the LLVM-MOS approach is still very much worthwhile. You can help by figuring out which relevant optimizations LLVM-MOS seems to be missing compared to SOTA (compiled or human-written) 6502 code, and filing issues.
      • djmips 2 hours ago
        I feel like no amount of optimizations will close the gap; it's an intractable problem.
        • fooker 1 hour ago
          It's the performance of the generated code, not the performance of the compiler.
    • kwertyoowiyop 6 hours ago
      Aztec C had both native and interpreted code generation, back in the day.
    • bbbbbr 8 hours ago
      With regard to code size in this comparison, someone associated with llvm-mos remarked that some contributing factors are: their libc is written in C and tries to be multi-platform friendly, stdio takes up space, the division functions are large, and their float support is not asm-optimized.
      • HarHarVeryFunny 7 hours ago
        I wasn't really thinking of the binary sizes presented in the benchmarks, but more in general. 6502 assembly is compact enough if you are manipulating bytes, but not if you are manipulating 16-bit pointers or doing things like array indexing, which is where a 16-bit virtual machine (with zero-page registers?) would help. Obviously there is a trade-off between speed and memory size, but on a 6502 target both are an issue and it'd be useful to be able to choose - perhaps VM by default and native code for "fast" procedures or code sections.

        A lot of the C library outside of math isn't going to be speed critical - things like IO and the heap, for example - and there could also be dual versions to choose from if needed. Especially for retrocomputing, the IO devices themselves were so slow that software overhead is less important.
        • djmips 1 hour ago
          More often than not the slow IO devices were coupled with speed-critical optimized code, due to cost savings or hardware simplification. A heap rarely works well on a 6502 machine - there are no 16-bit stack pointers, and it's just slower than doing without. However, I tend to agree that a middle-ground 16-bit virtual machine is a great idea. The first one I ever saw was Sweet16, by Woz.
  • mtklein 12 hours ago
    This was a nice surprise when learning to code for the NES: I could write pretty much normal C and have it work on the 6502. A lot of tutorials warn you to "prepare for weird code", and this pretty much moots that.
  • gregsadetsky 10 hours ago
    I don't know this world well (I know what LLVM is) but - does anyone know why this was made as a fork vs. contributing to llvm? I suppose it's harder to contribute code to the real llvm..?

    Thanks
    • mysterymath 54 minutes ago
      Hey, llvm-mos maintainer here. I actually work on LLVM in my day job too, and *I* don't particularly want llvm-mos upstream. It stretches LLVM's assumptions a lot, which is a good thing in the name of generality, but the *way* it stretches those assumptions isn't particularly relevant anymore. That is, it's difficult to find modern platforms that break the same assumptions.

      Also, maintaining a fork is difficult, but doable. I work on LLVM a ton, so it's pretty easy to fold it into my work week-to-week.
    • jjmarr 10 hours ago
      LLVM has very high quality standards in my experience. Much higher than I've ever had even at work. It might be a challenge to get this upstreamed.

      LLVM is also very modular, which makes it easy to maintain forks for a specific backend that don't touch core functionality.
      • codebje 7 hours ago
        My experience is that while LLVM is very modular, it also has a pretty high amount of change at the boundaries, both in where they're drawn and in the interfaces between them. Maintaining a fork of LLVM with a new back-end is very hard.
        • jjmarr 6 hours ago
          I know my company (AMD) maintains an LLVM fork for ROCm. YMMV.
          • codebje 4 hours ago
            I should have qualified: it's hard to do for an individual or very small team as a passion side-project. It's pretty time consuming to keep up with the rate of change in LLVM.
      • gregsadetsky 9 hours ago
        Super interesting, thanks. I specifically thought that its modular aspect made it possible to just "load" architectures or parsers as ... "plugins".

        But I'm sure it's more complicated than that. :-)

        Thanks again
        • zozbot234 8 hours ago
          LLVM backends are indeed modular, and the LLVM project does allow for experimental backends. Some of the custom optimization passes introduced by this MOS backend are also of broader interest for the project, especially the automated static allocation for provably non-reentrant functions, which might turn out to be highly applicable to GPU-targeting backends.

          It would be interesting to also have a viable backend for the Z80 architecture, which also seems to have a highly interested community of potential maintainers.
          • codebje7 hours ago
            <a href="https:&#x2F;&#x2F;github.com&#x2F;jacobly0&#x2F;llvm-project" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;jacobly0&#x2F;llvm-project</a><p>... but now three years out of date, because it&#x27;s hard to maintain :-)
    • weinzierl 8 hours ago
      These processors were very, very different from what we have today.

      They usually had only a single general-purpose register (plus some helpers). Registers were 8-bit but addresses (pointers) were 16-bit. Memory was highly non-uniform, with (fast) SRAM, DRAM, and (slow) ROM all in one single address space. Instructions often operated on RAM directly, and there were a plethora of complicated addressing modes.

      This was partly because there was no big gap between processing speed and memory access, which makes it very unlikely that similar architectures will ever come back.

      As interesting as experiments like LLVM-MOS are, they would not be a good fit for upstream LLVM.
      • zozbot234 7 hours ago
        > ... there was no big gap between processing speed and memory access, but this makes it very unlikely that similar architectures will ever come back. ...

        Don't think "memory access" (i.e. RAM), think "accessing generic (addressable) scratchpad storage" as a viable alternative to *both* low-level cache and a conventional register file. This is not too different from how GPU low-level architectures might be said to work these days.
        • djmips 1 hour ago
          Great point. And you can even extend that to think like a 6502 or GPU programmer on an AMD, ARM or Intel CPU as well if you want the very best performance. Caches are big enough on modern CPUs that you can almost run portions of your code in the same manner. I bet TPUs at Google also qualify.
    • Sharlin 10 hours ago
      Pretty sure that the prospects of successfully pitching LLVM upstream to include a 6502 (or any 8/16-bit arch) backend are only slightly better than a snowball's chances in hell.
      • alexrp 2 hours ago
        Worth noting that LLVM has AVR and MSP430 backends, so there's no particular resistance to 8-bit/16-bit targets.
  • bbbbbr 8 hours ago
    There is a similar project for the Game Boy (SM83 CPU) with a fork of LLVM:

    https://github.com/DaveDuck321/gb-llvm

    https://github.com/DaveDuck321/libgbxx

    It seems to be the first reasonably successful attempt (it can actually be used) among a handful of previously abandoned LLVM Game Boy attempts.
    • retrac 7 hours ago
      Presumably it would be straightforward to port the GB code generation to the Intel 8080 / Z80. There have been a few attempts at LLVM for those CPUs over the years, but none that panned out, I think?
      • codebje 7 hours ago
        The CE-dev community's LLVM back-end for the (e)Z80 'panned out' in that it produced pretty decent Z80 assembly code, but like most hobby-level back-ends the effort to keep up to date with LLVM's changes overwhelmed the few active contributors, and it's now three years since the last release. It still works, so long as you're OK using the older LLVM (and clang).

        This is why these back-ends aren't accepted by the LLVM project: without a significant commitment to supporting them, they're a liability for LLVM.
      • zozbot234 7 hours ago
        Most attempts at developing new LLVM downstream architectures simply fail at keeping up with upstream LLVM, especially across major releases. Perhaps these projects should focus a bit more on getting at least some of their simpler self-contained changes to be adopted upstream, such as custom optimization passes. Once that is done successfully, it might be easier to make an argument for also including support for a newly added ISA, especially a well-known ISA that can act as convenient reference code for the project as a whole.
  • self_awareness 11 hours ago
    A Rust fork that works on this LLVM fork, for the 6502, generating code that can be executed on a Commodore 64: https://github.com/mrk-its/rust-mos
  • iberator 9 hours ago
    Slightly off-topic: if you want to learn low-level assembly programming in the 21st century, the 6502 is still an EXCELLENT choice!

    It's a simple architecture and really, really joyful to use, even for casual programmers born a decade or two later :)
    • 1000100_1000101 9 hours ago
      I'd argue that the 68K is simpler to learn and use. You get a similar instruction set, but many 32-bit registers. It's even got a relocatable stack, so it can handle threading when you get to that point.
      • chihuahua 8 hours ago
        I agree; I feel like the 68k architecture was a dream for assembly programming. Each register is large enough to store useful values, there are lots of them, and there are instructions for multiply and divide. This allows you to focus on the essence of what you want to accomplish, and not get side-tracked into how to represent the X-coordinate of some object because it's just over 8 bits wide, or how to multiply two integers. Both of these seemingly trivial things already require thought on the 6502.
      • monocasa 8 hours ago
        And registers are actually pointer-width, so you don't have to go through memory just to do arbitrary pointer arithmetic.
    • jacquesm 8 hours ago
      If 8-bit: 6809. If 32-bit: 68K. Those are miles ahead of the 6502. OTOH, if you want to see a fun, quirky chip, the 6502 is definitely great, and I'd recommend you use a (virtual) BBC Micro to start you off.
      • bsder 6 hours ago
        Yeah, the 6809 is just ridiculously good to learn assembly language on. Motorola cleaned up all the idiocies of the 6800 in the 6809.

        The attention the 6502 gets is just because of history. The advantage the 6502 had was that it was *cheap* - on every other axis the 6502 sucked.
        • classichasclass 3 hours ago
          Sucked compared to what? If the 6502 had sucked on every metric but cost, while it would have gotten some use, I don't think it would have been as heavily used as it was.
        • jacquesm 5 hours ago
          Imagine a world where the Apple II had a 6800, later upgraded to a 6809...
          • hashmash 4 hours ago
            It wouldn't have happened, because the 6809 wasn't binary compatible with the 6800.
            • djmips 1 hour ago
              The 6809 was SOURCE compatible with the 6800 - you can assemble 6800 code with a 6809 assembler and it will run, with perhaps very minor tweaks.
            • jacquesm 4 hours ago
              So?
              • hashmash 2 hours ago
                Because none of the existing software would have worked. The idea of running a Rosetta-like feature on an 8-bit CPU isn't feasible. The Apple II eventually received an upgraded processor, the 65816, which was compatible with the 6502.
            • bsder 4 hours ago
              Then the Apple II would never have sold.

              The 6800 was *expensive* versus the 6502 - almost 10x (the 6502 was $25 when the 6800 was $175, which was *already* reduced from $360)!
              • djmips 1 hour ago
              And yeah, there was a 6502 Apple I too!
              • jacquesm 4 hours ago
              Yes, I was thinking more from a tech perspective, not from a price perspective.
    • retrac 7 hours ago
      LLVM includes an equivalent to binutils, with a macro assembler, linker, objdump with disassembler, library tools, handling of formats like ELF and raw binary, etc.

      LLVM-MOS includes all of that. It is the same process as using LLVM to cross-assemble for an embedded ARM board: the same GNU assembler syntax, just with 6502 opcodes. This incidentally makes LLVM-MOS one of the best available 6502 assembly development environments, if you like Unix-style cross-assemblers.
  • cmrdporcupine 12 hours ago
    It's been amazing to see the progress on this project over the last 5 years. As someone who poked around looking at the feasibility of this myself, and gave up thinking it'd never be practical, I'm super happy to see how far they've gotten.

    Maybe someday the 65816 target will get out there - a challenge in itself.
    • jacquesm 8 hours ago
      Instead of the 65816 we got the ARM, which I think was the better thing to happen in the longer term.
  • michalpleban 11 hours ago
    How does it compare to cc65 with regard to code size and speed?
    • asiekierka 10 hours ago
      Here's a benchmark of all modern 6502 C compilers: https://thred.github.io/c-bench-64/ - do note that the binary sizes also include the size of the standard libraries, which means it is not a full picture of the code-generation density of the compilers themselves.
      • michalpleban 10 hours ago
        Thank you, that's really helpful.