22 comments

  • malmeloo6 hours ago
    One big issue with FPGAs is how annoying it is to learn how to use them. I did a course on embedded systems a few years ago and nobody could truly get to enjoy it because we spent most of our time downloading and installing huge toolchains, waiting for synthesis and PnR to complete and debugging weird IDE issues. We need to open up the space to allow people to develop better solutions than what these companies are forcing down our throats.<p>There already exist fantastic open source tools such as Yosys, Nextpnr, iverilog, OpenFPGALoader, ... that together implement most features that a typical hardware dev would want to use. But chip support is unfortunately limited, so fewer people are using these tools.<p>We decided to build a VSCode extension that wraps these open source tools (<a href="https:&#x2F;&#x2F;edacation.github.io" rel="nofollow">https:&#x2F;&#x2F;edacation.github.io</a> for the interested) to combat this problem. Students are already using it during the course and are generally very positive about the experience. It&#x27;s by no means a full IDE, but if you&#x27;re just getting started with HDL it&#x27;s great to get familiar with it. Instead of a mess of a toolchain that nobody truly knows how to use, you now get a few buttons to visualize and (soon) program onto an FPGA.<p>There&#x27;s also Lushay Code for the slightly more advanced users. But we need more of these initiatives to really get the ball rolling and make an impact, so I&#x27;d highly recommend people to check out and contribute to projects like this.
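For the curious, an end-to-end run of that open-source flow for a Lattice iCE40 UP5K looks roughly like this (file names, the pin-constraint file, and the board name passed to openFPGALoader are placeholders; exact flags vary by tool version):

```shell
# Synthesize Verilog to a JSON netlist (Yosys)
yosys -p "synth_ice40 -top top -json top.json" top.v

# Place and route for an iCE40 UP5K (nextpnr)
nextpnr-ice40 --up5k --package sg48 --json top.json --pcf pins.pcf --asc top.asc

# Pack the textual output into a bitstream and load it over USB
icepack top.asc top.bin
openFPGALoader -b ice40_generic top.bin
```

Four commands, no multi-gigabyte IDE — which is the appeal the parent is describing.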
    • Neywiny5 hours ago
      I&#x27;m all for your endeavor, but didn&#x27;t see a device support list on your front page. Clicked the first of 2 links in your sidebar (docs) and got a 404. I&#x27;m not saying it&#x27;s telling that your issues page works when your docs page doesn&#x27;t, but it&#x27;s not the foot I would have put forward.
    • random31 hour ago
      I just remembered I have a Xilinx board I bought over a decade ago lying around somewhere. I don't remember ever plugging it in, but I do remember both the excitement of getting it and the confusion of trying to figure out the toolchain.
    • delifue5 hours ago
      Modern large productivity software (including IDEs) is often "fragile".<p>Sometimes some configuration is wrong and it behaves incorrectly, but you don't know which configuration.<p>Sometimes it relies on other software installed on the system, and if you have an incompatible version installed it malfunctions without telling you about the incompatibility.<p>Sometimes the IDE itself has random bugs.<p>A lot of time is spent working around IDE issues.
      • mikepurvis2 hours ago
        Building for an fpga shouldn’t be any harder than building for cortex mcus, and there are lots of free&#x2F;oss toolchains and configurations for those.
  • exmadscientist6 hours ago
    FPGAs need their &quot;Arduino moment&quot;. There have been so, so, so many projects where I&#x27;ve wanted just a little bit of moderately-complicated glue logic. Something pretty easy to dash off in VHDL or whatever. But the damn things require so much support infrastructure: they&#x27;re complicated to put down on boards, they&#x27;re complicated to load bitstreams in to, they&#x27;re complicated to <i>build</i> those bitstreams for, and they&#x27;re complicated to manage the software projects for.<p>As soon as they reach the point where it&#x27;s as easy to put down an FPGA as it is an old STM32 or whatever, they&#x27;ll get a lot more interesting.
    • tverbeure6 hours ago
      The strong point of FPGAs is their versatility. If you wanted an FPGA that would be easy to put on a board, you'd have to drop support for multiple voltage rails and thus multiple IO standards, which is exactly what you don't want to lose.<p>Building bitstreams is IMO not complicated. (I just copy a Makefile from a previous project and go from there.)<p>Loading them is a matter of plugging in a JTAG cable and typing “make program”.<p>I don't know what you mean by “manage SW projects for”?
      • exmadscientist5 hours ago
        &gt; you’d have to drop support for multiple voltage rails and thus multiple IO standards, which is exactly what you don’t want to lose.<p>Yes? Yes it is? 9 times out of 10, my entire board is LVCMOS33. I would <i>love</i> to have the option to drop all of the power rail complexity in a simplified series of parts.<p>Sometimes you need maximum I&#x2F;O speed. Sometimes you need maximum I&#x2F;O flexibility. Sometimes you need processing horsepower. And sometimes you need the certainty of hardware timing, which you get on a gate array and don&#x27;t get any time there&#x27;s a processor involved. Or, often, what I actually need is just a little bit of weird logic that&#x27;s asynchronous, but too hard to do with the remnants of 74-series or 4000-series logic that are still available.<p>&gt; Building bitstreams is IMO not complicated. (I just copy a Makefile from a previous project and go from there.)<p>It is not complicated for people who have spent a long time learning and who have past designs they can copy from. (I have a few of those myself.) It is nasty to explain to a new person and very nasty to explain well enough to reproduce in the future without me around.<p>&gt; Loading them is a matter of plugging in a JTAG cable and typing “make program”.<p>Yes, for you on the bench. Now program them into a product on an assembly line. Of course it is <i>possible</i>. It is still a giant headache, and quite a bit worse than just dealing with an MCU.<p>&gt; I don’t know what you mean with the “manage SW projects for”?<p>Two words: Xilinx ISE.
        • exmadscientist5 hours ago
          &gt; often, what I actually need is just a little bit of weird logic that&#x27;s asynchronous<p>As a concrete example of this: two weeks ago I wanted a 21-input OR gate. It would have been <i>wonderful</i> if I could spend a little bit of money, buy a programmable thing in a 24-pin package, put it down, figure out some way to get the bitstream in (this is never pleasant in medium-volume manufacturing, so it&#x27;s not like we&#x27;re going to solve it now), and get my gate function that is literally <i>one line of HDL</i>. One. Line.<p>As it was, a 21-input OR gate is so much work in 74-series logic that I abandoned that whole thing and we did the bigger-picture job in a different, worse, way.
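For reference, that one line is just Verilog's reduction-OR operator (port names invented for the sketch):

```verilog
// A 21-input OR: the reduction operator |x ORs all bits of a vector.
module or21 (
    input  [20:0] in,
    output        out
);
    assign out = |in;
endmodule
```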
          • tverbeure5 hours ago
            The device that you were looking for was not an FPGA but a GAL22V10L.
            • exmadscientist4 hours ago
              No, it wasn&#x27;t. Those are mostly available in PLCC and DIP packages and even if you can get the SOIC&#x2F;TSSOP versions they still cost $1.20 each at 10k volume. That&#x27;s flat-out unacceptable for 99% of the things I do. The <i>entire rest of the board</i> I was talking about was $4.60. Processor included. $1.20 is not going to fly.
              • tverbeure4 hours ago
                “Reduces use case and requirements to something impossibly niche and low volume then yells at the clouds.”<p>Anyway, just tie the output of 21 emitter followers together, add a resistor and - tadaaa - 21 input OR!
                • mitthrowaway242 minutes ago
                  If only someone could make a single part that is very versatile, so that it could get production economies of scale while solving all the thousands of different random problems various people might have, whether they need a 21-input OR or something else. Like an array of gates, but field-programmable!<p>(That&#x27;s a pretty steep price target even for a small FPGA though. With 16 pins maybe, but with 25?)
                • exmadscientist3 hours ago
                  Hey, that was my problem from last week. And, yes, I agree with you -- it was best solved another way.<p>But please don&#x27;t complain when I give concrete examples of things I&#x27;d like to do but couldn&#x27;t. (And please do recognize that there was a lot more context to the mess than just &quot;I need an OR gate&quot;, but no one cares about the <i>real</i> gory details.)
        • 151555 hours ago
          > and quite a bit worse than just dealing with an MCU.<p>Unless you're using some kind of USB DFU mode (which is annoying on assembly lines), SWD-based flashing of an MCU is substantially more complicated than the JTAG sequences that some internal-flash FPGAs use for programming.<p>These chips are just as easy or easier to program than any ARM MCU. Raw SPI NOR flash isn't "easy" to program if you've never done it before, either.
          • exmadscientist4 hours ago
            It&#x27;s mostly the whole &quot;two binaries&quot; problem.<p>Oh look, the factory screwed up and isn&#x27;t flashing the MCU this week! Does the board survive?<p>Oh look, the factory screwed up and isn&#x27;t flashing the PLD <i>this</i> week! Does the board survive?<p>Oh look, the factory... wait, what <i>is</i> the factory doing and why are they putting <i>that</i> sticker on <i>that</i>....<p>You get the idea. Yes, yes, it is all solvable. I have never claimed it isn&#x27;t. I am just claiming it is a giant pain in the ass and limits use of these things. I will <i>bend over backwards</i> to keep boards at one binary that needs to be loaded.
            • 151554 hours ago
              Embed the bitstream into your MCU firmware binary, bitbang the 50-100KB bitstream into SRAM via JTAG from your MCU in all of 10ms. This is &lt;100 lines of Rust.
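A hedged sketch of what that might look like. The `Pins` trait and pin names here are hypothetical stand-ins for whatever GPIO HAL the MCU firmware uses, and real parts also need a device-specific preamble (JTAG instruction sequence or reset dance) before data can be shifted — check the datasheet:

```rust
/// Minimal abstraction over two GPIO output lines; a real HAL type
/// (hypothetical here) would implement this for actual pins.
pub trait Pins {
    fn set_tck(&mut self, high: bool);
    fn set_tdi(&mut self, high: bool);
}

/// Bit-bang a configuration bitstream out, one TCK pulse per bit.
/// Returns the number of bits shifted.
pub fn shift_bitstream<P: Pins>(pins: &mut P, bitstream: &[u8]) -> usize {
    let mut bits = 0;
    for &byte in bitstream {
        // MSB-first is common for configuration data, but verify per device.
        for i in (0..8).rev() {
            pins.set_tdi((byte >> i) & 1 != 0);
            pins.set_tck(true); // target samples TDI on the rising edge
            pins.set_tck(false);
            bits += 1;
        }
    }
    bits
}
```

Embedding the bitstream into the firmware image is then just `include_bytes!("top.bin")`, which is what collapses the "two binaries" problem back down to one.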
              • exmadscientist3 hours ago
                Yes, it&#x27;s solvable. But my whole argument is that the entire experience is death by a thousand cuts. I&#x27;m not seeing how &quot;it&#x27;s possible in 100 lines of Rust&quot; (a language most people don&#x27;t even use for embedded work) is really countering my argument.
            • tverbeure4 hours ago
              I honestly start to wonder how in the world we survived flashing 3 different binaries, for years (bitstream, 2 MCUs), without ever getting a complaint from the production floor.<p>I should check my spam folder.
        • tverbeure5 hours ago
          > sometimes … sometimes … sometimes …<p>And sometimes you need support for multiple IO standards.<p>I don't understand what point you're trying to get across.<p>But if all you need is LVCMOS33, why do you not use a MAX10 FPGA with a built-in voltage regulator? Or a similar FPGA from GoWin that is positioned as a MAX10 alternative? What is wrong with those?<p>> JTAG<p>On our production line, we use JTAG to program the FPGA. We literally used the same “make program” command for development and production. That was for production volumes considerably larger than 100k.<p>> ISE<p>ISE was end-of-lifed when I started using FPGAs professionally. That was in 2012. The only reason it still exists is because some hold-outs are still using Spartan 6.
          • exmadscientist4 hours ago
            > I don’t understand what point you’re trying to get across.<p>My point is twofold:<p>1. There are many niches. Your main needs are not the same as my main needs. And my needs are poorly met by existing products, so I want to see something better. (And I do buy chips.)<p>2. All of this is way, way harder than it needs to be. It <i>could</i> be easy, but it isn't. Everything is <i>possible</i> right now. But I didn't pick the dreaded A-word ("Arduino") at random. Arduino is a kind of horrible product that did not make anything possible and did not really invent anything. It did not make anything really hard suddenly become easy. Hard things before Arduino were still hard after Arduino. It "just" made some things that used to be medium-hard pains-in-the-butt actually really quick and easy (at a little backend complexity cost: now you've got the Arduino IDE around, hope it doesn't break!).<p>It turns out that is very valuable.<p>And that is what I would like to see happen with FPGAs: make them <i>easy</i> to drop in instead of pains in the butt. All the pieces for this exist, nothing is new tech, no major revolutions need to happen. "Just" ease of use.
            • PunchyHamster1 hour ago
              > It did not make anything really hard suddenly become easy.<p>It did: onboarding people onto embedded programming.<p>You just ran it, wrote a few lines, and you had a working blinky. Write some more and you have a useful toy. You could even technically make products with it, but going from this to C++ was easier because you already knew what you could do; you just needed to go through the pain of switching toolchains once you were already invested.<p>Compare that to "you need to set up a compiler, toolchain, and SDK, figure out how to program the resulting binary, map the registers to your devboard pins, etc."
            • 151554 hours ago
              &gt; make them easy to drop in instead of pains in the butt<p>How much easier does it need to be than putting down a single 1mm^2 LDO and a QFN IC? Is this really <i>that</i> difficult?
    • willis9363 hours ago
      It&#x27;s already happened, people just haven&#x27;t realized. iCE40-UP5K costs a few bucks, needs minimal support circuitry, and is supported by FOSS toolchains (yosys). Fun packages like the pico-ice bring it all the way down to the entry-level arduino crowd. It just doesn&#x27;t have the marketing mindshare.
    • PunchyHamster2 hours ago
      They can&#x27;t, by nature of the proprietary bitstream. Arduino was only built thanks to ability to do whatever they wanted with open source compilers
    • timthorn6 hours ago
      Sounds like a PLD might suit your usecase? Simpler than an FPGA, programmed like an EEPROM, perfect for glue logic.
      • javawizard6 hours ago
        I wish CPLDs were better known in the common vernacular.<p>The industry draws a distinction between CPLDs and FPGAs, and rightly so, but most "Arduino-level" hobbyists think "I want something I can program so that it acts like such-and-such a circuit; I know, I need an FPGA!" when what they probably want is what the professional world would call a CPLD - and the distinction in terminology between the two does more to confuse than to clarify.<p>I don't know how to fix this; it'd be lovely if the two followed convergent paths, with FPGAs gaining on-board storage and the line between them blurring. Or maybe we need a common term that encompasses both. ("Programmable logic device" is technically that, but no-one knows it.)<p>Anyway. CPLDs are neat.
        • tverbeure25 minutes ago
          I don’t see how CPLDs solve anything?<p>You write RTL for them just like you do for FPGAs, you need to configure them as well. The only major benefit is that they don’t have a delay between power up and logic active? But that’s not something that would make a difference for most people.<p>CPLDs are also a dying breed and being replaced with FPGAs that have parallel on-board flash to allow fast configuration after power up. (e.g. MAX10)
        • bee_rider2 hours ago
          I don’t know anything about this (other than doing mediocre in some undergrad Verilog classes one million years ago). Wikipedia seems to call FPGAs a type of PLD. Of course, everybody has heard of FPGAs; is it right to think they’ve sort of branched off, become their own thing, and eclipsed their superset?
      • exmadscientist6 hours ago
        &quot;Programmed like an EEPROM&quot; is part of the problem, any system that needs more than one piece of firmware to be wrangled during the assembly&#x2F;bringup process is asking for pain.<p>But, really, no one cares what&#x27;s inside the box. CPLD or FPGA, they&#x27;re all about the same. The available PLDs are still not really acceptable. There&#x27;s a bunch of 5V dinosaurs that the manufacturers would obviously love to axe, and a few tiny little micro-BGA things where you&#x27;ve got to be buying 100k to even submit a documentation bug report. Not much for stuff in the middle.
    • 151555 hours ago
      &gt; they&#x27;re complicated to put down on boards<p><a href="https:&#x2F;&#x2F;gowinsemi.com&#x2F;en&#x2F;product&#x2F;detail&#x2F;46&#x2F;" rel="nofollow">https:&#x2F;&#x2F;gowinsemi.com&#x2F;en&#x2F;product&#x2F;detail&#x2F;46&#x2F;</a><p>- Requires just 1V2 + 3V3<p>- Available in QFN<p>- Bitstream is saved in internal flash or programmed to SRAM via a basic JTAG sequence<p><a href="https:&#x2F;&#x2F;www.efinixinc.com&#x2F;products-trion.html" rel="nofollow">https:&#x2F;&#x2F;www.efinixinc.com&#x2F;products-trion.html</a>
      • pclmulqdq5 hours ago
        The Altera Max 10 devices are also relatively simple to support (flash on the chip, few power rails, etc.)
      • exmadscientist4 hours ago
        &gt; Contact Sales<p>&gt; Request Sample<p>&gt; Please login to download the document.<p>I mean, yeah. My argument isn&#x27;t that anything is <i>impossible</i>. My argument is that all of this is <i>harder than it needs to be</i> and this is not countering me!
        • 151554 hours ago
          This is your job, and it really shouldn&#x27;t feel difficult. This is really not tedious: the minimum board design for these chips literally consists of just power, JTAG pins, and a clock (if the internal oscillator isn&#x27;t good enough.)<p>The Gowin FPGAs are available (at a massive premium) from Mouser, just like whatever MCU you are already using. Many are available for &lt;$1-2 in China. Efinix are available from DigiKey, with some SKUs under &lt;$10.<p>All of the Gowin documentation is available on their site with a free, approval-less email login and no NDA, or via Google directly (PDFs, just like Xilinx, even numbered similarly.)
          • mschuster913 minutes ago
            &gt; All of the Gowin documentation is available on their site with a free, approval-less email login<p>The problem is trust. I&#x27;m hesitant to hand out my e-mail anywhere because far too often I have been hounded by salespeople as a result, not to mention data breaches or bombardment of newsletters.
    • fennecbutt4 hours ago
      It's basically because they're so locked down: hard-to-get docs, stupid toolchains and IDEs, like others have mentioned.<p>It's like FPGA companies don't want people using them - much like other vendors, e.g. the PixArt sensor I wanted to use: NDA'd, because some parasite dipshit executive or manager thinks that register layouts are extremely sensitive information.<p>I've had dozens of uses for an FPGA... but every single time I just can't be bothered. Why bother, when they make it a pain in the ass on purpose?
      • 151554 hours ago
        None of these things are true for the new, cheap Chinese contenders.
    • rramadass3 hours ago
      &gt; FPGAs need their &quot;Arduino moment&quot;.<p>This is exactly it. Why hasn&#x27;t some expert group produced a very simple open design board with a simple Arduino-like IDE for FPGAs? Make it easy to access and use, get it into the hands of makers&#x2F;hobbyists and watch the apps&#x2F;ecosystem explode.<p>As an example, one could provide soft-cores for 8051&#x2F;RISC-V etc. right out of the box with a menu of peripherals to mix and match. Provide a simple language library wrapper say over SystemVerilog (or whatever the community settles on) just like Arduino did (with C++) that makes it &quot;easy&quot; to program the FPGA.<p>For apps, one good example would be putting TinyML (or any other ML&#x2F;LLM models) on a FPGA. This would take advantage of the current technology wave to make this project a success.<p>PS: Folks might find the book <i>FPGAs for Software Programmers by Dirk Koch et al.</i> (<a href="https:&#x2F;&#x2F;link.springer.com&#x2F;book&#x2F;10.1007&#x2F;978-3-319-26408-0" rel="nofollow">https:&#x2F;&#x2F;link.springer.com&#x2F;book&#x2F;10.1007&#x2F;978-3-319-26408-0</a>) useful.
  • mgilroy7 hours ago
    The issue with a software team using an FPGA is that software developers generally aren't very good at doing things in parallel. They generally do a poor job of implementing hardware. I previously taught undergraduates VHDL; the software students generally struggled with dealing with things running in parallel.<p>VHDL and Verilog are used because they are excellent languages for describing hardware. The tools don't really hold anyone back. Lack of training or understanding might.<p>Consistently, the issue with FPGA development for many years was that by the time you could get your hands on the latest devices, general-purpose CPUs were good enough. The reality is that if you are going to build a custom piece of hardware then you are going to have to write the drivers and code yourself. It's achievable; however, it requires more skill than pure software programming.<p>Again, thanks to low-power and low-cost ARM processors, a class of problems previously handled by FPGAs has been picked up by cheap but fast processors.<p>The reality is that for major markets custom hardware tends to win, as you can make it smaller, faster and cheaper. The probability is that someone will have built and tested it on an FPGA first.
    • gnull1 hour ago
      > VHDL and Verilog are used because they are excellent languages to describe hardware.<p>Maybe they were in the '80s. In 2025, language design has moved ahead quite a lot; you can't be saying that seriously.<p>Have a look at how clash-lang does it. It uses the functional paradigm, which is much more suitable for circuits than the pseudo-procedural style of Verilog. You can also parameterize modules by modules, not just by bit width. Take a functional programmer, give him Clash, and he'll have no problems doing things in parallel.<p>Back when I was a systems programmer, I tried learning SystemVerilog. I had zero conceptual difficulty, but I just couldn't justify to myself why I should spend my time on something so outdated and badly designed. The hardware designers at my company at the time were, on the other hand, OK with Verilog, because they hadn't seen any programming languages other than C and Python, and had no expectations.
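As a flavor of the functional style the parent is describing, a wide OR in Clash can be a fold over a fixed-size vector, with the width checked by the type system (a sketch assuming the standard `Clash.Prelude`; not compiled here):

```haskell
import Clash.Prelude

-- A 21-input OR: fold (||) over a vector whose length is part of its type.
-- Passing a Vec of any other length is a compile-time error.
or21 :: Vec 21 Bool -> Bool
or21 = fold (||)
```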
    • j-pb7 hours ago
      VHDL is ok, Verilog is a sin.<p>The issue isn&#x27;t the languages, it&#x27;s the horrible tooling around them. I&#x27;m not going to install a multi GB proprietary IDE that needs a GUI for everything and doesn&#x27;t operate with any of my existing tools. An IDE that costs money, even though I already bought the hardware. Or requires an NDA. F** that.<p>I want to be able to do `cargo add risc-v` if I need a small cpu IP, and not sacrifice a goat.
      • VonTum6 hours ago
        Well, really, the language _is_ the difficulty of much of hardware design. Both Verilog and VHDL are languages that were designed for simulation of hardware, not synthesis of hardware. Both languages have similar-but-not-quite ways of writing things, like blocking/nonblocking assigns causing incorrect behavior that's incredibly difficult to spot on the waveform, non-exhaustive assigns in always blocks causing latches, maybe-synthesizable for loops, etc. Most of this comes from their paradigm of an event loop: handle all events and the events that those events trigger, and so on, until all are done, then advance time to the next event. They simulate how the internal state of a chip changes every clock cycle, but they weren't built to actually do the designing of said chip itself.<p>I'm tooting my own horn with this, as I'm building my own language for doing the actual designing. It's called SUS.<p>Simple things look pretty much like C:<p><pre><code> module add : int#(FROM: -8, TO: 8) a, int#(FROM: 2, TO: 20) b -> int c { c = a+b } </code></pre> It automatically compensates for pipelining registers you add, and allows you to use this pipelining information in the type system.<p>It's a very young language, but me, a few of my colleagues, and some researchers at another university are already using it. Check it out => <a href="https:&#x2F;&#x2F;github.com&#x2F;pc2&#x2F;sus-compiler" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;pc2&#x2F;sus-compiler</a>
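The incomplete-assignment pitfall mentioned above looks like this in plain Verilog (a fragment, with `sel`, `a`, `b`, `q` assumed declared in the enclosing module):

```verilog
// Intent: a mux. Because q is not assigned on every path,
// synthesis infers a level-sensitive latch instead of pure combinational logic.
always @(*) begin
    if (sel)
        q = a;   // when sel == 0, q "remembers" its old value -> latch
end

// Fix: assign on every path, e.g. by giving a default first.
always @(*) begin
    q = b;       // default
    if (sel)
        q = a;
end
```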
        • oscillonoscope2 hours ago
          Language really isn&#x27;t the difficulty. That&#x27;s why there&#x27;s a thousand alt-HDLs that have been used for little more than blinking LEDs.
        • kevin_thibedeau5 hours ago
          VHDL was designed for specification. Verilog is the one with the warts from its simulator heritage.
      • blackguardx6 hours ago
        You can pretty much do everything in Vivado from the command line as long as you know Tcl...<p>Also, modern Verilog (AKA Systemverilog) fixes a bunch of the issues you might have had. There isn&#x27;t much advantage to VHDL these days unless perhaps you are in Europe or work in certain US defense companies.
        • Cadwhisker6 hours ago
          # Here's the general flow for Vivado Tcl projects that takes you from source code to a bit-file with no interaction. Read UG835 for details.<p><pre><code> create_project -in_memory -part ${PART}
set_property target_language VHDL [ current_project ]
read_vhdl "my_hdl_file.vhd"
synth_design -top my_hdl_top_module_name -part ${PART}
opt_design
place_design
route_design
check_timing -file my_timing.txt
report_utilization -file my_util.txt
write_checkpoint my_routed_design.dcp
write_bitstream my_bitfile.bit </code></pre>
        • exmadscientist6 hours ago
          The main advantage to VHDL is the style of thinking it enforces. If you write your Verilog or SystemVerilog like it&#x27;s VHDL, everything works great. If you write your VHDL like it&#x27;s Verilog, you&#x27;ll get piles of synthesis errors... and many of them will be real problems.<p>So if you learn VHDL first, you&#x27;ll be on a solid footing.
          • blackguardx5 hours ago
            I think this can just be summarized to &quot;write any HDL like you are modeling real hardware.&quot; Both VHDL and Systemverilog were primarily intended for validation and synthesis is a second class citizen.
          • pclmulqdq5 hours ago
            There is a trend among programmers to assume that everything supported by the syntax can be done. This is not even true in C++, but it&#x27;s something people think. If you are writing synthesizable SystemVerilog, only a small subset of the language used in a particular set of ways works. You have to resist the urge to get too clever (in some ways, but in other ways you can get extremely clever with it).
            • d_tr3 hours ago
              I thought that if you have some idea about how hardware works, it is kind of more or less obvious whether something is synthesizable or not.
      • tverbeure6 hours ago
        Or you could do the right thing, ignore the GUI for 99% of what you’re doing, and treat the FPGA tools as command line tools that are invoked by running “make”…
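A minimal sketch of such a make-driven flow (the Tcl script, part number, and the use of openFPGALoader for programming are illustrative choices, not the only options; Vivado's batch entry point is `vivado -mode batch -source <script>`):

```make
# Illustrative Makefile wrapping Vivado's non-project batch mode.
PART    := xc7a35tcpg236-1
SOURCES := $(wildcard src/*.v)

top.bit: build.tcl $(SOURCES)
	vivado -mode batch -source build.tcl -tclargs $(PART)

program: top.bit
	openFPGALoader top.bit

.PHONY: program
```

With this in place, development and CI both reduce to `make` plus `make program`.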
        • pclmulqdq5 hours ago
          This is how most FPGA users interact with vivado&#x2F;quartus these days.
          • tverbeure4 hours ago
            One really wonders when reading some of the comments here…
            • pclmulqdq3 hours ago
              I should have said &quot;most _professional_ FPGA users&quot; because I assume many people here who don&#x27;t know this (including the author of the article) are not.
    • checker6591 hour ago
      > software developers generally aren't very good at doing things in parallel<p>If only hardware people would stop stereotyping. Also, do you guys not use formal tools (BMC etc.) now? Who do you think wrote those tools? Heck, all the EDA stuff was designed by software people.<p>I just can't with the gatekeeping.<p>(Btw, this frustration isn't just pointed at you. I find this sentiment being parroted all over /r/FPGA on reddit and elsewhere. It's damn frustrating, to say the least. Also, the worst thing is that all the hardware folks only know C, so they think all programming is imperative. VHDL is Ada, for crying out loud.)
    • makestuff6 hours ago
      Yeah I agree it is a lack of understanding on how to use the tools. The main issue I ran into in my undergrad FPGA class as a CS student was a lack of understanding on how to use the IDE. We jumped right into trying to get something running on the board instead of taking time to get everything set up. IMO it would have been way easier if my class used an IDE that was as simple as Arduino instead of everyone trying to run a virtual machine on their macbooks to run Quartus Prime.
  • mikewarot24 minutes ago
    An FPGA is like a spreadsheet for bits that can recalculate hundreds of millions of times per second.<p>It's a declarative programming system, and there's a massive impedance mismatch when you try to write source code for it in text. I suspect that something closer to flow charts would be much easier to grok. Verilog is about as good a match as you are likely to get, if you stick with the source-code approach to designing with them.
  • mastax4 hours ago
    My prediction is one of the Chinese FPGA makers will embrace open source, hire a handful of talented open source contributors, and within a handful of years end up with tooling that is way easier to use for hobbyists, students, and small businesses. They use this as an inroad and slowly move upmarket. Basically the Espressif strategy.<p>Xilinx, Altera, and Lattice are culturally incapable of doing this. For lattice especially it seems like a no brainer but they don’t understand the appeal of open source still.
    • tverbeure18 minutes ago
      Define “upmarket” ?<p>For me, that means higher capacity and advanced blocks such as SERDES, high-speed DRAM interfaces etc.<p>The bottleneck in using these kind of FPGAs has rarely been the tools, it’s the amount of time it takes to write and verify correct RTL. That’s not an FPGA specific problem, it applies to ASIC just the same.<p>I don’t see how GoWin and other alternative brands would be better placed to solve that problem.
    • 151554 hours ago
      Gowin and Efinix&#x27;s tools are extremely spartan compared to Vivado or Quartus: they&#x27;re pretty much straight HDL to bitstream compilers. There&#x27;s also a FOSS implementation flow available for the Gowin chips (but I haven&#x27;t used it.)<p>HDL isn&#x27;t getting any easier, though, and that&#x27;s where most of the complexity is.
    • bsder3 hours ago
      &gt; My prediction is one of the Chinese FPGA makers will embrace open source<p>Sadly, this doesn&#x27;t seem to be panning out because the Chinese domestic market has perfectly functional Xilinx and Altera clones for a fraction of the price. Consequently, they don&#x27;t care about anything else.<p>It irritates me to no end that Gowin won&#x27;t open their bitstream format because they&#x27;d displace a bunch of the low end almost immediately.
  • Peteragain36 minutes ago
    Programming languages were originally designed by mathematicians based on the Turing machine. A modern language for FPGAs is a challenge for theoretical computer science, but we should keep merely computer-literate researchers away from it. This is a call out to hard-core maths heads to think about how we should think about parallelism and about what FPGA hardware can do.
    • Ericson231435 minutes ago
      <a href="https:&#x2F;&#x2F;clash-lang.org&#x2F;" rel="nofollow">https:&#x2F;&#x2F;clash-lang.org&#x2F;</a> we&#x27;ve already done the research! Circuits are just functional programming (the vast majority of the time).<p>We just need the toolchains to be opened up.
  • Cadwhisker6 hours ago
    This article is a rant about how bad the tools are without going into specifics. "VHDL and Verilog are relics" - well, so is C, but they all get the job done if you've been shown how to use them properly.<p>"engineers are stuck using outdated languages inside proprietary IDEs that feel like time capsules from another century." The article misses that Vivado was developed in the 2010s and released around 2013. It's a huge step up from ISE if you know how to drive it properly, and THIS is the main point that the original author misses. You need a different mindset when writing hardware, and it's not easy to find training that shows how to do it right.<p>If you venture into the world of digital logic design without a guide or mentor, then you're going to encounter all the pitfalls and get frustrated.<p>My daily Vivado experience involves typing "make", then waiting for the result and analysing from there (if necessary). It takes experience to set up a hardware project like this, but once you get there it's compatible with standard version control, CI tools, regression tests and the other nice things you expect from a modern development environment.
    • tverbeure6 hours ago
      &gt; My daily Vivado experience involves typing &quot;make&quot;, …<p>Exactly my experience with Quartus as well.<p>One really can’t help but wonder if those who always whine about the IDE&#x2F;GUI just don’t know any better?
      • Cadwhisker6 hours ago
        I&#x27;ve managed to make nice &#x27;make&#x27; flows for Vivado, ISE, Quartus and DC. Libero took a bit more poking, but it&#x27;s also possible.<p>The GUI interfaces are what newcomers tend to aim for straight away, but they&#x27;re not good for any long-term &quot;repeatable&quot; build flows and they&#x27;re no use for CI. I think this is where a lot of the frustration comes from.
        • fooblaster5 hours ago
          I think my real problem is that xilinx pushes the gui flows heavily. It is extremely annoying to configure the mpsoc fabrics entirely outside of vivado. Same thing for using any of their bundled IP.
          • Cadwhisker4 hours ago
            Yes, IP usage is awkward and tricky. You can use the GUI to make the initial .xci file or .tcl file, but when you build a project, you need to use the same version of Vivado that the IP core was originally created in. Xilinx have improved that a little with &#x27;write_ip_tcl&#x27; and &#x27;write_bd_tcl&#x27; now having flags that let you ignore the version (or minor version). I&#x27;ve not had time to try those yet.
    • CamperBob24 hours ago
      <i>&quot;VHDL and Verilog are relics&quot;, well so is &quot;C&quot; but they all get the job done if you&#x27;ve been shown how to use them properly.</i><p>Or how to use an LLM properly.
  • dsab48 minutes ago
    I had the misfortune of working with the Xilinx Vivado environment, and it&#x27;s fucking garbage. The software is straight out of the 90s: everything is glued together with shell scripts and Tcl, the IDE throws thousands of warnings and errors while building a sample project, and the documentation is missing or spread across 150 PDFs. If the manufacturer of your evaluation board prepared an example for the previous version of Vivado, you must keep two installations - probably about 2 * 100GB - and if you want to keep anything under version control, you have to use external tools. It&#x27;s all absurd.
  • omneity8 hours ago
    If performant FPGAs were more accessible we’d be able to download models directly into custom silicon, locally, and unlock innovation in inference hardware optimizations. The highest grade FPGAs also have HBM memory and are competitive (on paper) to GPUs. To my understanding this would be a rough hobbyist version of what Cerebras and Groq are doing with their LPUs.<p>Unlikely this will ever happen but one can always dream.
    • 151555 hours ago
      &gt; highest grade FPGAs also have HBM memory<p>The three SKUs between Xilinx and Altera that had HBM are no longer manufactured because Samsung Aquabolt was discontinued.
  • rzerowan5 hours ago
    On the software front, as mentioned, VHDL and Verilog are showing their age, both in their design and in their tooling ecosystem. Attempts such as Chisel[1] (written in Scala) also haven&#x27;t gotten much traction - partly, I suspect, because of the language choice; something more accessible like Kotlin&#x2F;OCaml would have been better.<p>Secondly, integration with consumer devices and OSes is almost non-existent - it should really be simpler to interact with an FPGA a la GPU&#x2F;network chip, and to have more mainboards with low-cost integrated FPGAs, even if they only have a couple of hundred logic cells.<p>[1]<a href="https:&#x2F;&#x2F;github.com&#x2F;chipsalliance&#x2F;chisel&#x2F;blob&#x2F;main&#x2F;README.md" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;chipsalliance&#x2F;chisel&#x2F;blob&#x2F;main&#x2F;README.md</a>
  • Ericson231436 minutes ago
    This is a very correct article. FPGAs should indeed be really easy to use!
  • jhide6 hours ago
    This article could’ve been written 20 years ago with only minor revisions, and it would’ve been true then. But it’s not now. It is trivial - literally a day of work - to set up a build system and CI&#x2F;CD environment using Verilator if you are already proficient with your build system of choice. Learning Tcl to script a bitfile-generation target using your FPGA vendor’s tools is a few extra days of work. And regarding IDE support, the authors complain about the experience of writing code in the vendor GUI; they should look at one of the numerous fully featured SystemVerilog LSPs available in e.g. VS Code.<p>The real argument for open-source toolchains is much narrower in scope, and claiming they’re required to fix a nonexistent tooling problem is absurd.
    • amirhirsch2 hours ago
      I did write this 20 years ago <a href="https:&#x2F;&#x2F;fpgacomputing.blogspot.com&#x2F;2006&#x2F;05&#x2F;methods-for-reconfigurable-computing.html" rel="nofollow">https:&#x2F;&#x2F;fpgacomputing.blogspot.com&#x2F;2006&#x2F;05&#x2F;methods-for-recon...</a><p>The vendor tools are still a barrier to the high-end FPGA&#x27;s hardened IP
  • rcxdude7 hours ago
    FPGA toolchains certainly could do with being pulled out of the gutter but I don&#x27;t think that alone will lead to much of a renaissance for them. Almost definitionally they&#x27;re for niches: applications which need something weird in hardware but aren&#x27;t big enough to demand dedicated silicon, because the flexibility of FPGAs comes at a big cost in die area (read:price), power, and speed.
  • xgstation4 hours ago
    I imagine an FPGA could just be part of a general-purpose CPU, exposing user-space APIs to program it to accelerate certain workflows - in other words, this sounds exactly like JIT to me. People could program the FPGA as needed, e.g. an AV1 encoder&#x2F;decoder, accelerating some NN layers, or even a JS runtime. Am I imagining something too wild for the hardware, or is it just that the ecosystem isn&#x27;t there yet to allow such flexible use cases?
    • tverbeure8 minutes ago
      My rule of thumb is a 40x silicon area ratio between FPGA and ASIC, a clock speed that is around 5x lower, and a lot more power consumption.<p>If you have an application that can be done on a CPU, with lots of sequential dependencies (such as video compression&#x2F;decompression), an FPGA doesn’t stand a chance compared to adding dedicated silicon area.<p>That’s even more so if you’d embed an FPGA on a CPU die. Intel tried it and you got a power-hungry jack of all trades, master of none that nobody knew what to do with.<p>Xilinx MPSOC and RFSOC are successful, but their CPUs are comparatively lower performance and used as application-specific orchestrators, never as generic CPUs that run traditional desktop or server software.
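      Putting numbers on that rule of thumb (the 5 mm2 / 1 GHz hardened-codec figures below are assumptions for illustration, not measurements):

```python
# Back-of-envelope comparison using the rule-of-thumb ratios above:
# ~40x silicon area and ~5x clock penalty for FPGA vs. ASIC.
AREA_RATIO = 40   # FPGA area / ASIC area (rule of thumb)
CLOCK_RATIO = 5   # ASIC clock / FPGA clock (rule of thumb)

def fpga_equivalent(asic_area_mm2, asic_clock_ghz):
    """Return (area, clock) an FPGA needs for the same fixed-function block."""
    return asic_area_mm2 * AREA_RATIO, asic_clock_ghz / CLOCK_RATIO

# Hypothetical hardened codec block: 5 mm2 at 1 GHz in custom silicon.
area, clock = fpga_equivalent(5.0, 1.0)
print(f"FPGA needs ~{area:.0f} mm2 running at ~{clock:.1f} GHz")
# Roughly 200 mm2 at 0.2 GHz: why dedicated silicon wins for codecs.
```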
    • 151553 hours ago
      Digital logic design isn&#x27;t software programming, and today&#x27;s FPGAs are for most intents and purposes &#x27;single-configuration-at-a-time&#x27; devices - you can&#x27;t realistically time-slice them.<p>The placement and routing flow of these devices is an NP-Complete problem and is relatively non-deterministic* (the exact same HDL will typically produce identical results, but even slightly different HDL can produce radically different results.)<p>All of these use cases you&#x27;ve mentioned (AV1 decoders, NN layers, but especially a JS runtime) require phenomenal amounts of physical die area, even on modern processes. CPUs will run circles around the practical die area you can afford to spare - at massively higher clock speeds - for all but the most niche of problems.
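      The flavor of the placement problem is easy to demo: even a toy annealer over a tiny netlist shows why placement is driven by randomized heuristics whose results are sensitive to small input changes. This is a from-scratch illustration, not how any vendor tool actually works.

```python
# Toy simulated-annealing placer: assign cells to grid slots, minimizing
# total Manhattan wirelength over two-pin nets. Real place-and-route is
# vastly more sophisticated; this only illustrates the heuristic nature
# of the problem.
import math
import random

def wirelength(placement, nets):
    """Total Manhattan distance over all two-pin nets."""
    total = 0
    for a, b in nets:
        (xa, ya), (xb, yb) = placement[a], placement[b]
        total += abs(xa - xb) + abs(ya - yb)
    return total

def anneal(nets, cells, grid=4, steps=5000, seed=0):
    """Anneal cell positions on a grid; returns (placement, cost)."""
    rng = random.Random(seed)
    slots = [(x, y) for x in range(grid) for y in range(grid)]
    rng.shuffle(slots)
    placement = dict(zip(cells, slots))  # random initial placement
    cost = wirelength(placement, nets)
    temp = 5.0
    for _ in range(steps):
        a, b = rng.sample(cells, 2)
        placement[a], placement[b] = placement[b], placement[a]  # swap move
        new_cost = wirelength(placement, nets)
        # Always accept improvements; accept uphill moves with Boltzmann prob.
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost
        else:
            placement[a], placement[b] = placement[b], placement[a]  # undo
        temp *= 0.999  # cool down
    return placement, cost

# Six cells wired in a ring; the annealer should pull them close together.
cells = list("abcdef")
nets = [(cells[i], cells[(i + 1) % 6]) for i in range(6)]
placement, cost = anneal(nets, cells)
print("final wirelength:", cost)
```

      Rerunning with a different seed (or a slightly perturbed netlist) typically lands on a different layout - the same effect, scaled up enormously, is why slightly different HDL can produce radically different timing results.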
  • phendrenad25 hours ago
    It seems to me that there are exactly 3 buyers for FPGAs: government contractors (who spend millions all at once), retro gamers (a small market), and electronics hobbyists (another small market). It&#x27;s no wonder every company has oriented itself towards the first one. I look to China to accidentally make chips that are an order of magnitude better &quot;just because they can&quot;.
  • jauntywundrkind8 hours ago
    Making FPGAs actually available (with unencumbered stacks) would be so great. Companies IMO do better when they stop operating from within their moat &amp; this would be such an amazing use case to lend support to that hypothesis.<p>Gowin and Efinix, like Lattice, have some very interesting new FPGAs that they&#x27;ve innovated hard on, but which are still only so-so available.<p>Particularly with AI about, having open-source stacks really should be a major door-opening function. There could be such an OpenROAD-type moment for FPGAs!
  • octoberfranklin6 hours ago
    The problem is that FPGA companies don&#x27;t see themselves as chip companies.<p>They see themselves as CAD software companies. The chip is just a copy-protection dongle.
  • anondawg557 hours ago
    No. FPGAs already have a future. You just don&#x27;t know about it yet.
  • checker6593 days ago
    Cost is also such a big issue.
    • fleventynine7 hours ago
      There are some reasonably affordable models like <a href="https:&#x2F;&#x2F;www.lcsc.com&#x2F;product-detail&#x2F;C5272996.html" rel="nofollow">https:&#x2F;&#x2F;www.lcsc.com&#x2F;product-detail&#x2F;C5272996.html</a> that are powerful enough for many tasks.
  • CamperBob24 hours ago
    <i>Here’s the first big misconception: HDL is hardware. It isn’t. HDL is software and should be managed like software.</i><p>Yes, that&#x27;s certainly a big misconception. Maybe not the one the author meant to call out, but... yes, a big misconception indeed.
    • oscillonoscope2 hours ago
      Its both depending on context. If HDL becomes custom silicon then it needs to be treated like hardware. If you can easily deploy field updates to your device then it&#x27;s no different than any other firmware
  • nospice6 hours ago
    To folks who wax lyrical about FPGAs: <i>why</i> do they need a future?<p>I agree with another commenter: I think there are parallels to &quot;the bitter lesson&quot; here. There&#x27;s little reason for specialized solutions when increasingly capable general-purpose platforms are getting faster, cheaper, and more energy efficient with every passing month. Another software engineering analogy is that you almost never need to write in assembly because higher-level languages are pretty amazing. Don&#x27;t get me wrong, when you need assembly, you need assembly. But I&#x27;m not wishing for an assembly programming renaissance, because what would be the point of that?<p>FPGAs were a niche solution when they first came out, and they&#x27;re arguably even more niche now. Most people don&#x27;t need to learn about them and we don&#x27;t need to make them ubiquitous and cheap.
    • tyami946 hours ago
      I can&#x27;t say I agree with you here, if anything FPGAs and general purpose microprocessors go hand in hand. It would be an absolute game changer to be able to literally download hardware acceleration for a new video codec or encryption algorithm. Currently this is all handled by fixed function silicon which rapidly becomes obsolete. AV1 support is only just now appearing in mainstream chips after almost 8 years, and soon AV2 will be out and the cycle will repeat.<p>This is such a severe problem that even now, (20+ year old) H.264 is the only codec that you can safely assume every end-user will be able to play, and H.264 consumes 2x (if not more) bandwidth compared to modern codecs at the same perceived image quality. There are still large subsets of users that cannot play any codecs newer than this without falling back to (heavy and power intensive) software decoding. Being able to simply load a new video codec into hardware would be revolutionary, and that&#x27;s only one possible use case.
      • tverbeure1 minute ago
        It would only be a game changer if you don’t care about a 40x area cost, massively lower clock speeds, and an off-the-charts increase in power consumption.<p>AV1 is a complex codec. You need a large FPGA (hundreds of mm2) to run it. When implemented as custom silicon, it’s just a handful of mm2.<p>If you can’t afford AV1 as custom silicon, you’re much better off doing it in SW.
      • nospice4 hours ago
        But <i>why</i> would it be amazing? The alternative right now is that you do it in software and just dedicate a couple of cores to the task (or even just put in a separate $2 chip to run the decoder).<p>Like, I get the aesthetic appeal, and I accept that there is a small subset of uses where an FPGA really makes a difference. But in the general case, it&#x27;s a bit like getting upset at people for using an MCU when a 555 timer would do. Sure, except doing it the &quot;right&quot; way is actually slower, more expensive, and less flexible, so why bother?
        • fennecbutt4 hours ago
          Battery powered or thermally constrained devices.
          • nospice4 hours ago
            ...which are playing back video, so they&#x27;re likely blowing most of their power budget on the display and on radio. I guess my threshold of &quot;amazing&quot; is different. Again, I&#x27;m not denying some incremental utility in specialized uses, but most of the time, it just doesn&#x27;t seem to be worth the pain - especially since nothing about the implementation will be portable or maintainable in the long haul.<p>In the same vein, no one is writing a smartwatch software stack in 100% bare-metal assembly, although in the hands of a capable developer, I&#x27;m sure it could prolong battery life.
      • ThrowawayR25 hours ago
        And you think that a downloaded codec on an FPGA would perform anywhere close to custom silicon? Because it won&#x27;t; configurability comes at a steep cost.
        • checker6591 hour ago
          FPGAs are more like CGRAs these days. With the right DSP units, it could absolutely be competitive with custom silicon.
  • ursAxZA6 hours ago
    If Jim Keller says it, I’ll believe it.<p>My Ryzen agrees — the fans just spun up like it’s hitting 10,000 rpm.