What do you mean by respect? Here's a layperson's perspective, at least.<p>Up through the 486 (with its built-in x87), the x87 was always a niche product. You had to know about it, need it, buy it, and install it, over and on top of buying a PC in the first place. So definitionally, it was relegated to the periphery of the industry. Most people didn't even know the x87 was a possibility. (I distinctly remember a PC World article having to explain why there was an empty socket next to the 8088 socket in the IBM PC.)<p>However, in the periphery where it mattered, it gained acceptance within a few years of becoming available. Lotus 1-2-3, AutoCAD, and many compilers (including yours, IIRC) had x87 support early on. I would argue that this is one of the better examples of marginal hardware being appropriately supported.<p>The other argument I'd make is that (thanks to William Kahan) the 8087 was the first real attempt at IEEE-754 support in hardware. Given that IEEE-754 is still the standard, I'd suggest that the x87's place in history is secure. While we may not be executing x87 opcodes, our floating-point data is still in a format first used in the x87. (Not the 80-bit type, but do we really care? If the 80-bit type were truly important, I'd have thought that in the intervening 45 years there'd have been a material attempt to bring it back. Instead, what we have is a push towards narrower floating-point types for GPGPU and the like... fp8 and fp16, sure... fp80, not so much.)
> What do you mean by respect?<p>The disinterest programmers have in using 80-bit arithmetic.<p>A bit of background - I wrote my own numerical analysis programs when I worked at Boeing. The biggest issue I had was accumulation of rounding errors. More bits would push back the cliff where the results turned into gibberish.<p>I know there are techniques to minimize this problem. But they aren't simple or obvious. It's easier to go to higher precision. After all, you already have the chip in your computer.
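(For readers who haven't run into those techniques: compensated summation, usually credited to Kahan, is the classic one. A minimal C sketch, not code from the post above; the function name and array-based interface are just for illustration:)

    #include <stddef.h>

    /* Kahan (compensated) summation: carry the rounding error from each
       addition forward instead of letting it accumulate silently. */
    double kahan_sum(const double *x, size_t n)
    {
        double sum = 0.0;
        double c = 0.0;              /* running compensation for lost low-order bits */
        for (size_t i = 0; i < n; i++) {
            double y = x[i] - c;     /* apply the correction from the previous step */
            double t = sum + y;      /* big + small: low-order bits of y may be lost */
            c = (t - sum) - y;       /* recover what was lost; becomes the new correction */
            sum = t;
        }
        return sum;
    }

The compensation term is algebraically zero, so this only works when compiled with strict IEEE semantics; flags like -ffast-math will happily optimize it away.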
Yes, Kahan's argument in favor of 80-bit precision has always been that it allows ordinary programmers (like the expected users of the IBM PC, who lack the knowledge and experience of a numerical analyst) to write floating-point code without subtle bugs caused by unexpected rounding behavior.
> The disinterest programmers have in using 80-bit arithmetic.<p>I don't know, other than to say there's often a tendency in this industry to overlook the better in the name of the standard. 80-bit probably didn't offer enough marginal value to enough people to be worth the investment and complexity. I also wonder how much of an impact comes from the fact that 80-bit quantities don't pack neatly onto 64-bit boundaries. Not to mention that memory bandwidth costs are at least 25% higher for 80-bit quantities than for 64-bit ones, and floating-point work is very often bandwidth constrained. There's more precision in 80-bit, but it's not free, and as you point out, there are techniques for managing the lack of precision.<p>> A bit of background - I wrote my own numerical analysis programs when I worked at Boeing. The biggest issue I had was accumulation of rounding errors.<p>This sort of thing shows up in even the most prosaic places, of course:<p><a href="https://blog.codinghorror.com/if-you-dont-change-the-ui-nobody-notices/" rel="nofollow">https://blog.codinghorror.com/if-you-dont-change-the-ui-nobo...</a><p>In any event, while we're chatting, thank you for your longstanding work in the field.
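(As an aside on the alignment point, a quick way to see the storage cost is just to print the sizes. This is a sketch assuming a typical x86-64 Linux gcc/clang target, where long double is the x87 80-bit format padded out to 16 bytes; MSVC maps long double to plain double, so results vary by compiler and ABI.)

    #include <stdio.h>

    int main(void)
    {
        /* On x86-64 Linux with gcc/clang, long double carries 10 bytes of
           data but typically occupies 16 bytes so arrays stay aligned. */
        printf("sizeof(double)        = %zu\n", sizeof(double));       /* 8 */
        printf("sizeof(long double)   = %zu\n", sizeof(long double));  /* often 16 */
        printf("_Alignof(long double) = %zu\n", _Alignof(long double));
        return 0;
    }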
The 80-bit format has been part of the IEEE standard since the beginning.<p>The IEEE standard included almost all of what the Intel 8087 had implemented, the main exception being the projective extension of the real number line. Because of that deviation in the standard, the Intel 80387 dropped the feature.<p>Where you are right is that most other implementers of the standard chose not to provide this extended-precision format, due to the higher cost in die area, power consumption, and memory usage, the latter exacerbated by the alignment issue. The same was true for Intel when defining SSE, SSE2, and later ISA extensions. The main cost issue is the superlinear growth of multiplier size with precision: a 64-bit multiplier is not a little bigger than a 53-bit multiplier, but much bigger.<p>Nowadays the FP arithmetic standard also includes 128-bit floating-point numbers, which are preferable to 80-bit numbers and have no alignment problems. However, few processors implement this format in hardware, and on processors where it would have to be implemented in a software library, one can get higher performance by using double-double numbers instead of quadruple-precision numbers (unless intermediate results risk overflow/underflow within the range of double-precision exponents).<p>In general, on the most popular CPUs, e.g. x86-64 or AArch64 based, one should use a double-double precision library for the arithmetic computations where the traditional 80-bit Intel 8087 format would have been appropriate.
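(To make the double-double suggestion concrete, here is a minimal C sketch of double-double addition built from the usual error-free transformations. It is the simplified "sloppy" variant rather than a full library; real implementations, such as the QD library, handle the corner cases more carefully.)

    /* A double-double value: hi + lo, with |lo| no more than half an ulp of hi. */
    typedef struct { double hi, lo; } dd;

    /* Knuth's two-sum: s + e equals a + b exactly, with s = fl(a + b). */
    static void two_sum(double a, double b, double *s, double *e)
    {
        *s = a + b;
        double bb = *s - a;
        *e = (a - (*s - bb)) + (b - bb);
    }

    /* Fast two-sum, valid when |a| >= |b|. */
    static void quick_two_sum(double a, double b, double *s, double *e)
    {
        *s = a + b;
        *e = b - (*s - a);
    }

    /* Simplified ("sloppy") addition of two double-double numbers. */
    static dd dd_add(dd a, dd b)
    {
        double s, e;
        two_sum(a.hi, b.hi, &s, &e);          /* exact sum of the high parts */
        e += a.lo + b.lo;                     /* fold in both low parts */
        dd r;
        quick_two_sum(s, e, &r.hi, &r.lo);    /* renormalize the pair */
        return r;
    }

These error-free transformations assume every operation is correctly rounded to double, which, somewhat ironically, is exactly what the x87's 80-bit registers could silently break unless the precision-control bits were set; with SSE2 arithmetic that is not an issue.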
Haha, the calculator app is missing one critical feature: a history of the numbers you typed in, so you can double-check the column of numbers you added.<p>> thank you for your longstanding work in the field.<p>I sure appreciate that, especially since I give away all my work for free these days!