Yes, it's all good and nice that your types are sound and you don't have panics, but I feel like this could get you in trouble in the real world (gleam also uses this division convention, and people very much use gleam for "real world" things). Suppose you took an average over an unintentionally empty list (maybe your streaming data source just didn't send anything over the last minute due to a backhoe hitting a fiber in your external data source's data center) and took some downstream action based off of what you think is the rolling average. You could get royally fucked if money is involved.<p>Crashing would have been preferable.<p>1/0 = 0 is unsuitable and dangerous for anyone doing anything in the real world.
People are too scared of crashes. Sure, crashing is not ideal. Best is to do what the program is supposed to do, and if you can’t, then it’s better to produce a friendly error message than to crash. But there are far worse outcomes than crashing. Avoiding a crash by assigning some arbitrary behavior to an edge case is not the right approach.
Strongly agree here. IMO libraries should try hard to return sensible error codes (within reason — e.g. null pointer access is unrecoverable IMO), but application code should just crash. And when a library returns an error code, default to just crashing on failure until you have a compelling reason to do something more complicated.
as so often, the really preferable solution would be to make it impossible to code the wrong thing from the start:<p><pre><code> - a sum type (or some wrapper type) `number | DIVISION_BY_ZERO` forces you to explicitly handle the case of having divided by zero
- alternatively, if the division operator only accepted the set `number - 0` as type for the denominator you'd have to explicitly handle it ahead of the division. Probably better as you don't even try to divide by zero, but not sure how many languages can represent `number - 0` as a type.</code></pre>
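A minimal Python sketch of the first option (names hypothetical; with Python the union is checked by a type checker like mypy rather than at compile time):

```python
from typing import Union

class DivisionByZero:
    """Sentinel type standing in for the DIVISION_BY_ZERO case of the sum type."""

def div(a: float, b: float) -> Union[float, DivisionByZero]:
    # Returning a union forces every caller to decide what a zero divisor means.
    if b == 0:
        return DivisionByZero()
    return a / b

result = div(1.0, 0.0)
if isinstance(result, DivisionByZero):
    handled = 0.0  # the caller's explicit choice, not a language-wide convention
else:
    handled = result
```

The point isn't the runtime check itself but that the return type makes ignoring the zero case a type error rather than a silent convention.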
Rather than removing 0 from a numeric type, we can avoid including it at all. For example, we can have a bunch of numeric types like:<p><pre><code> Positive = One | Succ Positive
Nat = Zero | Positive
NonZeroInt = Positive | Neg Positive
Int = Zero | NonZeroInt
Rational = Ratio Int Positive
</code></pre>
etc.<p>Depending on the language, these could be implemented with little or no runtime overhead.
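In a language without static sum types you can at least get the runtime half of this idea. A Python sketch (names hypothetical) where a zero denominator is unrepresentable by construction:

```python
from dataclasses import dataclass
from fractions import Fraction

@dataclass(frozen=True)
class Positive:
    """A strictly positive integer: the type's invariant excludes zero."""
    value: int

    def __post_init__(self):
        if self.value <= 0:
            raise ValueError("Positive requires value > 0")

def rational(numerator: int, denominator: Positive) -> Fraction:
    # The denominator's type guarantees non-zero, so this division cannot fail.
    return Fraction(numerator, denominator.value)
```

In a statically typed language the same idea is checked at compile time and, as the comment says, can compile away to little or no runtime overhead.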
All Rust's primitive integer types have a corresponding non-zero variant: NonZeroU8, NonZeroI32, NonZeroU64, NonZeroI128, etc. And indeed NonZero&lt;T&gt; is the corresponding type for any primitive integer type T, if that's useful in your generic code.
galaxy brain: `number` already includes `NaN`
I use Gleam in production[0][1], and that is not really an issue.<p>Gleam offers division functions that return an error type, and you can use those if you need that check.<p>They fit a list-length use case well as they work better with a piping syntax which is popular in Gleam.<p>[0] <a href="https://nestful.app" rel="nofollow">https://nestful.app</a><p>[1] <a href="https://blog.nestful.app/p/why-i-rewrote-nestful-in-gleam" rel="nofollow">https://blog.nestful.app/p/why-i-rewrote-nestful-in-gleam</a>
It's funny, I hold the exact opposite opinion, but from the same example: In the course of my programming career, I've had at least 3 different instances where I crashed stuff in production because I was computing an average and forgot to handle the case of the empty list. Everything would have been just fine if dividing by zero yielded zero.<p>I've learned my lesson since, but still.
What was the problem with crashing? Surely you had Kubernetes/GCP/ECS restart your container, or if you're using a BEAM based language, it would have just restarted<p>> Everything would have been just fine if dividing by zero yielded zero<p>perhaps you weren't making business decisions based on the reported average, just logging it for metrics or something, in which case I can see how a crash/restart would be annoying.
> What was the problem with crashing?<p>I imagine the problem was that it crashed the whole process, and so the processing of other, completely fine data that was happening in parallel, was aborted as well. Did that lead to that data being dropped on the floor? Who knows — but probably yes.<p>And process restarts are not instantaneous, just so you know, and that's even without talking about bringing the application into the "stable stream processing" state, which includes establishing streaming connections with other up- and downstream services.
Interestingly, RISC-V goes with 1/0 = 0xFFFF_FFFF (in 32-bit mode).<p>I guess that's slightly more of a warning than giving 0.
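For what it's worth, the RISC-V spec defines unsigned DIVU by zero as 2^XLEN − 1 and signed DIV by zero as −1, which is the same all-ones bit pattern. A quick sketch of that reinterpretation:

```python
XLEN = 32
val = 0xFFFF_FFFF  # all ones: DIVU's divide-by-zero result when XLEN = 32

# Reinterpret the same bit pattern as a signed two's-complement integer
signed = val - (1 << XLEN) if val >= (1 << (XLEN - 1)) else val
assert signed == -1  # DIV's divide-by-zero result: same bits, read as signed
```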
This article invents a new binary operation, calls it "division" and uses the "/" operator to denote it. But the article repeats multiple times that this new operation isn't a multiplicative inverse, so it's not actually division. For example, (a/b)*b=a isn't true for this new operation.
(a/b)*b=a isn't true, but that's also not true for the math that you're thinking of. What is true is IF b≠0 THEN (a/b)*b=a. And this definition works just fine even if you define division by zero.<p>Also just to point out, the statement here really is a·b⁻¹·b = a, which might make it more clear why b≠0.
There's no "if" in the division operation. Division is not defined for b=0. a/0 is a nonsensical quantity because the zero directly contradicts the definition of division.<p>maybe someday there will be a revelation where somebody proposes that it's a new class of numbers we've never considered before like how (1-1), (0-1) and sqrt(-1) used to be nonsensical values to past mathematicians. For now it's not defined.
Division by zero is perfectly well defined in floating point. x/0 = INF and INF*0 = NaN. That means b*(a/b) != a if b = 0.<p>It's true that it's not defined for integer types, but that wouldn't make a = b*(a/b) true for them either.<p>It's also common to define x/0 = infinity in the extended real numbers that floating point models.
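Python itself raises ZeroDivisionError for float division by zero rather than following IEEE 754 here, but the INF*0 = NaN rule above is easy to see with math.inf:

```python
import math

# IEEE 754: infinity times zero is NaN
assert math.isnan(math.inf * 0.0)

# CPython deviates from IEEE 754 for division: it raises instead of returning inf
try:
    1.0 / 0.0
except ZeroDivisionError:
    pass
```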
The limit 1/x as x goes to zero diverges to plus or minus infinity depending on whether you approach from the right or the left. IEEE 754 uses a signed zero, so defining 1/+0 = +INF and 1/-0 = -INF makes sense. If you do not have a signed zero, arbitrarily picking either plus or minus infinity makes much less sense and picking their "average" zero seems more sensible. So x/0 is not actually +INF - even if you meant +0 and we forget about -0 - it is +INF or -INF depending on the sign of x and NaN if x is +0 or -0.
TFA was about mathematics, not computer programs.
Mathematically, the <i>limit as b approaches 0 of a/b</i> is defined to be +/- INF depending whether a and b have matching signs. The limit represents the value that a/b asymptotically approaches as b approaches 0. a/b for b=0 is still undefined.<p>For a good example of <i>why</i> this needs to be undefined, consider that limit as b approaches zero of a/b is both +INF and -INF depending on whether b is "approaching" from the side that matches a's sign or the opposite side. At the exact singularity where b=0 +INF and -INF are both equally valid answers, which is a contradiction.<p>also in case you weren't aware, "NaN" stands for "not a number".
The definitions in the floating point standard make much more sense when you look to 0/INF as "something so close to/far from 0 we cannot represent it", rather than the actual concepts of 0 and infinity.
In floating point a = b * (a / b) is not always a true statement.<p><pre><code> >>> import random
>>> random.random()
0.4667867537470992
>>> n = 0
>>> for i in range(1_000_000):
... a = random.random()
... b = random.random()
... if (a == b * (a / b)):
... n += 1
...
>>> n
886304
</code></pre>
For example:<p><pre><code> >>> a, b = 0.7959754927336106, 0.7345016612407793
>>> a == b * (a / b)
False
>>> a
0.7959754927336106
>>> b * (a / b)
0.7959754927336105
</code></pre>
This is off by one ulp ("unit in the last place").<p>And of course the division of two finite floating point numbers may be infinite:<p><pre><code> >>> a, b = 2, 1.5e-323
>>> a
2
>>> b
1.5e-323
>>> b * (a / b)
inf
>>> a/b
inf
</code></pre>
As a minor technical point, x/0 can be -INF if sgn(x) < 0, and NaN if x is a NaN.
Did you fully read the article?<p>In modern math, the concept of a field establishes addition and multiplication within its structure. We are not free to redefine those without abandoning a boatload of things that depend on their definition.<p>Division is not inherent to field theory, but rather an operation defined by convention.<p>It seems like you're fixating on the most common convention, but as Hillel points out, there is no reason we have to adopt this convention in all situations.
Reusing symbols like +, *, or / to define operations that aren't the + or the / you're used to is pretty common in math. It's just notation.<p>At the end of the day, the / that we have in programming has the same problem as this article's /, almost all programming languages will return 5/2 = 2 when dividing integers, even though 2 * 2 is not 5! Division is not defined for all integers, but it's just <i>convenient</i> to extend it when programming.<p>So if some languages want to define 1/0 = 0, we really shouldn't be surprised that 0*0 is not 1, we already had the (a/b)*b != a problem all along!
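The (a/b)*b != a problem for integer division, concretely:

```python
a, b = 5, 2
q = a // b          # integer division truncates: q == 2
assert q * b != a   # 2 * 2 == 4, not 5
```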
> Reusing symbols like +, *, or / to define operations that aren't the + or the / you're used to is pretty common in math. It's just notation.<p>Reusing symbols <i>in a different context</i> is pretty common; taking a symbol that is already broadly used in a specific way (in this case, that `a/b` is defined for elements in a field as multiplying `a` by the multiplicative inverse of `b`) is poor form and, frankly, a disingenuous argument.
I am a professor for algebra at a research university. I make a point out of teaching my students that `a/b` is NOT the same as multiplying `a` by the multiplicative inverse of `b`.<p>The standard example is that we have a well-defined and useful notion of division in the ring Z/nZ for n any positive integer even in cases where we "divide" by an element that has no multiplicative inverse. Easy example: take n=8, then you can "divide" 4+nZ by 2+nZ just fine (and in fact turn Z/nZ into a Euclidean ring), even though 2+nZ is not a unit, i.e. admits no multiplicative inverse.
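A brute-force illustration of the point (not the Euclidean-ring construction itself): in Z/8Z the equation 2x = 4 has solutions, even though 2 has no multiplicative inverse there.

```python
n = 8

# All x in Z/8Z with 2x = 4 (mod 8): the "division" 4 / 2 has answers...
solutions = [x for x in range(n) if (2 * x) % n == 4]

# ...even though 2 is not a unit: no x satisfies 2x = 1 (mod 8)
has_inverse = any((2 * x) % n == 1 for x in range(n))

assert solutions == [2, 6]
assert not has_inverse
```

Note the answer isn't unique, which is exactly why picking a well-behaved notion of "division" here takes some care.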
That's nonsense. a/b is float in Python 3, and even in other languages a/b gets closer to its actual value as a and b get bigger (the "limit", which is the basis of Algebra). So the four operations in programming generally do agree with the foundations of Algebra. But a/0=0 is 100% against Algebra. And it's very unintuitive. It's basically saying zero is the same as infinity, and therefore all numbers are the same, so why bother having any numbers at all?
Floats don't have multiplicative inverses, and the floating point operations don't give us any of the mathematical structures we expect of numbers. Floating point division already abandons algebra for the sake of usefulness.
> even in other languages a/b gets closer to it's actual value as a and b get bigger (the "limit", which is the basis of Algebra)<p>This is not generally true. 5/2 = 2, 50/20 = 2, 500/200 = 2, and so on no matter how big the numbers get.
If you were to define a/0 the most logical choice would be a new special value "Infinity". The second best choice would be the maximum supported value of the type of a (int, int64 etc). Anything else would be stupid.
Multiplicative inverse happens to be a convenient way to define division in the reals, but there are cases when multiplicative inverses do not correspond to any notion of division. E.g. take a finite ring of integers, like what you’d use for cryptography or heck any operation on an `int`!<p>It’s all just definitions. Always has been.
Under what definition of division is (a/b)*b=a true for all values?
The one that excludes 0. It's not a terribly complicated thing to restrict domain: you don't expect, for example, complex values in real-valued functions.
That 0 is not an allowable value for b is necessary, but not generally sufficient.
Can you say more? If "0 is not an allowable value for b", then it seems to me that (a/b)*b=a isn't true for all values. Specifically, it's false when b=0.<p>IIUC, codeflo is arguing that the division operation defined in the article isn't "actual division" because (a/b)*b=a isn't true for all values. But I can't think of a definition of division that satisfies that criteria.
When we say "is not an allowable value", we are speaking about the domain [1]: all the values for which the function is defined. When we say "for all values", we implicitly mean for all values of the domain.<p>The parallel in programming would be the contract : you provide a function that works on a given set of values. Or the type: the function would "crash" if you passed a value not of the type of its parameter, but it is admitted it won't be done.<p>(In the remaining I'm referring to 1/x instead of a/b to simplify things a bit)<p>Another way of saying it is that the function is undefined for 0. (Or on {0}). Then the property is true for all values (on which the function is defined, but saying it is redundant, the function can't be called outside its domain, it is an error to try to do this).<p>The domain is often left out / implicit, but it is always part of the definition of a function.<p>0 is not in the domain, so it's not to be considered at all when studying the function (except maybe when studying limits, but the function will still not be called with it).<p>[1] <a href="https://en.m.wikipedia.org/wiki/Domain_of_a_function" rel="nofollow">https://en.m.wikipedia.org/wiki/Domain_of_a_function</a>
If "0 is not an allowable value for b", then (a/b)*b=a is not defined when b=0, so it is neither true nor false, since you had previously agreed that b=0 is not allowed (regardless of what "/" and "*" are meaning in this context).
We don't allow division of apples and oranges, either. So why is excluding 0 weird, but excluding ice cream as an argument is not?
But 0/1 = 0. So 1/0 must be the inverse/opposite of zero.<p>And I think if you look at the Riemann sphere, the inverse of zero is the point where +infinity and -infinity meet. I would call that 0^(-1).
I debated this with my boss at my first programming job (this was 20+ years ago). He thought 1/0 should be 0 rather than an error because "that's what people expect". My argument was from mathematical definitions (the argument which this blog post picks apart).<p>In retrospect, I see his point better - practical use trumps theory in most language design decisions.<p>I haven't changed my mind but the reason has shifted more toward because "it's what a larger set of people expect in more situations" rather than mathematical purity.
i would not expect 1/0 to be zero. as you divide by smaller numbers, the quotient gets bigger, so i can't understand why someone would expect /0 to be zero.
1/0 = 0 is usually not a practical thing, it's to satisfy that the output of the division operator stays in the type and you don't want crashes (a "feature" of ponylang and gleam, e.g.). It's kind of a PL wonk thing.<p>It's not at all a good idea for very important practical reasons as I outline in a reply to parent.
Never have I ever met anybody who would think dividing by zero yields zero O_o<p>If anything it feels natural to yield +/-infinity
It's not about what I think zero division yields I've taken a math class before. It's just about representation within the type system. If division can return infinities we can't safely combine division with other functions that are expecting ints and floats.<p>Most languages throw an error instead, but there are tradeoffs there too. If you've decided not to throw an error you should at least return a usable number and zero makes more sense than -1 or 7 or a billion or whatever.<p>You could also build the number stack from the ground up to accommodate this edge case, and make it so all arithmetic functions can handle infinities, infinitesimals and limits. I've come across a racket sublang like that but it's nearly unusable for the normal common things you want to do with numbers in code.
> <i>He thought 1/0 should be 0 rather than an error because "that's what people expect"</i><p>So I saw this in action once, and it created a mess. Private company had a stupid stock dividend mechanism: every shareholder received some fraction, dependent on fundraising, of a recurring floating pool of shares, quantity dependent on operating performance. (TL; DR Capital was supposed to fundraise, management was supposed to operate. It was stupid.)<p>One quarter, the divisor was zero for reasons I can't remember. This <i>should</i> have resulted in no stock dividend. Instead, the cap-table manager issued zero-share certificates to everyone. By Murphy's Law, this occurred on the last quarter of the company's fiscal year.<p>Zero-share certificates are used for one purpose: to help a shareholder prove to an authority that they no longer own any shares. Unlike normal share certificates, which are additive, a zero-share certificate doesn't add zero shares to your existing shares; it ambiguously negates them. In essence, on that day, the cap-table manager sent every shareholder a notice that looked like their shares had been cancelled. Because their system thought 1 / 0 = 0.<p>If you're dividing by zero in a low-impact system, it really doesn't matter what you output. Zero. Infinity. Bagel. If you're doing so in a physical or financial or other high-impact system, the appropriate output is confused puppy.
Huh? The article shows why 1/0=0 is mathematically sound, and then considers an error preferable in a programming context anyway, because practicality. It’s the opposite of the reasoning you’re describing.
> The article shows why 1/0=0 is mathematically sound<p>It does not, because it is not. And the “real mathematicians” that he quotes aren’t supporting his case either, they’re just saying that there are cases where it’s convenient to pretend. If you look at the Wikipedia page for division by zero you may find “it is possible to define the result of division by zero in other ways, resulting in different number systems”: in short, if it’s convenient, you can make up your own rules.
"Making up your own rules" is literally what mathematics is, though. Using that as a counterargument to using a specific set of axioms tells me you don't understand mathematics.
> in short, if it’s convenient, you can make up your own rules.<p>Yes.<p>People find it confusing that there is no simple model that encapsulates arithmetic. Fields do not capture it in its entirety. The models of arithmetic that describe it end up being extremely complex.<p>Arithmetic is ubiquitous in proofs of other things, and people like the author of this blog cannot get over it.<p>Reality is weird, inconsistent, and weirdly incomplete.<p>Get used to it!
As long as lim(1/x)_x->0 = inf, 1/0 = 0 doesn't make a whole lot of sense, mathematically speaking.
I might be wrong but I don't think it was addressed in the article either.
There's a great Radiolab episode[0] that talks about divide by zero in perhaps more conceptual terms.<p><pre><code> KARIM ANI: If you take 10 and divide it by 10, you get one. 10 divided by five is two. 10 divided by half is 20. The smaller the number on the bottom, the number that you're dividing by, the larger the result. And so by that reasoning ...
LULU: If you divide by zero, the smallest nothingness number we can conceive of, then your answer ...
KARIM ANI: Would be infinity.
LULU: Why isn't it infinity? Infinity feels like a great answer.
KARIM ANI: Because infinity in mathematics isn't actually a number, it's a direction. It's a direction that we can move towards, but it isn't a destination that we can get to. And the reason is because if you allow for infinity then you get really weird results. For instance, infinity plus zero is ...
LATIF: Infinity.
KARIM ANI: Infinity plus two is infinity. Infinity plus three is infinity. And what that would suggest is zero is equal to one, is equal to two, is equal to three, is equal to four ...
STEVE STROGATZ: And that would break math as we know it. Because then, as your friend says, all numbers would become the same number.
</code></pre>
[0] <a href="https://radiolab.org/podcast/zeroworld" rel="nofollow">https://radiolab.org/podcast/zeroworld</a>
Then take 10 and divide it by -10 = -1. 10 / -5 = -2. 10 / -0.5 = -20.
So from the other side of the y-axis it behaves the exact opposite. It goes to minus infinity. So at x=0 we would have infinity and minus infinity at the same time. Imho that is why it is undefined.
In IEEE 754 math, x/0 for x < 0 is in fact negative infinity.<p><pre><code> >>> import numpy as np
 >>> np.float64(-1.)/0.
</code></pre>
-inf
>>> np.float64(1.)/0.
inf
</code></pre>
And you're exactly right, 0/0 is NaN in 754 math exactly because it approaches negative infinity, zero (from 0/x), and positive infinity at the same time.
I always thought the answer to verbal query "let y=1/x, x=0, find y" was "Well, the answer is the Y axis of the plot". Surprising that people have to be reminded that X can be signed. I've had similar conversation IRL.
on computers you can have negative zeros
Negative zero is equal to zero, so it's not really a distinct number, just another representation of the same value.
It's equal (as in, comparing them with == is true), but they are not the same value. At least in IEEE 754 floats, which is what most languages with floating point numbers use. E.g., in JS:<p><pre><code> > 1 / 0
Infinity
> 1 / -0
-Infinity
> 0 === -0
true
> Object.is(0, -0)
false</code></pre>
that's really just an encoding of the number to help you understand how the hell you got here
Ordinal and cardinal infinities are different. There are hierarchies of infinities.<p>`1/0` and `1/0 + 1` aren't meaningfully different, so it kinda does make sense for whatever notation to not make a distinction.
It's even worse than that. The other issue is what happens when you've got a negative number as the numerator (number on top). Then the smaller the denominator (number on bottom) the <i>more negative</i> the result. -10/10 = -1. -10/5 = -2. -10/0.5 = -20. So if you divide by zero, it's obviously negative infinity! And it's positive infinity! At the same time.
The arguments around limits are addressed towards the end (under "Update 8/12/2018"):<p>> > If 0/0 = 0 then lim_(x -> 0) sin(x) / x = sin(0) / 0 = 0, but by L’Hospitals’ Rule lim_(x -> 0) sin(x) / x = lim_(x -> 0) cos(x) / 1 = 1. So we have 0 = 1.<p>> This was a really clever one. The issue is that the counterargument assumes that if the limit exists and f(0) is defined, then lim_(x -> 0) f(x) = f(0). This isn’t always true: take a continuous function and add a point discontinuity. The limit of sin(x) / x is not sin(0) / 0, because sin(x) / x is discontinuous at 0. For the unextended division it’s because sin(0) / 0 is undefined, while for our extended division it’s a point discontinuity. Funnily enough if we instead picked x/0 = 1 then sin(x) / x would be continuous everywhere.<p>Similar examples can be constructed for any regular function which is discontinuous (e.g. Heaviside step function).
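A quick numerical check that the limit of sin(x)/x is 1, independently of how (or whether) sin(0)/0 is defined:

```python
import math

# As x -> 0, sin(x)/x -> 1; the value at exactly 0 is a separate question
x = 1e-8
assert abs(math.sin(x) / x - 1) < 1e-9
```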
I was also looking for this. And would like to add: lim(-1/x)_x -> 0 = -inf
That is (in my opinion) the whole point why it is actually undefined. On one side of the y-axis it goes to infinity, on the other to minus infinity. I don't see a solution to this and therefore always have accepted that it is undefined.
No. 1/x^2 is undefined at 0 but has the same <i>limit behavior</i>, because <i>limit behavior</i> is not a function from "pairs of (functions from R to R, R)" to R<p>Infinity is not a real number.
It's fine. Infinity isn't a real number, so 1/x isn't continuous at 0, so it doesn't matter what the value of 1/0 is. All your open sets still behave the way you expect. Whether you choose "this function is undefined here" vs "it's impossible to ever reach the value of this function at this value, under any assumptions I'll ever care about" is purely a matter of convenience.
This is all well and fine, but feels like a lot of words to say "it's a matter of definition".<p>The question is what definitions will be useful and what properties you gain or give up. Being a partial function is a perfectly acceptable trade-off for mathematics, but perhaps it makes it difficult to reason about programs in some cases.<p>I suppose the aim of the article is to point out the issue is not one of soundness, which is useful — but I wish more emphasis had been put on the fact that it doesn't solve the question of what 1/0 should do and produced arguments with regards to that.
EDIT: markup broke my operators<p>In combinatorics and discrete probability, `0**0 = 1` is a useful convention, to the point that some books define a new version of the operator - let's call it `***` - and define `a***b = a**b` except that `0***0 = 1` and then use the new operator instead of exponentiation everywhere. (To be clear, `**` is exponentiation, I could write `a^b` but that is already XOR in my mind.)<p>So one might as well overload the old one: tinyurl.com/zeropowerzero<p>This causes no problems unless you're trying to do real (or complex) analysis.<p>1/0 can cause a few more problems, but if you know you're doing something where it's safe, it's like putting the Rust `unsafe` keyword in front of your proof and so promising you know what you're doing.
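Python, like most languages following the combinatorics convention, already takes this side for exponentiation while keeping division by zero an error:

```python
import math

# Integer and float exponentiation both adopt the 0**0 = 1 convention
assert 0 ** 0 == 1
assert math.pow(0.0, 0.0) == 1.0

# ...while division by zero stays an error
try:
    1 / 0
except ZeroDivisionError:
    pass
```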
So, how often do devs actually want a `/` that isn't the inverse of multiplication?<p>Trying to calculate... I don't know, how many 2-disk raid6 groups I need to hold some amount of data is an <i>error</i>, not "lol you don't need any".<p>If my queue consumer can handle 0 concurrent tasks, it will take <i>literally forever</i> to finish, not finish instantly.
Sounds legit, infinity is singular and so is 0. I think one problem is also that division isn't the only mathematical operation which can produce dubious results. E.g. sqrt(x) and arctan(x), which have multiple branches, which is why there is often a separate atan2(y, x) to select the correct branch. Oh well, and then there's just addition, which silently overflows in almost every programming language.<p>Without arbitrary precision numerics and functions which aren't explicit about corner cases it's always a simplification. However performance-/code-wise this is usually not feasible.
Why is infinity singular? There's at least positive and negative infinity.<p>And why do you bring up infinity? In regular math, 1/0 is literally undefined. It's not infinity.
Hilbert's Hotel shows nicely that infinity can't be singular.
Consistency depends on your set of axioms. If you are willing to give up various nice properties of division, then you can obviously extend it however you like.<p>My gripe with arbitrary choices like this is that it pushes complexity from your proof's initial conditions ("we assume x != 0") into the body of your proof (every time you use division now you've split your proof into two cases). The former is a linear addition of complexity to a proof, whereas the latter can grow exponentially.<p>Of course, nothing is stopping you from using an initial condition anyway to avoid the case splitting, but if you were going to do that why mess with division in the first place?
Not that anybody asked me, but I think about it like this:<p>You have a field (a set of "numbers"). Multiplication is defined over the field. You want to invent a notion of division. Let's introduce the notation "a/b" to refer to some member of a field such that "a/b" * b = a.<p>As Hillel points out, you can identify "a/b" with a*inverse(b), where "inverse" is the multiplicative inverse. And yes, there is no inverse(0). But really let's just stick with the previous definition: "a/b" * b = a.<p>Now consider "a/0". If "a/0" is in the field, then "a/0" * 0 = a. Let's consider the case where a != 0. Then we have "a/0" * 0 != 0. But this cannot be true if "a/0" is in the field, because for every x we have x * 0 = 0. Thus "a/0" is not in the field.<p>Consider "a/0" with a=0. Then "a/0" * 0 = 0. Any member of the field satisfies this equation, because for every x we have x * 0 = 0. So, "a/0" could be any member of the field. Our definition of division does not determine "0/0".<p>Whether you can assign "1/0" to a member of the field (such as 0) depends on how you define division.
I'd agree to some kind of 1//0=0 for ints; but for floats you'll take 1/0=inf from my cold, dead hands.
My head-canon with dividing by zero is that 1/0 = undefined and 1/-0 = -undefined, and that's where I leave it because anything less funny than that seems like an impractical answer.
I find it odd that all of the mathematicians cited at the end are actually pretty much CS people, working on proof assistants. Kinda renders that section pointless, IMO (though the comment by Isabelle's author was interesting).<p>IMO, whether something like this makes sense is a separate matter. Personally I always just think of division in terms of multiplicative inverses, so I don't see how defining division by zero helps other than perhaps making implementation easier in a proof assistant. But I've seen people say that there are some cases where having a/0 = 0 works out nicely. I'm curious to know what these cases are, though.
Note 1/0 (or x/0 with x>0) isn't undefined or an exception in 754 FP math, it's +infinity. It's 0/0 that's the problem. Defining 1/0=0 isn't really helpful imho.
Whatever, as long as the name does not imply that these are integers, because then it is just wrong. The same holds for overflowing results being clamped or resulting in smaller or negative values due to wraparound. These are not integers.<p>There is only one correct behavior for something named "int". Give the correct result or throw an error.
Agree `int` is the problem. This implies we're doing math over all integers, when in most languages what we're actually working with are bounded integers. (There's some counter-examples, Python and Haskell come to mind.) Calling them sane names like `i32` and `i64` makes it clear that overflow exists.
Those are all integers. <a href="https://en.wikipedia.org/wiki/Modular_arithmetic" rel="nofollow">https://en.wikipedia.org/wiki/Modular_arithmetic</a> - "The modern approach to modular arithmetic was developed by Carl Friedrich Gauss in his book Disquisitiones Arithmeticae, published in 1801." They have been integers for over 200 years now.
But if you write a + b and the result is wrapped around or saturated, it's not integer addition. It's something else and should be written in another way in code and have a different name. I am aware of modular arithmetic.<p>If you have a type named "int" with an operation called "addition", and that operation is not actually integer addition... it's wrong.
Wrapping around is correct integer behavior; clamping ("5 + 1 = 5") isn't. Clamping implies immediately that all positive numbers are equal to zero.
True correct behavior would have that if a > b, then a + c > b + c also holds true for all integers, but that isn't guaranteed for wrapping (or clamping.) (e.g. if 250 > 1, then 250 + 10 > 1 + 10 should be true, but with 8-bit wrapping you would get 4 > 11, which is false.)
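The lost monotonicity is easy to demonstrate by simulating 8-bit wrapping in Python (which itself has arbitrary-precision ints):

```python
MASK = 0xFF  # simulate 8-bit wrapping arithmetic

a, b, c = 250, 1, 10
assert a > b

lhs = (a + c) & MASK  # 260 wraps around to 4
rhs = (b + c) & MASK  # 11, no wrap

# a > b no longer implies a + c > b + c under wrapping
assert lhs == 4 and rhs == 11
assert not (lhs > rhs)
```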
Most definitions of division that I have seen use q * d + r = n, where q is unique and abs(r) < abs(d), which doesn't require the definition of an inverse. Rather, a d that exists for n = 1 and r = 0 can be labelled q's inverse, but it doesn't require a new definition.<p>Additionally, if inverses are defined as separate objects then what is 2 plus the inverse of 2? It doesn't simplify to 2.5 because there's no addition axiom for numbers and multiplicative inverses, or for that matter any rules for inverses with inverses. So you might have 1/2 and 5/10 but they're not equal and can't be multiplied together.
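Python's divmod follows exactly this quotient-remainder shape — q*d + r == n with abs(r) < abs(d) — with no multiplicative inverse in sight:

```python
n, d = 17, 5
q, r = divmod(n, d)  # floor-division quotient and remainder

assert q * d + r == n
assert abs(r) < abs(d)

# Python's convention: r takes the sign of d (floor division)
q2, r2 = divmod(-17, 5)
assert q2 * 5 + r2 == -17 and r2 == 3

# and d = 0 is simply an error, since no q can satisfy the identity
try:
    divmod(17, 0)
except ZeroDivisionError:
    pass
```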
I've always wondered what would happen if we defined /0 as a new symbol, for example 'z'. The same as we define sqrt(-1) as 'i'. So if you can do 4*sqrt(-1)=4i, you could also do 4/0 = 4z. These two seem similar, as in taking something that should not exist, and just letting it exist in a totally different and orthogonal domain.<p>I tried once to investigate the implications, but it quickly became far more complex than with 'i' and never went far. Still intrigued whether this is somewhat interesting or a total waste of time, though.
In SQL, if you divide by zero, you get a NULL. If you divide by NULL, you get NULL (any operation involving a NULL yields NULL, even GROUP BY). I call it a "black hole zero": if it touches anything, that thing becomes a black hole zero too.<p>Some languages will wrap division by zero in a special type, a NaN (not a number). You can then reason on top of that NaN if you want to.<p>So, in a sense, there are some people already doing practical stuff with substituting /0 for a new symbol.
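SQLite (bundled with Python as `sqlite3`) follows the convention described above, so it is easy to observe the NULL propagating:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Division by zero yields NULL rather than an error in SQLite.
print(cur.execute("SELECT 1 / 0").fetchone())     # (None,)

# And NULL swallows anything it touches.
print(cur.execute("SELECT 1 + NULL").fetchone())  # (None,)
```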
1,000,000 grains of sand is a heap of sand (Premise 1)
A heap of sand minus one grain is still a heap. (Premise 2)
- <a href="https://en.wikipedia.org/wiki/Sorites_paradox" rel="nofollow">https://en.wikipedia.org/wiki/Sorites_paradox</a><p>So one grain of sand is a heap and then when you remove that grain the heap disappears, but you only removed one grain from a heap so this is impossible because it is discontinuous. One solution is to wrap the problem in fuzzy logic with a 'heapness' measure.<p>Generalizing this type of solution we have a practice of wrapping paradoxes in other forms of logic. You would define an interface between these logics. For example in Trits (0,1,UNKNOWN) you could define an interface where you can change the type of NOT-UNKNOWN from Trit to Boolean. This would return at least some part of the entropy to the original domain, preserving a continuity. Wave Function Collapse is another example of translating from one logical domain to another.
You might be interested in the hyperreal numbers, which sound a bit like the avenue you were exploring.
It's just a waste of time. The reason no value is conventionally assigned for division by zero is that assigning a consistent value doesn't help. When you want a value for that kind of expression at all, you'll want different values in different expressions.
In uxn, the result of division of anything by zero is defined as zero (there are no error conditions in uxn). I did not know that Pony is also doing that. This is not a proper "division" (since it is not always a multiplicative inverse operation), but it does not necessarily have to be (and, as another comment mentions, the integer division operator in many programming languages is not a proper "division" either); it is something else which might use a "/" sign or the instruction name "DIV" or whatever.
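The "total division" convention described above (as in uxn and Pony) amounts to something like this sketch:

```python
def div_total(n, d):
    # Defined for every input: division by zero returns 0 instead of
    # raising an error. Not a multiplicative inverse, just a total function
    # that happens to agree with integer division everywhere else.
    return 0 if d == 0 else n // d

print(div_total(7, 2))  # 3
print(div_total(7, 0))  # 0
```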
I might be wrong, but this looks a lot like a re-implementation of the Riemann sphere?
One megathread and a couple small ones. Others?<p><i>1 / 0 = 0 (2018)</i> - <a href="https://news.ycombinator.com/item?id=42167875">https://news.ycombinator.com/item?id=42167875</a> - Nov 2024 (8 comments)<p><i>What is the best answer to divide by 0</i> - <a href="https://news.ycombinator.com/item?id=40210775">https://news.ycombinator.com/item?id=40210775</a> - April 2024 (3 comments)<p><i>1/0 = 0</i> - <a href="https://news.ycombinator.com/item?id=17736046">https://news.ycombinator.com/item?id=17736046</a> - Aug 2018 (570 comments)
Maybe division by zero should just not exist.<p>If you actually write 1/0 in a manner that can be discovered through static analysis, that could just be a compile time error.<p>If you compute a zero, and then divide by it… I dunno. Probably what happened was the denominator rounded or truncated to zero. So, you actually have 1/(0+-e), for some type-dependent e. You have an interval which contains a ton of valid values, why pick the <i>one very specific</i> invalid value?
MySQL has ignored math rules for ages as well: 1/0 yields NULL there.
Q on this post:
Is the field rule "Every element Except Zero has ..." (the 9th rule) defined with respect to the additive identity "zero", or the magical other undefined "Zero" that is the number we're all familiar with?<p>If it's the additive identity, how weirdly arbitrary that the additive zero is omitted from all multiplicative inverse definitions (at least it seems so to me). I always figured this was a consequence of our number systems, not of all fields.
> It’s saying that Pony is mathematically wrong. This is objectively false.<p>Pff. The author wants to show off their knowledge of fields by defining a "division" operator where 1/0 = 0. Absolutely fine. I could define "addition" where 1 + 2 = 7. Totally fine.<p>What I <i>can't</i> do is write a programming language where I use the universally recognised "+" symbols for this operation, call it "addition" and claim that it's totally reasonable.<p>Under the <i>standard definition of division implied by '/'</i> it is mathematically wrong.<p>What they obviously should have done is use a different symbol, say `/!`. Obviously now they've done the classic thing and made the obvious choice unsafe and the safe choice unobvious (`/?`).
> <i>What I can't do is write a programming language where I use the universally recognised "+" symbols for this operation, call it "addition" and claim that it's totally reasonable.</i><p>As a programmer, you're right: we have standard expectations around how computers do mathematics.<p>As a pedant: Why not? Commonly considered 'reasonable' things surrounding addition in programming languages are:<p>* (Particularly for older programming languages): If we let Z = X + Y, where X > 0 and Y > 0, any of the following can be true: Z < X, Z < Y, (Z - X) < Y. Which we commonly know as 'wrap around'.<p>* I haven't yet encountered a language which solves this issue: X + Y has no result for sufficiently large values of X and Y (any integer whose binary representation exceeds the storage capacity of the machine the code runs on will do). Depending on whether or not the language supports integer promotion and arbitrary precision integers, the values of X and Y don't even have to be particularly large.<p>* Non-integer addition. You're lucky if 0.3 = 0.1 + 0.2; good luck trying to get anything sensible out of X + 0.2, where X = (2 ^ 128) + 0.1.
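The floating-point caveats above are observable in any IEEE-754 language; here in Python:

```python
# Binary floating point cannot represent 0.1, 0.2, or 0.3 exactly.
x = 0.1 + 0.2
print(x == 0.3)          # False (x is 0.30000000000000004)

# At large magnitudes, small addends vanish entirely: the gap between
# adjacent doubles near 2^128 is astronomically larger than 0.1.
big = 2.0 ** 128
print(big + 0.1 == big)  # True
```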
> I haven't yet encountered a language which solves this issue:<p>Well, Python supports arbitrary precision integers. And some other niche languages (Sail is one I know).<p>I don't think "running out of memory" counts as a caveat because it still won't give the wrong answer.<p>For floats, I don't think it's actually unreasonable to use different operators there. I vaguely recall some languages use +. or .+ or something for float addition.<p>Fair point about wrapping.
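As noted above, Python's integers are arbitrary-precision, so integer addition stays exact at any magnitude (until memory runs out):

```python
# Exact arithmetic on a 200-bit integer: no wrap-around, no precision loss.
a = 2 ** 200
print((a + 1) - a)        # 1, exactly
print(a + a == 2 ** 201)  # True
```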
> <i>Well, Python supports arbitrary precision integers. And some other niche languages (Sail is one I know).</i><p>As a Lisper, I very carefully chose an example to account for arbitrary-precision integers (so X + X where X is, say, 8^8^8^8 (remember, exponentiation is right-associative, 8^8^8^8 = 8^(8^(8^8)))).<p>> <i>I don't think "running out of memory" counts as a caveat because it still won't give the wrong answer.</i><p>Being pedantic, it doesn't give the _correct_ answer either, because in mathematics 'ran out of memory' is not the correct answer for any addition.
It's a question of usefulness. If in your problem domain "1+2=7" is the most useful definition, then by all means do that. Why does the semicolon terminate statements and not the universally agreed upon period? Why does the period denote member access? Why is multiplication not denoted by the universally agreed [middle dot / cross character] (strike out the one that is not universally agreed in your country). The design and semantics of a programming language ought to be in service of the programs we wish to express, and informed by our decades of experience in human ergonomics. Blind reverence to religions of yore does us no good. Mathematical notation itself has gone through centuries of development and is not universal, with papers within the same field using different notation depending on what strikes the author's fancy. To treat it as sacred and immutable is to behave most un-mathematically. Hell, you can still get into a nice hours-long argument about whether or not the set of natural numbers includes zero or not (neither side will accept defeat, even though there is clearly a right answer)!
It’s unexpected and that makes it dangerous.
So, it gives you an infinite list of binary digit values, which produces a ranked infinity?
In the computational domain we hold entropy in high esteem. Arbitrarily assigning a value of 0 does not preserve entropy. We could return a promise that eventually we will not overflow if we get to be very very clever (arbitrary time) so that we can maintain purity.
This sort of convenient semi-arbitrary extension of a partial function is ubiquitous in Lean 4 mathlib, the most active mathematics formalization project today. It turns out that the most convenient way to do informal math and formal math differ in this aspect.
I set this to zero and print a warning/error about the divide by zero in the log, along with the data that caused it. That log gets sent to the business person to worry about.<p>If they ignore it, I do not care; it is their business problem anyway.<p>Worked for me for decades :)
1/0 = NOP in assembly. You just don't divide, you skip that operation.
Wouldn’t the logical value when dividing by zero be infinity, because zero can go into any number an infinite number of times?
Saying 1/0=∞ means creating a new number system with ∞ as a number. Now you have to figure out all operations with ∞, like -1*∞, 0*∞, ∞*∞, ∞/∞, or ∞-∞.<p>Making wrong definitions creates contradictions. With 1*x=x, ∞/∞=1, the associative property x*(y/z)=(x*y)/z, and ∞*∞=∞:<p>∞ = ∞*1 = ∞*(∞/∞) = (∞*∞)/∞ = ∞/∞ = 1
But why would we go from what obviously should be a very large boundless number and just replace it with 0. Our few comment discussion is why it’s undefined in a nutshell.
that's a largely solved problem. ieee754 defines consistent rules for dealing with infinities. even if you don't use the floating-point parts and make a new integer format, it almost certainly would make sense to lift the ieee754 rules as-is.
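The IEEE 754 infinity rules mentioned above, as exposed by Python's float type (note that Python itself raises on float division by zero, but the arithmetic on the infinities follows IEEE 754):

```python
import math

inf = math.inf
print(inf + 1 == inf)         # True: infinity absorbs finite addends
print(-1 * inf)               # -inf
print(math.isnan(inf - inf))  # True: indeterminate forms become NaN
print(math.isnan(inf / inf))  # True
```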
> Wouldn’t the logical value when dividing by zero be infinity, because zero can go into any number an infinite number of times?<p>No, just look at the graph of f(x) = 1/x. +inf can't work.<p>It can work if you assume that no numbers are ever negative.
Disagreed. 1/0 should be infinity, and computers should be able to handle these concepts. Just look at what 1/0.00000000000[etc]1 is. And no, it's not an error; you find out with a very real and tangible example when you are developing a 3D engine and you want to make the camera look at vector [ 0, 0, 0 ]. Quick summary: you can't; you need to force-add a slight displacement so you can skip this silly error.
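The look-at situation described above, as a minimal hypothetical sketch: normalizing the direction toward a target that coincides with the eye divides by a zero length, so the code must guard (or nudge the target, as described).

```python
import math

def look_dir(eye, target, eps=1e-9):
    """Unit vector from eye toward target, with a guard for the degenerate case."""
    d = [t - e for t, e in zip(target, eye)]
    length = math.sqrt(sum(c * c for c in d))
    if length < eps:
        # Eye and target coincide: dividing by ~0 would blow up, so fall
        # back to an arbitrary forward axis instead.
        return [0.0, 0.0, 1.0]
    return [c / length for c in d]

print(look_dir([0, 0, 0], [0, 0, 0]))  # [0.0, 0.0, 1.0] (fallback)
print(look_dir([0, 0, 0], [3, 0, 4]))  # [0.6, 0.0, 0.8]
```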
<a href="https://xenaproject.wordpress.com/2020/07/05/division-by-zero-in-type-theory-a-faq/" rel="nofollow">https://xenaproject.wordpress.com/2020/07/05/division-by-zer...</a><p>explains Lean's behavior. Basically, you use a goofy alternate definition of division (and sqrt, and more), and to compensate you have to assume (or prove based on assumptions) that the things you will divide by are never zero.<p>Hillel's pedantry is ill-taken, though, because he starts off with a false accusation that the headline tweet was insulting anyone.<p>Also, "1/0 = 0" is sound only if you <i>change the field axiom of division</i>, which is fine, but rather hiding the ball. If you add "1/0 = 0" as an axiom to the usual field axioms, you do get an unsound system.
1/0 = (−∞, ∞)<p>0 ∈ (−∞, ∞)
The only thing that truly matters is this:<p>When software engineers make mistakes dividing by 0 and end up with Exceptions being raised or NaNs being output, they'll usually blame themselves.<p>When the results are wrong numbers all over the place, they'll blame the language.<p>There are 2 cases when people are going to "use" x/0:<p>1. They made a mistake.<p>2. They KNOW that x/0 returns 0 and they take it as a shortcut for (y == 0 ? 0 : x/y)<p>Is that shortcut useful? No. Is it dangerous? Yes. Hence, this is a bad idea.
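The shortcut in question, made explicit (a sketch): returning an explicit "no result" forces the caller to decide what division by zero means for their problem, instead of silently receiving 0.

```python
def safe_div(x, y):
    # None signals "no meaningful quotient"; the caller must handle it.
    return None if y == 0 else x / y

rate = safe_div(100, 0)
if rate is None:
    # The deliberate, visible version of "x/0 is 0".
    rate = 0.0
print(rate)  # 0.0
```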
I despise that answer because it's so context-dependent. What's 10/10? 1. 5/5? 1. .3/.3? 1. .0000000578/.0000000578? 1.<p>Ergo, x/x = 1, so 0/0 = 1. You can use the same logic to make 0/0 equal any rational number.<p>Defining x/0 = 0 is impossibly arbitrary.
Not zero. Infinity
God. No.
Division has an intuitive meaning: A divided by B is the number of Bs in A.<p>That intuition shows why division by zero is undefined.<p>Defining it arbitrarily is uninteresting.<p>Disappointing.
Honestly this hurts my head but Hillel is inevitably correct. You can define an explicitly undefined operation to do whatever you like. But what’s the point? There’s no new mathematics you can do with it, no existing behaviours you can extend like this. Normally, when you divide by a small number, you get a large number. Now for some reason it goes through zero. Why not five? Why not seven?<p>Just because it’s formally consistent doesn’t mean it isn’t dumb.
Because exceptions are expensive, and functions with holes are dumb.<p>"Dumb" is purely a matter of aesthetic preference. Calling things "dumb" is dumb.<p>> Normally, when you divide by a small number, you get a large number. Now for some reason it goes through zero.<p>Zero is not a "small" number. Zero is the zero number. There is no number that is a better result than 0 when dividing by 0; "Infinity" is not a real (or complex) number. This itself is a GREAT reason to set 1/0 = 0.
It only ever bothers people who conflate open sets with closed sets, or conflate infinity with real numbers, so it's good to have this pop up to force people to think about the difference.
Sure.. but there are infinite series that sum to a finite value. Perhaps a pertinent example would be summing all the distances between each successive reciprocal of 1:<p><pre><code> Sum[1/x - 1/(x+1), {x, 1, ∞}] == 1
</code></pre>
You do actually need infinity to arrive at that 1.
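The series above telescopes, so the partial sum to N is exactly 1 - 1/(N+1): it approaches 1 but only reaches it in the limit. Checked with exact rationals:

```python
from fractions import Fraction

def partial_sum(n):
    # Sum of 1/x - 1/(x+1) for x = 1..n; telescopes to 1 - 1/(n+1).
    return sum(Fraction(1, x) - Fraction(1, x + 1) for x in range(1, n + 1))

print(partial_sum(10))    # 10/11
print(partial_sum(1000))  # 1000/1001
```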
Consider that lim -> inf does not mean "it goes to infinity". Its actual definition has nothing to do with infinity. So your argument about infinity is a red herring.<p>Or try it the other way: tell me what mathematics works better if 1/0 = 0 than if 1/0 = 5. If there's an aesthetic preference displayed here, it's for mathematics as a tool for reasoning.
> Zero is not a "small" number. Zero is the zero number.<p>What do you mean by this? Zero is certainly a zero number, but it seems that it might also be a small number simultaneously.