Contrarian thinking can be great because it taps into the intuition that the masses are mostly followers who can be led anywhere, not critical thinkers who've deeply examined what they believe. Being contrarian, then, is akin to staking out a new leadership position.<p>The space of contrarian ideas is vast, and most of them are probably bad, but, nevertheless, the willingness to hold unconventional, internally consistent views should be celebrated, because it increases diversity of thought. Our collective hive mind grows stronger through heresy.<p>However, I like my heresy with a splash of axiomatic precision, which is sadly lacking in this article.
I don’t understand, and I hope it’s just bad writing.<p>Certainly you can build a branch of mathematics without an axiom of infinity, and that’s fine: it’s math over finite sets.<p>However, the axiom of infinity is independent: it doesn’t contradict anything in standard formalizations, so it doesn’t make sense to say “infinity is wrong”.<p>He may think the axiom of infinity isn’t satisfied by our real physical world, but that’s not a math question! There’s nothing logically inconsistent about infinite sets or their axiomatizations.
> But in the late 1800s, Georg Cantor and other mathematicians showed that the infinite really can exist.<p>As I understand it, the objection is to the proposition that infinity is "real": that there are actually infinitely many (not just very many) things.
Take the approximate number of subatomic particles in the universe, call it Ω. Define the largest number as Ω² and the smallest number as -Ω², and define the number of decimal numbers between each integer number as Ω², evenly spaced. That should be more than enough numbers. Redefine Ω with each new discovery in physics.<p>If this seems too conservative to you, like if for some reason you want to talk about the volume of the universe in terms of the width of an up-quark or whatever, feel free to tack on some modifier to my proposed number system.
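The proposal above can be sketched as saturating ("clamped") arithmetic. A minimal sketch, assuming a rough ballpark value for the particle count; the names `OMEGA`, `BOUND`, `clamp`, `add`, and `mul` are mine, not from the comment:

```python
# Toy bounded number system in the spirit of the comment above.
# OMEGA stands in for the (approximate) particle count of the universe;
# the value is an illustrative ballpark, not a physics claim.
OMEGA = 10**80           # rough figure often quoted for particle count
BOUND = OMEGA**2         # declared largest representable magnitude

def clamp(x: int) -> int:
    """Saturate at the declared largest/smallest numbers."""
    return max(-BOUND, min(BOUND, x))

def add(a: int, b: int) -> int:
    return clamp(clamp(a) + clamp(b))

def mul(a: int, b: int) -> int:
    return clamp(clamp(a) * clamp(b))
```

Ordinary arithmetic is unchanged inside the bounds; anything that would overflow them simply saturates, which is the "redefine Ω as needed" escape hatch in code form.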
And no discussion of Zeno? Pish.<p>The idea that nothing is demonstrative of infinity is clearly incorrect.<p>Take the screen you're reading this on. One pixel is composed of a bunch of different atoms, and once you get down to one of them, that atom subdivides into a bunch of subatomic particles, some of which even have mass. Let's take one of those for argument's sake. Split that, and you get some quarks.<p>Now let's imagine that's the smallest you can go. We can still talk about half of a down quark, or half of that, etc. Say, uh, infinitely so. There you go, everything is infinite. That wasn't so hard, was it?
I think you missed the point.<p>So, firstly, you have split the particle 5 times. That's not infinite times. You can split it more, so that would be 6 times. And more. Even if you could split it 1000 times, that's not infinity.<p>The standard argument for infinity is that "you can always add 1 to any number, so there must be an infinity of them", and the refutation is that no matter how many times you add 1 to a number, all you've done is create a larger number. You never reach the point of actual infinity, no matter how long you keep doing this. You need to have infinite time in order to create an infinity by adding 1 to each number, so you're starting with the axiom that infinity exists (because you need an infinite number of operations to actually create an infinity). If you don't start with that axiom, then you can never reach infinity by addition (or any operation).
> To Zeilberger, believing in infinity is like believing in God. It’s an alluring idea that flatters our intuitions and helps us make sense of all sorts of phenomena. But the problem is that we cannot truly observe infinity, and so we cannot truly say what it is.<p>I'm hoping this is just bad writing from Quanta rather than something "ultrafinitists" truly believe.<p>I really don't think it's that complicated. Even pre-schoolers, competing to see who can say the highest number, quickly learn the concept of infinity. Or elementary school students trying to write 1/3 as a decimal.<p>Of course you need to be careful mapping infinity onto the physical world. But as a mathematical concept, there is absolutely nothing wrong with it.<p>> Mathematicians can construct a form of calculus without infinity, for instance, cutting infinitesimal limits out of the picture entirely.<p>This seems like a useful concept that also doesn't require denying the very obvious concept of infinity.
finite moments. cherish them.
In school I developed a strong hunch that continuity and infinity are "convenient delusions" we have that allow us to process the otherwise horrific complexity of the world. Experiencing time, sound, or visual motion as continuous, rather than as discrete signal inputs, is <i>so much simpler</i>. Similarly, the mathematical tricks and shortcuts we can use on well-behaved continuous functions are both "unreasonably effective" and... probably not grounded in actual reality[1]? But damn are they convenient.<p>[1] EDIT: the reasoning is simple, if naive: the largest quantities we can measure are not, in fact, infinitely large, and the smallest ones we can measure are not, in fact, infinitesimally small. So until you show me an infinitesimal or an infinity, you're just making them up!
I've always felt that to treat infinity as a number is to commit a category error (aka type conflict): to confuse the process with the outcome of the process. Infinity has proven to be very useful, but usefulness doesn't make it always valid.
Normally amps only go up to ten… but this one goes to eleven.
…it’s one louder ain’t it!?!
It's not a new idea, and it's a challenging one to investigate. Without real numbers (which are infinitely long) most of calculus stops working, along with everything that depends on it.<p>Perhaps we can recover some of it by treating the continuous values as approximations of more discrete ones, and then somehow proving that the errors stay bounded, at least for some interesting problems.
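The "bounded error" recovery can be illustrated with the simplest case: replacing the infinitesimal limit in a derivative with a finite step. A sketch under illustrative assumptions; the function, step size, and names are mine:

```python
# Sketch: stand in for f'(x) with a finite difference instead of a
# limit, and check the error stays bounded. Illustrative, not a theory.
def finite_diff(f, x, h):
    """Discrete derivative: no limit, just a small but finite step h."""
    return (f(x + h) - f(x)) / h

f = lambda x: x * x        # true derivative is 2x
x, h = 3.0, 1e-6
approx = finite_diff(f, x, h)

# For f(x) = x^2 the forward-difference error is exactly h (plus
# floating-point noise), so it is bounded as long as h is finite.
error = abs(approx - 2 * x)
```

For this function the error never needs an actual limit to control: it is provably at most `h`, which is the flavor of bounded-error argument the comment gestures at.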
The article doesn’t really tell us what is gained by rejecting infinity.<p>And in general, why not also reject zero, negative numbers, irrational numbers, complex numbers, uncomputable numbers, etc.?<p>Seems like an article about quacks that can’t even agree on what the bounds and rules of their quackery are.
> The article doesn’t really tell us what is gained by rejecting infinity.<p>Decidability. The issues around undecidability all involve the lack of an upper bound. In a finite deterministic space, everything is decidable, although some things may be too costly computationally to decide.<p>There are several ways to go for decidability. The brute force way is computer arithmetic - there is no number larger than 2^64-1. That's how we get things done on computers, but proofs about numbers with finite upper bounds need lots of special cases. Mathematicians hate that.<p>I used to work on this sort of thing, using Boyer-Moore theory. That's a lot like the Peano axioms. There is (ZERO), and (ADD1 (ZERO)), and (ADD1 (ADD1 (ZERO))), etc. Everything is constructive and has an unambiguous representation in a LISP-like form.
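The (ZERO)/(ADD1 ...) representation described above can be transliterated into a few lines of Python. A minimal sketch; the helper names and the tuple encoding are my assumptions, not Boyer-Moore's actual implementation:

```python
# Constructive naturals in the comment's LISP-like style:
# (ZERO), (ADD1 (ZERO)), (ADD1 (ADD1 (ZERO))), ...
ZERO = ()                           # (ZERO)

def ADD1(n):                        # (ADD1 n)
    return ("ADD1", n)

def plus(a, b):
    """Addition by structural recursion. It terminates because the
    first argument strictly shrinks on every recursive call."""
    if a == ZERO:
        return b
    return ADD1(plus(a[1], b))

def to_int(n):
    """Unambiguous representation -> ordinary int, for inspection."""
    count = 0
    while n != ZERO:
        count, n = count + 1, n[1]
    return count
```

Every value is built explicitly from constructors, so equality and induction are purely syntactic, which is what makes everything decidable in this setting.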
You can have recursive functions. But they must be proven to terminate, by having a nonnegative value which decreases on each recursive call. There is a distinction between "infinite" and "arbitrarily large". You can talk about arbitrarily large numbers, but you cannot get to 1/2 + 1/4 + 1/8 ... = 1. You can have integers and rational numbers of arbitrary size, but not reals.<p>Set theory was interesting. Rather than axiomatic set theory, I was using lists as sets, with the constraints that no value could be duplicated and the list must be ordered. Equality is strict - two things are equal only if the elements are all equal, compared element by element. It's possible to prove the usual axioms of set theory via this route. The ordered criterion requires proving things about ordered list insertion to get there. It's ugly and needs machine proofs.<p>I was doing this back in the early 1980s, when machine proofs were frowned upon. Mathematicians were still upset about the four-color theorem proof. It's all case analysis, with thousands of cases. That's more acceptable today.<p>Looked at in this light, infinity is a labor-saving device to eliminate special cases, at a potential cost in soundness.
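The "lists as sets" idea above, with its two invariants (ordered, no duplicates), can be sketched directly; the function names are illustrative, not from the original system:

```python
# Sets as ordered, duplicate-free lists, per the comment above.
# With both invariants, structural equality coincides with set equality.
def insert(x, s):
    """Ordered insertion preserving both invariants. Terminates
    because s strictly shrinks on each recursive call."""
    if not s:
        return [x]
    if x == s[0]:
        return s                    # already present: no duplicates
    if x < s[0]:
        return [x] + s
    return [s[0]] + insert(x, s[1:])

def make_set(items):
    s = []
    for x in items:
        s = insert(x, s)
    return s
```

Because insertion normalizes the representation, two sets built in different orders compare equal element by element, which is exactly the strict-equality property the comment relies on.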
All indications are that things are only lost, not gained. But that doesn't mean the finite view doesn't hew closer to how things actually are. And if that's how reality actually is, then developing a rigorous understanding of it can only be a good thing, right?
Rejecting infinity is a purely philosophical stance that doesn’t teach us anything about reality.<p>There is a big difference between “infinity doesn’t exist” and “infinity doesn’t exist physically”.<p>I should also add that the resolution of Zeno’s paradox in the form of calculus, where an infinite set of steps can occur in a finite time (or an infinite set of distances can span a finite total distance), is conceptually very simple and useful. Rejecting it as unphysical, or saying it must imply time or space come in discrete chunks, is not contributing to an understanding of reality unless the rejection also comes with a set of testable (in principle) predictions.
> There is a big difference between “infinity doesn’t exist” and “infinity doesn’t exist physically”.<p>Is there? I think one could make a decent case for "nothing exists which doesn't exist physically[1]".<p>[1] <a href="https://plato.stanford.edu/entries/physicalism/" rel="nofollow">https://plato.stanford.edu/entries/physicalism/</a><p>EDIT: you could even probably claim "nothing exists which isn't physically <i>measurable</i>" which may or may not be a stronger claim depending on your point of view.<p>EDIT AGAIN: rate limited by this dogshit website :D but I'll respond to this comment here:<p>> Which is exactly why I mentioned rejection of zero, negative numbers, etc. You can reject them, but doing so just throws away useful tools without gaining anything in return.<p>Yeah! I fully agree. I can see no obvious benefit to rejecting these powerful tools. However, important discoveries often happen in non-obvious directions, and exploring unexplored territory is generally worthwhile. So the fact that it doesn't seem immediately useful doesn't mean it's not worth trying!
Which is exactly why I mentioned rejection of zero, negative numbers, etc.<p>You can reject them, but doing so just throws away useful tools without gaining anything in return.
The first thing that came to mind reading the article is that you need only 60ish digits of pi to calculate the circumference of the universe to a resolution of one Planck length, or something like that. You can have all the digits you want, but at some point you are beyond any precision that is possible in reality, and the extra digits contribute nothing to what you are trying to achieve.
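The "60ish digits" claim is easy to sanity-check. A back-of-the-envelope sketch; the diameter and Planck-length figures below are rough published values, used only to get the order of magnitude, and the variable names are mine:

```python
# Rough check of how many digits of pi matter at Planck resolution.
import math

DIAMETER_M = 8.8e26      # observable universe, ~93 billion light-years
PLANCK_M = 1.6e-35       # Planck length, in meters

# Relative precision needed so the circumference error is under one
# Planck length: circumference / Planck length.
ratio = DIAMETER_M * math.pi / PLANCK_M
digits_needed = math.ceil(math.log10(ratio))
```

The result lands in the low sixties, which squares with the comment's "60ish".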