
I’ve had a topic sitting in my ‘Other Ideas’ file for a while now, waiting for me to tackle it, and in the meantime, an article popped up that, well, tackled it. Again, actually, because within my file were two links to different articles. They’re all related to a standpoint I fostered in a much earlier post, to wit: ‘Infinity’ is a pointless concept.
Now, the first thing I will say is, I’m much more inclined to the practical than the philosophical and theoretical – call this a personal viewpoint, or stubbornness, if you like. If it can’t actually be applied to solve or describe something, what’s the use? At the same time, I recognize something that is ignored wholesale far too often in mathematics: any number that we apply to anything is by nature inexact, and only exists as an approximation. I have two coins on my desk; therefore, by the very nature of math, either coin is exactly half the mass of both together, perfectly equal to the other, right? Of course not – even if they were exactly the same type of coin, they’re bound to have minute differences in mass and size and so on. We accept “two” only for the sake of convenience, and because this is as far as we need to take it.
Anyway, the article, What Can We Gain by Losing Infinity? by Gregory Barber, is about the concept of ultrafinitism: the idea that infinity does not actually exist and everything has an end, someplace, even if it’s extremely far away. For all practical purposes this is indistinguishable from infinity in most applications, with one pertinent exception: that something cannot be endless. It is, admittedly, more of a philosophical distinction than a physical one, even when it specifically pertains to physics (and perhaps most physicists have already embraced it), but it establishes a fundamental ‘law’ within mathematics: numbers do not represent real things, and can play by rules that exist only within mathematics and nowhere else.
As mentioned therein, a simple axiom: you can always add 1 to any number you produce, and keep this going indefinitely. Fine. Provide proof of this – demonstrate that this really is the case. You’ll die before you get even close to some of the proposed numbers in mathematics, and even if you have a computer doing the additions, billions of times a second, that computer is going to fail – probably before you do, to be honest. The present mathematical view is that, to put things into my own words, “There’s nothing stopping it,” but actually, there is – you’ll run out of resources to achieve it, no matter how you tackle it. More pertinently, however, you’ll never find a reason, an application, to actually do this. It exists only as an abstract that cannot be realized.
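To put a rough number on that claim – with my own assumed rate of ten billion additions per second, which is generous, and the 13.78-billion-year figure used later in this post – the arithmetic is easy to sketch:

```python
# Back-of-the-envelope sketch: how long would a computer adding 1 ten
# billion times per second take to actually count to a googol (10^100)?
# RATE and the age of the universe are my assumed round figures.

RATE = 10**10                        # increments per second (generous)
SECONDS_PER_YEAR = 3.156e7
AGE_OF_UNIVERSE_YEARS = 1.378e10     # ~13.78 billion years

googol = 10**100
years_to_count = googol / RATE / SECONDS_PER_YEAR
ages_needed = years_to_count / AGE_OF_UNIVERSE_YEARS

print(f"Years to count to a googol: {years_to_count:.2e}")
print(f"Ages of the universe needed: {ages_needed:.2e}")
```

The machine fails, the sun burns out, and the count has barely started – which is the point: “you can always add 1” is an abstraction, not anything that can be carried out.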
Here’s another example, a simple one: pi. Pi is an irrational number (look, it’s admitted right there in the name!) meaning that its decimal notation is never-ending. So we cannot actually measure an exact circle, because we cannot calculate a never-ending number against the radius we start from. Instead we always use an approximation, shortening it to however many decimals allow us to get ‘close enough.’ Except that shortening this number means our circle will actually fall short and never be complete, never actually close – there will always be a gap. And since the number never ends, the gap can never close, and we can never complete an actual circle!
But of course we do, and even if you want to argue that there’s an infinitesimal gap that we never actually jump across, this is obviously nonsense – what, do we sit here against a wall? Did we start another circle to arrive back where we started? No, the answer is much simpler: we use a base-10 number system that doesn’t handle the ratio of pi in a useful manner. Pi, arguably, is not never-ending in any meaningful sense – we just have a counting system that is inadequate for the task.
[I realize that the ‘unclosed gap’ in the argument above is closed by a straight line, however small, and this demonstrates the practical application of math and its inherent inaccuracy: there is never a perfect circle, no matter what – there’s always some wandering from it. But so what? Who cares? Even NASA doesn’t use pi beyond a mere 15 decimal places, for all of its huge orbital calculations.]
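How little those 15 decimal places cost can be sketched directly. This is my own worked example, not NASA’s: I’m assuming a circle with a radius of roughly Voyager 1’s distance from Earth (~2.4 × 10¹³ m) and comparing a 15-decimal pi against a longer expansion of the same constant:

```python
# How much circumference do we 'lose' by truncating pi to 15 decimals?
# Using the decimal module so the tiny difference isn't swallowed by
# double-precision floats. The radius is my assumed figure, roughly
# Voyager 1's distance from Earth.

from decimal import Decimal, getcontext

getcontext().prec = 40
PI_LONG = Decimal("3.141592653589793238462643383279502884")  # pi, ~36 digits
PI_15   = Decimal("3.141592653589793")                       # 15 decimals

radius = Decimal("2.4e13")                  # metres (assumption)
error = 2 * radius * (PI_LONG - PI_15)      # circumference shortfall

print(f"Circumference error: {error:.3e} m")
```

On a circle tens of billions of kilometres across, the ‘gap’ comes out to about a centimetre – far below any measurement we could actually make.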
One of the major stumbling blocks in the acceptance of ultrafinitism, according to the article, is that there is as yet no axiom that addresses it – nothing that specifies how and where we will find this end. It is not a specific theory, and has no distinct definition. Yet to me, this is backwards. Most of the other sciences require that a theory fit the known facts and serve to explain them, but a strong theory is also testable, and predicts results. In math, ‘infinity’ is an accepted theory, but based solely on ideas as simple as, literally, “You can always add one more,” or, “You can always make a smaller decimal distinction between any two given numbers.” Is this testable, and/or can it predict results? And most especially, is it applicable to anything at all that we can use in any other discipline? When you think about it, infinity is an incredibly weak idea, based more on word games than anything physical or applicable. Ultrafinitism, then, is not a theory in itself, but the recognition that infinity should never have been considered one.
And too much of advanced mathematics is like this. From the article (which I had to do as a screenshot since the notation within isn’t easily rendered in any formal typesetting):

And we can consider another, one that a few more people are familiar with: googol, defined as a one followed by 100 zeroes, which I tried to type out for giggles but the page format can’t handle it. And if that weren’t enough, there’s googolplex, a one followed by a googol of zeroes. These are well known and well defined – but to what possible fucking use can any of these be put, ever? These are word salads, not functional concepts. We really need to ask why these even exist.
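Naming these numbers in code is trivial, which rather underlines the point that definition is cheap. A quick sketch, using the commonly quoted ~10⁸⁰ estimate for atoms in the observable universe:

```python
# Googol is easy to compute exactly with Python's arbitrary-precision
# integers; googolplex can only ever be *named*, never written out.

googol = 10**100
digit_count = len(str(googol))          # a 1 followed by 100 zeroes
print(digit_count)                      # 101

# Googolplex has a googol zeroes after its leading 1. Even writing one
# digit per atom in the observable universe (~10^80 atoms, the commonly
# quoted estimate) falls absurdly short of the digits required.
digits_in_googolplex = googol + 1
atoms_in_universe = 10**80
print(digits_in_googolplex > atoms_in_universe)   # True
```

So the number is ‘well defined’ in exactly the sense that a spell in a novel is well defined: the words parse, and nothing in reality corresponds to them.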
To no small extent, however, there is also the human trait that we don’t let go of things we’ve believed for a long time, and this is intertwined deeply within mathematics (and indeed most philosophies, and probably no small number of other sciences as well). Mathematics relies on its axioms and functions, and can create a theory from “n+1,” the simple idea that no matter the number, you can always add to it. And since there is no number that you cannot add to, this process can go on ‘infinitely.’ Which is fine, simple and neat and all that, but it in no way implies that there actually is anything infinite, even though it is usually taken to mean that. Since there’s no stopping point, no “number so big,” no rule that you can only do this so many times, it implies no end. But the end, quite simply, is when you get too goddamn tired of doing it, and all you’re really doing is repeating yourself. It’s a pattern, nothing more, exactly the same as the bare fact that we don’t have infinite numerals – we only have 0–9, and then we start over again in the next column. In seeking ‘truths’ within mathematics, axioms were created from things that really are nothing more than functions.
There is also a fundamental problem that comes up if you rephrase the approach slightly. In essence, ‘infinity’ exists because we cannot demonstrate that it doesn’t – that there is an end to n+1 and all other such axioms. But we could apply this reasoning to anything we want, anything we can imagine – does this somehow mean that such things are allowed to exist until we can prove otherwise? And it must be said that, in a lot of concepts of an infinite universe, many people do make such claims: given some event that might have odds of 1 in 1,000,000 or even longer, in an infinite universe it is virtually guaranteed to happen – and, to follow the axiom to its logical (heh!) conclusion, to happen an infinite number of times. Most of the sciences, however, rely on a more practical approach: a thing exists when you can demonstrate or measure it. Outside of that, well, nice idea perhaps, but not worth considering until you have some proof.
Mathematicians are not unified, though, and some (it’s not clear how many) recognize the difference between provable, demonstrable, applicable axioms and the ones that cannot be demonstrated and exist only in theory. There remains recognition that the value of math is how it applies to real-world scenarios, problems, and circumstances. To me, that’s the only value, but there also seems to be an awful lot of emphasis on theoretical concepts that can only exist in the imagination, that have no possibility of applying to anything at all, and as long as this is considered important, the concept of infinity will continue to be protected in this Harry Potter universe of Skewes’ number and googol and i.
For consideration, the evidence for the entire universe having started 13.78 billion years ago is substantial – so substantial that we continually refine that number with further decimals. The speed with which it can expand is of course finite, because we can see and measure it. So while the phrase “infinite universe” is bandied around quite frequently, this really depends on whether you mean the actual contents, like stars and gases and so on, or the empty space that it is expanding into. If the former, we actually have a calculation for that, and it’s a sphere roughly 92 billion light years across (an incomplete sphere, of course, because pi) – not infinite. And even if it’s the latter we’re referring to, well, a boundary we haven’t found and none at all are indistinguishable, except that this is the only place where infinity might be found, and we couldn’t prove it anyway. It’s also possible that the expanding universe hit an outside barrier a billion years ago, and the visible effects have yet to travel back to us.

Adding to all that, physicists and cosmologists operate on the fundamental concept that the matter/energy within the universe is finite and fixed, and has been from the moment of the Big Bang – not increasing, not decreasing, only concentrating or dissipating. While we cannot actually prove this in any way, we have quite a bit of evidence that this really is the case – evidence that helped formulate the laws of thermodynamics, to be exact. Most of the hard sciences don’t mess about with ‘proof,’ but rely on evidence instead, and reams of evidence provide all the support necessary, as well as functioning without any issues whatsoever. So this would mean that everything does have an end, and has to, to be contained within said universe.
Even if we try to go in the opposite direction, going perpetually smaller in size instead to demonstrate the value of infinity, we reach the Planck length, what quantum physicists have determined is the smallest distance we can find, measure, and use. There is no such thing as “0.1 Planck scale,” and no reason to invoke such a thing.
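Put the two bounds together and you get a concrete, finite figure. A quick sketch, using the standard published value of the Planck length and the 92-billion-light-year diameter mentioned above:

```python
# If the Planck length is the floor and the observable universe is the
# ceiling, the largest distance we can point to spans an enormous but
# finite number of the smallest distances we can use.

PLANCK_LENGTH = 1.616e-35            # metres (standard published value)
LIGHT_YEAR = 9.461e15                # metres
universe_diameter_m = 92e9 * LIGHT_YEAR

planck_steps = universe_diameter_m / PLANCK_LENGTH
print(f"Planck lengths across the observable universe: {planck_steps:.2e}")
```

The answer is on the order of 10⁶¹ – a number with no everyday meaning, but a number nonetheless, with an end.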
Both of these effectively trash infinity as anything real, and while we may yet determine that we were wrong about some aspect of these measurements, it does mean we have plenty of reasons to treat ‘infinity’ as flawed, an inapplicable idea, and ignore it entirely. Meanwhile, we have found the principal thing that separates math from science: math has axioms, or ‘truths,’ that it relies on, while science ignores the entire concept of ‘truth’ and relies on evidence instead. Ultrafinitism doesn’t have ‘truth,’ it has evidence, and so it threatens the core of mathematics – the unsupportable, untestable, inapplicable core.
We go back to something mentioned earlier, that’s exceedingly simple: numbers are just placeholders in our heads, a simple way of tracking ideas. They are not real, and cannot even be applied to real things consistently with all the axioms of mathematics. They do not define anything at all in the universe, they’re only there to help us manage our understanding – but when they cannot, when they’re dead wrong, they need to be recognized as flawed. Axiom or no, mathematical ‘theory’ or no. They’ve reached their limit of function.
Now, since there are some applications where specifying that no end is in sight is necessary, the definition of ‘infinity’ can be changed to, “beyond any reasonable or useful calculation” – which, again, is indistinguishable, but it’s more precise and explanatory without implying that there really is such a thing as ‘endless.’ It would probably be better to coin a different term, however, to avoid confusion with an established, though flawed, concept. In fact, the name is right there: BAROUC, or barouc for common usage. I deserve some credit for coming up with this…
I’m glad I stalled on this post as long as I did, because the two other articles I had bookmarked didn’t cover as much ground, nor guide my thoughts as well, as this one did. They were The Man Who Stole Infinity, and one that is either What If Infinity Didn’t Exist? or Some Mathematicians Don’t Believe In Infinity, depending on whether you treat the page header or the URL as the title. The former is simply about the likelihood that Georg Cantor, the mathematician who formulated set theory and “different sizes of infinity,” plagiarized his main paper, while the latter is much more on-topic but also more superficial than the one I worked from for this.
If you like, you can also check out two links from within the primary article, Why Math’s Final Axiom Proved So Controversial, and Banach-Tarski and the Paradox of Infinite Cloning. The former outlines the principal axioms of mathematics, the Zermelo-Fraenkel (or Zermelo-Fraenkel with Choice) set theory based on (not) Cantor’s work, while the latter shows the utter fucking nonsense that can be produced in the Harry Potter universe of mathematics.



















































































