Turns out I’m good at neither maths nor teaching
I think dominating the conversation on a first date is a risk (which I was mindful of), but just being yourself and talking about something you're truly passionate about is the key.
I tried using it for this: https://adventofcode.com/2023/day/12 but the computer said no
Despite decades of active debate, modern math courses seldom acknowledge that they are making unprovable intellectual leaps along the way.
That’s not at all true at the level where you are dealing with different infinities. That material usually comes after the (usually fairly early) part dealing with proofs, where you learn that all mathematics rests on “unprovable intellectual leaps” encoded as axioms, and that everything provable in math is provable only relative to a particular chosen set of axioms.
It may be true that math beyond that basic level doesn’t make a point of going back and explicitly reviewing that point, but it is just kind of implicit in everything later.
Uncountable need not mean more. It can mean that there are things that you can't figure out whether to count, because they are undecidable.
> I guarantee that a naive presentation doesn't actually include the axioms
But you said "modern math courses". Are you now talking about a casual conversation? I mean the OP's story is that his wife just liked listening to him talk about his passions.

> Uncountable need not mean more.
Sure. But that doesn't mean that there aren't differing categories. However you slice it, we can operate on these things in different ways. Real or not, the logic isn't consistent between these things, but they do fall out into differing categories. If you're trying to find mistakes in the logic, does it not make sense to push it at its bounds?

Look at the Banach-Tarski paradox. Sure, normal people hear about it and go "oh wow, cool." But when it was presented in my math course it was used as a discussion of why we might want to question the Axiom of Choice, but also of how removing it creates new concerns. Really, the "paradox" was explored to push the bounds of the Axiom of Choice in the first place. They asked "can this axiom be abused?" And the answer is yes. Now the question is "does this matter, since infinity is non-physical? Or does it matter despite infinity being non-physical?"
You seem to think mathematicians, physicists, and scientists in general believe infinities are physical. As one of those people, I'm not sure why you think that. We don't. I mean math is a language. A language used because it is pedantic and precise. Much the same way we use programming languages. I'm not so sure why you're upset that people are trying to push the bounds of the language and find out what works and doesn't work. Or are you upset that non-professionals misunderstand the nuances of a field? Well... that's a whole other conversation, isn't it...
When I say "modern math courses", I mean like the standard courses that most future mathematicians take on their way to various degrees. For all that we mumble ZFC, it is darned easy to get a PhD in mathematics without actually learning the axioms of ZFC. And without learning anything about the historical debates in the foundations of mathematics.
To be fair, constructivists tend to prefer to talk about different "universes" rather than different "sizes" of sets, but that's little more than a difference in terminology! You can show equiconsistency statements across these different points of view.
So the care that intuitionists take does not lead to any improvement in consistency.
However the two approaches lead to very different notions of what it means for something to mathematically exist. Despite the formal correspondences, they lead to very different concepts of mathematics.
I'm firmly of the belief that constructivism leads to concepts of existence that better fit the lay public than formalism does.
QED
If she laughs at that kind of thing, I can see why you married her.
https://plato.stanford.edu/entries/mathematics-constructive/ is one place that you could start filling in that gap.
For sure there are valid arguments on whether or not to use certain axioms which allow or disallow some set theoretical constructions, but given ZFC, is there anything that follows that is unprovable?
In particular, you have made sufficient assumptions to prove that almost all real numbers that exist can never be specified in any possible finite description. In what sense do they exist? You also wind up with weirder things. Such as well-specified finite problems that provably have a polynomial time algorithm to solve...but for which it is impossible to find or verify that algorithm, or put an upper bound on the constants in the algorithm. In what sense does that algorithm exist, and in what sense is it finite?
Does that sound impossible? An example of an open problem whose algorithm may have those characteristics is an algorithm to decide which graphs can be drawn on a torus without any self-crossings.
If our notion of "exists" is "constructible", all possible mathematical things can fit inside of a countable universe. No set can have more than that.
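For the record, the count behind that claim, assuming a "description" is a finite string over some fixed finite alphabet $\Sigma$ (the symbols here are mine):

    $$\left|\Sigma^*\right| \;=\; \Bigl|\,\bigcup_{n \ge 0} \Sigma^n \Bigr| \;=\; \aleph_0 \;<\; 2^{\aleph_0} \;=\; |\mathbb{R}|.$$

A countable union of finite sets is countable, so the finitely describable reals form a countable set, and in particular a measure-zero one: almost every real admits no finite description at all.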
Errr, I'm just assuming the axioms of ZFC. That's literally all I'm doing.
> In what sense do [numbers that can't be finitely specified] exist?
In the sense that we can describe rules that lead to them, and describe how to work with them.
I understand that you're trying to tie the notion of "existence" to constructability, and that's fine. That's one way to play the game. Another is to use ZFC and be fine with "weird, unintuitive to laypeople" outcomes. Both are interesting and valid things to do IMO. I'm just not sure why one is obviously "better" or "more real" or something. At the end, it's all just coming up with rules and figuring out what comes out of them.
John Horton Conway:
> It's a funny thing that happens with mathematicians. What's the ontology of mathematical things? How do they exist? In what sense do they exist? There's no doubt that they do exist but you can't poke and prod them except by thinking about them. It's quite astonishing and I still don't understand it, having been a mathematician all my life. How can things be there without actually being there? There's no doubt that 2 is there or 3 or the square root of omega. They're very real things. I still don't know the sense in which mathematical objects exist, but they do. Of course, it's hard to say in what sense a cat is out there, too, but we know it is, very definitely. Cats have a stubborn reality but maybe numbers are stubborner still. You can't push a cat in a direction it doesn't want to go. You can't do it with a number either.
In the sense that all statements of non-constructive "existence" are made, viz. "you can't prove that they don't exist in the general case", so you are allowed to work under the stronger assumption that they also exist constructively, without any contradiction resulting. That can certainly be useful in some applications.
But the fact that such systems don't create contradictions emphatically *DOES NOT* demonstrate the constructive existence of such an oracle. Doubly not given that in various usual constructivist systems, it is easily provable that nothing that exists can serve as such an oracle.
If the only questions you accept as meaningful are the decidable ones, then you can trust its answers for all the questions you accept as meaningful and for which it has answers.
Also, “provable that nothing that exists can serve as such an oracle” seems pretty presumptive about what things can exist? Shouldn’t that be more like, “nothing which can be given in such-and-such way (essentially, no computable procedure) can be such an oracle”?
Why treat it as axiomatic that nothing that isn’t Turing-computable can exist? It seems unlikely that any finite physical object can compute any deterministic non-Turing-computable function (because it seems like state spaces for bounded regions of space have bounded dimension), but that’s not something that should be a priori, I think.
I guess it wouldn’t really be verifiable if such a machine did exist, because we would have no way to confirm that it never errs? Ah, wait, no: using the MIP* = RE result, maybe we could in principle test it?
Of course, but it shows that you can assume that such an oracle exists whenever you are working under additional conditions where the existence of such a "special case" oracle makes sense to you, even though you can't show its existence in the general case. This outlook generalizes to all non-constructive existence statements (and disjunctive statements, as appropriate). It's emphatically not the same as constructive existence, but it can nonetheless be useful.
I won't ever be able to find a contradiction from that claim, because I have no way to find that bank account if it exists.
But that argument also won't convince me that the bank account exists.
I'm saying that to go from the uncountability of the reals to the idea that this implies that the infinity of the reals is larger, requires making some important philosophical assumptions. Constructivism demonstrates that uncountable need not mean more.
On the algorithm example, you could have asked what I was referring to.
The result that I was referencing follows from the Robertson-Seymour theorem: https://en.wikipedia.org/wiki/Robertson%E2%80%93Seymour_theo.... The theorem says that any class of finite graphs which is closed under graph minors must be completely characterized by a finite set of forbidden minors. Given that set of forbidden minors, we can construct a polynomial time test for membership in the class - just test for each forbidden minor in turn.
The problem is that the theorem is nonconstructive. While it classically proves that the set exists, it provides no way to find it. Worse yet, it can be proven that in general there is no way to find or verify the minimal solution. Or even to provide an upper bound on the number of forbidden minors that will be required.
This need not hold in special cases. For example, planar graphs are characterized by just 2 forbidden minors (K5 and K3,3).
For the toroidal graphs, as https://en.wikipedia.org/wiki/Toroidal_graph will verify, the list of known forbidden minors currently has 17,523 graphs. We have no idea how many more there will be. Nor do we have any reason to believe that it is possible to verify the complete list in ZFC. Therefore the polynomial time algorithm that Robertson-Seymour says must exist does not seem to exist in any meaningful and useful way. Such as, for example, being findable or provably correct from ZFC.
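To make that strange status concrete, here's a minimal sketch (C++; everything here is illustrative: Graph is left abstract, and has_minor is a stub standing in for the polynomial-time fixed-minor test that Robertson-Seymour theory guarantees exists for each fixed H):

    #include <vector>

    struct Graph { /* adjacency structure elided */ };

    // Stand-in: for each FIXED minor H, testing whether G contains H as a
    // minor is polynomial-time in |G| (that half of the theory is effective).
    // Stubbed here; real implementations are famously nontrivial.
    bool has_minor(const Graph& /*G*/, const Graph& /*H*/) { return false; }

    // Membership in a minor-closed class, given its forbidden minors.
    // Polynomial time -- but only once someone hands you the finite list,
    // and for toroidal graphs nobody can.
    bool in_class(const Graph& G, const std::vector<Graph>& forbidden) {
        for (const Graph& H : forbidden)
            if (has_minor(G, H)) return false;
        return true;
    }

The loop is trivially polynomial; the entire difficulty hides in the argument `forbidden`, which classically "exists" but may be impossible to exhibit or verify.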
Also this: https://arxiv.org/pdf/1212.6543
Assuming you haven't looked at these already, of course.
Pure mathematics is regarded as an abstract science, which it is by definition. Arnol'd argued vehemently and much more convincingly for the viewpoint that all mathematics is (and must be) linked to the natural sciences.
>On forums such as Stack Exchange, trained mathematicians may sneer at newcomers who ask for intuitive explanations of mathematical constructs.
Mathematicians use intuition routinely at all levels of investigation. This is captured for example by Tao's famous stages of rigour (https://terrytao.wordpress.com/career-advice/theres-more-to-...). Mathematicians require that their intuition is useful for mathematics: if intuition disagrees with rigour, the intuition must be discarded or modified so that it becomes a sharper, more useful razor. If intuition leads one to believe and pursue false mathematical statements, then it isn't (mathematical) intuition after all. Most beginners in mathematics do not have the knowledge to discern the difference (because mathematics is very subtle) and many experts lack the patience required to help navigate beginners through building (and appreciating the importance of) that intuition.
The next paragraph, about how mathematics was closely coupled to reality for most of history and only recently, with our understanding of infinite sets, became too abstract, is not really accurate to the history of mathematics. Euclid's Elements is 2300 years old and is presented in a completely abstract way.
The mainstream view in mathematics is that infinite sets, especially ones as pedestrian as the naturals or the reals, are not particularly weird after all. Once one develops the aforementioned mathematical intuition (that is, once one discards the naive, human-centric notion that our intuition about finite things should be the "correct" lens through which to understand infinite things, and instead allows our rigorous understanding of infinite sets to inform our intuition for what to expect) the confusion fades away like a mirage. That process occurs for all abstract parts of mathematics as one comes to appreciate them (except, possibly, for things like spectral sequences).
I'd argue that, by definition, mathematics is not, and cannot be, a science. Mathematics deals with provable truths; science cannot prove truth and must deal with falsifiability instead.
When we try to model something probabilistically, it is usually not a great idea to model the probability that we made an error in our probability calculations as part of our calculations of the probability.
Ultimately, we must act. It does no good to suppose that “perhaps all of our beliefs are incoherent and we are utterly incapable of reason”.
Solipsists would like to have a word with you...
In the end, arguing about whether mathematics is a science or not makes no more sense than bickering about whether tomatoes are fruit; it can be answered both yes and no using reasonable definitions.
That's the thing, though — It does make sense, and it's an important distinction. There is a reason why "mathematical certainty" is an idiom — we collectively understand that maths is in the business of irrefutable truths. I find that a large part of science skepticism comes from the fundamental misunderstanding that science is, like maths, in the business of irrefutable truths, when it is actually in the business of temporarily holding things as true until they're proven false. Because of this misunderstanding, skeptics assume that science being proven wrong is a deathblow to science itself instead of being an integral part of the process.
Mathematicians actually do the same thing as scientists: hypothesis building by extensive investigation of examples. Looking for examples which catch the boundary of established knowledge and try to break existing assumptions, etc. The difference comes after that in the nature of the concluding argument. A scientist performs experiments to validate or refute the hypothesis, establishing scientific proof (a kind of conditional or statistical truth required only to hold up to certain conditions, those upon which the claim was tested). A mathematician finds and writes a proof or creates a counter example.
The failure of logical positivism and the rise of Popperian philosophy show, correctly, that we can't approach that concluding argument in the natural sciences the way we do in maths; but the practical distinction between the subjects is not so clear.
This is all without mentioning the much tighter coupling between the two modes of investigation at the boundary between maths and science, in subjects like theoretical physics. There the line blurs almost completely, and a major tool used by genuine physicists is literally pursuing mathematical consistency in their theories. This has been used to tremendous success (GR, Yang-Mills, the weak force) and with some difficulties (string theory).
————
Einstein understood all this:
> If, then, it is true that the axiomatic basis of theoretical physics cannot be extracted from experience but must be freely invented, can we ever hope to find the right way? Nay, more, has this right way any existence outside our illusions? Can we hope to be guided safely by experience at all when there exist theories (such as classical mechanics) which to a large extent do justice to experience, without getting to the root of the matter? I answer without hesitation that there is, in my opinion, a right way, and that we are capable of finding it. Our experience hitherto justifies us in believing that nature is the realisation of the simplest conceivable mathematical ideas. I am convinced that we can discover by means of purely mathematical constructions the concepts and the laws connecting them with each other, which furnish the key to the understanding of natural phenomena. Experience may suggest the appropriate mathematical concepts, but they most certainly cannot be deduced from it. Experience remains, of course, the sole criterion of the physical utility of a mathematical construction. But the creative principle resides in mathematics. In a certain sense, therefore, I hold it true that pure thought can grasp reality, as the ancients dreamed. - Albert Einstein
Math is scientific in the sense that you've proposed a hypothesis, and others can test it.
Also, the empirical part means natural phenomena need to be involved. Math can be purely abstract.
If you want to escape human fallibility, I'm afraid you're going to need divine intervention. Works checked as carefully as possible still seem to frequently feature corrections.
The "symbol pushing" is a methodological tool, and a very useful one that opened up the possibility of new expansive fields of mathematics.
(Of course, it is important to always distinguish between properties of the abstraction or the tool from the object of study.)
[1] And even this has limits: https://en.wikipedia.org/wiki/Gödel%27s_incompleteness_theor...
https://math.stackexchange.com/questions/31859/what-concept-...
Other great sources for quick intuition checks are Wikipedia and now LLMs, but mainly through putting in the work to discover the nuances that exist or learning related topics to develop that wider context for yourself.
I may be off-base as an outsider to mathematics, but Euclid’s Elements, per my understanding, is very much grounded in the physical reality of the shapes and relationships he describes, if you were to physically construct them.
I am going to quote from the _very beginning_ of the Elements:
Definition 1. A point is that which has no part.

Definition 2. A line is breadthless length.
Both of these two definitions are impossible to construct physically right off the bat.
All of the physically realized constructions of shapes were considered to basically be shadows of an idealized form of them.
The complex number system started being explored by the Greeks long before any notion of the value of complex spaces existed, or any way to map them to something in reality.
Leibniz (late 1600s) helped to popularize negative numbers. At the time most mathematicians thought they were "absurd" and "fictitious".
No, not highly abstract from the beginning.
https://en.wikipedia.org/wiki/The_Method_of_Mechanical_Theor...
Wasn't that imaginary numbers?
Geometry is “attached” to the physical world… but in an abstract way… but you can point to the thing you’re measuring, maybe, so it doesn’t count…
Abstraction was perfected if not invented by mathematics.
It wasn't; but that's a common misunderstanding born of centuries of common practice.
So, how has maths gotten so abstract? Easy: it has been taken over by abstraction astronauts [1], who have existed throughout all eras (and not just in software engineering).
Mathematics was created by unofficial engineers as a way to better accomplish useful activities (guessing the best time of year to start migrating, and later harvesting; counting what portion of harvest should be collected to fill the granaries for the whole winter; building temples for the Pharaoh that wouldn't collapse...)
But then it was adopted by thinkers who enjoyed the activity for itself and started exploring it for sheer joy; math stopped representing "something that needed doing in an efficient way" and came to be "something to think about, down to its last consequences".
Then it was merged into philosophy, with considerations about perfect regular solids, or things like the (misunderstood) metaphor of shadows in Plato's cave (which people interpreted as being about the duality of essences, when it was merely an allegory on clarity of thinking and explanation). Going from an intuitive physical reality such as natural numbers ("we have two cows", or "two fingers") to the current understanding of numbers as an abstract entity ("the universe has the essence of the number 'two' floating beyond the orbit of Uranus" [2]) was a consequence of that historical process, in which layers upon layers of abstraction took thinkers further and further away from the practical origins of math.
[1] https://www.joelonsoftware.com/2001/04/21/dont-let-architect...
That is, numbers were specifically used to abstract over how other things behave using simple and strict rules. No?
Agreed that math is built on language. But math is not any specific set of abstractions; time and again mathematicians have found that if you change the definitions and axioms, you get a quite different set of abstractions (different numbers, geometries, infinite sets...). Does it mean that the previous math ceases to exist when you find a contradiction in it? No, it's just that you start talking about new objects, because you have gained new knowledge.
The math is not in the specific objects you find, it's in the process of finding them. Rationalism consists in thinking one step at a time, with rigor. Math is the language by which you explain rational thought in a very precise, unambiguous way. You can express many different thoughts, even inconsistent ones, with the same precise language of mathematics.
Numbers, for example, are abstract in the sense that you cannot find concrete numbers walking around or falling off trees or whatever. They're quantities abstracted from concrete particulars.
What the author is concerned with is how mathematics became so abstract.
You have abstractions that bear no apparent relation to concrete reality, at least not according to any direct correspondence. You have degrees of abstraction that generalize various fields of mathematics in a way that are increasingly far removed from concrete reality.
Mathematicians didn't just randomly decide to go to abstraction and the foundations of mathematics. They were forced there by a series of crises where the mathematics that they knew fell apart. For example, Joseph Fourier came up with a way to add up a bunch of well-behaved functions - sin and cos - and arrived at something that wasn't considered a function - a square wave.
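You can watch that happen numerically. A minimal sketch (the (4/pi), odd-harmonics-only form is the standard Fourier series of the +-1 square wave):

    #include <cmath>
    #include <cstdio>

    const double kPi = 3.141592653589793;

    // Partial Fourier sum for the +-1 square wave:
    // f(x) ~ (4/pi) * [sin(x)/1 + sin(3x)/3 + sin(5x)/5 + ...]
    double partial_sum(double x, int terms) {
        double s = 0.0;
        for (int n = 0; n < terms; ++n) {
            int k = 2 * n + 1;           // odd harmonics only
            s += std::sin(k * x) / k;
        }
        return 4.0 / kPi * s;
    }

    int main() {
        // Smooth, perfectly well-behaved sines, yet the sum marches
        // toward a discontinuous square wave as the term count grows.
        const int counts[] = {1, 5, 50, 5000};
        for (int terms : counts)
            std::printf("terms=%5d  f(pi/2)=%+.5f\n",
                        terms, partial_sum(kPi / 2, terms));
    }

At x = pi/2, where the square wave equals 1, the partial sums are the Leibniz series 4/pi * (1 - 1/3 + 1/5 - ...) creeping toward 1.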
The focus on abstraction and axiomatization came after decades of trying to repair mathematics over and over again. Retelling the story in terms of the resulting mathematical flow of ideas completely mangles the actual flow of events.
> forced there by a series of crises where the mathematics that they knew fell apart
This can be said to be true of those working in foundations, but the vast majority of mathematicians are completely uninterested in that! In fact, most mathematicians today probably can't cite you the set-theoretic (or any other foundation) axioms that they use every day, if you ask them point-blank.
However, the kind of abstractness I most enjoy in mathematics is found in algebraic structures such as groups and rings, or even simpler structures like magmas and monoids. These structures avoid relying on specific types of numbers or elements, and instead focus on the relationships and operations themselves. For me, this reveals an even deeper beauty, i.e., different domains of mathematics, or even problems in computer science, can be unified under the same algebraic framework.
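A small illustration in code of that unification (a sketch; the name mconcat is borrowed from Haskell, nothing here comes from any particular library): once you have an associative operation and an identity element, which is all a monoid is, one generic fold serves otherwise unrelated domains.

    #include <cstdio>
    #include <functional>
    #include <numeric>
    #include <string>
    #include <vector>

    // A monoid is a set with an associative operation and an identity.
    // One generic fold then works for any of them.
    template <typename T, typename Op>
    T mconcat(const std::vector<T>& xs, T identity, Op op) {
        return std::accumulate(xs.begin(), xs.end(), identity, op);
    }

    int main() {
        std::vector<int> v{1, 2, 3, 4};
        // (int, +, 0) and (int, *, 1) are different monoids on the same set.
        std::printf("%d\n", mconcat(v, 0, [](int a, int b) { return a + b; }));  // 10
        std::printf("%d\n", mconcat(v, 1, [](int a, int b) { return a * b; }));  // 24
        // (string, concatenation, "") is a monoid with no numbers in sight.
        std::vector<std::string> w{"ab", "cd", "ef"};
        std::printf("%s\n", mconcat(w, std::string{}, std::plus<>{}).c_str());   // abcdef
    }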
Consider, for example, the fact that the set of real numbers forms a vector space over the set of rationals. Can it get more abstract than that? We know such a vector space must have a basis, but what would that basis even look like? The existence of such a basis (Hamel basis) is guaranteed by the axioms and proofs, yet it defies explicit description. That, to me, is the most intriguing kind of abstractness!
Despite being so abstract, the same algebraic structures find concrete applications in computing, for example, in the form of coding theory. Concepts such as polynomial rings and cosets of subspaces over finite fields play an important role in error-correcting codes, without which modern data transmission and storage would not exist in their current form.
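For a concrete taste: the classic Hamming(7,4) code is exactly linear algebra over the finite field GF(2), where addition is XOR. A minimal, textbook-layout sketch (not production error-correction code):

    #include <array>
    #include <cstdio>

    // Hamming(7,4) over GF(2): addition is XOR. Layout (1-based positions):
    // [p1 p2 d1 p3 d2 d3 d4], parity bits at positions 1, 2, 4.
    std::array<int, 7> encode(std::array<int, 4> d) {
        std::array<int, 7> c{};
        c[2] = d[0]; c[4] = d[1]; c[5] = d[2]; c[6] = d[3];
        c[0] = c[2] ^ c[4] ^ c[6];  // p1 checks positions 1,3,5,7
        c[1] = c[2] ^ c[5] ^ c[6];  // p2 checks positions 2,3,6,7
        c[3] = c[4] ^ c[5] ^ c[6];  // p3 checks positions 4,5,6,7
        return c;
    }

    // Syndrome decoding: the three parity checks, read as a binary number,
    // spell out the 1-based position of a single flipped bit (0 = clean).
    int syndrome(const std::array<int, 7>& c) {
        int s1 = c[0] ^ c[2] ^ c[4] ^ c[6];
        int s2 = c[1] ^ c[2] ^ c[5] ^ c[6];
        int s3 = c[3] ^ c[4] ^ c[5] ^ c[6];
        return s1 + 2 * s2 + 4 * s3;
    }

    int main() {
        auto c = encode({1, 0, 1, 1});
        c[4] ^= 1;                    // corrupt one bit "in transit"
        int pos = syndrome(c);
        if (pos) c[pos - 1] ^= 1;     // the syndrome points straight at it
        std::printf("flipped bit was at position %d\n", pos);  // 5
    }

The cosets of the code subspace are exactly what the syndrome indexes; that's "cosets of subspaces over finite fields" doing real work.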
"A mathematician is a person who can find analogies between theorems; a better mathematician is one who can see analogies between proofs and the best mathematician can notice analogies between theories. One can imagine that the ultimate mathematician is one who can see analogies between analogies."
-- Stefan Banach
When I was studying, I always got top marks in Analysis.
Then came Algebra, Topology and similar nightmares. Oh crap, that was difficult. Not really because of the complexity, but rather because of the abstraction, an abstraction I could not take back to physics (I was not a very good physicist either). This is the moment I realized that I would never be "good at maths" and that maths would remain a toolbox to me.
Fast forward 30 years: my son is doing differentials in high school (in France; math was one of his "majors").
He comes to me to ask what the fuck it is (we have an unhealthy fascination for maths in France, and teach it the same way as in 1950). It is only when we went from physical models to differentials that it became clear. We retraced the trip Newton took - physics rocks :)
Am I daft? Eventually (very soon) Achilles would overtake the turtle's position regardless of how far it moved... Am I missing something?
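You're not daft; that is exactly the resolution. Zeno slices the chase into infinitely many catch-up steps, but the steps take geometrically shrinking time. Writing v for Achilles' speed, rv (with 0 < r < 1) for the turtle's, and d for the initial gap (notation mine), the successive legs take d/v, (d/v)r, (d/v)r^2, ..., so the total time is

    $$\sum_{n=0}^{\infty} \frac{d}{v}\,r^n \;=\; \frac{d}{v(1-r)} \;=\; \frac{d}{v - rv} \;<\; \infty,$$

which is just gap over relative speed: infinitely many steps, finite total time, and that instant is exactly when Achilles draws level.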
More generally, mathematics is experimental not just in the sense that it can be used to make physical predictions, but also (probably more importantly) in that definitions are "experiments" whose outcome is judged by their usefulness.
While mathematics "can" be reasoned about from first principles, the history of math is chock-full of examples of professional mathematicians convinced by unsound and wrong arguments. I prefer the clarity of testing the math on a computer.
We used Peano arithmetic when doing C++ template metaprogramming anytime a for loop from 0..n was needed. It was fun and games as long as you didn't make a mistake because the compiler errors would be gnarly. The Haskell people still do stuff like this, and I wouldn't be surprised if someone were doing it in Scala's type system as well.
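For anyone who hasn't seen the trick, here's a minimal sketch of what that looks like (names are mine; the compiler does the arithmetic during template instantiation):

    #include <cstdio>

    // Peano numerals as types: a number is Zero or the successor of a number.
    struct Zero {};
    template <typename N> struct Succ {};

    // Evaluate a Peano type back to a runtime int.
    template <typename N> struct ToInt;
    template <> struct ToInt<Zero> { static constexpr int value = 0; };
    template <typename N> struct ToInt<Succ<N>> {
        static constexpr int value = 1 + ToInt<N>::value;
    };

    // Addition by recursion on the first argument, straight from Peano:
    // 0 + b = b;  Succ(a) + b = Succ(a + b).
    template <typename A, typename B> struct Add;
    template <typename B> struct Add<Zero, B> { using type = B; };
    template <typename A, typename B> struct Add<Succ<A>, B> {
        using type = Succ<typename Add<A, B>::type>;
    };

    int main() {
        using Two   = Succ<Succ<Zero>>;
        using Three = Succ<Two>;
        using Five  = Add<Two, Three>::type;
        std::printf("%d\n", ToInt<Five>::value);  // 5, computed at compile time
    }

And yes: get one Succ wrong and the error message recapitulates the entire derivation.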
Also, the PLT people are using lattices and categories to formalize their work.
Math in its core has always been abstract. It’s the whole point.
I don't think so. E.g. there may be some abstractions in numerical linear algebra, but the subject matter has always been quite concrete.
Given the collective time put into it, the easier stuff was already solved thousands of years ago, and people are not really left with anything trivial to work on. Hence the focus on more and more abstract things, as those are the only places left to do something novel.
But it's also wrong: the easier stuff was solved INCORRECTLY thousands of years ago. It takes advanced math, though, to understand what was incorrect about it.
I get what they're saying in practice. But numbers are abstract. They only seem concrete because you've internalized the abstract concept.
On the other hand, two cookies plus three cookies: what even is a cookie? What if they're different sizes? Do sandwich cookies count as one or two? If you cut one in half, do you count it as two cookies now? All very abstract. Just give me some concrete definitions and rules and I'll give you a concrete answer.
The Peano axioms are pretty nifty though. To get a better appreciation of the difficulty of formally constructing the integers as we know them, I recommend trying the Numbers Game in Lean found here: https://adam.math.hhu.de/
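A taste of what the game has you build, as a rough Lean 4 sketch (the game's own names and API differ):

    inductive N where
      | zero : N
      | succ : N → N

    -- Addition by recursion on the second argument:
    -- a + 0 = a,  a + succ b = succ (a + b).
    def add : N → N → N
      | a, .zero   => a
      | a, .succ b => .succ (add a b)

    -- add a .zero = a holds by definition...
    theorem add_zero (a : N) : add a .zero = a := rfl

    -- ...but the symmetric-looking zero_add already needs induction.
    theorem zero_add (a : N) : add .zero a = a := by
      induction a with
      | zero => rfl
      | succ b ih => simp [add, ih]

The asymmetry between those two theorems is a good preview of the kind of work the game makes you do.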
- they are material objects
- they are concepts I understand
- they are sequences of letters
- they are English words
- ...
Not sure why oneness is privileged as what they have in common, and their oneness is meaningless by itself. Oneness is a property that is only meaningful in relation to other concepts of objects.
The tendency towards excessive abstraction is the same as the use of jargon in other fields: it just serves to gatekeep everything. The history of mathematics (and science) is actually full of amateurs, priests and bored aristocrats that happened to help make progress, often in their spare time.
To put it another way: Jargon is the source code of the sciences. To an outsider, looking in on software development, they see the somewhat impenetrable wall of parentheses and semicolons and go "Ah, that's why programming is hard: you have to understand code". And I hope everyone here can understand that that's an uninformed thing to say. Syntax is the easy part of programming, it was made specifically to make expressing the rigorous problem solving easier. Jargon is the same way: it exists to make expressing very specific things that only people in this subfield actually think about easier, instead of having to vaguely gesture at the concept, or completely redefine it every time anybody wants to communicate within the field.
People are aware that you need context to motivate abstractions. That's why we start with numbers and fractions and not ideals and localizations.
Jargon in any field is to communicate quickly with precision. Again the point is not to gatekeep. It's that e.g. doctors spend a lot of time talking to other doctors about complex medical topics, and need a high bandwidth way to discuss things that may require a lot of nuance. The gatekeeping is not about knowing the words; it's knowing all of the information that the words are condensing.
Formal reasoning is the point, which is not by itself abstraction.
Someone else in this discussion is saying Euclid's Elements is abstract, which is near-complete nonsense. If that is abstract, then our perception of everything except the fundamental [whatever] we are formed of is an abstraction.
What do you think "formal" means in that sentence.
It means "formal" from the word "form". It is reasoning through pure manipulation of symbols, with no relation to the external world required.
https://www.etymonline.com/word/formal "late 14c., "pertaining to form or arrangement;" also, in philosophy and theology, "pertaining to the form or essence of a thing," from Old French formal, formel "formal, constituent" (13c.) and directly from Latin formalis, from forma "a form, figure, shape" (see form (n.)). From early 15c. as "in due or proper form, according to recognized form," As a noun, c. 1600 (plural) "things that are formal;" as a short way to say formal dance, recorded by 1906 among U.S. college students."
There's not a much better description of what Euclid was doing.
https://plato.stanford.edu/entries/logic-classical/
"Formal" in logic has a very precise technical meaning.
Edit to add: this comment had a sibling, that was suggesting that given a specific proof assistant requires all input to be formal logic perhaps the word formal could be redefined to mean that which is accepted by the proof assistant. Sadly this fine example of my point has been deleted.
Isn't that the subject of the whole argument? That mathematicians have taken the road off in a very specific direction, and everyone disagreeing is ejected from the field, rather like what occurred more recently in theoretical physics with string theory.
Prior to that time quite clearly you had formal proofs which do not meet the symbolic abstraction requirements that pure mathematicians apparently believe are axiomatic to their field today, even if they attempt to pretend otherwise, as argued over the case of Euclid elsewhere. If the Pythagoreans were reincarnated, as they probably expected, they would no doubt be dismissed as crackpots by these same people.
I could construct a formal reasoning scheme involving rules and jugs on my table, where we can pour liquids from one to another. It would be in no way symbolic, since it could use the liquids directly to simply be what they are. Is constructing and studying such a mechanism not mathematics? Similarly with something like musical intervals.
An apple is an abstraction over the particles/waves that comprise it, as is a banana.
Euclid is no more abstract than the day to day existence of a normal person, hence to claim that it is unusually abstract is to ignore, as you did, the abstraction inherent in day to day life.
As I pointed out, it's very possible to create formal reasoning systems which are not symbolic or abstract; but are we then to assume that constructing or studying them would not be a mathematical exercise? In fact the Pythagoreans did all sorts of stuff like that.
No, you don’t understand what abstraction is. An apple is exactly an arrangement of particles; it’s not an abstraction over them.
> hence to claim that it is unusually abstract
Who said Euclid was unusually abstract (and not just abstract)?
> is to ignore, as you did, the abstraction inherent in day to day life.
How am I ignoring that abstraction when I’ve provided you exactly that (numbers are an abstraction inherent in day-to-day life)? I’m sorry, but you seem to be discussing in bad faith.
No. You can do things to that apple, such as bite it, and it is still an apple, despite it now having a different set of particles. It is the abstract concept of appleness (which we define . . . somehow) applied to that arrangement of particles.
> I’m sorry but you seem to be discussing in bad faith.
Really?
> No, you don’t understand what abstraction is.
I personally cannot wrap my head around Cantor's infinitary ideas, but I'm sure it makes perfect sense to people with better mathematical intuition than me.
What is "nerd-famous" supposed to be? That he's at the center of some subjective in-group that exists in your head?