
There is a distinction between analytically correct expositions, and ones which build on naive intuition to teach students. It is inappropriate to expect beginning students to follow a logically rigorous exposition.

That said, there are philosophical questions about truth, infinity, unknowable statements and the like. Mathematicians have by and large settled on a set of answers to these. Every statement is true or false, regardless of whether we know the answer, or even whether we can know the answer. Infinite sets exist, and are described by a known set of axioms called ZFC. Almost all real numbers that exist can never, even in principle, be written down or described in any meaningful way. (In what sense do they exist again?)

All of these statements are part of classical mathematics. Almost every elementary exposition will implicitly assume that they are true. Yet they can all be questioned, and their truth can never be settled in any absolute sense. However, woe betide the student who dares question these in a math class.



> Almost all real numbers that exist can never, even in principle, be written down or described in any meaningful way.

I don't think many people see that as a matter of philosophy.


Most people are uninformed.

This question strikes at the heart of the debate between Constructivism and Formalism. A debate about what it means for things to exist, statements to be true, and so on. This is very much a matter of philosophy.

To a Constructivist, most of classical mathematics is nonsense. And Constructivism is at least as logically consistent as classical mathematics.

More precisely any contradiction found in Constructivism necessarily will lead to a contradiction in classical mathematics. The converse is only partially true. Gödel did prove that a logical contradiction in the classical handling of infinity will lead to a contradiction in Constructivism. But a flaw in a specific set of classical axioms, such as ZFC, need not lead to a flaw in usual Constructivism.


I think you needed to be more precise. Most people would read "almost all", in the context of the reals, as depending on the existence of an uncountable set in order to make sense.


I was perfectly precise.

In real analysis you learn that "almost all" means that the exceptions are a set of measure 0. Since all countable sets have measure 0, the result is trivially true in classical mathematics.
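As a sketch of the standard argument behind that claim (not specific to this thread): a countable set can be covered by intervals of arbitrarily small total length.

```latex
% Cover the n-th point of S = \{x_1, x_2, \dots\} by an interval of width \varepsilon/2^n:
\mu(S) \;\le\; \sum_{n=1}^{\infty} \left| \left( x_n - \tfrac{\varepsilon}{2^{n+1}},\; x_n + \tfrac{\varepsilon}{2^{n+1}} \right) \right|
\;=\; \sum_{n=1}^{\infty} \frac{\varepsilon}{2^n} \;=\; \varepsilon .
% Since \varepsilon > 0 was arbitrary, \mu(S) = 0.
```

Since the definable (or computable) reals are countable in classical mathematics, their complement has full measure.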

In the constructible universe, you again have measure theory. Almost all still has a perfectly well-defined meaning. And all sets with enumerations again have measure zero, just like in classical mathematics. But "uncountable" now is a statement about self-referential complexity, not size. Next, "the set of all numbers with finite definitions" is not a well-defined set. And numbers without concrete definitions do not exist.


> Almost all real numbers that exist can never, even in principle, be written down or described in any meaningful way. (In what sense do they exist again?)

A profound question to reflect on, and interesting to contrast with the view suggested by modern science that there might not be any continuous objects present in reality. It might be discrete all the way down.

In a sense, it seems like the uncomputable reals are an artifact of assuming continuity, i.e., that between any two numbers located on a line there must be more numbers. Part of the reason it is so unintuitive is that we don't have any real lines to play with at the physical human scale; they fall apart at the atomic level and turn out to be non-continuous approximations.


I don't think "between any two numbers located on a line there must be more numbers" is the property you're looking for, since it holds for the rationals (all of which are, of course, computable).

Maybe "connectedness" is the notion you're trying to get at -- the real numbers are topologically connected, but the rationals aren't. If A is the set of rational numbers x with x^2 < 2, and B is the set with x^2 > 2, then the rationals are the union of A and B, and there is a "hole where sqrt(2) should be", so the rationals are disconnected. It's possible to define the word "connected" in a way that makes this notion precise.
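A quick illustration of that "hole" (the sample fractions here are the classic Pell approximants to sqrt(2), chosen just for demonstration): every rational falls strictly into A or B, and the gap to 2 shrinks but never reaches zero.

```python
from fractions import Fraction

# Every rational lands strictly in A (x^2 < 2) or B (x^2 > 2);
# none satisfies x^2 == 2 exactly. Sample rationals near sqrt(2):
candidates = [Fraction(7, 5), Fraction(17, 12), Fraction(41, 29), Fraction(99, 70)]
for x in candidates:
    side = "A" if x * x < 2 else "B"
    gap = abs(x * x - 2)
    print(x, side, gap)  # sides alternate A/B, gap shrinks (it equals 1/q^2 here)
```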

A related notion is what's called "(sequential) completeness". The infinite sequence whose terms are (2, 2 + 1/2, 2 + 1/2 + 1/6, 2 + 1/2 + 1/6 + 1/24, ...), where the nth term is obtained by adding 1/(n!) to the previous term, intuitively "should" converge, since its elements get arbitrarily close together as n gets arbitrarily large. Any such sequence converges to a real value (this one converges to the exponential constant "e"). But if our number system is only countably infinite, there must be some sequences that get arbitrarily close together but don't converge. For example, if we restrict ourselves to rational numbers, this is a valid infinite sequence (every element is rational), and its terms get arbitrarily close together as "n" is large, but it doesn't converge to anything.
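The sequence above can be checked directly with exact rational arithmetic; every partial sum is a rational number, consecutive terms get arbitrarily close, yet the limit e is irrational, so the sequence has no limit inside the rationals.

```python
from fractions import Fraction
from math import e, factorial

# Partial sums s_n = 1/0! + 1/1! + ... + 1/n!  (so s_1 = 2, s_2 = 2 + 1/2, ...)
s = Fraction(0)
prev = None
for k in range(12):
    s += Fraction(1, factorial(k))
    if prev is not None:
        print(k, float(s), float(s - prev))  # the gaps shrink like 1/k!
    prev = s

print(abs(float(s) - e))  # tiny: the sequence converges to e in the reals
```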


> Infinite sets exist, and are described by a known set of axioms called ZFC

This is the assertion of many mathematicians, but the justification for it is “this is convenient” and/or “we take this as an article of faith” (or often “I never really thought about it, but it doesn’t much affect my work day to day one way or the other”).

There is no way to prove that infinite sets “exist” by reasonable definitions of “exist”. Indeed, by a conventional definition of “exist” infinite sets pretty clearly don’t qualify.

Instead, mathematicians have redefined the words “exist” and “true”. In mathematics it now means something like “if we accept a particular set of non-obvious and rather handwavey premises, we will also accept any conclusions that result from symbolic manipulations thereof following our established formal rules.” [This is not a full or precise definition of mathematical existence; folks interested can do a search for those keywords and find piles of material.]

* * *

Personally I am happy to accept ZFC or the like because it is convenient and I can’t be bothered to work up an alternative system from scratch and carefully examine all of the conclusions that might follow from that, and whether ZFC is “true” or not doesn’t really affect me. It seems intuitively wrong to me, but I remain agnostic.


> “I never really thought about it, but it doesn’t much affect my work day to day one way or the other”

In my experience, people won't come out and say it, but this seems to be what everyone is thinking. :)

The problem with this is that it is wrong.

Classical ZFC in particular is a very strong and specific set of assumptions* with a very tenuous link to any practical application. If you actually want to develop a useful bit of mathematics it makes sense to consider the "foundations" as a moving piece. It's a part of the design space for modeling your problem domain, not some god-given notion of truth.

You can translate between different logical theories by building a model of one in another, so it's not like you lose anything. But it's kooky to insist that we should start with ZFC of all things.

---

*) I mean that second-order ZFC has basically no non-trivial models, so there is no real way of extending ZFC to talk about domain specific aspects of your problems.


> The problem with this is that it is wrong.

> You can translate between different logical theories by building a model of one in another, so it's not like you lose anything. But it's kooky to insist that we should start with ZFC of all things.

I don't see how these two are consistent. Almost everything most mathematicians do can be done both in ZFC and your favourite non-kooky axiom system. Certain Powers That Be seem to have decided that ZFC is the foundation of mathematics, so they say that what they're doing follows from ZFC even if they have a very hazy idea of what it is, but why does it matter? Most mathematics probably won't be formalized in their lifetime anyway, so whether it ends up being formalized on top of ZFC or something else doesn't affect them.


> Almost everything most mathematicians do can be done both in ZFC and your favourite non-kooky axiom system.

You would be amazed at how many uniqueness and existence theorems in how many areas of mathematics require Zorn's Lemma. Which is, of course, equivalent to the axiom of choice. For example, "Every vector space has a (possibly infinite) basis." Or, "Every Hilbert space has a (possibly infinite) orthonormal basis."

It is rare for mathematicians to think much about choice. But it underpins key results in a surprising number of fields.
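For reference, the statement being invoked (in its standard form, not a quote from the thread):

```latex
% Zorn's Lemma (equivalent to the Axiom of Choice over ZF):
\text{If every chain in a nonempty partially ordered set } (P, \le)
\text{ has an upper bound in } P, \text{ then } P \text{ has a maximal element.}
```

For "every vector space has a basis", take P to be the family of linearly independent subsets ordered by inclusion: the union of a chain of independent sets is independent, so every chain has an upper bound, and a maximal independent set must span (otherwise it could be extended), hence is a basis.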


Slightly tongue in cheek, but the analogy to programming is this:

> "Almost everything most programmers do can be done both in x86 assembly and your favorite non-kooky programming language. Certain Powers That Be seem to have decided that x86 is the foundation of computer science. [...] Why does it matter?"

The problem is that it is difficult to translate results in a theory built in ZFC to other "architectures". In mathematics, the architectures in question are not different axiom systems, they are different branches of mathematics.

Let me give you an example. There is a large body of work on differential geometry with many useful constructions. Classical differential geometry works directly in a model where manifolds are certain subspaces of (countable products of) R^n. These constructions have been successfully imported into many different areas of mathematics. In most cases people just had to tweak the definitions slightly and adapt the proofs by keeping the basic strategy and changing all the details.

What is happening here is that the underlying ideas of differential geometry are not specific to this particular model.

When faced with such a concrete model, our first instinct should be to abstract from it and ask which assumptions are required. This is difficult in ZFC, because in the end you have to encode everything into sets. It's not possible to reason about "abstract datatypes" directly, without literally building a notion of logic (generalized algebraic theory) and models of it within ZFC. Even then, the existence of choice means that you usually have to exclude unwanted models of your theory.

Coming back to differential geometry: You can generalize a lot of it by working in a differentially cohesive (infinity-)topos. This is terribly indirect (in my opinion) and loses a lot of the intuitions. A topos is literally a model of a certain logic. Alternatively you can work directly in this logic (the "internal language" of the topos), where the differential geometric structure is available in the form of additional logical connectives. You are now talking in a language where it makes sense to talk about two points being "infinitesimally close" and where you can separate the topological from the infinitesimal structure.

At the same time you reap the benefits that there are many more models of differential cohesion than there are models of "R^n modeled in classical set theory". You can easily identify new applications, which might in turn suggest looking into different aspects of your theory. It's a virtuous cycle. :)

This approach is deeply unnatural when working in set theory or arithmetic. You have to encode everything into sets or numbers and then these sets or numbers become the thing you are studying.


I only studied mathematics, never actually been a professional mathematician, and I feel like a big difference might be in what we see as "most mathematicians" (if you're working in this area you're probably more right than I am! but it's also possible your view is focussed on people working in areas related to yours).

I'd make the following alternative analogy: I code in Python on an x86, because that happens to be the machine on my desk. If you told me I should be using POWER instead of x86, I'd probably just shrug: I could do that - my work is portable - but it's also completely irrelevant to my work. I think this would be how most people in say analysis, algorithms or combinatorics feel, for example.


(i.e. one of the standard set theory axioms amounts to "an infinite set exists")
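The axiom in question is the Axiom of Infinity, which asserts the existence of an inductive set:

```latex
\exists I \, \bigl( \varnothing \in I \;\wedge\; \forall x \, ( x \in I \rightarrow x \cup \{x\} \in I ) \bigr)
```

Any such I contains the von Neumann numerals \varnothing, \{\varnothing\}, \{\varnothing, \{\varnothing\}\}, and so on, and hence is infinite.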


Well, I don't mean to be rude, but many logicians and set theorists I know will be sad to find themselves out of a job if what you said about these questions being settled is true.


I disbelieve your claim.

In particular I maintain that any competent logician or set theorist should be able to explain to you the sense in which the statements that I made cannot logically be settled, and also explain to you the extent to which the rest of mainstream mathematics accepts these statements as true.



