Hacker News
Basics of Proofs (2017) [pdf] (stanford.edu)
145 points by waldarbeiter on June 17, 2023 | 92 comments


When I was first introduced to mathematical proofs, I was perplexed by how fuzzy and intuition-based the notion of proof was. A "convincing argument", really?! As a novice, there is no knowing how detailed those "arguments" have to be, or which parts you can simply assert without further justification. Usually the teachers themselves can't explain what the criteria for an "obvious" versus a "not obvious" step are; they just know it intuitively from experience. Writing proofs, then, is a lot like having to learn to ride a bike: instructions are mostly unavailable or useless.

I later learned that there is indeed a precise way of learning proofs which doesn't rely on intuitions about what counts as a rigorous inference step: formal logic together with a natural deduction proof system. Natural deduction is a formal proof system which, unlike other proof systems, resembles actual ("natural") proofs in mathematics.

In such a proof system, inference rules, like modus tollens or universal instantiation, are strictly defined. Only the given inference rules (and those provable from them) may be used. Coming up with such proofs still requires creativity; there is no algorithm. But there is no ambiguity about what counts as a valid or invalid proof or inference step.

Of course, this is far too tedious for actual mathematical proofs, since every little step needs to be done explicitly, e.g. even applications of modus ponens (rule: "A; if A then B; therefore B"). Moreover, mathematicians rarely prove anything from axioms; they start from other statements which are considered more trivial. But I think it would be helpful for many people to first learn logic and deduction "the precise way", and then do actual mathematical proofs where you can jump over the more obvious parts.
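For illustration, in a natural-deduction-style system such as Lean's, even modus ponens is a single explicit step (a minimal sketch; the hypothesis names are just illustrative):

```lean
-- Modus ponens as an explicit inference step:
-- from a proof ha of A and a proof hab of A → B, conclude B.
example (A B : Prop) (ha : A) (hab : A → B) : B := hab ha
```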

But that's not how it is taught in mathematics or computer science. Students are thrown in at the deep end and only receive tips & tricks, but no rigorous introduction. Ironically, the one subject which often teaches formal logic as an early introductory class for undergraduates isn't mathematics, computer science, or physics; it's philosophy.


Some universities now have a Lean proof assistant logic course that teaches natural deduction http://leanprover.github.io/logic_and_proof/

Whenever I have asked professors in the past how they first learned proofs, the answer was always the same: from doing Euclid in high school, which is no longer taught.


That course looks like a great introduction!

(A small downside is, in my opinion, that it uses trees instead of a more classical sequential presentation in the style of Suppes and Kleene.

https://en.wikipedia.org/wiki/Natural_deduction#Different_pr...

Trees may be more elegant, but actual, informal proofs in mathematics are written sequentially. So I think trees defeat the purpose of "natural" deduction a bit. But that's complaining on a high level.)


I studied CS and electrical engineering for 6-7 years total, and both degrees included some pretty math-intense courses, proofs and all. I remember how I really, really wanted an explanation of the underlying proof system, but all I got was just more intuitive explanations, all of which felt out of place.

What is unusual is that I kept looking. "How to Prove It" was something I really wish I had read ahead of calculus. It was only after this book that it all clicked for me. Having understood the rules of the game, it became possible to play with math..! Which I still do from time to time.

I think that this material should be taught at high school. Bits of it even earlier.


Philosophy was the only field where I learned formal logic at all. I went into higher math, did some CS, and also got a law degree[1], and am now doing a Ph.D. in informatics, and still the only place I have really been exposed to formal logic was my undergrad degree in philosophy.

[1] It is amazing (or not, depending on your view of lawyers) that lawyers are basically not taught anything about making and proving formal arguments, at least in my experience.


Law is not about logic. It’s about hermeneutics. It’s closer to religion than to math.


Which is why the most successful lawyers are more akin to the most successful evangelists: the ability to hoodwink groups of people with their interpretation.


> But I think it would be helpful for many people to first learn logic and deduction "the precise way", and then do actual mathematical proof where you can jump over more obvious parts.

Absolutely. And I cannot imagine a mathematics degree which doesn't place formal logic front and center in the first semester. When you are starting out, you absolutely need to suffer through the rigorous formal arguments.

>Ironically, the one subject which often teaches formal logic as an early introductory class for undergraduates isn't mathematics, computer science, or physics, it's philosophy.

I cannot believe that this is the case. Even engineering undergrads have to learn formal logic in their first semester.


> Even engineering undergrads have to learn formal logic in their first semester.

I got my engineering B.S. from a top-5 college in the US and have known many people who have gone to similar schools. None of us had to take a class that goes into first-order logic or proof writing. I don't know what college you went to where that is a thing, but it would be exceedingly rare.


>I don't know what college you go to where that is a thing, but it would be exceedingly rare.

German technical university. I was a tutor there. AFAIK this is normal.

As part of the first semester you take linear algebra and analysis, starting out with the basics of formal logic. Of course the courses are less focused on proof writing than the mathematics "major" courses.

I should also point out that German universities have very loose entry standards (except when places are scarce relative to applicants) and use the first two semesters to filter out students. These courses are often designed to have around a 50% failure rate.


> As part of the first semester you take linear algebra and analysis, starting out with the basics of formal logic

There is not enough time to learn the basics of formal logic and linear algebra and/or analysis in a single class, but I think what you're referring to is an introduction to proof techniques like induction, modus tollens, and quantifiers.

Every math and computer science department in the US that I’ve ever heard of teaches these topics, but I wouldn’t call it a formal logic class.


For me, basic formal logic means learning the symbols (conjunction, disjunction, implication, equivalence, negation, etc.) and the rules of inference to manipulate these symbols, and using those rules to prove new things.

How could you teach analysis without that anyway? It is absolutely essential for set theory, and how would you e.g. define the reals (in a "proper" math course, not engineering) without a good understanding of set theory?

If you don't believe me, here is a link to the contents of a first-semester engineering math course from a German technical university: https://page.math.tu-berlin.de/~joswig/teaching/notes/Joswig...

The symbols should be enough to tell you what the contents are.


> For me, basic formal logic means learning the symbols (conjunction, disjunction, implication, equivalence, negation, etc.) and the rules of inference to manipulate these symbols, and using those rules to prove new things.

That's really only baby logic. Which, probably, is sufficient for most mathematicians most of the time.

A real first introduction to formal logic would introduce an actual formal proof system and go at least as far as proving the completeness of first-order logic.


>That's really only baby logic.

Instead of (literally) infantilizing the name, you could also call it basic formal logic.


I don't want to quibble about names (and I wasn't the person who came up with the term "baby logic"), but the point is that just introducing a couple of connectives and proof strategies doesn't even constitute the basics of what mathematical logic is really about. Which, btw, I'm perfectly fine with; most people don't need more than that.

If you do want to study logic formally, the basics start with well-formed formulas, signatures, etc.

I guess what you call it doesn't matter a lot, but the discussion seems to have started with the assertion that most students, even in mathematics, never really learn formal logic, and I would agree with that (under my definition of "formal logic"), while also agreeing with you that you can't pursue a degree in maths without knowing how induction works or what a bijection is. But still, most people don't need to know exactly how to formalise induction and that it's actually (in its full form) a second-order axiom.


That level of logic is taught to every mathematics and computer science student, and it's really not what I thought others in this thread were talking about.


Your student filtration system sounds like a long and expensive waste of everyone's time.


Not really. First-year courses usually have hundreds of students in large halls. There is some more effort since you need more tutors, but that is basically it. (Students usually do not live on campus.)

The enormous upside is that all students are judged equally on their ability to academically succeed in their chosen field. I think US university admissions are ridiculous for many reasons.


Formal logic is taught in a Mathematical Logic class (intermediate, optional) or in Discrete Math (intro/intermediate, often not taken).

Geometry class in high school sometimes teaches some of it.

Computer science binary logic classes teach some of it (De Morgan's laws).

Outside of the New Math of the 1970s, it is a glaring omission from the curriculum.

Even enriched classes like Art of Problem Solving, which put heavy emphasis on proofs, do not teach formal logic.


To be fair, I think some people just quickly and intuitively "get" what's expected of them when doing proofs, without any formal introduction. But for others (like me) it is very helpful to at least list the basic natural deduction inference rules, and to do a bunch of exercises where they have to use these rules explicitly. Otherwise they are floating in thin air, with only a hazy idea of what a "valid logical step" even is.
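For instance, a typical first exercise of this kind, sketched in Lean (names are illustrative), chains two conditionals using only →-elimination and →-introduction:

```lean
-- Hypothetical syllogism: from A → B and B → C, derive A → C.
example (A B C : Prop) (hab : A → B) (hbc : B → C) : A → C :=
  fun ha => hbc (hab ha)
```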


> When I was first introduced to mathematical proofs, I was perplexed by how fuzzy and intuition-based the notion of proof was.

This was precisely my issue when studying proofs in school. Do you have any suggestions for resources to get started down the right path?


The lecture notes for an introductory logic class of an analytic philosophy department may be quite good, but unfortunately I can't recommend anything specific in English.


>some introductory logic class

like forall x from openlogicproject?


Yes! It looks like a great introduction. Perhaps even better than the Lean course mentioned above. Here is the link:

https://forallx.openlogicproject.org/


"How to prove it" is the best book you can find


How do you prove that?


With an intuitive explanation!


"a sequence of thoughts convincing a sound mind" (Gödel 1953, p. 341)


"How to Read and Do Proofs" by Daniel Solow, also there is a playlist on YT with the video lectures, very beginner friendly in my opinion https://www.youtube.com/watch?v=lOnsQUFFUEE&list=PL8GFp_SBW-...


> Moreover, mathematicians rarely prove anything from axioms, they start from other statements which are considered more trivial.

Not more trivial, but from the ones already proved and those closer to the thing you are proving. There is no need to go back to the axioms if you know, and can reference, the proof of step N-1; just go from there.

There is also this 'misconception' that mathematical theorems follow from the axioms. They do, of course, but the axioms were chosen just right so that things that were already working would still work, with some weird consequences like the axiom of choice.


There are a lot of "folk theorems" in mathematics, and things which are simply considered obvious or common knowledge relative to a conjecture in question, without anyone being able to actually cite a proof. Mathematical proof is really just for convincing other mathematicians; there is no need to prove things which are considered obvious.


None of what you said is true except for

> Mathematical proof is really just for convincing other mathematicians

which is precisely the point of a proof, but 'convince' means something different than you seem to think it means (in the field of mathematics)


>There are a lot of "folk theorems" in mathematics

Can you name some? Preferably in analysis.


I found this; unfortunately it is about category theory: "manuscripts [...] that are cited but never became widely available"

https://ncatlab.org/nlab/show/list+of+lost+manuscripts+in+ca...


Any particular theorem? This links to hard- or impossible-to-access materials, but do any of them actually contain a theorem whose proof is not accessible in some form?

Furthermore, is any of them a relevant theorem that has been cited many times?


From the link I posted:

John Beck's Monadicity Theorem is cited in "many sources" but a manuscript has been found and "The proof is reproduced for instance in (MacLane, p. 147-150, Riehl 2017, 5/5/)".

Fred Linton cites Michael Barr for a "Universal property of the Kleisli construction" and no copy has been found.

I don't know enough about the subject to say how difficult these proofs would be to recreate or whether they exist elsewhere, but these definitely seem to be specific theorems.

I'd like to see a specific theorem in any field that was once proven and accepted but all existing proofs of which have been lost. That would be extremely tantalizing.


I don't find this convincing at all. None of this shows that there are generally accepted theorems without proofs.

The first one isn't lost at all; the second is just a single lost citation.

All of this is very far from mathematicians building upon the shaky grounds of mythical theorems. Also, in all my mathematical experience (going up to recent research in analysis) I have never encountered something like this. In every textbook I ever read, everything was either proven or cited to other works, which, every time I checked, included the proof.


"Everything was proven" ... everything to satisfy a mathematician, not a logician. Formal proofs are generally very unwieldy, because informal proofs treat all the most obvious parts as obvious. In some cases these parts are not even trivial to prove formally. Last time I checked there was still no formal version of Cantor's diagonalization argument.


> Last time I checked there was still no formal version of Cantor's diagonalization argument.

Formalization? Like a computer checked proof?

Lean: https://leanprover-community.github.io/mathlib_docs/logic/fu...

Coq: https://github.com/bmsherman/finite/blob/63706fa4898aa05296c...

Isabelle: https://isabelle.in.tum.de/website-Isabelle2009-2/dist/libra...
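For reference, the core of the diagonal trick is tiny once abstracted away from the reals; a minimal Lean 4 sketch (the theorem name is mine, not from any library):

```lean
-- Diagonal argument, abstracted: no family f of predicates on α
-- contains its own "anti-diagonal" predicate fun x => ¬ f x x.
theorem no_anti_diagonal {α : Type} (f : α → α → Prop) :
    ¬ ∃ a, ∀ x, f a x ↔ ¬ f x x := by
  intro ⟨a, ha⟩
  have h := ha a                      -- instantiate at the diagonal point x := a
  have hn : ¬ f a a := fun hfa => h.mp hfa hfa
  exact hn (h.mpr hn)
```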


> Formalization? Like a computer checked proof?

Yes, that would be an example, though not all formalizations are written in software. I was originally talking just about standard formal logic, which is just first-order predicate calculus. Proof assistants use much more powerful systems in order to gain the same expressiveness as the natural language used in informal proofs. (Which is kind of strange: in using natural language, mathematicians don't seem to be using some advanced type theory.)

So I can't actually read these proof assistant languages, as they use some advanced type-theory-based language. Just a few comments, then.

Lean: It looks suspicious that this seems to involve just one or two lines. Are they actually proving anything here? It doesn't look at all like Cantor's diagonal argument.

Coq: This looks better, at least from the description, in that it actually looks like a proof (Coq actually has a Qed keyword!). Though they, unlike Cantor, don't talk about real numbers here, just about sequences of natural numbers. Last time I read a discussion about this, it was mentioned that exactly this was a problem for the formal proof: every natural number sequence (with a decimal point) corresponds to some real number, but real numbers don't necessarily have just one decimal expansion, e.g. 0.999... = 1. So real numbers and sequences can't straightforwardly be identified.

Isabelle: That seems to be a formalization of Cantor's powerset argument, not his diagonal argument.

Overall, this highlights a major problem with the formalization of existing proofs. There is no way (at least no obvious way) to "prove" that a formal proof X actually is a formalization of some informal proof Y. X could simply be a different proof, or one which is similar but uses stronger or question-begging assumptions, etc. One such example is informal proofs (like Cantor's) which didn't take place in any axiomatized system, and where it isn't clear how much you can assume for a formal version without begging the question.


You have literally been given a fully formal proof of Cantor's diagonalization.

The Lean proof seems correct. I can't read Lean well, but it appears to just be a formalization of the standard proof. It at least constructs the set needed for the proof.

>Proof assistants use much more powerful systems in order to gain the same expressibility as the natural language which is used in informal proofs. (Which is kind of strange: In using natural language mathematicians don't seem to be using some advanced type theory.)

Look up "Curry-Howard isomorphism".

>Overall, this highlights a major problem with formalization of existing proofs. There is no way (at least no obvious way) to "prove", that a formal proof X actually is a formalization of some informal proof Y.

And? This seems irrelevant to the question at hand. Surely some textbook contains an insufficient proof which couldn't be formalized without more steps. Why does that matter? The question is still whether "folk theorems" are real. Certainly Cantor's theorem is no such folk theorem, as you have just been given a fully formal proof.

>Though they, unlike Cantor, don't talk about real numbers here

Cantor's theorem usually refers to the existence of uncountable sets, which can be proven by showing that there is no bijection between N and R, but also by showing that there is no bijection between a set and its powerset.

>Every natural number sequence (with a decimal point) corresponds to some real number, but real numbers don't necessarily have just one decimal expression, e.h. 0.999...=1. So real numbers and sequences can't straightforwardly be identified.

This is only relevant if you say that the real numbers are "infinite strings of digits", which is not a formal definition and therefore not used in mathematics.


> The question still is whether "folk theorems" are real. Certainly Cantor's theorem is no such folk theorem, as you have just been given a fully formal proof.

I didn't say that Cantor's theorem is a folk theorem, only that some proofs aren't even trivial to formalize. Many other things are trivial to formalize, but they still require filling in a lot of obvious steps. It often wouldn't be practically feasible to prove them with a standard first-order calculus; otherwise proof assistant languages wouldn't need higher-order systems with advanced type theory.

> Cantors theorem usually refers to the existence of uncountable sets, which can be proven by showing that there is no bijection between N and R, but also by showing that there is no bijection between a set and its powerset.

Yes, but I was talking about his diagonal argument, not his powerset argument. That's a different proof from a different paper.

> This is only relevant if you say that the real numbers are "infinte strings of digits", which is not a formal definition and therefore not used in mathematics.

Well, I'm not saying that, but the Coq proof seems to assume it. That was the problem I was talking about.


I don't get what the distinction is.

>because informal proofs treat all the most obvious parts as obvious.

No, they treat the most obvious parts as previously proven (which they are). That is an enormous difference.

I really have no idea what the difference between a "fully formal" proof and the standard proof would be, except that the latter includes verbose commentary instead of logical symbols. But saying "since a is true and we formerly proved that a implies b, we know b is true" instead of "(a and (a => b)) => b" seems like only a pedantic difference.
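To make the comparison concrete, the cited formula is itself a one-line Lean term (a sketch, with illustrative names):

```lean
-- (a and (a => b)) => b, written and checked formally:
example (a b : Prop) : a ∧ (a → b) → b :=
  fun ⟨ha, hab⟩ => hab ha
```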

>Last time I checked there was still no formal version of Cantor's diagonalization argument.

AFAIK it is part of the standard libraries of multiple proof assistants, which means it is as fully formal as it can ever be.


> >because informal proofs treat all the most obvious parts as obvious.

> No, they treat the most obvious parts as previously proven (which they are).

Well, I don't agree here. If you look at the history of mathematics, it often progressed in the opposite direction, where many "basics" were proven very late, e.g. by Dedekind, Peano, and Russell/Whitehead. Not to mention all the details which are now getting expressed in languages for proof assistants. Many of these details couldn't even have been proved earlier, because the necessary formal systems were only invented recently.

> I really have no idea what the difference to a "fully formal" and the standard proof would be

Just look at actual formal proofs. They are very different from informal ones. To approximate the natural language of informal proofs, proof assistant languages actually have to use logics with advanced type systems, which most people who just write informal proofs wouldn't even understand.

> But saying "since a is true and we formerly proved a implies b, we know b is true" instead of "(a and (a => b)) => b", does only seem to be a pedantic difference.

It's more like they write "A, therefore B" (or even "A, then one easily sees that B"), where the step from A to B is considered obvious by the person who wrote the proof (a reviewer might reject it and ask for more intermediate steps if they feel the "A, therefore B" step isn't obvious), and where usually no commentary like "and we formerly proved A |= B" is included. Proofs are in fact not plastered with such commentary, and if they were, people would expect a reference to where this was proven previously. But they don't expect references for obvious steps, because they are obvious, not because they necessarily expect them to have been proven previously somewhere. That wouldn't even make sense, since many theories were not, or still are not, axiomatized, so the respective proofs have to rely on things that are considered obvious.

I already cited Yehuda Rav's 1999 paper "Why Do We Prove Theorems?"[1][2] elsewhere in this thread, and it's again relevant here.

From p. 13:

> A proof in mainstream mathematics is set forth as a sequence of claims, where the passage from one claim to another is based on drawing consequences on the basis of meanings or through accepted symbol manipulation, not by citing rules of predicate logic.4 The argument-style of a paper in mathematics usually takes the following form: '... from so and so it follows that..., hence...; as is well known, one sees that...; consequently, on the basis of Fehlermeister's Principal Theorem, taking in consideration α, β, γ, ..., ω, one concludes..., as claimed'. Why is it so difficult, in general, to understand a proof? Why does one need so much background information, training and know-how in order to follow the steps of a proof, when the purely logical skeleton is supposed to be nothing more than first-order predicate calculus with its few and simple rules of inference?

From p. 14f:

> In reading a paper or monograph it often happens—as everyone knows too well—that one arrives at an impasse, not seeing why a certain claim B is to follow from claim A, as its author affirms. Let us symbolise the author's claim by 'A —> B' . (The arrow functions iconically: there is an informal logical path from A to B. It does not denote formal implication.) Thus, in trying to understand the author's claim, one picks up paper and pencil and tries to fill in the gaps. After some reflection on the background theory, the meaning of the terms and using one's general knowledge of the topic, including eventually some symbol manipulation, one sees a path from A to A_1, from A_1 to A_2, ..., and finally from A_n to B. This analysis can be written schematically as follows:

> A —> A_1, A_1 —> A_2, ..., A_n —> B.

> Explaining the structure of the argument to a student or non-specialist, the other may still fail to see why, for instance, A_1 ought to follow from A. So again we interpolate A —> A', A' —> A_1. But the process of interpolations for a given claim has no theoretical upper bound. In other words, how far has one to analyse a claim of the form 'from property A, B follows' before assenting to it depends on the agent.

[1] https://doi.org/10.1093/philmat/7.1.5 [2] http://sgpwe.izt.uam.mx/files/users/uami/ahg/1999_Rav.pdf


> (The arrow functions iconically: there is an informal logical path from A to B. It does not denote formal implication.)

> In other words, how far has one to analyse a claim of the form 'from property A, B follows' before assenting to it depends on the agent.

Or just prove the implication instead of inventing 'informal logical paths'


The "informal logical path" proves the informal implication, but what counts as a proof of this implication here can be agent dependent. If we talk about proving formal implication, then we have still the problem of deduction[1][2][3]. In short, natural language doesn't distinguish between "inference rules" and expressing this rule in a sentence and using it as a premiss. Which leads to an infinite regress, unless the agent just stops at some point, unlike the tortoise in Carroll's story.

[1] https://doi.org/10.1093/mind/IV.14.278 [2] https://en.wikipedia.org/wiki/What_the_Tortoise_Said_to_Achi... [3] https://www.jstor.org/stable/2253263


I agree that proofs assume for readability that the reader fills in some blanks. I might even go so far as to say that this informality can be dangerous.

Still, the question is: are there "folk theorems" for which no proof exists that could be written down fully formally?


I said there are accepted statements for which nobody can "cite" a proof, not that these statements are not provable in principle.


I don't think this is correct. Cantor's diagonal argument follows easily from Lawvere's fixed-point theorem https://ncatlab.org/nlab/show/Lawvere%27s+fixed+point+theore... and both are quite easy to formalize.


That seems to be a different proof which implies Cantor's theorem, but not a formalization of Cantor's diagonal argument itself.


It is a formalization of Cantor's diagonal argument - in fact, of "diagonal arguments" more generally. See http://arxiv.org/abs/math/0305282 for an expository treatment.


Thanks, looks interesting.


Our discrete mathematics professor leaned heavily on proofs, but that term he also introduced, as an experiment, pop quizzes on proofs of algorithms and other mathematical constructs, which set my anxiety ablaze. You either had the intuition for that particular problem or you didn't. Studying would not help you much. The success rate on the quizzes was very low, surprising no one.


Classes like this were funny. I often found that people who just knew the answer immediately had a lot of difficulty doing proofs, and the people good at doing proofs usually didn't arrive at the solution very quickly or seem to have an intuitive understanding of the problem at first.

Some people are just wired differently.


That's interesting. In your context, what is "knowing the answer"? To me it seems like they "knew" a given statement was true but didn't know how to prove it, which makes me wonder how they knew.


The teacher might start off a class with a question like "what is the most efficient way to satisfy this problem given these constraints?" Some people would know the answer immediately but could hardly do a proof for it at all. Others could find a proof in class almost every time, but never really saw the answer to the problem until they'd sat on it for a while and chewed it over.

Can't really give an example question; it's been well over a decade (closer to two) since I took it.


As someone who tutored a Formal Reasoning class for a top philosophy department, I don't think we should be teaching anyone logic. It's rigid, it's made up, and it doesn't help much for most things. I think the way proofs are written in math is cool, and I think the fact that mathematicians are, in the end, aware that it is impossible to rigidly demonstrate anything is perfectly fine, and it makes their proofs a lot easier to read and interpret. Formal logic should never be treated as primary or ever used as such; it will only hamper mathematics. I understand the need for a rigorous language to describe mathematics, and I understand why people like it so much and why it has found so much use. It is still not the groundwork (because there is none).


Would you agree to at least list some of the common natural deduction inference rules in an introductory class, and do a few exercises with them? Like, for propositional logic: modus ponens, modus tollens, contraposition, transitivity of the conditional, conditional proof, proof by contradiction, de Morgan's laws, distributive laws, etc., and some inference rules for quantified predicate logic?

I agree that full formal logic could be too much irrelevant information, but I think many experienced people underestimate how non-obvious the basic inference rules are to novices, and how confused people are at just being told to produce "convincing arguments". The important part is that the argument has to be truth-preserving, unlike a "convincing argument" or "proof" an attorney might give in court. It is very hard to understand this difference if one has only a hazy idea of logic and of deduction vs. induction.
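As a taste of what such an exercise might look like, here is one of the listed rules, de Morgan (the constructively valid direction), done rule by rule in a Lean sketch (names are illustrative):

```lean
-- ¬(A ∨ B) entails ¬A ∧ ¬B, one inference rule per line:
example (A B : Prop) : ¬(A ∨ B) → ¬A ∧ ¬B := by
  intro h                          -- conditional proof: assume ¬(A ∨ B)
  constructor                      -- ∧-introduction
  · intro ha; exact h (Or.inl ha)  -- ¬-introduction: A would give A ∨ B
  · intro hb; exact h (Or.inr hb)  -- likewise for B
```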


I think all proofs are just "convincing arguments." Computation would make that seem meaningless, but computers don't operate based on truth; they are repetition machines, just like you and I. I've been reading an introductory topology book recently (Munkres), and the entire first chapter explicates, at a relatively high level of detail, all the things you describe and more. But it's necessary, in some sense. A lot of the set-theoretical rules were employed in topology to create more generalized forms of analysis. You have to learn them because otherwise it's very difficult to understand the language that's being used. But as I've argued elsewhere, if we are aware that it's all based on interpretation, then I think we need to be a bit anarchistic and let students have the freedom to break those rules without knowing it, if they might thereby produce fascinating proofs and arguments. If you give people a schematic for interpretation, they're just going to follow it, and they might have difficulty deviating from it at all. The spark of human creativity comes when people are forced to, as they say, re-derive certain theorems, and while doing so they might accidentally discover some new implication of the logic that was unseen before, and produce something entirely new.


Isn't (classical) geometry basically about proving theorems from a few axioms and postulates? I.e., logic, proof, and deduction.

A lot of people are exposed to geometry in high school. It might not be universal, but it's pretty close. IIRC, Euclid's Elements was the second most read book, after the Bible. Many cities even have a street named after the guy.

I know, it's not the same as a formal logic course. But it's not that far off, either. There is some preparation in earlier years of schooling.


I like your analogy with riding a bike: If you want to ride a bike, what is more important, learning how to ACTUALLY DO IT, or to learn the physical and mechanical rules underlying it? Furthermore, technical knowledge will always be incomplete (and that applies also to logic in a very technical sense), while once you learnt how to ride a bike, your knowledge, in a way, is complete.


A lot of things can be learned simply by practice, without any theory, but many things at least benefit from learning them in a more principled way. That's not feasible for riding bikes, since this relies on the motor cortex, which doesn't understand or benefit from explicit knowledge about the physics of bike movement. Writing proofs is more intellectual; it benefits from principled understanding.


Finding theorems and proofs is not intellectual at all, it is like riding a bike. You are probably not going to prove anything new and interesting by approaching it from an intellectual angle. You CAN turn it into an intellectual exercise, and at some point it is very beneficial to do so, for meta-mathematical insights. Just like it is beneficial to learn the physics and mechanics of cycling, if you want to construct bikes. It's also very helpful when you are wondering about the technical veracity of your theorems and proofs. It's not that helpful if you want to FIND those theorems and proofs in the first place. Starting out with a fixed rule system also has the danger of limiting your imagination about what's possible, given that no fixed rule system is ever going to be enough.


Proving things with formal calculi still relies on something like creativity, which itself isn't captured by a precise algorithm, similar to informal proofs. The advantage of learning formal proof is, I claim, purely didactic, as it gives a clear idea of what a valid inference step is in the ideal case. I don't think there is really a danger of limiting your imagination, as you will probably stop thinking in formal terms as soon as you are good at doing informal proofs.

For example, the above Stanford document on proof basics doesn't even mention elementary inference rules like modus tollens. It briefly refers to "the contrapositive", but without explaining what the contraposition rule even is. It was clearly written by someone who regards all these as too obvious to be mentioned in such a document (he only explains proof by contradiction), but they are not obvious to novices.
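For reference, the two rules in question are easy to state side by side; modus tollens and contraposition are two views of the same fact:

```latex
% Modus tollens: from A -> B and not-B, infer not-A.
\frac{A \to B \qquad \neg B}{\neg A}

% Contraposition: an implication is equivalent to its contrapositive,
% which is why "proof by contrapositive" works at all.
(A \to B) \;\equiv\; (\neg B \to \neg A)
```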

Anyway, your comment reminds me of an interesting paper[1] by Yehuda Rav, where he defends the autonomy of informal proof with original arguments against the "formalists". You might be interested; I think your intuitions are going in a similar direction.

[1] http://sgpwe.izt.uam.mx/files/users/uami/ahg/1999_Rav.pdf


Thanks for the link, looks like an interesting paper!

The thing is, formal rules can only model what you already understand. A formal rule can tell you exactly WHAT and HOW, but not WHY. But especially with basic proof rules like modus ponens, that is essential. So intuitive understanding is prior to formal rules. Of course, lots can be gained from the interplay of intuition and formality, I am the last one to deny that.

For me, that is an important issue, because I am a big proponent of doing logic on the computer, which necessarily is formal. Nevertheless, the goal here is to make logic on the computer as intuitive as possible, something many people in the field don't seem to have a desire for. I also don't want the next generation to think exclusively in formal systems, in terms of, oh, if my formal system doesn't allow that, what I want to do must be wrong, or at least stupid. There is already enough of that going around with type theorists.


> Proving things with formal calculi still relies on something like creativity, which itself isn't captured by a precise algorithm, similar to informal proofs.

That's because first-order logic (and any system that extends it) is formally undecidable, so there can be no algorithm that, for a given statement, determines whether it is a valid theorem. If you restrict yourself to propositional logic, you gain decidability but, in the general case, it's still extremely inefficient (unless P=NP).
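To make the propositional case concrete: deciding validity there amounts to a truth-table check over all 2^n assignments, which is exactly where the exponential cost shows up. A minimal sketch (the function and formula names here are mine, purely illustrative):

```python
from itertools import product

def is_tautology(formula, variables):
    """Decide a propositional formula by brute force: test every
    truth assignment. Decidable, but 2^n assignments for n variables."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if not formula(env):
            return False  # found a falsifying assignment
    return True

# Modus ponens as a propositional schema: ((A -> B) and A) -> B
mp = lambda e: not ((not e["A"] or e["B"]) and e["A"]) or e["B"]
print(is_tautology(mp, ["A", "B"]))  # True
```

Once quantifiers over infinite domains enter the picture, this enumeration strategy is no longer available, which is one intuition for why first-order validity is undecidable.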

Of course, in order to prove these things, you have to formally develop what logic is.


Yeah. Though if one is loose with the word "algorithm" one can count an advanced machine learning model (a few years from now) as such an algorithm. It wouldn't be able to prove or disprove every given first-order statement, but perhaps the vast majority below a certain length. Maybe it could decide every conjecture we are interested in.


I still prefer the explanation that I prepared when I was teaching at Dartmouth College.

https://docs.google.com/document/d/1_uwl3WDZk_BxNOUL7W0FiPMM...

I literally gave everyone that handout and told them, "To make sense of it, you're all going to do the next proof. I'll just prompt you." They thought this was impossible. But I told them to trust me and I began.

I went around the room. I asked one person what the next step in the flowchart was. I asked the next person to do it. I just wrote down what they said. Kept going until they had produced a complete proof of a result that, at the beginning, they did not know why it might be true.

The best comment I got from that class later was, "Proofs are easy. It is kind of like filling out a shopping list."


This is fantastic. Perhaps similarly, I personally found it much easier to complete math problem sets after I began to write out an explicit list of steps of what to do.

For example, I broke down problems with to-dos, such as:

1. Find the definition for what math_term_X means in a particular problem.

2. (For breaking down part of the problem): Figure out how to show that a particular object is less than or equal to another object.

3. Write down headings for each case I need to prove.

...and so on.

Writing down explicit steps was far more practically helpful to me than my previous conception of problem-solving from the quote about how Feynman solves problems (that is: "Write down the problem, think real hard, write down the solution"). Some people may not need to write down steps, but I was personally able to learn a lot more with a specific, more verbalized approach.

It's very neat and helpful to have a flowchart suited to any general problem, which I'll try out in addition to my current approach of writing down a list of to-dos for solving specific problems. Thanks a lot for sharing.


Maybe it's worth one more section near "Any Ideas On Why It Is True?", something like:

Are you beginning to doubt whether it's true?

Try to think of a counterexample.

Is there something that keeps getting in the way of a counterexample working? Can you prove that that always happens?


Read the second paragraph of the description of "Any Ideas On Why It Is True?"

:-)


Nice and concise! I just started Proof and the Art of Mathematics by JD Hamkins[0], based on pg's recommendation[1] that it "is a beautiful book in both senses. It's both beautifully written, but also physically beautiful, thanks to its many illustrations, which I was surprised to hear were made by the author himself." Chapter 4 on induction improved my attitude.

[0] https://www.amazon.com/Proof-Art-Mathematics-Examples-Extens...

[1] https://twitter.com/paulg/status/1662065331727155202


I found Johannes Riebel's thesis [1] an excellent introduction to the formalization of Zermelo Fraenkel-Set theory, complete with detailed examples of formal proofs.

[1] https://www.ingo-blechschmidt.eu/assets/bachelor-thesis-unde...


I did my undergraduate degree at Wits University in Johannesburg. My Real Analysis course was SO boring. The lecturer basically wrote on the board:

Lemma: ......

Proof: ....

etc.

Theorem: Let epsilon < K, ....

Proof: ....

--QED--

which we all dutifully copied into our notebooks. Very uninspiring.

But when I got to the University of Waterloo for graduate studies, I had a real competitive advantage over my peers for the theory courses I took: I knew what a proof was, and how to do one.


I think the problem a lot of people have (even math majors) is expecting the lecture to be the first introduction to the topic. It's like taking a literature class and going to lecture without doing the reading. Reading (or even just skimming) the next section/chapter of the book before the lecture allows you to focus and ask questions on the parts of the material you didn't understand during the lecture. I very rarely wrote down full proofs in my notes during the lecture. I focused on the lecture itself and wrote down the pieces that I wanted to remember.


>which we all dutifully copied into our notebooks. Very uninspiring.

I had 5 years of that, a very enjoyable time.


There's also "proof by picture", in which you can hastily attempt to prove or disprove something by drawing a suitable diagram in your exam booklet. You have to be darn sure of what you're doing though... ;)


And another fun and powerful technique is "proof by intimidation" (https://en.wikipedia.org/wiki/Proof_by_intimidation), in which you don't have to know what you're doing, but other people have to think you do.


Ironically, when rigorous proofs were invented by the Greeks, Euclid's "proof by picture" was no less rigorous than modern formal logic.


Right now this article is at no. 10 on the front page, while no. 11 is "Book of Proof (2018)".

This is the third time I notice such a coincidence on HN. Is this something that HN does on purpose? Like, does the site match posts with similar titles on the front page to encourage discussion in both? Note that (at the time of writing) this article has 91 points and the other one 45, so the two articles are not ordered by upvote count.

Also, I need a catchy name for this phenomenon (just 'cause I want to name the folder where I keep screenshots documenting it). Suggestions?


Over the years I’ve noticed it too, the time lag between similar posts ranging from a few hours to a day or so. My simple explanation is that a person reading the original post either remembers a related nugget of Internet that they think people would also find interesting or else they find it while googling, inspired by the initial post.


I was not aware of the other post when posting this. I stumbled upon this PDF coincidentally and thought it would be valuable for HN.


The Book of Proof post was made 2 hours after this one, you wouldn't have been aware of it. It happens a lot. See a topic, and people post related things because it jogs a memory, they found it while digging deeper, or as a form of indirect response.


>Is this something that HN does on purpose?

No, the HN algorithm is extremely simple and does not consider syntactic similarities of the titles.


I've noticed similar themed posts appearing in clusters constantly on HN. It's way too commonplace to be coincidental.


No it is coincidental. Look at the algorithm yourself.


It doesn't mean it's the algorithm, it means users could be tactically posting to ride a wave of similar interest. That's all.


Yes, you are right. Potentially the effect is partly due to users stumbling over something related after seeing the first post and deciding to post that as well.


It's intriguing to me that someone who made it to Stanford might only just be getting exposed to the basics of proofs like this?


It makes some sense to have a leveling-of-expectations course for freshmen that both inflates their GPA and says "Okay, whatever you were told in high school, we expect you to use this."


Affirmative action and many related criteria mean that students aren't just admitted by considerations of ability.


For some reason I thought it would be about Coq or other similar structured proof assistant language.

