Previously, I thought certain math topics were "hard" (e.g. category theory) while others were supposed to be "easy" (e.g. Calc I). I beat myself up for struggling with the "easy" topics and believed this precluded me from ever tackling the "hard" ones.
I was thirty-something years old when I finally realized math has a well-documented maturity model, just like emotional maturity or financial maturity. This realization inspired me to go back and take a few math classes that I had previously labeled as "too hard," with the mindset that I was progressing my math maturity.
My point is that choosing an "age-appropriate" (in terms of math maturity, not actual calendar age) textbook is important. I also find it extremely helpful to chat with people who are more mathematically mature than I am, in the same way it's helpful to seek advice from an older sibling.
This was very much my experience with computer science. When I first studied computer science in middle school at age 13, I could only understand simpler algorithms like quicksort; I simply couldn't grasp dynamic programming. When I studied it again at age 19 (after having learned a couple more programming languages, like C++, Python, and Haskell, and taken some classes in mathematical proofs), it became much easier to understand. And then it was around age 22 when I could solve competition-style dynamic programming problems with ease.
This sounds somewhat similar to my experience in mathematics. I realized partway through college that I would only gain an innate understanding of a topic about two semesters into a harder one, i.e., I only had a firm grasp of Calculus I once I took Calculus III. I would still do well in those courses, but I would have to go through the motions at times. Alas, this meant the tail end of my education was doomed to some courses I only sort of understood. Fortunately, I didn't major in it.
I am not sure what you are trying to say, because your message and your posted link seem to be at odds with each other.
Mathematical maturity has everything to do with practice and experience and nothing to do with age.
Category Theory is easy because it starts from nothing, literally. You can learn it at any age and with almost no prior education. Same with various formal logics.
You can't study or use in any way the theory of Calabi–Yau manifolds unless you have mastered all of its prerequisites.
Certainly the advice not to choose textbooks you don't understand is spot on, however. Unfortunately (?) most textbooks assume quite a bit of background, so you often don't have much choice in this regard.
> Category Theory is easy because it starts from nothing, literally. You can learn it at any age and with almost no prior education. Same with various formal logics.
I disagree; you need a capacity for abstract thought for abstract topics, most especially category theory. Most people have quite a bit of difficulty with abstraction: you either have the aptitude for it, or you work your brain hard enough that it becomes somewhat easier. Children and younger people especially have difficulty understanding non-concrete topics.
> Mathematical maturity has all to do with practice and experience and nothing to do with age.
That's OP's point.
"Age-appropriate" was put in quotes because it was referring to a metaphorical "mathematical age" to tie it to the concept of mathematical maturity.
And, indeed, certain books require more mathematical maturity to get through than others, even though the prerequisites may be minimal. You'll see this often explicitly described in the preface of textbooks and reviews of those textbooks.
They mentioned studying certain books to develop mathematical maturity because that's where the practice and experience happen. Calculus is one such course used as part of this process, as are many courses intended as a first exposure to proofs like linear algebra and discrete math. Some might use the Moore Method with point-set topology.
While beginning calculus students often pick up derivatives and integrals (and the associated formulas) easily, the delta-epsilon definitions of limits and continuity are a well-known stumbling block for many. I've been told that the difficulty stems from that being the first place math beginners really see nested quantifiers: (forall epsilon)(exists delta)(...). In logic, though, nested quantifiers are fundamental. I don't know what happens if someone tries to study logic without first having studied calculus. Maybe it's a good idea, but few people do it that way.
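For reference, the definition in question, with the nested quantifiers spelled out:

```latex
% The epsilon-delta definition of a limit; the nesting
% "for all epsilon there exists delta such that for all x ..."
% is what trips up beginners.
\lim_{x \to a} f(x) = L \iff
(\forall \varepsilon > 0)(\exists \delta > 0)(\forall x)
\bigl( 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon \bigr)
```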
> I don't know what happens if someone tries to study logic without first having studied calculus.
When I was in college, the Philosophy department offered this course. It was considered an easy way to get a general education math credit without needing to be good at math. It was a really enjoyable course[0] that put me on the path to becoming a computer programmer. It occasionally comes in handy[1].
Delta-epsilon is just an annoying, unenlightening technicality, not the essence of real analysis. Surreal numbers (infinitesimals) solve the problem more elegantly.
To each his own, but epsilon-delta is my go-to example of formalizing an intuitive concept ("gets closer and closer"), which is a high-level mathematical skill.
The intuition and the formalism are presented together (at least, they should be!). To learn the role of epsilon and delta, the student needs to jump back and forth, finding the correspondences between equations and the motivation. This is a skill that needs practice; this was one of the first places I found the equations dense enough that I couldn't just "swallow them whole".
(The earliest one I remember is the quadratic formula, which I first painfully memorized as technical trivia. It took me a couple of years to grasp that it was completing the square in general form. Switching between the general and the specific is another skill that you develop.)
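For anyone who never made that jump, a quick sketch of completing the square in general form:

```latex
% Completing the square on the general quadratic (a \neq 0)
% recovers the memorized formula:
ax^2 + bx + c = 0
\;\implies\; x^2 + \frac{b}{a}x = -\frac{c}{a}
\;\implies\; \Bigl(x + \frac{b}{2a}\Bigr)^2 = \frac{b^2 - 4ac}{4a^2}
\;\implies\; x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
```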
Surreal analysis is sort of a thing but it is quite far out there (e.g. you can have transfinite series instead of merely infinite ones). Maybe you meant nonstandard analysis (NSA), which is real analysis done with infinitesimals, but the machinery justifying it is way outside of what you'd see in even a theory-oriented intro calculus class. There was an intro calculus text (Keisler, 1976) that used infinitesimals and NSA. I don't know how it dealt with constructing them though.
The problem is that epsilon-deltas have very little practical use outside of theoretical proofs in pure mathematics. Even for cutting-edge CS/statistics fields like high-level machine learning, most of the calculus used consists of existing formalisms on multidimensional statistics and perhaps differential equations. Aside from Jensen's inequality and the mean value theorem, I have never seen any truly useful epsilon-delta proofs in any of the ML papers with significant impact. It's perhaps mentioned once in passing when teaching gradient descent to grad students.
> Even for cutting-edge CS/statistics fields like high-level machine learning, most of the calculus used consists of existing formalisms on multidimensional statistics and perhaps differential equations.
If you mean experimental work, then sure, that's like laboratory chemistry: you run code and write up what you observe. If you're trying to prove theorems, you have to understand the epsilon-delta stuff even if your proofs don't actually use it. It can be somewhat abstracted away by the statistics and differential equations theorems that you mention, but it is still there. Anyway, the difficulty melts away once you have seen enough math to deal with the statistics and differential equations and have some grasp of high-dimensional geometry. It's all part of "how to think mathematically" rather than some particular weird device that one studies and forgets.
I agree, and including delta-epsilon proofs in Calculus 1 seemed like a way for the curriculum authors to feel good that they were “teaching proof techniques” to these students, when in reality they were doing no such thing. I later did an MS in math and loved the proofs, including delta-epsilon proofs…after taking a one-semester intro-to-proofs class that focused just on practicing logic and basic proof techniques.
If you want to do "exact" computation with real numbers (meaning, be able to rigorously bound your results) you just can't avoid epsilon-delta reasoning. That's quite practical, even though in most applied settings we just rely on floating point approximations and try to deal with numerical round-off errors in a rather ad-hoc way.
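As a hedged illustration of what "rigorously bound your results" can look like in code, here is a minimal interval-arithmetic sketch in Python; the `Interval` class is hypothetical, not any particular library, and a real implementation would also direct floating-point rounding outward at each step:

```python
# Minimal interval-arithmetic sketch: every value is carried as a
# [lo, hi] pair guaranteed to contain the true real number.
# Toy illustration only; production libraries also control the
# floating-point rounding direction.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other: "Interval") -> "Interval":
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other: "Interval") -> "Interval":
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

# x is known only to within 1e-6; the bounds propagate rigorously
# through y = x^2 + 2.
x = Interval(0.999999, 1.000001)
y = x * x + Interval(2.0, 2.0)
print(y)  # roughly Interval(lo=2.999998, hi=3.000002)
```

Each operation returns bounds guaranteed to contain the true real result, which is essentially epsilon-delta reasoning made executable.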
I am not aware of any good one, but I realized you could probably mechanically extract such a map from Lean's mathlib[0][1].
Since Lean builds everything from scratch, this should be doable, though Lean builds everything on top of type theory, which is not the only possible choice. Different foundations will result in a different graph.
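A rough sketch of that mechanical extraction, assuming a local mathlib checkout: this only scans the `import` headers of the `.lean` source files, so it yields a module-level graph; theorem-level dependencies would need Lean's own metaprogramming tools rather than a textual scan.

```python
# Rough sketch: build a module-level dependency graph from a local
# mathlib checkout by scanning the `import` lines of .lean files.
# The path below is hypothetical.
import re
from collections import defaultdict
from pathlib import Path

def mathlib_import_graph(root: str) -> dict[str, set[str]]:
    graph: dict[str, set[str]] = defaultdict(set)
    for path in Path(root).rglob("*.lean"):
        # Module name from the file path, e.g. Mathlib/Order/Basic.lean
        module = ".".join(path.relative_to(root).with_suffix("").parts)
        for line in path.read_text(encoding="utf-8").splitlines():
            m = re.match(r"^import\s+([\w.]+)", line)
            if m:
                graph[module].add(m.group(1))
    return dict(graph)

# Usage (hypothetical path to a local checkout):
# graph = mathlib_import_graph("/path/to/mathlib/Mathlib")
# print(len(graph), "modules")
```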
Also the best way to learn math is probably not by following this sort of graph, it would be far too abstract and disconnected from both the real world and usual practical applications.
Garrity's All the Math You Missed book, mentioned elsewhere in the comments, draws a nice map of subjects, along with little introductions and book recommendations. The map is good for continuous mathematics, but IMHO fails to consider logic and type theory, which is a bit odd given the separate chapter for category theory. It also does not do proper justice to computation and clumps everything together under the label "algorithms".
Good alternatives are The Princeton Companion to Mathematics by Gowers and Mathematics: Its Content, Methods and Meaning by Aleksandrov, Kolmogorov, et al. Those present much more detailed maps, so YMMV.
> Category Theory is easy because it starts from nothing, literally.
It has virtually no prerequisites, at least in classical mathematics. But I wouldn’t call it ‘easy’ (indeed, many proficient in elementary calculus and so on find it very hard). If you study category theory with no knowledge of any of the concepts it’s designed to abstract it’s not going to make any sense and the whole exercise is pointless. You may be able to follow it and complete exercises, but you won’t actually grok it.
It's sufficiently general as to be approachable from all angles, but to actually understand why anything is being discussed, I think category theory requires a certain amount of background material.
Yes, mathematical maturity is a consideration, but working carefully through just one mid-college math book that is based on theorems and proofs is a reliable cure. The consideration is not really a big one, since the main goal remains: just prove the theorems.
(2) Something missing: As a grad student studying the Kuhn-Tucker (KT) conditions and the constraint qualifications (CQ), there was interest in implications among the CQs. Two of the famous CQs were (a) the Zangwill and (b) the KT, but the implications between them were "missing". So, that was a problem, a theorem needing proof. My approach was to look for a counterexample among wildly goofy sets, e.g., the Mandelbrot set or Brownian motion. As appropriate for the KT work, both sets were closed. Hmm.... So, I needed an optimization objective function to be minimized. So, ..., soon enough: for each closed set there is a real-valued function that is zero on the closed set, strictly positive otherwise, and infinitely differentiable. Then I had a counterexample. Two weeks of work in a reading course. Published it.
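The fact relied on there is a standard one (a form of a result usually attributed to Whitney):

```latex
% Every closed subset of R^n is the zero set of a smooth,
% nonnegative function:
\text{for every closed } C \subseteq \mathbb{R}^n,\;
\exists\, f \in C^{\infty}(\mathbb{R}^n):\quad
f \ge 0 \;\text{ and }\; f^{-1}(0) = C
```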
I was doing some AI for monitoring but wanted a better approach, the "need". I used the probabilistic concept of tightness to get another approach, the basis of the "proof", widely applicable because still distribution-free (i.e., it made no assumptions about probability distributions, e.g., Gaussian). Published it.

The FedEx BoD wanted some revenue projections, wanted them so much it nearly killed FedEx. So, as in that Hacker News URL, I got some "intuition", ..., got a simple differential equation (the theorem), and solved that (the proof).

Currently, my startup needed some progress, so I formulated a suitable theorem and proved it.

A concern about such theorems and proofs: for the published ones, the check has yet to arrive.
I was eating lunch with some well-known mathematicians, and they asked what I was working on. I explained: "scheduling the fleet at FedEx, which airplanes go to what cities in what order". Immediately one of the mathematicians scoffed with contempt and said "the traveling salesman problem", as if that were the "theorem" to be proved, i.e., P = NP.
Nope: I was just trying to save FedEx some money. So, my approach was 0-1 integer linear programming (ILP) set covering; that this problem is NP-complete was next to irrelevant to me; I just wanted feasible solutions that would save money. Maybe over a year the savings would be some $millions, but each feasible solution might be $1000 above an optimal solution. At 365 days a year, I'd leave $365,000 on the table. Fine with me!!!
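For readers who haven't seen it, a generic 0-1 set-covering ILP (a sketch of the standard form, not the actual FedEx model), where a_{ij} = 1 if candidate route j serves city i and c_j is the route's cost:

```latex
% Generic 0-1 set-covering formulation: choose a minimum-cost
% collection of routes so every city is covered at least once.
\begin{aligned}
\min_{x}\quad & \sum_{j=1}^{n} c_j x_j \\
\text{s.t.}\quad & \sum_{j=1}^{n} a_{ij} x_j \ge 1, \qquad i = 1, \dots, m \\
& x_j \in \{0, 1\}, \qquad j = 1, \dots, n
\end{aligned}
```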
To the mathematician, all that consideration of money was irrelevant; he wanted to focus on P = NP, regarded that as too difficult (it still is), and thought I was foolish for working on it (I wasn't working on it). In short, I was counting the millions to be saved, not the thousands of savings to be missed.
Later there was a 0-1 ILP with 40,000 constraints and 600,000 variables. I used the IBM OSL (Optimization Subroutine Library), and in 900 primal-dual iterations of Lagrangian relaxation got a feasible solution within 0.025% of optimality. Lesson: 0-1 ILP can be a good tool in practice and can sometimes save a lot of money.
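For context on those primal-dual iterations, a generic sketch of Lagrangian relaxation (not the specific OSL setup): dualize the hard constraints Ax >= b of min { c^T x : Ax >= b, x binary } with multipliers lambda >= 0:

```latex
% Each lambda >= 0 gives a lower bound L(lambda) on the optimal
% value; maximizing over lambda tightens it, while any feasible x
% found along the way gives an upper bound, so the gap certifies
% how close to optimal a solution is (the 0.025% above).
L(\lambda) = \min_{x \in \{0,1\}^n} \; c^{\top} x
  + \lambda^{\top} (b - A x), \qquad \lambda \ge 0
```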
So the well-known mathematician and I disagreed on your "which theorems you need to prove"!!!
Of course there is the now-famous Garey and Johnson, Computers and Intractability, Bell Labs, 1979. The authors were trying to find a least-cost design for some Bell network. On pages 2-3 we see some cartoons with "I can't find an efficient algorithm, I guess I'm just too dumb." and "I can't find an efficient algorithm, but neither can all these famous people."
It turns out that by "an efficient algorithm" they meant (a) getting least-cost solutions, least down to the last tiny fraction of a penny, (b) for worst-case problems, (c) guaranteed, (d) with computer time growing no faster than some polynomial in the size of the problem. I just wanted to save FedEx some $millions a year.

The famous mathematician insisted on (a)-(d) or no savings at all.