
> Not "coding uses math", I mean it is math

> I mean equivalent in the way mathematicians do

That sounds like you're backing off from your original claim, probably because it is impossible to defend.

That you can use mathematics to describe code doesn't seem very different from using math to describe gravity, or the projected winner in an election, or how sound waves propagate.

Isn't the primary purpose of math to describe the world around us?

Then it shouldn't be surprising that it can also be used to describe programming.

In the real world, however, software engineering has nothing to do with mathematical abstractions 99% of the time.



A programmer constructs a function from one data type to another, while a mathematician constructs a function from witnesses of one proposition to witnesses of another?
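
A minimal Scala sketch of that correspondence (propositions-as-types, i.e. Curry-Howard), assuming nothing beyond the standard library; the names are illustrative only:

    // Propositions-as-types: a total function between types is simultaneously
    // a proof of the corresponding implication.
    object PropositionsAsTypes {
      // "A and B implies A" -- the proof is the first projection.
      def andElimLeft[A, B](ab: (A, B)): A = ab._1

      // "A implies (B implies A)" -- the constant function.
      def weaken[A, B](a: A): B => A = _ => a

      // Modus ponens: from proofs of A => B and A, obtain a proof of B.
      def modusPonens[A, B](f: A => B, a: A): B = f(a)
    }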

Though interpreting a CRUD app as a theorem (or collection of theorems) doesn’t result in an interesting theorem, and interpreting a typical theorem as a program… well, sometimes the result would be a useful program, but often it wouldn’t be.


Interpreting CRUD apps (or fragments of them) as theorems is interesting (given a programming language and culture that doesn't suck)! e.g. if you have a function `A => ZIO[Any,Nothing,B]`, then you have reasonable certainty that, barring catastrophic events like the machine going OOM or encountering a hardware failure (essentially, things that happen outside of the programming model), given an A you can run some IO operation that will produce a B and will not throw an exception or return an error. If you have an `A => B`, then you know that given an A, you can make a B. Sounds simple enough, but in practice this is extremely useful!
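
A minimal sketch of that reading, assuming ZIO 2.x, with UserId, Profile, and lookupProfile invented purely for illustration (only the signatures matter):

    import zio._

    final case class UserId(value: Long)
    final case class Profile(name: String)

    object SignaturesAsTheorems {
      // "Theorem": given a UserId, there is an effect with error type Nothing --
      // it cannot fail within the programming model -- that yields a Profile.
      def lookupProfile(id: UserId): ZIO[Any, Nothing, Profile] =
        ZIO.succeed(Profile(s"user-${id.value}"))

      // "Theorem": given a Profile, a String can always be produced (a plain A => B).
      def greeting(p: Profile): String =
        s"hello, ${p.name}"
    }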

It's not the type of thing that gets mathematicians excited, but from an engineering perspective, such theorems are great. You can often blindly code your way through things by just following the type signatures and having a vague sense of what you want to accomplish.

It's actually the halting problem that I find is not relevant to practical programming; in practice, CRUD apps are basically a trivial loop around a dispatcher into a bunch of simple functions operating on bounded data. The hard parts have been neatly tidied away into databases and operating systems (which, for practical purposes, you can usually import as "axioms").
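
A rough sketch of that shape, with Request/Response and the handlers invented for illustration (a real app would put a web framework and a database where the stubs are):

    // The "trivial loop around a dispatcher" skeleton.
    sealed trait Request
    final case class GetUser(id: Long)    extends Request
    final case class DeleteUser(id: Long) extends Request

    final case class Response(body: String)

    object CrudSkeleton {
      // Each branch is a simple, total function on bounded data; the hard parts
      // live behind whatever stands in for the database here.
      def dispatch(req: Request): Response = req match {
        case GetUser(id)    => Response(s"user $id")    // stand-in for a read
        case DeleteUser(id) => Response(s"deleted $id") // stand-in for a write
      }

      // The loop itself: map the dispatcher over the incoming requests.
      def serve(requests: Iterator[Request]): Iterator[Response] =
        requests.map(dispatch)
    }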


  > It's not the type of thing that gets mathematicians excited
Says who? I've certainly seen mathematicians get excited about these kinds of things. Frequently they study Programming Languages and will talk your ear off about Category Theory.

  > You can often blindly code your way through things by just following the type signatures and having a vague sense of what you want to accomplish.
Sounds like math to me. A simple and imprecise math, but still math via Poincare's description.

  > in practice, CRUD apps are basically a trivial loop around a dispatcher into a bunch of simple functions operating on bounded data
In common settings. But those settings also change. You may see those uncommon settings as not practical or useful but I'd say that studying those uncommon settings is necessary for them to become practical and useful (presumably with additional benefits that the current paradigm doesn't have).


I think we're in agreement. My comment about the halting problem was meant to refer to Rice's theorem (the name was slipping my mind), which I occasionally see people use to justify the idea that you can't prove interesting facts about real-world programs. In practice, real-world programming involves constantly proving small, useful theorems. Your useful theorem (e.g. `Map[User,Seq[Account]] => Map[User,NetWorth]`) is probably not that interesting even to the category theorists, but that's fine, and there's plenty you can learn from the theorists about how to factor the proof well (e.g. as `_.map(_.map(_.balance).sum)`).
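
A hedged expansion of that one-liner into standalone Scala, with User, Account, and NetWorth invented for illustration:

    // The "theorem" Map[User, Seq[Account]] => Map[User, NetWorth], proved by
    // following the types: per user, sum the balances of their accounts.
    final case class User(name: String)
    final case class Account(balance: BigDecimal)
    final case class NetWorth(total: BigDecimal)

    object NetWorthProof {
      def netWorths(accounts: Map[User, Seq[Account]]): Map[User, NetWorth] =
        accounts.view
          .mapValues(accts => NetWorth(accts.map(_.balance).sum))
          .toMap
    }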


  > Isn't the primary purpose of math to describe the world around us?
No, that's Physics[0]. I joke that "Physics is the subset of mathematics that reflects the observable world." This is also a jab at String Theorists[1].

Physicists use math, but that doesn't mean it is math. It's not the only language at their disposal nor do they use all of math.

  > software engineering has nothing to do with mathematical abstractions 99% of the time
I'd argue that 100% of the time it has to do with mathematical abstractions. Please read the Poincare quote again. Take a moment to digest his meaning. Determine what an "object" means. What he means by "[content] is irrelevant" and why only form matters. I'll give you a lead: a class object isn't the only type of object in programming, nor is a type object. :)

[0] Technically a specific (class of) physics, but the physics that any reasonable reader knows I'm referencing. But hey, I'll be a tad pedantic.

[1] String Theory is untestable, and therefore doesn't really reflect the observable world. Even if all observable consequences could be explained through the theory, it would still be indistinguishable from any alternative theory which could do so. But we're getting too meta, and this joke is rarely enjoyed outside mathematician and physicist communities.


> No, that's Physics

Going on a total tangent, if you'll forgive me, and I ask purely as a curious outsider: do you think math could have ever come into being if it weren't to fill the human need of describing and categorizing the world?

What would have been the very beginning of math, the first human thought, or word or action, that could be called "math"? Are you able to picture this?


  > do you think math could have ever come into being if it weren't to fill the human need of describing and categorizing the world?
I'm a bit confused. What exactly is the counterfactual[0] here? If it is hyper-specific to categorizing and describing then I think yes, those creatures could still invent math.

But my confusion is because I'm having a difficult time thinking of a case where such things aren't also necessary consequences of just being a living being in general. I cannot think of a single creature that does not also have some world model, even if that model is very poor. My cat understands physics and math, even though her understanding is quite naive (also, Wittgenstein[1] is quite wrong: I can understand my cat, even if not completely, and even though she has a much harder time understanding me). Her understanding is more naive than, say, the Greeks', but they were also significantly more naive than your average math undergrad, and I wouldn't say the Greeks "didn't do math".

Any answer necessitates a threshold value, and I'm not sure that's a useful framing. At least not until we have a mutual understanding of what threshold we're concerned with. Frankly, we often place these contrived thresholds/barriers in continuous processes. They can be helpful, but they also lead to a lot of confusion.

  > What would have been the very beginning of math
This too is hard to describe. Mull over the Poincare quote a bit. There are many thresholds we could pick from.

I could say it was when some of the Greeks got tired of arguing with people who were just pulling shit out of their asses, but that'd ignore the many times other civilizations independently did the same.

I could say when the first conscious creature arose (I don't know when this was). It needed to understand itself (an object) and its relationship to others. Other creatures, other things, other... objects.

I could also say the first living creature. As I said above, even a bad world model has some understanding that there are objects and relationships between them.

I could also say it always was. But then we get into a "tree falls in a forest and no one is around to hear it" type of thing (as with the prior one). "Acoustic vibrations" is a fine definition of sound, but so is "what one hears".

I'd put the line closer to the "Greeks" (and probably require consciousness). The reason is formalization, and I think that's a point where there's near-universal agreement. "Greeks" is in quotes because I'll accept any point in time that qualifies with the intended distinction, which is really hard to pinpoint. I'm certainly not a historian, nor remotely qualified to point to a reasonable time lol. But this also seems to be the point in history most often referenced as being near "the birth", and frankly I'm more interested in other questions/topics than really getting to the bottom of this one. It also seems unprovable, and I'm okay with that. I'm not so certain it matters when it happened.

To clarify, I do not think life itself necessitates this type of formalization, though. I'm unsure what conditions are necessary for it to happen (as an ML researcher, I am concerned with this question), but it does seem to be a natural consequence of a sufficient level of intelligence.

I'll put it this way, if we meet an alien creature I would be astonished if they did not have math. I have no reason to believe that their math would look remotely similar to ours, and I do think there would be difficulties in communicating, but if we both understand Poincare's meaning then it'll surely make that process easier.

Sorry, I know that was long and probably confusing. I just don't have a great answer. Certainly I don't know the answer either. So all I can give are some of my thoughts.

[0] https://www.inference.vc/causal-inference-3-counterfactuals/

[1] https://existentialcomics.com/comic/245


>Even if all observable consequences could be explained through this theory it would still be indistinguishable from any other alternative theory which could do so.

That's not unique; all quantitative theories allow small modifications. Then you select the parsimonious theory.



