I'm not sure what the point of translating math into code is. Just because we represent something with code doesn't mean it is computable. It's also technically incorrect to say math is translated into code. The mathematical symbols are being swapped for code-like constructs, but the underlying semantics are quite different. For example, we might say the symbol 'oo' is infinity, but infinity itself is something that cannot be embedded in a finite program. So we might coerce a parser into recognizing that 1/oo = 0 via a background ruleset, but the identity itself was derived by our minds accessing the mathematical world of forms, which is completely inaccessible to finite computational mechanisms.
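To illustrate what I mean by a background ruleset: a symbolic library like SymPy will happily evaluate 1/oo to 0, but only because people who already understood the identity wrote those rewrite rules down for it. A quick sketch using SymPy's built-in oo object:

```python
# The program manipulates the symbol 'oo', not the concept of infinity;
# it "knows" 1/oo == 0 only because humans encoded that rule.
from sympy import oo

print(1 / oo)   # 0   -- follows from SymPy's simplification rules
print(oo + 1)   # oo  -- likewise a rule someone wrote down
```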
It seems like this effort is going to add a lot of overhead to doing math, and it obscures the role of thinking through the math conceptually. There seems to be a Principia Mathematica 2.0 idea in the programming world: that we just need to reduce mathematics to code and it will become easily accessible and usable. I'm not convinced that will be the case.
You and I read the article very differently. Based on your comment, it sounds like you interpreted the article as implicit advocacy for the idea that mathematics should be represented in a programmatic (i.e. computable) way.
I didn't pick up any ideology in my reading. My interpretation of the article is that the author wanted to provide tips on how to implement mathematics, for two reasons:
1. People have to implement nontrivial mathematics from time to time, and
2. Most mathematics doesn't concern itself with obstacles to implementation, because it doesn't have to.
That's true, and numerical analysis can be quite helpful for mathematical insight. However, I've never found the math-to-code part to be tricky. It's mostly an afterthought compared to the conceptualization itself, so the emphasis on coding math suggests a reversal of priorities. In my experience, focusing on the coding part becomes very inefficient, especially with combinatorial problems. No matter how tricky I get with the implementation, a combinatorial explosion is still a combinatorial explosion. Coming up with a neat analytic solution ends up being far more efficient by comparison.
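A toy sketch of my own (not from the article) to make that concrete: counting monotone lattice paths by brute-force enumeration versus with the closed-form binomial coefficient. The analytic identity is what does the work; the code is incidental.

```python
# Count monotone lattice paths from (0,0) to (n,n), moving only right or up.
from math import comb

def paths_brute_force(right, up):
    """Enumerate every path recursively -- the cost grows like the answer itself."""
    if right == 0 or up == 0:
        return 1
    return paths_brute_force(right - 1, up) + paths_brute_force(right, up - 1)

def paths_analytic(n):
    """Closed form: choose which n of the 2n steps go to the right."""
    return comb(2 * n, n)

n = 10
assert paths_brute_force(n, n) == paths_analytic(n)  # 184756 either way,
# but the brute force makes roughly 370k recursive calls to get there.
```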
>infinity itself is something that cannot be embedded in a finite program
I am not sure what you mean by infinity here, but there are many ways in programming to handle infinite streams in an otherwise finite program. Python's itertools.count, and the ability to map and filter over it, would be a basic example.
Indeed I could argue that every game's or webserver's event loop is embedding infinity in a finite program.
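Something like this, to make the itertools.count example concrete (a minimal sketch, with islice added only so we can print a finite prefix):

```python
# itertools.count() yields 0, 1, 2, ... forever, but nothing is computed
# until values are actually pulled from the stream.
from itertools import count, islice

evens = filter(lambda n: n % 2 == 0, count())  # infinite stream of even numbers
squares = map(lambda n: n * n, evens)          # still infinite, still lazy

print(list(islice(squares, 5)))  # [0, 4, 16, 36, 64] -- only 5 values ever realized
```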
The symbol is not an actual infinity. The abstract concept we access with our minds is an actual infinity. That's the point. The abstract principle cannot itself be embedded in a program or symbol. All we can do is describe it with rules we infer from observing it. It's like how a painting of a tree will never be the tree itself, and we will never be able to infer everything about the tree just by looking at the painting.
We can imagine infinity and its implications. E.g., what is the largest number? After thinking about it a bit, we realize there is no such thing. In that sense, we are not like finite computational devices. A computer cannot make the same kind of inference; it has to have the answer given to it.