Having read _Principia_, did Pitts read Kurt Gödel[1]? I would very much like to know what he thought of it!
What an incredibly sad story - burning years of work in melancholy, all thanks to the lies of an angry woman. Wiener, though, shares a great deal of blame - a man should not pass judgment on his friends without inquiring into the truth of the matter.
Given that Gödel and Pitts most likely wrote to one another (although their correspondence doesn't survive), it's very probable that they were intimately familiar with each other's work.
There's an entire chapter dedicated to Walter Pitts in Kurt Gödel: Collected Works, Volume V.
Such a terrible tragedy, both personal and for the world. I highly recommend the book referenced: "Dark Hero of the Information Age: In Search of Norbert Wiener, the Father of Cybernetics".
R.I.P. Walter Pitts
(One thing that troubles me slightly: this article mentions a possible cause of the break between Wiener and the others which is presented as speculation in the book, if I remember correctly, but stated here as a bald fact. In any event I wish that Wiener hadn't acted so rashly.)
This article is fascinating. I had no idea that the von Neumann machine was an extrapolation of a mental model. Even more fascinating that the existence of a symbolic computation machine made purely symbolic epistemology impossible. It's like they abstracted a level higher than they wanted to and then made a cleaner, simpler implementation of it. And now with modern AI we are doubling down on that implementation and trying to build a new kind of intelligence on top of it.
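For anyone who hasn't seen the 1943 model the article is built around: a McCulloch-Pitts unit is just a threshold over binary inputs, and that alone is enough to recover Boolean logic from "neurons". Here is a minimal sketch in Python (my own toy reconstruction; the weights and thresholds are one common choice, not the paper's original notation):

```python
# Toy McCulloch-Pitts unit: fires (outputs 1) iff the weighted sum of its
# binary inputs reaches the threshold.
def mp_neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Boolean gates as single threshold units.
AND = lambda x, y: mp_neuron([x, y], [1, 1], threshold=2)
OR  = lambda x, y: mp_neuron([x, y], [1, 1], threshold=1)
NOT = lambda x:    mp_neuron([x],    [-1],   threshold=0)

for x in (0, 1):
    for y in (0, 1):
        print(f"x={x} y={y}  AND={AND(x, y)}  OR={OR(x, y)}")
print(f"NOT 0 = {NOT(0)}, NOT 1 = {NOT(1)}")
```

Networks of these units can compute any Boolean function, which is the equivalence between nerve nets and logic that the article (and von Neumann) leaned on.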
It's fascinating: first we built a model of a computer, which was impractical to realize; then we modeled the brain at a very high level of abstraction, and then actually built a computer. Now we're back to modelling the brain, because while the abstractions are universal, they're not efficient enough.
We need the nitty-gritty details of how to learn and compute approximately ("probably approximately correct"), quickly and efficiently; a rough sketch of what that means quantitatively is below.
Which is what I believe the article tries to convey as the last epiphany of Pitts.
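To put a number on "probably approximately correct": Valiant's framework bounds how many examples a learner needs to be probably (with confidence 1 - delta) approximately (within error epsilon) correct. A back-of-the-envelope sketch of the standard finite-hypothesis-class bound, with illustrative numbers of my own choosing:

```python
import math

def pac_sample_bound(hypothesis_count, epsilon, delta):
    """Classic PAC bound for a consistent learner over a finite hypothesis
    class: m >= (1/eps) * (ln|H| + ln(1/delta)) examples suffice for the
    learned hypothesis to have error <= eps with probability >= 1 - delta."""
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / epsilon)

# Example: Boolean conjunctions over n = 20 variables (|H| = 3^n + 1),
# target error 5%, confidence 95%.
n = 20
print(pac_sample_bound(3 ** n + 1, epsilon=0.05, delta=0.05))  # ~500 examples
```

The trade is exactly the one the article gestures at: give up certainty and exactness, and learning becomes cheap.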
Even Alan Turing, in "On computable numbers, with an application to the Entscheidungsproblem", practically invents Turing machines by modelling how a mathematician works: he has a pencil, some paper, and a number of different states inside his brain.
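If you strip that mental picture down to code, it is remarkably small. A toy sketch of my own (not the notation of the 1936 paper): a tape, a head, and a finite table of state transitions, here incrementing a binary number.

```python
# Toy Turing machine: the "mathematician" reduced to a tape, a read/write
# head, and a finite state table. This machine adds 1 to a binary number
# written on the tape (least significant bit on the right).
def run(tape, rules, state="carry", blank="_"):
    tape = list(tape)
    head = len(tape) - 1                       # start at the rightmost digit
    while state != "halt":
        symbol = tape[head] if 0 <= head < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if head < 0:                           # grow the tape on the left
            tape.insert(0, write)
            head = 0
        else:
            tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# State table: propagate the carry leftwards until a 0 or a blank is found.
rules = {
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("1", "R", "halt"),    # 0 + carry -> 1, done
    ("carry", "_"): ("1", "R", "halt"),    # ran off the left end: new digit
}

print(run("1011", rules))   # prints 1100, i.e. 11 + 1 = 12
```

Pencil, paper, and a handful of states really is the whole machine.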
Everyone at the time was directly motivated to solve Hilbert's Entscheidungsproblem [0], which was about mathematical proof, not universal machines. Turing, along with a few other mathematicians, recognized that proofs involved notions of algorithms and computation, and all together they generalized these into the notions of computation we have today: Turing machines, the lambda calculus, recursive functions. So it's not terribly surprising that Turing's model was a human one. It was exactly his goal (originally).
"But three years later, when he heard that Russell would be visiting the University of Chicago, the 15-year-old ran away from home and headed for Illinois. He never saw his family again."
Assuming that 80% of humanity lives on less than $10 a day and is therefore poor, I can only imagine the number of geniuses born poor who will never be able to show their genius to the world.
Imagine how the world would be a better place with all these people working as scientists, philosophers, mathematicians, etc.
I've taught in (Mexican) state programs for gifted youth and I can testify that a lot of the kids (many of them poor or nearly so) end up wasting their potential and having a hard time financially. Meanwhile, significantly less hard-working and less bright kids from private schools go on to get comfortable jobs.
A local think tank (http://pipe.cide.edu/talento-en-mexico) calculated the economic implications of not developing this talent. According to their model, over a 40-year span, developing it would yield a 132% increase in GDP per capita compared with staying at current levels.
> Nature had chosen the messiness of life over the austerity of logic, a choice Pitts likely could not comprehend.
Regarding "Nature had chosen ...", I wonder if this was actually how Pitts saw it (he seemed more clever than that), or whether it is the article's author's misconception that he considered there is in fact something in Nature that "chooses", instead of applying mechanistic rules entirely.
It is as if the part of the story about the frogs is meant to show that Nature has a "spirit" after all, one that evaded being captured in logic. I can't really fathom why Pitts, after all his history, would come to that conclusion. Just because the retina turned out to possess a certain amount of analog computing power?
The brain doesn't do logic, it does pattern matching, and selects the appropriate reaction based on the match that offers the biggest chances of survival.
I'm painfully familiar with that world view. I went through Stanford CS in 1983-1985, when logic-based AI was, in retrospect, having its last gasp. I took "Dr. John's Mystery Hour", Epistemological Problems in Artificial Intelligence, from John McCarthy. The logicians were making progress on solving problems once they'd been hammered into just the right predicate calculus form, but were getting nowhere in translating the real world into predicate calculus.
For computer program verification, though, that stuff works. For a time, I was fascinated by Boyer-Moore theory and their theorem prover. They'd redone Russell and Whitehead with machine proofs. Constructive mathematics maps well to what computers can do. I got the Boyer-Moore theorem prover (powerful, could do induction, but slow) hooked up to the Oppen-Nelson theorem prover (limited, only does arithmetic up to multiplication by constants, but fast) and used the combination to build a usable proof-of-correctness system for a dialect of Pascal. It worked fine; I used to invite people to put in a bug in a working program and watch the system find it.
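For readers who have never used such a system, here is the spirit of "put in a bug and watch the system find it". This is a brute-force toy of my own, nothing like the actual Boyer-Moore or Oppen-Nelson machinery: you write the pre- and postcondition, and the tool searches for an input that violates the verification condition.

```python
def buggy_divmod(n, d):
    """Quotient and remainder by repeated subtraction, with a planted bug:
    the loop condition should be `r >= d`, not `r > d`."""
    q, r = 0, n
    while r > d:
        q, r = q + 1, r - d
    return q, r

def spec_holds(n, d):
    """Hoare-style spec: given d > 0 and n >= 0, the result must satisfy
    q * d + r == n  and  0 <= r < d."""
    q, r = buggy_divmod(n, d)
    return q * d + r == n and 0 <= r < d

# Exhaustive search over a small domain stands in for the theorem prover.
counterexamples = [(n, d) for n in range(50) for d in range(1, 10)
                   if not spec_holds(n, d)]
print(counterexamples[:3])   # e.g. (1, 1): returns r == d, violating r < d
```

A real prover derives the verification conditions from the program text and discharges or refutes them symbolically, but the user experience is much the same: state what the code must do, and let the machine hunt for the case where it doesn't.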
But it was clear that approach wasn't going to map to the messiness of the real world. Working on proof of correctness for real programs made it painfully clear how brittle formal logic systems are. Nobody was going to get to common sense that way. The logicians were in denial about this for a long time, which resulted in the "AI winter" from 1985 to 2000 or so.
Then came the machine learning guys, and progress resumed. Science progresses one funeral at a time.
[1] https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...