
The Manhattan Project happened when the entire conceptual road map to fission weapons was understood. This is manifestly not the case with AI, which can be charitably described as "add computers until magic".


I didn’t compare OpenAI to the Manhattan Project. I was pointing out that if a small number of people discover a plausible conceptual pathway to AGI, a similar project will happen.


And I'm pointing out that the conceptual breakthroughs that preceded such an engineering sprint happened in the open literature. Wells was writing sci-fi about atomic weapons in 1914. He based it on a pop-science book written in 1909.

We don't have any such understanding, or even a definition, of 'AGI'.


Wells's atomic-bomb sci-fi was of the type "there is energy in the atom, and maybe someone will use it in bombs someday". That's nowhere close to the physical reality of a weapon; it's more in the realm of philosophy, which is where strong AI currently sits. We do have an existence proof of intelligence, after all. The idea is not based on pure fantasy, even though the practicalities are unknown.

Leo Szilard had more plausible philosophical musings in the early thirties, but they were not rooted in any workable practical idea. The published theoretical breakthroughs you mention didn't happen until the late thirties. The discovery of nuclear fission, the precursor to the idea of an exponential chain reaction, came only in 1938, seven years before Trinity.


The issue with strong AI is not that "practicalities are unknown", any more than the issue with Leonardo da Vinci's daydreams of flying machines was that "practicalities are unknown".

He didn't have internal combustion engines, but that's a practicality; other mechanical power sources already existed (Alexander the Great had torsion siege engines). They would never have been sufficient for flight, of course, but the principle was understood.

But he could never have even begun to build airfoils, because he didn't have even an inkling of proto-aerodynamics. He saw that birds exist, so he drew a machine with wings that flapped. Look at the wings he drew: https://www.leonardodavinci.net/flyingmachine.jsp

That's an imitation of birds with no understanding behind it. That's the state of strong AI today: we see that humans exist, so we create imitations of human brains, with no understanding behind them.

That led to machine learning, and after 40 years of research we figured out that if you feed it terabytes of training data, it can actually be "unreasonably effective", which is impressive! How many pictures of giraffes did you have to see before you could instantly recognize them, though? One, probably? Human cognition is clearly qualitatively different.

The danger of machine learning is not that it could lead to strong AI. It's that it is already leading to pervasive surveillance and misinformation. (idlewords is pretty critical of OpenAI, but I actually credit OpenAI with taking this quite seriously, unlike MIRI.)



