I enjoyed this read and agree Lenat was a grifter, which is easy to see based on the contracts and the closed source. But I dislike how the article seems tilted towards a hit piece against search, heuristics, reasoning, symbolic approaches in AI, and even striving for explainable/understandable systems. It's a subtext throughout, so perhaps I'm misinterpreting it, but the neats-vs-scruffies thing is just not productive, and there seems to be no real reason for the "either/or" mentality.
To put some of this into starker contrast: 40 years, 200 million dollars, and broken promises is the cost burned on something besides ML? Isn't the current approach burning that kind of cash in a weekend, and aren't we proudly backdating deep learning to ~1960 every time someone calls it "new"? Is a huge volume of inscrutable weights, with unknown sources, generated at huge cost, really "better" than closed source in terms of transparency? Aren't we busy building agents and critics very much like Minsky's society of mind while we shake our heads and say he was wrong?
This write-up also reads to me as punching down. A "hostile assessment" in an "obituary" is certainly easy in hindsight, especially when business is booming in your (currently very popular) neighborhood. If you didn't want to punch down, and really wanted to go on record saying logic/search are complete dead ends that AGI won't ever touch, it would probably look more like critiquing symbolica.ai, or arguing that nothing like scallop-lang / pyreason will ever find a use-case.
While I haven't been surprised to see Cyc become more and more clearly a failure, Doug Lenat was no grifter. To the very end, he (alongside a handful of others at the company) was the truest true-believer I've ever known. Cyc was his life's mission, and he never doubted that.
What do you think made Lenat work so secretively? I suspect his ideas could have advanced much faster if he'd been plugged into the community, sharing work and getting feedback. In research it's almost never a good move to pursue a dream in isolation.
I'd say that being a believer doesn't necessarily conflict with being a grifter. In fact, the true believer is practically obligated to engage in tricks, because the ends justify the means. For example: taking government anti-terrorism contracts, because the problem isn't well defined, your results are hard to check, and the money is good, if temporary. Anything where you don't really expect to deliver the promised value is still worth it, because you're going to "pay it back" with big results later. For more recent examples along the same lines, colonizing Mars and getting away from oil work the same way: for the true believer, causes like these are worth a little stock manipulation or whatever else is required.
Grifting suggests a level of cynicism, that one knows that one is selling snake oil.
I don’t know Lenat and don’t have an opinion one way or another. But be careful suggesting someone is grifting versus just believing in an idea that ultimately doesn’t come to fruition.
> even striving for explainable/understandable systems
It's been some 6,000-8,000 years since the advent of writing and we still cannot explain or understand human intelligence, yet we expect to understand a machine that approaches or surpasses human intelligence? Isn't the premise fundamentally flawed?
I think I'd remain interested in more conclusive proof one way or the other, since by your logic everything that's currently unknown is unknowable.
Regardless of whether the project of explainable/understandable AI succeeds, everyone should agree it's a worthy goal. Unless you like the idea of stock markets, and resource planning for cities and whole societies, under the control of technology that's literally indistinguishable from oracles speaking to a whispering wind. I'd prefer that someone else is able to hear/understand/check their math or their arguments. Speaking of thousands of years since something happened: oracles and mystical crap like that should be forgotten relics of a bygone era, not an explicit goal for the future.
It is actually incredibly silly to expect full explainability as a goal, because any system sufficiently intelligent to do basic arithmetic will have behavior that is inexplicable.
I like the guy's general ideas about research, but um. Did you see the section describing contracts? The article states that 50% of funding came from the military. People would be freaking out if they heard the same about Google, Facebook, or OpenAI, and for good reason.
I'm not a fan of weaponizing AI, and I think that's what we're talking about. Either it was a glorified CMS, in which case presenting it as AI was dishonest and cynical. Or it really was AI, in which case it was weaponized research.
If we're talking about graves, then it might be good to also consider all the ones you're not mentioning: the ones presumably resulting from where the money came from. How many? How many of those deserved it, and how many were bad inferences? I guess we'll never know.
Oh, OK. Well, I don't call that a grifter, just an ordinary, garden-variety techie. Many (not all) do that: actively seek funding from militaries for their work.
E.g., just this week MS fired two people for protesting the use of Azure to power the Palestinian Genocide [1].
When people talk about the military-industrial complex, what they really should be talking about is the military-FAANG complex. AI and military intelligence are both the same sad joke.
Lenat was no different in that, so I don't think it's fair to call him a grifter. I do think it's fair to call him out on being an asshole who put money above peoples' lives.
Btw, I've released some of my free stuff under a modified GNU GPL 3.0 with an added clause that prohibits use in military applications. I've been told that makes it "non-free", and apparently that's a bad thing. Lenat is only one nerd in a long line of nerds who need to think very hard about the ethics of their work.