We know in hindsight that Lisp became most useful for representing computation, but whatever happened to AI? McCarthy said AI was characteristic of LISP. SICP also presents AI as fundamental to Lisp at the beginning of the book. Norvig & Russell used Common Lisp for the first edition of their book. So what happened? Why did the association just vanish?
Lisp was ideal for reasoning systems: its homoiconic, meta-programmable nature is perfect for manipulating symbolic structures and logic.
But when AI shifted toward numerical learning with neural networks, tensors, and GPU computation, Lisp’s strengths mattered less, and Python became the new glue for C/CUDA libraries like NumPy, PyTorch and TensorFlow.
Still, nothing prevents Lisp from coming back. It would actually fit modern deep learning well if a "LispTorch" with a CUDA FFI existed. We would have macros for dynamic graph generation, functional composition of layers, symbolic inspection, interactive REPL exploration, automatic model rewriting etc.
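To make that concrete, here is a rough Python sketch of the idea (all names hypothetical; no "LispTorch" actually exists): a network defined as plain data that can be composed, inspected, and rewritten before it runs, which is exactly the kind of thing Lisp macros would make first-class.

```python
# Hypothetical sketch: a network as plain data ("code is data"),
# so it can be composed, inspected, and rewritten before running.
# All names here are invented for illustration.

def linear(w):
    return ("linear", w)

def relu():
    return ("relu",)

def compose(*layers):
    return ("seq", list(layers))

def run(model, x):
    """Interpret the model description on a scalar input."""
    tag = model[0]
    if tag == "seq":
        for layer in model[1]:
            x = run(layer, x)
        return x
    if tag == "linear":
        return model[1] * x
    if tag == "relu":
        return max(0.0, x)
    raise ValueError(tag)

def rewrite(model, rule):
    """Walk the model tree, applying a rewrite rule to every node."""
    if model[0] == "seq":
        model = ("seq", [rewrite(layer, rule) for layer in model[1]])
    return rule(model)

net = compose(linear(2.0), relu(), linear(3.0))
print(run(net, 1.5))  # 9.0

# "Automatic model rewriting": e.g. double every linear weight.
doubled = rewrite(net, lambda n: ("linear", n[1] * 2) if n[0] == "linear" else n)
print(run(doubled, 1.5))  # 36.0
```

In Lisp the `compose`/`rewrite` machinery would be macros over actual S-expressions rather than an interpreter over tuples, but the payoff is the same: the model is a value you can inspect and transform.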
We almost had it once: Yann LeCun’s SN (the system in which the first CNNs were built) had a C core with a Lisp interpreter on top to define, develop and inspect the network. It eventually evolved into Lush, essentially "Lisp for neural networks", which in turn inspired Torch and later PyTorch.
So Lisp didn't die in AI; it's just waiting for the right people to realize its potential for modern neural networks and bring it back. Jank in particular looks like a good contender for a LispTorch.
Norvig actually did comment on this publicly once, on the Lex Fridman podcast. Basically what he said was that lisp ended up not working well for larger software projects with 5 or more people on them, and that they never used Lisp in any of their later books because students didn't like it. Norvig doesn't seem to know why students didn't like Lisp, and neither do I, but somehow that's the real reason it was abandoned.
> Basically what he said was that lisp ended up not working well for larger software projects with 5 or more people on them
I don’t think "doesn’t work for teams of 5+" is a fair generalization. There are production Clojure and Emacs (Lisp) codebases with far more contributors than that.
Language adoption is driven less by inherent team-size limits and more by social and practical factors. Some students probably don't like Lisp because most people naturally think in imperative/procedural terms. SICP did a great job teaching functional and symbolic approaches; I wish MIT hadn't shifted the course to Python, since that increases the gravitational pull toward cognitive standardization.
The AI winter happened. And the AI they talk about is classical, symbolic AI where you try to explicitly represent knowledge inside the computer. The new LLM stuff is all neural networks, and those benefit more from fast low-level vector implementations than high-level ease of symbolic manipulation.
So modern AI is mostly C or even Fortran under the hood, often driven from something more pedestrian, like Python.
I'm not sure what exactly you're referring to, but one avenue to implement AI is genetic programming, where programs are manipulated to reach a goal.
Lisp languages are great for these manipulations, since the AST being manipulated is the same data structure (a list) as everything else. In other words, genetic programming can lean into Lisp's "code is data" paradigm.
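As a rough illustration (in Python, standing in for Lisp), here's what "code is data" buys genetic programming: the program is just a nested list, so a genetic operator like mutation is ordinary list surgery, and evaluation is a small recursive interpreter.

```python
import random

# A tiny S-expression-style program as nested lists:
# ["+", "x", ["*", "x", "x"]] means x + x*x.
# Because the program IS a list, genetic operators are just list
# manipulation -- the property Lisp gives you for free.

def evaluate(expr, x):
    """Recursively evaluate an expression tree at a given x."""
    if expr == "x":
        return x
    if isinstance(expr, (int, float)):
        return expr
    op, a, b = expr
    a, b = evaluate(a, x), evaluate(b, x)
    return a + b if op == "+" else a * b

def mutate(expr, rng):
    """Descend a random path through the tree and replace a leaf."""
    if not isinstance(expr, list):
        return rng.choice(["x", rng.randint(0, 3)])
    i = rng.randrange(1, 3)          # pick one of the two operands
    new = list(expr)                 # copy, don't mutate the parent
    new[i] = mutate(new[i], rng)
    return new

prog = ["+", "x", ["*", "x", "x"]]   # x + x^2
print(evaluate(prog, 3))             # 12

rng = random.Random(0)
child = mutate(prog, rng)            # a slightly different program
print(child, "->", evaluate(child, 3))
```

A real genetic programming loop would add crossover (swapping random subtrees between two programs) and a fitness function, but both are the same kind of list manipulation shown here.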
As others mentioned, today everything is based on neural networks, so people aren't learning these other techniques.
I'm referring to the fundamental idea in AI of knowledge representation. Lisp is ideal for chapters 1 through 4 of AIMA, and TensorFlow has shown that neural networks can be handled well by a domain-specific language, something Lisp is known to be great at building.