Not to mention that "the work involved in co-operating and communicating across the interface has to be added", which takes on new meaning and magnitude when one considers the energy requirements of LLMs.
I'm actually a pretty big Dijkstra fanboy, having taken the time to read through most of the archive the submission is from. For whatever reason I suddenly remembered this paper, and yeah, it does seem apropos, so I shared it. A few of its points map pretty directly onto LLM-assisted coding:
- "prompt engineering" is exactly the large intellectual investment on the human side alluded to in an article
- tiny imprecisions can render entire programs invalid or incorrect (see the sketch after this list)
- machines also tend to grossly misinterpret instructions (hallucinations)
- they are unable to pick up genuinely novel approaches from small amounts of data
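To illustrate the second point with a toy example of my own (not from the paper): a single character in an otherwise plausible-looking routine is enough to make it wrong, and nothing at the level of a natural-language description of it would flag the problem.

    def binary_search(xs, target):
        # Intended: return the index of target in the sorted list xs, or -1 if absent.
        lo, hi = 0, len(xs) - 1
        while lo < hi:  # bug: should be `lo <= hi`; the final one-element range is never checked
            mid = (lo + hi) // 2
            if xs[mid] == target:
                return mid
            elif xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    # The one-character slip only bites on some inputs, which makes it worse:
    print(binary_search([1, 3, 5], 3))  # 1, correct
    print(binary_search([1, 3, 5], 5))  # -1, should be 2

With `lo <= hi` the routine is fine. The point is just that "mostly right", which is the best a natural-language specification can promise, doesn't survive contact with this kind of precision, which is exactly the article's argument.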