I think that promise is doomed to failure.
Something we have learned as a civilization over the past ~70 years is that deterministic algorithms are an incredibly powerful thing. Designing processes that have a guaranteed, reliable result for a known input is a phenomenal way to scale up solutions to all kinds of problems.
If we want AI to help us with that, the best way to do that is to have it write code.
AI is automating the cognitive work of a human brain. There is barely anything deterministic, guaranteed, reliable, or scalable about human brains. (Frankly, this should be apparent to anyone who has ever hired or worked with people.) If anything, being able to process these workloads without the meatware-specific deficiencies has terrifying scalability. The current wave of “““reasoning””” models demonstrates this: the LLM instantly emits a soup of tokens that would take you hours to analyze, greatly boosting the accuracy of the final answer. Expect a lot more of that, both quantitatively and qualitatively.