You are conflating neural models with Large Language Models (LLMs).
There are a lot more models than just LLMs. Small specialized models are not necessarily costly to build and can be as efficient (if not more so) and cheaper, both in terms of training and inference.
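As a rough sketch of that point (assuming scikit-learn and a made-up narrow task like support-ticket routing), a small specialized classifier like this trains in seconds on a laptop CPU and serves predictions with negligible inference cost:

```python
# Sketch: a tiny task-specific text classifier, no GPU or pretraining needed.
# The example texts, labels, and the routing task itself are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples for a narrow task (support-ticket routing).
texts = ["reset my password", "invoice is wrong", "app crashes on launch"]
labels = ["account", "billing", "bug"]

# TF-IDF features + logistic regression: trains in milliseconds at this scale,
# and even on millions of rows it stays cheap compared to training an LLM.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

print(model.predict(["my invoice looks wrong"]))
```

Obviously a toy, but it illustrates the trade-off: for a narrow, well-defined task, a small purpose-built model can beat calling an LLM on both cost and latency.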
I’m not implying what you inferred. I am only referring to LLMs in response to GP.
Another way to put it is that most people building AI products are just using existing LLMs instead of creating new models. It's a gold rush akin to early mobile apps.