They are described as being useful for novel problems, but no matter the vendor, whether it's an agentic system I set up or a reasoning model I watch prattle on, I understand these things as filters that limit the conceptual search space, and it is very easy to bump up against those limits. I understand the use as a rubber duck, and that's fine, but this cult-like belief that we can't criticize them is out of control. My "skill issue" is that I keep trying all of these skills and still don't have the default pro-LLM belief, which seems to be the actual requirement.

Just today I got multiple models to invent QEMU configuration items that don't exist while trying to solve my problem, which I guess I now have to call novel by your list here, but it was also something I later found was pretty easy to locate in the documentation... and even knowing that, I wasn't able to get the models to accept it, even when I explicitly gave them that information. I've had other experiences like trying to squeeze a watermelon seed. At this point, there is just too much risk of anything they produce sending me on a wild goose chase. It is absolutely maddening, and the people telling me I need to pray about it aren't helpful.

These things have not fundamentally improved since they were doing impressions of D&D games, but I can totally see why people would think they have. They approximate a database with a natural-language query interface, but that implies the system knows the context of your language and actually has the data, and when it doesn't, the errors are very difficult to find because the output is so adjacent to correct.