IME when AI "hallucinates" API endpoints or library functions that just aren't there, it's almost always the case that they *should* be. In other words, the AI has based its understanding on the combined knowledge of hundreds(?) of other APIs and libraries and is generating an obvious analogy.
Turning this around: a great use case is to ask AI to review documents, APIs, etc. AI is really great for teasing out your blindspots.
If the training data contains useless endpoints the AI will also hallucinate those useless endpoints.
The wisdom of the crowd only works for the aggregated end result, not for any individual answer. If you take every given answer at face value, you get more wrong answers because you regress to the average.
Hard no. I've seen this in lots of cases: it just applied some symmetry pattern and invented public endpoints that are deliberately absent because they exist only for internal/employee use. It gets dangerous the moment you put as much trust in it as you're suggesting.