That's just security-by-obscurity and doesn't actually buy you anything except a speed bump for a hacker. It was a bogus argument from proprietary software vendors against open source a couple of decades ago, and it is a bogus argument for web services, too.
The presence of an error at all is a tell for the hacker as they probe the surface area of the service's API; making the wording unclear is simply anti-user (sometimes quite literally, when these errors are used as part of anti-fraud measures and accounts get shut down without the user ever being told what they did wrong).
I mean, showing the exact path to the configuration file likely isn't a good idea, so there's likely a mix of user-friendliness and avoiding information leakage.
The messaging to the user should be actionable, so sending them the exact filename doesn't make any sense. But giving them a clear sentence about what went wrong and how to fix it if possible, or a UUID (or "Guru Meditation" value if you're feeling old school) to give to the helpdesk, which can then be used to look up all of the relevant information on the other side, is reasonable.
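To make that concrete, here's a rough sketch (plain Python; the function name, log setup, and message text are all made up for illustration) of the pattern: keep the full detail server-side, hand the user an actionable message plus a reference ID the helpdesk can look up.

```python
import logging
import uuid

logger = logging.getLogger("backend")

def config_error_response(exc: Exception) -> dict:
    """Hypothetical handler: gory details stay in our logs, the user gets
    something actionable plus an ID support can search for."""
    error_id = str(uuid.uuid4())
    # Full context (exception type, stack trace, file paths) goes server-side only.
    logger.error("config load failed [%s]", error_id, exc_info=exc)
    # The user only sees what they can actually act on.
    return {
        "message": "Your settings could not be loaded. Please retry, or "
                   "contact support and quote the reference ID below.",
        "reference_id": error_id,
    }
```

The helpdesk or on-call engineer searches the logs for that reference ID and sees everything; the user never sees internal paths or stack traces, but also isn't left guessing.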
We were talking about obfuscating what the user did wrong and giving them a misleading message to somehow improve security. Claiming that what we're discussing is withholding information they can't actually use (like the path to a configuration file in a proprietary web service) is moving the goalposts.
I think this is probably taking the advice about not letting people on the sign-in window know whether a username/email exists in the service (so they can't decide whether it's worth spending the time trying a list of potential passwords for another user and accessing data they shouldn't) and expanding it without understanding the nuance. Before they have signed in, you don't know who or what is accessing the login path, and therefore there's much less confidence that it's a legitimate user. Once the login is successful and that auth token is being used, though, the confidence is much higher, and obfuscating details of the relationship between the company and that particular user is pretty strictly anti-user, since the user can no longer be certain whether the implicit contract of services provided will continue as expected. (Couple that with network effects, migration costs, etc., and the relationship becomes even more lopsided.)
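For the pre-auth case, the standard mitigation is just to return the same error and do comparable work whether the account exists or not. A rough sketch, with a made-up in-memory user store (the usernames, salts, and messages here are illustrative, not anyone's real implementation):

```python
import hashlib
import hmac
import os

def _hash(password: str, salt: bytes) -> bytes:
    # PBKDF2 as a stand-in for whatever password hashing the service really uses.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Hypothetical user store: username -> (salt, password hash).
_SALT = os.urandom(16)
USERS = {"alice": (_SALT, _hash("correct horse battery staple", _SALT))}

# Dummy credentials so unknown usernames still go through the same hashing work.
_DUMMY_SALT = os.urandom(16)
_DUMMY = (_DUMMY_SALT, _hash("not-a-real-password", _DUMMY_SALT))

def login(username: str, password: str) -> tuple[bool, str]:
    salt, stored = USERS.get(username, _DUMMY)
    ok = hmac.compare_digest(stored, _hash(password, salt))
    if not (ok and username in USERS):
        # Identical wording whether the account is missing or the password is
        # wrong, so a pre-auth caller can't enumerate valid usernames.
        return False, "Invalid username or password."
    return True, f"Signed in as {username}."
```

That ambiguity is justified exactly because the caller is unauthenticated. Once they've proven who they are, the same vagueness stops being a security measure and starts being hostility toward your own user.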
And you don't think anti-fraud teams use error logs triggered by users as a signal for potentially banning them?
These teams are incentivized to eliminate fraudulent accounts that cost the company money and are pressured/punished when their tools produce false negatives (fraudulent users that are considered okay), but get no such pushback on false positives (okay users that get flagged as fraudulent), and accounts that are triggering errors in the backend service(s) can look a lot like someone actively trying to hack it. Basically any sort of anomalous behavior that correlates with negatives for the business gets flagged by these tools; flagging people unjustly is not an explicit goal, but it isn't really punished within the corporation either.
(False positives do get negative feedback in the rare instances when one blows up on social media, so these teams often keep a whitelist of high-profile accounts to skip over while still hitting regular users capriciously, "solving" the false-positive problem only insofar as it impacts the business.)