I have, in my life as a web developer, had multiple "academics" urgently demand that I remove error bands, bars, notes about outliers, confidence intervals, etc. from graphics at the last minute so people are not "confused"
I obviously cannot assess the validity of the requests you got, but as a former researcher turned product developer, I have several times had to decide _not_ to display confidence intervals in products, and to keep them as an internal feature for quality evaluation.
Why, I hear you ask? Because, for the kind of models I work with (detailed stochastic simulations of human behavior), there is no good definition of a confidence interval that can be computed in a reasonable amount of computing time. One can design confidence measures that can be computed without too much overhead, but they can be misleading if you do not have a very good understanding of what they do and do not represent.
To simplify, the error bars I was able to compute were mostly a measure of precision, but I had no way to assess accuracy, which is what most people assume error bars mean. Showing the error bars would therefore have given a false sense of quality that I was not confident the models deserved, so not displaying those measures was actually done as a service to the user.
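A minimal sketch of what I mean, with made-up numbers (the bias, noise level, and interval calculation below are purely illustrative, not what our system actually computed): repeated runs of a biased model produce a tight, confident-looking interval that still misses the truth.

    import random
    import statistics

    TRUE_VALUE = 10.0   # ground truth, normally unobservable
    BIAS = 1.5          # hypothetical systematic model error, unknown in practice

    def run_simulation():
        # One stochastic run: small random noise on top of a fixed bias.
        return TRUE_VALUE + BIAS + random.gauss(0, 0.2)

    runs = [run_simulation() for _ in range(200)]
    mean = statistics.mean(runs)
    half_width = 1.96 * statistics.stdev(runs) / (len(runs) ** 0.5)  # spread of runs only

    print(f"estimate: {mean:.2f} +/- {half_width:.2f}")  # narrow band around ~11.5
    print(f"truth:    {TRUE_VALUE:.2f}")                 # outside the band: precise, not accurate

The error bar here only says "the runs agree with each other"; it says nothing about whether the model itself is right.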
Now, one might make the argument that if we had no way to assess accuracy, the type of models we used was just rubbish and not much more useful than a wild guess... Which is a much wider topic, and there are good arguments for and against this statement.
After a lot of back-and-forth some years ago, we settled on a third option: if the error bars would be too big (for whatever definition of "too big" we used back then), don't show the data at all and instead show a "not enough data points" message. Otherwise, show the data without the error bars.
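Roughly, the rule looked like this (a sketch only; the threshold value and the names are invented for illustration):

    # Hypothetical display rule: suppress the data point entirely rather than
    # show an error bar the reader is likely to misread.
    MAX_RELATIVE_HALF_WIDTH = 0.25  # one possible definition of "too big"

    def render(estimate, half_width):
        if estimate == 0 or half_width / abs(estimate) > MAX_RELATIVE_HALF_WIDTH:
            return "not enough data points"
        # Show only the central estimate; the interval stays internal.
        return f"{estimate:.1f}"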
That is baldly justifying a feeling of superiority and authority over others. It's not your job to trick other people "for their own good". Present honest information, as accurately as possible, and let the chips fall where they may. Anything else is a road to disaster.
Some people won't understand error bars. Given that we evolved from apes and that there's a distribution of intelligences, skill sets, and interests across all walks of society, I don't place blame on anyone. We're just messy as a species. It'll be okay. Everything is mostly working out.
Sometimes they do this because the data doesn't entirely support their conclusions. Error bars, notes about outliers, etc. often make this glaringly apparent.
Can you be more specific (maybe point to a website)? I am trying to imagine the scenarios where a web developer would work with academics and do the data processing for the visualization. In the few scenarios I can think of where an academic works directly with a web developer, they would almost always provide the finished figures.
It really depends on what it is for. If the assessment is that the data is solid enough for certain decisions, you might indeed show only a narrow result in order not to waste time and attention. If it is for a scientific discussion, then it is different, of course.
It's depressing