Individual polls were sometimes outside the margin of error, but collectively the polls were not far off the mark and stayed within it. This is why sites like 538 were able to model the various outcomes with some degree of certainty and ONCE AGAIN manage to get results within the range of expected probabilities. You really seem to have a hard time understanding both polling and probability and in particular the relationship between sample size and error. I suggest a bit more research.
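(For readers following along: the sample-size/error relationship mentioned above can be sketched with the standard normal-approximation formula for a proportion. This is a minimal illustration, not any specific pollster's methodology, and the sample sizes are made up.)

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n
    (normal approximation, z = 1.96 for 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# Error shrinks only with the square root of the sample size:
# quadrupling n merely halves the margin of error.
for n in (250, 1000, 4000):
    print(n, round(margin_of_error(0.5, n), 3))
# 250  -> 0.062  (about +/- 6 points)
# 1000 -> 0.031  (about +/- 3 points)
# 4000 -> 0.015  (about +/- 1.5 points)
```

Note that this only covers sampling error; it says nothing about systematic bias, which is the crux of the disagreement below.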
Dude, I really don't get why you're being so gung ho on this. The errors on the individual polls were way out of line with the actual results.
When we aggregate and regularise these results, as 538 or The Economist did, we get somewhat better estimates, but they were still not particularly accurate.
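(To make the aggregation point concrete: here is a toy sample-size-weighted polling average. It is deliberately simplistic — real aggregators like 538 also adjust for house effects, recency, and pollster ratings — and the poll numbers are hypothetical. The key limitation it illustrates: if every input poll shares the same systematic bias, no weighting scheme can average it away.)

```python
def weighted_average(polls):
    """Sample-size-weighted average of poll results.
    polls: list of (candidate_share, sample_size) tuples."""
    total_n = sum(n for _, n in polls)
    return sum(share * n for share, n in polls) / total_n

# Hypothetical polls: (candidate's share, sample size)
polls = [(0.52, 800), (0.50, 1200), (0.54, 600)]
print(round(weighted_average(polls), 3))  # 0.515
```

Averaging narrows the random noise, which is why aggregates beat individual polls; it does nothing for a shared directional error.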
This is a real problem, and not one that we should be ignoring. I gave the polling companies a pass after 2016, but the fact that this happened again is extremely concerning.
> This is why sites like 538 were able to model the various outcomes with some degree of certainty and ONCE AGAIN manage to get results within the range of expected probabilities.
But the actual results were super close to the edges of the intervals. The modal outcome for Biden's electoral votes was in the 380-400 range, which is not what happened.
> You really seem to have a hard time understanding both polling and probability and in particular the relationship between sample size and error. I suggest a bit more research.
Please don't be dismissive of other people. Many people (including people who do stuff like this every day) were very surprised by the results. We definitely shouldn't tar and feather the pollsters, but I think it would be great if we could get an understanding of why the polls appear to have been systematically biased.