> when polls don't match up with reality, as they didn't in 2016, the pollsters have a responsibility to re-calibrate the way they conduct the poll.
Pollsters do that continuously, and there were definite recalibrations in the wake of 2016.
OTOH, the conditions that produce non-sampling errors aren't static, and it's impossible even to reliably measure the aggregate non-sampling error in any particular event. Sampling error exists too, and while its statistical distribution can be computed in advance, the actual error attributable to it in any particular event can't be. So you never know how much of the total error is due to non-sampling error, much less to any particular source of non-sampling error.
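To make that concrete, here's a quick simulation sketch. The sample size, the true support figure, and the 2-point non-response bias are all invented for illustration; the point is only that the one number a pollster ultimately observes (total error) doesn't decompose into its sampling and non-sampling parts.

```python
# Minimal sketch, assuming a poll of n=1000, true support of 48%,
# and a hypothetical -2 point non-sampling bias (e.g. differential
# non-response). All figures are made up for illustration.
import math
import random

n = 1000          # sample size
p_true = 0.48     # true population support (revealed only by the election)
bias = -0.02      # non-sampling error -- unknown to the pollster

# The *distribution* of sampling error is computable in advance:
se = math.sqrt(p_true * (1 - p_true) / n)
print(f"standard error of sampling: {se:.3f}")  # ~0.016, i.e. ~+/-3 pts at 95%

# One simulated poll: each respondent "answers" with probability
# p_true + bias, folding the bias into every response.
hits = sum(random.random() < p_true + bias for _ in range(n))
estimate = hits / n

total_error = estimate - p_true
# We can split the error here only because we *set* bias ourselves;
# a real pollster has no access to this decomposition.
sampling_part = estimate - (p_true + bias)
print(f"estimate: {estimate:.3f}  total error: {total_error:+.3f}")
print(f"  of which sampling draw: {sampling_part:+.3f}, bias: {bias:+.3f}")
```

Run it a few times: the total error bounces around the -2 point bias by roughly the standard error, and nothing in the single observed number tells you where the split is.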