Hacker News

"Getting a more representative sample" is premised on getting as representative a sample as possible, not necessarily a perfect one.

Using language such as "pester" makes assumptions about the group. In fact, many likely were busy or simply forgot to reply. As I mentioned above, there are techniques for assessing biased/inconsistent responses.
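One such technique is often called wave analysis: compare the answers of people who responded to the first email with those who responded only after a reminder. Late responders are taken as a rough proxy for non-responders, so a large gap between waves flags likely non-response bias. A minimal sketch, assuming each response is tagged with the wave in which it arrived (the data here is made up for illustration):

```python
# Hypothetical data: (wave, score) pairs. Wave 1 = answered the first
# email; wave 2 = answered only after a reminder.
responses = [(1, 5), (1, 4), (1, 5), (2, 3), (2, 2), (1, 4), (2, 3)]

def wave_mean(wave):
    """Mean score among responses from the given wave."""
    scores = [s for w, s in responses if w == wave]
    return sum(scores) / len(scores)

gap = wave_mean(1) - wave_mean(2)
# A large gap suggests non-responders (whom late responders resemble)
# would have answered differently, i.e. the first wave alone is biased.
print(f"wave 1: {wave_mean(1):.2f}, wave 2: {wave_mean(2):.2f}, gap: {gap:.2f}")
```

The same idea extends to comparing respondent demographics against known population figures; either way, the point is that bias can be estimated rather than assumed away.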



> "Getting a more representative sample" is premised on getting as representative a sample as possible

But representative of what? You can't get a sample that is "representative" of non-responders--because they don't respond.

If you ask once and include whoever responds in your sample, your sample is representative (you hope) of people who respond.

If you ask non-responders again, your sample now includes people who responded when asked once, and people who responded when asked twice. But the "asked twice" part introduces an extra variable: did the fact that you asked them twice change something that you would have preferred to leave unchanged? Given that, it's not clear that the second sample is any more "representative" of anything useful than the first.


Representative of the population under study.

In practice, "extra variables" are naturally present in every study. For example, surveying mall-goers in the morning vs the evening, or surveying people at one end of the mall vs the other. These variables are likely far more confounding than "must send a follow-up email".

People asked in the evening might be more annoyed, but you shouldn't assume this a priori and decide to skip surveying them. Just as "busy people" in the studied population might initially forget to respond to an email and need a reminder.

Just as a personal anecdote, I've dealt with this issue quite a lot. In my surveys, the people who respond initially are almost always eager to give glowing reviews. If we didn't send follow-ups, we would have extremely positively biased results. Maybe it is true that sending reminders makes respondents slightly more negative than they normally would be. However, we'd completely exclude "unsatisfied people" otherwise and the survey results would be worthless.
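The effect described above is easy to simulate. A minimal sketch with made-up numbers, assuming satisfaction scores are uniform on 1-5 and that the probability of answering the first email rises with satisfaction (5% to 25%), while reminder responses are roughly uniform:

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical population of 10,000 people with satisfaction scores 1-5.
population = [random.choice([1, 2, 3, 4, 5]) for _ in range(10_000)]

first_wave, reminded = [], []
for s in population:
    if random.random() < 0.05 * s:
        first_wave.append(s)   # satisfied people answer the first email
    elif random.random() < 0.10:
        reminded.append(s)     # answers only after a follow-up, any score

print(f"true mean:          {mean(population):.2f}")
print(f"first wave only:    {mean(first_wave):.2f}")
print(f"with reminder wave: {mean(first_wave + reminded):.2f}")
```

Under these assumed response rates, the first-wave mean overshoots the true mean, and folding in the reminder wave pulls the estimate back toward it. The exact numbers depend entirely on the invented response model, but the direction of the bias matches the anecdote.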


> "extra variables" are naturally present in every study

That's true, but it doesn't change the fact that if you ask people a second time to respond, that's an extra variable you created, not one that was naturally there already. It's not a good idea to create extra variables in the course of doing the study.

It's true that the people who respond to the survey without any further prompting are not necessarily going to be representative of the entire population. But that doesn't mean you can "fix" that by further prompting. What it actually means is that surveys are a tool with only limited usefulness. That's just an inherent, inconvenient truth about surveys.



