
Princeton research collaborator here. Glad to answer questions about Rally.

> What "data"? Browsing history? Identity? Something else?

That depends on the Rally study, since research questions differ and studies are required to practice data minimization. Each study is opt in, with both short-form and long-form explanations. Academic studies also involve IRB-approved informed consent. Take a look at our launch study for an example [1].

> Why? What's in it for them? Since when was giving our data to third parties a good idea? There is literally no motivation presented here.

The motivation is enabling crowdsourced scientific research that benefits society. Think Apple Research [2], NYU Ad Observatory [3], or The Markup's Citizen Browser [4]. There are many research questions at the intersection of technology and society where conventional methods like web crawls, surveys, and social media feeds aren't sufficient. That's especially true for platform accountability research; the major platforms have generally refused to facilitate independent research that might identify problems, and platform problems often involve targeting and personalization that other methods can't meaningfully examine.

[1] https://rally.mozilla.org/current-studies/political-and-covi... [2] https://www.apple.com/ios/research-app/ [3] https://adobservatory.org/ [4] https://themarkup.org/citizen-browser



These "This Study Will Collect" and "How We Protect You" sections are really good. It probably wouldn't convince me personally to sign up, but it's as comprehensive as I would expect. It's a shame that these comments didn't make it into the blog post.


I think that the motivation of 'enabling citizen science' is not a very strong one. You will get very, very skewed results, more so than the typical WEIRD sample, if you conduct studies on the people for whom that is sufficient motivation.

A stronger motivation would be providing a product or service that tangibly adds value to someone's life.

After reading this, I have no idea how Rally would provide any tangible benefits to me.


Exactly. It is so weird to see all this marketing speak that makes it sound like users stand to benefit from something, when in the end this just gets people to work and hand over data for free to multi-billion-dollar universities.

We don't need any more studies or research to know that the best privacy policy is to not collect any data in the first place.


I know you mean well, but I think you completely missed the above commenter's point.

You've replied here with answers to address their (our?) potential concerns, but the commenter never said they had concerns about the project itself, only that this particular blog post doesn't "sell" or explain the value-add well. That's feedback on the project's communication strategy, not on what it's actually doing.

> > Why? What's in it for them? Since when was giving our data to third parties a good idea? There is literally no motivation presented here.

> The motivation is enabling crowdsourced scientific research that benefits society.

You seem to be confusing "theys". The question is what motivates participants, not what motivates researchers.


> You seem to be confusing "theys". The question is what motivates participants, not what motivates researchers.

On the contrary, you seem to be confusing “theys” yourself.

There exist participants who are motivated by participating in research that benefits society.

Just like there exist individuals motivated by lending their computing resources to the various @Home research efforts.


But if the participants are limited to people who are motivated solely by participating in research, wouldn't that add significant bias to that research?


Indeed, sampling bias is a large concern.

Nonetheless, much of the psychology research conducted in the US has made do with ridiculous sampling bias - the US college student is anecdotally considered the most-studied population in the world.


Doesn't the field of psychology have pretty serious issues with the replicability of their experimental results?


Indeed, anecdotally, if not empirically, that is the case. Nonetheless, psychology is a highly operationalized field.

In other words, every thorough study begins with an assessment and revision of the consensus language being used to describe reality.

On that front alone, psychology is one of the hardest sciences around.

Deep learning is directly attributable to psychology research, for what it is worth.


Personally I don't think that researchers have any more business doing this kind of surveillance than Google and company do.

The idea that this will benefit society seems naive to me. I feel like it will only serve to legitimize the practice by putting ostensibly trustworthy faces on the packaging.


Not just surveillance, but research conducted within corporate platforms. That means researchers would have access to both my data and a corporation's engine. If I think that Google already knows too much about me, do I get to opt in to whether that hyper-knowledge is shared with researchers? (Because I won't.)


> Personally I don't think that researchers have any more business doing this kind of surveillance than Google and company do.

As other commenters have noted, then you should decline to opt in to participating in research such as this.


> The motivation is enabling crowdsourced scientific research that benefits society.

Oh, well since it “benefits society”...

Tell me, how is it that you filter for the research that benefits society vs. the research that doesn't?



