I am honestly not sure whether I strongly agree or disagree with either side. I see the argument for why it is unethical. These are trust-based systems, and that trust is being abused without consent. It imposes real costs on the victims, who must now spend actual time and mental energy processing that abuse of their trust.
On the flip side, these same techniques are almost certainly in active use today by both corporations and political actors. Outfits like Cambridge Analytica and Palantir are almost certainly doing these kinds of things, or working with companies that are.
The logical extreme of this experiment is testing live weapons on living human bodies to measure how much damage they cause, which is clearly, abhorrently unethical. I am not sure what distinction makes me see this experiment as less unethical if I apply any philosophical rigor. "AI-assisted astroturfing" is probably the most accurate name for it, and that is a weapon: a tool capable of force or coercion.
I think deliberately doing this kind of thing to demonstrate that it can be done, how grotesquely it can be done, and how easy it is to do, is a public service. The ethics can be debated, but I hope the larger lesson lands: we are placing our trust in systems that offer no guarantee of trustworthiness, and they can be manipulated in ways we do not notice.
Is the wake-up call worth the ethical quagmire? I lean towards yes.
There’s a utilitarian way of looking at it, that measures the benefit of doing it against the first-order harms.
But the calculation shouldn't stop there, because there are second-order effects: the harm of living in a world where the first-order harms are accepted, the harm to Reddit's reputation, and the distrust of any organization that would greenlight this kind of experiment.
On the one hand, this is not what you should expect from a university ethics board. On the other hand, this absolutely does happen covertly, and the "real" studies get put to bad use.
Though I do not agree with the researchers, I do not think the right answer is to “cancel culture” them away.
It's also ironic, because Reddit is itself a big AI business, training on your data and selling your data. Ethics, ethics.
What is Reddit doing to protect its users from this real risk?