It does seem rather unethical, but I must admit that I find the topic very interesting. They should definitely have asked for consent before starting with the "attack", but if they did manage to land security vulnerabilities despite the review process it's a very worrying result. And as far as I understand they did manage to do just that?
I think it shows that this type of study might well be needed, it just needs to be done better and with the consent of the maintainers.
“Hey, we are going to submit some patches that contain vulnerabilities. All right?”
If they do so, the maintainers become more vigilant and the experiment fails. But the key to the experiment is precisely that maintainers are not as vigilant as they should be. It’s not an attack on the maintainers, though, but on the process.
In penetration testing you are doing the same thing, but you get the go-ahead from someone responsible for the project or organization, since they are interested in the results as well.
A red team without approval is just a group of criminals. Surely they could have found active projects with a centralized leadership they could have asked for permission.
I don’t know much about penetration testing so excuse me for the dumb question: are you required to disclose the exact methods that you’re going to use?
Yes. You have agreements about what is fair game and what is off limits. These can specify that nothing may be physically altered, which times of day or office locations are OK, whether it should only be a test against web services, or anything in between.
Do you? You have an agreement with part of the company and work it out with them, but does this routinely include the people who would be actively looking for your intrusion and trying to catch it? Often that is handled by automated systems which are not updated with any special knowledge about the upcoming penetration test, and most of those supporting the application aren't made aware of the details either. The organization is aware, but not all of the people who may be impacted.
It depends on the organization. Most that I've worked with have said everything is fine except for social engineering, but some want to know every tool you'll be running, and every type of vulnerability you'll try to exploit.
What you do during pentesting is against the law if you do not discuss it with your client. You're trying to gain access to a computer system that you should have no access to. The only reason this is OK is that you have prior permission from the client to try these methods. Thus, it is important to discuss the methods used when you are executing a pentest.
With every pentesting engagement I've had, there were always rules of engagement spelling out what kinds of things you are and are not allowed to do. They even depend on what kind of test you are doing (for example: if you're testing bank software, it matters a lot whether you test against their production environment or their testing environment).
If the attack surface is large enough and the duration of the experiment long enough, vigilance will return to baseline soon enough, I think. It's a reasonable enough compromise. After all, if the maintainers are not already considering that they might be under attack, I'd argue that something is wrong with the system; a zero-day in the kernel would be invaluable indeed.
And well, if the maintainers become more vigilant in the long run it's a win/win in my book.
"We're going to, as part of a study, submit various patches to the kernel and observe the mailing list and the behavior of people in response to these patches, in case a patch is to be reverted as part of the study, we immediately inform the maintainer."
>Your message would push maintainers to put even more focus on the patches, thus invalidating the experiment.
The Tuskegee Study wouldn't have happened if its participants had taken part voluntarily, and its effects still haunt the scientific community today. The attitude of "science by any means, including by harming other people" is reprehensible and has lasting consequences for the entire scientific community.
However, unlike the Tuskegee Study, it would have been totally possible to do this ethically: by contacting the leadership of the Linux project and having them announce to maintainers that anonymous researchers may experiment with the contribution process, by allowing maintainers to opt out if they do not consent, and by ensuring that harmful commits from these researchers never reach stable.
The researchers chose instead to lie to the Linux project and introduce vulnerabilities into stable trees, and this is why their research is particularly deplorable: their ethical transgressions, and the possible lies made to their IRB, were not done out of any necessity for empirical integrity, but rather, seemingly, out of convenience or recklessness.
And now the next group of researchers will have a harder time as they may be banned and every maintainer now more closely monitors academics investigating open source security :)
I don't want to defend what these researchers did, but to equate withholding syphilis treatment from unwitting people with wasting a bit of someone's time is disingenuous. Informed consent is important, but only if the magnitude of the intervention is big enough to warrant reasonable concerns.
>to wasting a bit of someone's time is disingenuous
This introduced security vulnerabilities to stable branches of the project, the impact of which could have severely affected Linux, its contributors, and its users (such as those who trust their PII data to be managed by Linux servers).
The potential blast radius, had their behavior been poorly tracked and not reverted, is millions if not billions of devices and people. What if a researcher hadn't reverted one of these commits before it reached a stable branch and a release was then built? Linux users were lucky that Greg was able to revert the changes AFTER they reached stable trees.
There was a clear need for informed consent from *at least* the leadership of the project, and to say otherwise is very much a defense or a downplaying of the recklessness of their behavior.
I acknowledged that lives are not at stake, but that doesn't mean that the only consequence or concern here was wasting the maintainers' time, especially when they sought an IRB exemption for "non-human research" when most scientists would consider this very much human research.
But it wouldn't let maintainers know exactly what is happening; it only informs them that someone will be submitting some patches, some of which might not be merged. It doesn't push people into vigilance about any specific detail of a patch, and it doesn't alert them that there is something specific to look for. If you account for that in your experiment's priors, that is entirely fine.