
I guess someone had to do this unethical experiment, but on the other hand, what is the value here? There's a high chance someone would later find these "intentional bugs"; it's how open source works anyway. They just proved that OSS is not military-grade, but nobody thought so anyway.


> They just proved that OSS is not military-grade, but nobody thought so anyway

...and yet FOSS, and especially Linux, is very widely used in military devices, including weapons.

Because it's known to be less insecure than most alternatives.


I assume they don't use the bleeding edge though


Like in most industrial, military, transportation, and banking environments, people tend to prefer a very stable and thoroughly tested platform.

What HN would call "ancient".


> They just proved that OSS is not military-grade...

As if there is some other software that is "military-grade" by the same measure? What definition are you using for that term, anyway?


> but nobody thought so anyway

A lot of people claim that there's a lot of eyes on the code and thus introducing vulnerabilities is unlikely. This research has clearly bruised some egos badly.


Nothing is perfect, but is it better than not having any eyes? If anything, this shows that more eyes are needed.


The argument isn't that having no eyes is better than some eyes. Rather, it's commonly argued that open source is better for security because there are more eyes on it.

What this research demonstrates is that you can quite easily slip back doors into an open contribution (which is often but not always associated with open source) project with supposedly the most eyes on it. That's not true for any closed source project that is definitely not open contribution. (You can go for an open source supply chain attack, but that's again a problem for open source.)


> it’s commonly argued that open source is better for security because there are more eyes on it.

> What this research demonstrates is that you can quite easily slip back doors into an open contribution

To make a fair comparison, you should contrast it with companies or employees placing backdoors into their own closed source software.

It's extremely easy to do and equally difficult to spot for end users.


Recruiting a rogue employee is orders of magnitude harder than receiving ostensibly benign patches in emails from Internet randos.

Rogue companies/employees are really a different security problem that's not directly comparable to drive-by patches (the closest comparison is a rogue open source maintainer).


Maybe for employees, but usually it is a contractor of a contractor in some outsourced department replacing your employees. I'd argue that in such common situations, you are worse off than with randos on the internet sending patches, because no-one will ever review what those contractors commit.

Or you have a closed-source component you bought from someone who pinky-swears to be following secure coding practices and that their code is of course bug-free...


The reward for implanting a rogue employee is orders of magnitude higher, with the ability to plant backdoors or weaken security for decades.

And that's why nation-state attackers do it routinely.


Yes, it’s a different problem that’s way less likely to happen and potentially more impactful, hence not comparable. And entities with enough resources can do the same to open source, except with more risk; how much more is very hard to say.


Despite everything, even the NSA is an avid user of Linux for its critical systems. That says a lot.


To make it a fair comparison you should contrast... an inside job with an outside job?


This is an arbitrary definition of inside vs outside. You are implying that employees are trusted and benign and other contributors are high-risk, ignoring that an "outside" contributor might be improving security with bug reports and patches.

For the end user, the threat model is about the presence of a malicious function in some binary.

Regardless of whether the developers are an informal community, a company, a group of companies, or an NGO, they are all "outside" to the end user.

Closed source software (e.g. phone apps) breaches users' trust constantly, e.g. with privacy-breaching telemetry, weak security, and so on.

If Microsoft weakens encryption under pressure from the NSA, is it "inside" or "outside"? What matters to end users is the end result.


The insiders are the maintainers. The outsiders are everyone else. If this is an arbitrary definition to you I... don't know what to tell you.

There's absolutely no reason everyone's threat model has to equate insiders with outsiders. If a stranger on the street gives you candy, you'll probably check it twice or toss it away out of caution. If a friend or family member does the same thing, you'll probably trust them and eat it. Obviously at the end of the day, your concern is the same: you not getting poisoned. That doesn't mean you can (or should...) treat your loved ones like they're strangers. It's outright insane for most people to live in that manner.

Same thing applies to other things in life, including computers. Most people have some root of trust, and that usually includes their vendors. There's no reason they have to trust you and (say) Microsoft employees/Apple employees/Linux maintainers equally. Most people, in fact, should not do so. (And this should not be a controversial position...)


The candy comparison is wrong on two levels.

1) Unless you exclusively run software written by close friends, both Linux and $ClosedOSCompany are equally "outsiders"

2) I regularly trust strangers to make the medicines I ingest and fly the airplanes I'm on. I would not trust any person I know to fly the plane, because they don't have the required training.

So, trust is not so simple, and that's why risk analysis takes time.

> There's no reason they have to trust you and (say) Microsoft employees/Apple employees/Linux maintainers equally

...and that's why plenty of critical systems around the world, including weapons, run on Linux and BSD, especially in countries that don't have the best relations with the US.


They were only banned after they accused Greg of slander when he called them out on their experiment and asked them to stop. They were banned for being dishonest and rude.


> A lot of people claim that there's a lot of eyes on the code.

Eric Raymond claimed so, and a lot of people repeated his claim, but I don't think this is the same thing as "a lot of people claim" -- and even if a lot of people claim something that is obviously stupid, it doesn't make the thing less obviously stupid; it just means it's less obvious to some people for some reason.


Eric Raymond observed it, as a shift in software development to take advantage of the wisdom of crowds. I don't see that he speaks about security directly in the original essay[2]. He's discussing the previously held idea that stable software comes from highly skilled developers working on deep and complex debugging between releases, and arguing instead that, if developers all have different skillsets, then with a large enough number of developers any bug will meet someone who thinks that bug is an easy fix. Raymond is observing that the Linux kernel development and contribution process was designed as if Linus Torvalds believed this, preferring ease of contribution and low-friction patch commits to tempt more developers.

Raymond doesn't seem to claim anything like "there are sufficient eyes to swat all bugs in the kernel", or "there are eyes on all parts of the code", or "'bugs' covers all possible security flaws", or so on. He particularly mentions uptime and crashing, so less charitably the statement is "there are no crashing or corruption bugs so deep that a large enough quantity of volunteers can't bodge some way past them". Which leaves plenty of room for less-used subsystems to have nobody touching them if they don't cause problems, patches that fix stability at the expense of security, absence of careful design in some areas, the number of eyes needed being substantially larger than the number of eyes involved or available, maliciously submitted patches being different from traditional bugs, and more.

[1] https://en.wikipedia.org/wiki/Linus%27s_law

[2] http://www.unterstein.net/su/docs/CathBaz.pdf


> A lot of people claim that there's a lot of eyes on the code

And they are correct. Unfortunately sometimes the number of eyes is not enough.

The alternative is closed source, which has proven to be orders of magnitude worse on many occasions.




