A common misconception IMO is that running and owning your own infrastructure is somehow more secure. To that I lol, and I’m confident that the thousands of AWS/GCP/Azure/iCloud security engineers are all doing a more thorough job than you can. At the very very least they receive embargoed bugs which they often mitigate before the general public.
One doesn't have to expose it to malicious actors. It is most useful that way, sure. Mine is at 10.27.0.68. Have fun, hackers!
Also, I lol at most CVEs. A butterfly farted outside; uh oh.
Take the top one:
In Nextcloud Desktop Client 3.13.1 through 3.13.3 on Linux, synchronized files (between the server and client) may become world writable or world readable. This is fixed in 3.13.4.
You mean to tell me a few minor point releases botched permissions, leaving synced files world-readable [and possibly world-writable]? Oh no! The tragedy! Keep in mind most clients are single-user systems anyway.
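For the curious, that failure mode is trivial to reproduce. A minimal Python sketch (assuming a Unix-like system; this is an illustration of the permissions bug class, not Nextcloud's actual code path) of what happens when a client creates files under a fully permissive umask instead of clamping modes itself:

```python
import os
import stat
import tempfile

# Sketch of the bug class in that CVE: if a sync client creates files
# under a fully permissive umask instead of clamping permissions itself,
# the files come out world-readable and world-writable.
workdir = tempfile.mkdtemp()
old_umask = os.umask(0o000)              # worst case: mask nothing
path = os.path.join(workdir, "synced.txt")
with open(path, "w") as f:               # open() requests mode 0666
    f.write("secret")
os.umask(old_umask)                      # restore the previous umask

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o666 -- anyone on the machine can read and write it
```

A careful client would either set a restrictive umask or `os.chmod` the file to 0600 after creation.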
Judge them on the facts: there are vulns, and then there are vulns. CVEs are a sign of attention on a project, no more and no less.
I find that one concerning in an enterprise setup (which they target). Or the fact that the desktop client has 999 open issues. Or that the last version silently takes you off the stable channel. I could go on … Nextcloud desktop has severe quality control issues.
Yeah, one CVE is literally "You can use the macOS variant of LD_PRELOAD on the client to hook libc calls! Oh no!!" This is a bogus CVE; any application can perform arbitrary actions when its system calls are hooked, but it requires such a strong threat model that the adversary realistically gains no ground by doing so.
("A code injection in Nextcloud Desktop Client for macOS allowed to load arbitrary code when starting the client with DYLD_INSERT_LIBRARIES set in the environment")
Yeah, it's strange to me that's a CVE. That seems like "working as intended": if I, the owner of the machine, want to load other libraries, why shouldn't it respect that?
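To make the threat model concrete: whoever can set environment variables for your process before it starts already controls what it runs, so the env var grants nothing new. A hypothetical Python sketch (the variable name is made up for illustration):

```python
import os
import subprocess
import sys

# If an attacker can set env vars for your process launch (the precondition
# for DYLD_INSERT_LIBRARIES-style injection), they already have code
# execution on the box -- the injected library adds no new power.
env = dict(os.environ)
env["INJECTED"] = "attacker-controlled"   # hypothetical variable

# The child simply inherits whatever the launching party put in its env.
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['INJECTED'])"],
    env=env, capture_output=True, text=True,
)
print(result.stdout.strip())  # attacker-controlled
```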
You're right, of course. No way an individual can compete with an army of specialists.
But for some of us it is a bit of a hobby to run our own infrastructure. And some of it only ever runs on a private network.
I rolled my own docker setup for Nextcloud a few years ago, and couldn't be happier with the outcome. It does require me to log in and update the system and setup from time to time, but in my mind that's just a good excuse to drink a hot beverage and listen to podcasts.
For anyone hosting their own instance, Nextcloud offers this scan[0] of your public-facing URL which might come up with something worth fixing.
"Do everything" solutions go against the principle of minimizing the attack surface.
EDIT: More is not always better in security. With more people doing more things, the statistical odds of miscommunication and misconfiguration increase.
It also depends on whether this reflects a systemic issue with the codebase or the project is just getting much-needed attention from security researchers. More CVEs can just mean they're cleaning up vulns really well. But if they have critical vulns over and over again, that might indicate bad coding practices or carelessness.
Generally you use these disclosures to make directional decisions about infrastructure. The list of fixed and disclosed CVEs combined with the legacy PHP code base doesn’t really pass the security sniff test. You really wouldn’t know for sure without doing a full code audit.
>A common misconception IMO is that running and owning your own infrastructure is somehow more secure.
If done properly, CVEs don't matter that much. You create a headscale install on a Pi, and the headscale port and your router's SSH (key only) are the only things visible from the outside. Use anything other than a stock home router, i.e. something with real vendor support. And you are done.
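If you want to sanity-check that setup from outside your network, a quick Python sketch (the address is a placeholder, not a real host) that probes which TCP ports actually answer:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder address: run this from outside your own network and confirm
# that only the ports you intend to expose (key-only SSH + the headscale
# listener) answer; everything else should come back closed.
# for port in (22, 80, 443, 8080):
#     print(port, port_open("203.0.113.1", port))
```

A full scan with nmap is more thorough, but this is enough to catch an accidentally forwarded port.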
I think it depends on the CVEs and where they are. If it's a software vuln that requires root or some other complex prerequisites then w/e. But, if we're talking about low level problems in either the OS or network layer (e.g. firewalls, routers) then big clouds are most likely going to have that patched and rolled out more quickly IMO.
> thousands of AWS/GCP/Azure/iCloud security engineers are all doing a more thorough job than you
All these cloud services are just attack surfaces with a huge target on their backs. And the security engineers slip up too [0], in the case of Microsoft it's become more of a meme now. The North Korean hackers basically own them.
Somewhat depends on your threat model. The relative value of an iCloud/aws/gDrive 0day is going to be higher than Nextcloud. If you’re in the category of people concerned about this type of breach, self-hosting a PHP web app and claiming it’s somehow safer won’t save you either. For this risky population, neither solution works since attackers are willing to throw expensive exploits at your data in either scenario.
If you aren’t being specifically targeted, then you would care about low-hanging fruit discovered by something like automated scanning. Not exposing your service to the internet does solve this, assuming you’re confident in the stack which provides this isolation. But managing this stack and performing the risk calculus here is actually where the security horse trading happens. I think most people aren’t safer managing this themselves — arguably they’re actually worse off.
I have high standards for the confidentiality of my data. I care about things like lateral movement and the massive attack surface of the isolation tech meant to prevent it. I also won’t design monitoring and alerting, ensure a patch state, or perform code audits on Nextcloud and all the isolation tech required to secure it to a comparable level of security. Because of this, I instead reason about the cost of exploitation. I want it to be higher than what I believe Nextcloud provides, and I’d rather require an attacker to use an expensive 0day to extract my data off a cloud provider like Google than a potentially cheap one against my own infra.
Yes, there is a concept of "shared responsibility" in the cloud. Obviously the provider is going to handle some things and you have to take care of others.
I'm not a security specialist, but it seems to me that while managed services typically have better security and sysadmin resourcing, they also have the downside that their security can fail at a massive scale. If someone defeated the security of, say, GitHub, they could leak all the private repos stored there.
Managed services also have to accept connections from the public Internet, which on-premises solutions do not.
Banks aren't safe because they're unrobbable; they get robbed all the time. They're safe because they're the ones taking on the risk. Data isn't fungible like cash, though.
Running and owning your own infrastructure exposed to the outside world can be more insecure; running your own infrastructure at home on segmented networks, reachable only over WireGuard, will solve most problems.
I think the threat model isn't that these popular services are going to be attacked, but that they will engage in the denial of service themselves without legal recourse.
Like sure, someone particularly interested in your home nextcloud instance could probably find their way in eventually.
But if you are concerned more about dropbox killing your account due to nonpayment, cloud backups getting encrypted and the master key being lost, cloud engineers snooping on your files, cloud platforms targeting ads based on your downloaded files etc etc etc it offers an alternative.
I've had a disturbingly large number of friends cancel on weekend or late evening plans because something at Amazon broke, and they had to drive into a downtown office to fix it.
If you aren't paying drivers, that's a lot of margin, yeah? If you're the only driverless game in town, each of your vehicles can potentially earn the salary of a driver, like $20 an hour. They'd pay for themselves within a year pretty easily.
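Back-of-envelope arithmetic behind that claim, with utilization as an assumed number:

```python
# Back-of-envelope check of the "pays for itself within a year" claim.
# The utilization figure is an assumption, not from the thread.
driver_wage = 20            # $/hour the robotaxi no longer pays a driver
hours_per_day = 12          # assumed daily fleet utilization
days_per_year = 365

yearly_savings = driver_wage * hours_per_day * days_per_year
print(yearly_savings)  # 87600
```

Whether ~$88k/year actually covers the vehicle plus its sensors, remote ops, and depreciation is exactly the unit-economics question the replies below raise.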
Can it? You still need a ton of very expensive infra, engineering, ops to manage all of this.
Waymo has clearly been playing the “but it’ll be cheaper” game for a long, long time now. It makes sense if you squint really hard and fudge some numbers about unit economics.
If the expected outcome is for it to be eventually cheaper, then why, when I open the app, does it cost almost twice as much as an Uber or Lyft? You’d think they would want to at least convince investors and train customers that the savings are real, and that they’re going to prove it by passing them on.
Waymo obviously has nearly no economies of scale yet. We are just entering that part of the regime and it is going to be 2-3 years more before we are spreading those costs across a large fleet.
I think the companies cited in this article might be weird to compare.
Apple is very RTO-heavy because they’re an old-school hardware company. Hardware work is easy to demand be done in office because of: (1) Apple secrecy and prevention of leaks and (2) access to lab equipment. #2 likely holds true for SpaceX as well.
Adding Microsoft to the mix is weird as nobody I know there actually RTOs.
I think people need to actually specify which roles (senior? engineering?) in tech we are discussing RTO about here. I agree that for most software engineering it backfired. But if you’re an Apple hardware engineer, there aren’t many places in town that’ll pay you as much, so you’ll accept whatever horrible RTO hand you’re dealt. Companies apply these rules to everyone, which is very, very stupid IMO.
I think the most interesting part about this, from the inside, is the rationale behind RTO. It’s always the same, citing culture, collaboration, or other fuzzy things. It is never quantitative. Are you telling me that the people making these decisions are doing so without data? I think that’s unlikely; it’s just that the data isn’t in their favor, and execs are smart enough not to let remote-versus-not become yet another bargaining chip for an employee, let alone a senior one.
TLDR, I think senior vs. not-senior in tech is likely too much of a generalization. But the people with the actual data aren’t speaking up, probably because discussing the results doesn’t benefit them.
CHERI vs MTE is a bit of a nuanced topic. At least part of the limiting factor for MTE is that you get a finite number of tag “colors”, which opens the door to some form of probabilistic attack. Of course this helps with defense in depth, as it’s yet another layer of security, but it isn’t as strong a prevention as a CHERI capability, for example.
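As a rough illustration of why the small tag space matters, assuming the common 4-bit MTE tag stored in the pointer's top byte:

```python
# MTE implementations commonly use a 4-bit tag, i.e. 16 "colors". A blind
# guess at a tag therefore lands with probability 1/16 per attempt --
# useful defense in depth, but probabilistic, unlike an unforgeable
# CHERI capability.
TAG_BITS = 4
colors = 2 ** TAG_BITS
p_single_guess = 1 / colors
print(colors, p_single_guess)  # 16 0.0625

# Probability that at least one of n independent guesses matches:
n = 8
p_any = 1 - (1 - p_single_guess) ** n
```

So an attacker who can retry silently (e.g. in asynchronous-reporting mode) erodes the protection quickly, which is the gap a deterministic capability model doesn't have.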
I've just started down that route. I've got the Nitrokey HSM 2 in the mail, and have heard the advice on using two levels (root on the key, and an intermediary on the device for easier revocation). I mainly want to issue client certificates so that I can expose internal sites on the public Internet via proxy without having to require a VPN for all of my users, though I'm also interested in certificate-based SSH.
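A minimal sketch of the client-certificate gate on the proxy side, using Python's stdlib `ssl` (file names are placeholders I made up, not from the setup above):

```python
import ssl

def make_mtls_server_context(ca_pem: str, cert_pem: str,
                             key_pem: str) -> ssl.SSLContext:
    """Build a TLS server context that only admits clients presenting a
    certificate signed by your (intermediate) CA -- the 'client certs
    instead of a VPN' setup described above. Paths are placeholders."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED       # reject clients with no cert
    ctx.load_verify_locations(ca_pem)         # trust only your own CA chain
    ctx.load_cert_chain(cert_pem, key_pem)    # the proxy's own cert + key
    return ctx

# Not invoked here since it needs real key material on disk, e.g.:
# make_mtls_server_context("intermediate-ca.pem", "proxy.pem", "proxy.key")
```

Reverse proxies like nginx express the same thing declaratively (`ssl_client_certificate` + `ssl_verify_client on`); the point is that the trust anchor is your intermediate CA, so revoking the intermediate cuts off every issued client cert at once.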
https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=nextcloud