It's time to decentralize the internet. There is no good reason why we can't have email, webpages, photos, even facebook-like social stuff housed on our own machines in our own homes (or some other place under our control).
The current situation is akin to having to travel to some centralized letter-reading facility in order to read letter mail. Your grandma sends you a letter in the mail and you have to go to a central facility downtown, then prove your identity, and then they hand over the (opened) letter.
We put a man on the moon more than 40 years ago. We must be able to sort this out.
> It's time to decentralize the internet. There is no good reason why we can't have email, webpages, photos, even facebook-like social stuff housed on our own machines in our own homes (or some other place under our control).
I think there is a good reason. Who wants to spend the time setting up and running a server? I happen to run my own, but it is definitely not something I would recommend to my friends and family.
Maybe someone will come along and create a super easy to install and low-maintenance server platform, but if everyone uses that then it is still "centralized" in the sense that everyone is running homogeneous setups. I think the key is that everything should be based on open protocols and formats. Email and the web are already there. Tent.io might be a good place to start for the social media stuff. That seems far more important than geographical dispersion of physical servers.
If you have a smartphone, it allows the carrier (=client) to access the device (=server) remotely. Same for cable modems. Maybe you also have a Wi-Fi router: it is listening for HTTP requests because it has a "Web GUI", so it is a web server, among other things. Maybe it also has a "backdoor" for remote login, as reports continue to show this is common.
The idea is that your devices are listening for incoming connections and have data stores to serve on demand. You may disagree, but I would call these servers.
Many people manage to set up Wi-Fi routers at home. And with a modem or smartphone, there is almost no setup: you just turn it on and it starts serving. So I'd argue that "running your own server" is actually something anyone can do. Maybe the real barrier to "running your own server" is just confusion over terminology.
You can cache the data. I mean, just think about how Git works. You can sync the information at random intervals between peers, and you don't need a direct connection all the time. A well-designed P2P decentralized network solves all these problems: A can send to C through B if A and C are not currently connected, and so on.
In fact, these are solved problems, and implemented in e.g. Freenet as far as I know.
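To make the store-and-forward idea concrete, here is a minimal sketch (peer names and message contents are invented) of the Git-style sync I mean: each peer keeps a set of messages and merges with whoever it happens to meet, so A's data reaches C through B without A and C ever connecting directly. Real systems exchange deltas and sign content, but the shape is the same.

```python
# Minimal store-and-forward sync sketch; names and data are made up.

class Peer:
    def __init__(self, name):
        self.name = name
        self.messages = set()   # everything this peer has seen so far

    def publish(self, text):
        self.messages.add(text)

    def sync(self, other):
        # Both sides end up with the union of what they had, like fetching
        # all objects from a Git remote (a real system would exchange deltas).
        merged = self.messages | other.messages
        self.messages = set(merged)
        other.messages = set(merged)

a, b, c = Peer("A"), Peer("B"), Peer("C")
a.publish("photo-album-2014")

a.sync(b)   # A and B happen to be online at the same time
b.sync(c)   # later, B meets C; A and C never connect directly

print("C has:", c.messages)   # {'photo-album-2014'}
```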
My mom can't install that herself, and wouldn't be able to find information on how to do it by browsing the freedombox website.
It's obvious that the current FreedomBox website is targeted at computer geeks only, not at general users. It is not "super easy to install" if it isn't granny-proof.
Email was created for a time when connections were expected to be unreliable. Email servers respond to a failure to connect by retrying at increasing intervals for several days.
If your power at home has gone out, then very likely one of these two items is also "off":
1) your internet connection endpoint
2) your wireless router
At which point you could not read any of the emails that an external "service" might receive for you anyway.
Plus, as another poster already stated, email servers retry several times to deliver a message (as required by the spec), so the email just waits in the sender's queue until your power is back, and then arrives a few hours late.
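As a rough illustration of that retry behaviour, here is a sketch of the kind of loop a sending agent runs. The hostname, addresses, and retry schedule are made up, and a real MTA keeps the message queued on disk and retries for days rather than holding it in memory.

```python
import smtplib
import time

# Hypothetical destination, for illustration only.
DEST_HOST = "mail.example.home"       # your home server, possibly offline
RETRY_DELAYS = [60, 300, 1800, 7200]  # back off: 1 min, 5 min, 30 min, 2 h ...

def try_deliver(sender, recipient, body):
    for delay in RETRY_DELAYS:
        try:
            with smtplib.SMTP(DEST_HOST, 25, timeout=30) as smtp:
                smtp.sendmail(sender, recipient, body)
            return True                       # delivered
        except (OSError, smtplib.SMTPException):
            time.sleep(delay)                 # destination down: wait, retry
    return False                              # only bounce after days of this

# try_deliver("grandma@example.org", "you@example.home",
#             "Subject: hi\n\nHello!")
```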
> There is no good reason why we can't have email, webpages, photos, even facebook-like social stuff housed on our own machines
You know, this is exactly how the internet works today. You can do it if you want. You will find that you're gonna have to spend some (some would say a lot of) time to amass enough knowledge to become self-sufficient. But there's no big technical issue, there is only a mindset issue. Do you actually want to spend the time and face the downtimes and low speeds that having a real internet would incur, or do you want to remain in your comfy walled garden?
I'm sure what you really meant was "It's time to decentralize the services we use".
> We put a man on the moon
"We" as in "The government and countless private contractors, all with an unlimited budget", not as in "We the people". The centralization you see today is just the same thing.
> It's time to decentralize the internet. There is no good reason why we can't have email, webpages, photos, even facebook-like social stuff housed on our own machines in our own homes (or some other place under our control).
This is how the internet is designed, and you can already do this today. In my case, I host my own dns, email, and my own webpages, locally on my home connection. You just have to be willing to learn, and willing to do. Once you've learned, the actual "do" is rather trivial.
>This is how the internet is designed, and you can already do this today. In my case, I host my own dns, email, and my own webpages, locally on my home connection. You just have to be willing to learn, and willing to do. Once you've learned, the actual "do" is rather trivial.
That's great that it works for you. But there are lots of people who are perfectly capable of doing this who don't want the hassle (let alone the huge numbers of people for whom this is completely impossible). I spent most of my youth screwing around with computers and learning a lot about how all this stuff works, and it was great fun. Now that I'm getting older, it is frankly growing tiresome. The last thing I want to do on a Saturday -- when I should be playing with my kids and enjoying life -- is fuss with some file server that is acting up, preventing my wife from posting vacation photos. I've got enough work to do around the house as it is. I don't need to be on call 24/7 for IT infrastructure support.
My point was: "there is no change to the internet at all necessary for this to happen".
The original comment to which I replied implied that the commenter believed the internet needed to change in some way in order to decentralize. My point was it was already, and still is, natively decentralized. What is preventing "decentralization" is convenience, and lack of knowledge, not the underlying architecture of the internet.
It's true... but I view this as a technology problem, not a problem _in principle_. After all, we don't all need to be engineers and electricians to operate a refrigerator or our automobiles, for example. Society has built up the infrastructure to support individuals owning cars. We aren't all forced to use buses.
Well, many can, but far from all in the context of hosting from home. If you are stuck behind NAT at home then you can't without paying for server resources externally, and that is going to be more of a problem over the coming years as IPv4 exhaustion increasingly bites and IPv6 (despite recent acceleration) takes a fair long time to become ubiquitous.
> In my case, I host my own dns, ..., locally on my home connection.
What do you do for secondary DNS? Some services respond differently to "address found from name, but server not responding" than they do to "couldn't lookup name".
One particular problem I've seen reported relatively recently is bounced mail when the sender's ISP replaces "could not contact DNS server to look up address" with "I know, I'll show them my own error/adverts web page": of course the user's MTA doesn't know anything about web pages; it tries to connect to the host address provided to send mail and is told to go away (either no mail service found, or there is one but it refuses mail for that recipient). If you had a secondary, the lookup would fall through to it, so the right (but currently non-functioning) address would be given to the MTA; it would fail to connect and drop the message back into the queue for a later retry.
It is rare that it makes any difference these days (many agents that were sensitive to this, such as mail servers, no longer are, because it is quite common for small arrangements not to have secondary DNS and for ISPs to break DNS in the name of making an extra penny here or there), but it is still worth having an externally hosted secondary DNS host even if everything else is a single point of failure (hosted on one box on one link). For a small home concern it needn't cost more than ten or fifteen $/year either to run your own bind instance on an inexpensive VPS or to use a specialist DNS service.
I run most things from home too, but have external resources (a backup location and a web service that occasionally needs more bandwidth than the outgoing link at home can comfortably provide), so my secondary services sit out there with them.
> If you are stuck behind NAT at home then you can't without paying for server resource externally
Actually, no. I run NAT on my firewall/router, and still host the services. It just takes tweaking the Linux iptables rules to run externally visible services, while still running NAT in general. Again, a "knowledge" and/or "convenience" issue, but not a technical issue.
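For what it's worth, the tweak is essentially a DNAT (port-forward) rule plus a FORWARD accept. A minimal sketch, driving iptables from Python; the interface name, internal address, and ports are assumptions about my layout, not anything universal, and ip_forward must already be enabled on the router.

```python
import subprocess

WAN_IF = "eth0"              # assumed external interface on the router
SERVER = "192.168.1.10"      # assumed internal host running the services
PORTS = [80, 443]            # externally visible services

def forward(port):
    # Rewrite the destination of incoming connections to the internal host...
    subprocess.run(["iptables", "-t", "nat", "-A", "PREROUTING",
                    "-i", WAN_IF, "-p", "tcp", "--dport", str(port),
                    "-j", "DNAT", "--to-destination", f"{SERVER}:{port}"],
                   check=True)
    # ...and allow the forwarded traffic through the FORWARD chain.
    subprocess.run(["iptables", "-A", "FORWARD", "-p", "tcp",
                    "-d", SERVER, "--dport", str(port), "-j", "ACCEPT"],
                   check=True)

for p in PORTS:
    forward(p)
```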
> What do you do for secondary DNS?
I use Afraid.org for secondary DNS, but if my home link is down (hasn't happened yet...) then the service (email/web) is also down so I'm just "offline" until the link comes back up.
> I run NAT on my firewall/router, and still host the services.
By "stuck behind NAT" I was meaning a NAT or 6-to-4 arrangement that is not under your control (so you can't configure port forwarding). This isn't all that common in the US or EU aside from mobile and satellite based providers (and you aren't likely to run servers off them!) but elsewhere it is more of an issue and will eventually become so everywhere where IPv6 take-up doesn't pick up fast enough.
> so I'm just "offline" until the link comes back up
I'm not sure what effects are common now (if any), but historically some services responded differently to the two downtime situations "DNS lookup worked, but I can't connect to that address" and "DNS lookup failed" - some MTAs used to be more likely to bounce rather than requeue, for instance - so I always recommend topologically distant secondary DNS even for a single server+link situation. If you branch out later (for instance I currently have most things at home but a few bits external which I don't want down if my home link blips) you already have it set up, so you don't have the extra work to do. Also, secondary DNS is required by the relevant RFCs, so if nothing else do it for that reason (I know you already do; I've added this bit for other readers should they stumble here in future).
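The distinction the sending agent sees is roughly the one below: a small sketch, with a made-up hostname, of why "name didn't resolve" and "resolved but couldn't connect" can lead to different decisions (bounce vs requeue).

```python
import socket

def probe(host, port=25):
    try:
        socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    except socket.gaierror:
        # The name did not resolve at all: some older agents treated this
        # as a permanent error and bounced the message.
        return "lookup failed"
    try:
        with socket.create_connection((host, port), timeout=10):
            return "reachable"
    except OSError:
        # The name resolved but the host is unreachable: clearly temporary,
        # so the message goes back on the queue for a later retry.
        return "resolved but connect failed"

# print(probe("mail.example.home"))   # hypothetical home mail server
```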
So I'm looking at moving my VPS into a box at home, but I have an IP that changes every so often. What's the best way to fix that? I've got no problems with DNS being hosted on Route53 or something else.
Yep. They give you a little program that runs on your computer that checks your external IP address at frequent intervals and updates the DNS records when it changes. Better yet, some home routers have a configuration page where you can select among popular Dynamic DNS providers and then you don't need to run the proprietary program. My ISP-provided router (FiOS) has ZoneEdit in the list which I've used for years even before I had FiOS, so it was a nice surprise when I found it there. Just for sake of example here's a link: http://www.zoneedit.com/dynamicDNS.html
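The little program is essentially this. A sketch only: the update URL and parameters are hypothetical (check your DDNS provider's documentation for the real ones), and api.ipify.org is just one example of a public "echo my IP" service.

```python
import time
import urllib.parse
import urllib.request

# Hypothetical provider endpoint and credentials.
UPDATE_URL = "https://dyndns.example.net/update"
HOSTNAME, TOKEN = "home.example.org", "secret-token"

def external_ip():
    # Ask a public IP-echo service what address we appear to come from.
    with urllib.request.urlopen("https://api.ipify.org") as r:
        return r.read().decode().strip()

last = None
while True:
    ip = external_ip()
    if ip != last:                      # only update when the address changes
        query = urllib.parse.urlencode(
            {"hostname": HOSTNAME, "myip": ip, "token": TOKEN})
        urllib.request.urlopen(f"{UPDATE_URL}?{query}")
        last = ip
    time.sleep(300)                     # check again in five minutes
```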
Some registrars even provide such a service for free (well, included in the annual registration fee), though look for reviews to see how reliable your registrar's is before trying it in anger.
Most people don't need it, but there are reasons it can be convenient. And some just like to do everything in-house either as a learning exercise or for control freakery reasons.
Control of TTL values is one example. Most registrars use 4 hours these days, but it used to be that 24 hours was the value used by most (with no option of anything else). That can be a minor convenience if you expect to move things around much. I have all mine set to 5 minutes (not a great idea for anything high-traffic, but nothing of mine is). If you know what you are doing, running a small DNS service is no great hardship at all (though it is surprising how many people don't get it right). Custom dDNS is another reason you might want this (though I think some registrars and specialist DNS hosts offer this for little or no cost these days).
In my case it costs nothing, as the three bind instances I run live on geographically separate machines that I already have for other reasons (home line, external web service, backup location) - if you have no external resources already then you'd need to pay for somewhere to host a secondary server, of course (cheap, reliable enough, and fast enough VPS services are common, so that needn't be much cost - though a specialist DNS service needn't be any more expensive these days either and will likely be more scalable than anything we set up manually).
In addition to the other replies, which are also good reasons, remember that DNS poisoning is a real thing. Running unbound[1] to check DNSSEC signatures HAS discovered invalid results, and you can bypass some (but not all[2]) of those problems by bypassing the bad (ISP/whatever) resolver.
There really isn't much of a performance hit by recursively resolving DNS - it all gets cached anyway.
[2] It protects against a resolver that lies, but race conditions (e.g. NSA/QUANTUM) are not affected. Hopefully, DNSSEC itself protects against poisoned results, regardless of the method.
Not what I had in mind. Ignoring the NXDOMAIN results, what makes you trust the VALUE of the NS/A/AAAA/MX/whatever records you get from Google (or any other resolver)?
Because I have frequently seen provably-wrong results from other resolvers, and some highly-suspicious results from 8.8.8.8 on occasion (though I haven't checked particularly often).
The point being, unless you're proving the results with DNSSEC (or similar), you can't trust any source.
Originally as a learning exercise (but that was about 15 years ago now). Now just because it is not hard, and I don't have to put up with any arbitrary "rules" from a provider.
* ISPs discouraging or even prohibiting servers, due to bandwidth or other concerns.
* Laziness/generally complacent attitude. I think this is a deeper issue than the first one, since a change here may make ISPs reconsider. A lot of people just don't want to learn about how to setup their own server for email/webpages/etc. and would willingly give control to someone else despite the privacy/freedom implications of doing so. In other words, they value "easy" over control, privacy, and freedom. The same reason walled-garden DRM ecosystems have been so successful.
I'm strongly in support of a decentralised and more "user-centric" Internet, where almost everyone has their own servers and websites, and knows more about how things work, but I think there has to be a huge shift in social attitudes first.
Decentralization is absolutely necessary, it's just completely counter to the way the economy on the Internet currently works. SAAS falls apart if everyone is hosting their own stuff, let alone the do it for free and sell ads in it model.
Unless people are willing to start paying software developers directly en masse for decentralized locally run versions of products we currently get for free it isn't going to happen. We're even getting to the point where platforms where you could conceivably do that are being killed off.
What would be a game changer is the equivalent of Android for servers. A proper standard environment with app interoperability, and potential to create app stores, just for server code instead.
It's funny, isn't it? The current state of affairs and the direction we're moving in reminds me (since I'm old enough to remember) of the early days of server-based computing and dumb terminals: VAXes in some room somewhere and DEC terminals for the people. Now it's Google server farms and Chromebooks.
I think it's time to stop cooking up technology solutions to political problems. It's a lazy hack and a distraction.
The counter-parties that encryption will supposedly neutralize are commercial entities doing monitoring for advertising (or other purposes) and government surveillance. Commercial entities have all sorts of ways to collect said information (ie. by compelling you to opt-in in exchange for services). The government has a long history of successfully breaching encryption when it's motivated to do so.
The solution is to leash these powerful entities with regulation. Elect Senators with the courage to curb the intelligence community -- it was done before after the excesses of the Vietnam Era.
> I think it's time to stop cooking up technology solutions to political problems. It's a lazy hack and a distraction.
I very much agree with this philosophy in general, but when government and corporate interests are fighting their political battles with tech by designing software to exploit you, there is no option but to push back on multiple fronts.
This is an interesting idea, although as an alternative analogy: in an apartment complex, you would go down to your mailbox and insert a key to authenticate you to retrieve your mail. I think the biggest challenge to having on-premises software for all of this stuff is the maintenance of it - but a decentralized social media platform certainly sounds interesting.
There is good reason: it's bad for software businesses.
Even if you had a turnkey solution (it doesn't exist yet) to self-host emails & stuff, the companies selling it wouldn't be able to get good search engine rankings.
The web giants have no interest in solutions that don't require them to host your data (so they can serve you advertising or hosting plans). Even small shops are usually built around a Service as a Software Substitute (SaaSS) business model and don't give a crap about you not depending on them once you start using their application.
With sufficient transparency and checks/balances, wouldn't a partially centralized internet be acceptable?
It's not an easy solution by any stretch of the imagination. If you can come up with a realistic solution then please let us know and we will help implement it.
Nothing is stopping you from hosting servers at your home or place of business (except maybe specific ISPs), isn't this how the internet was originally?
It's not impossible, it's just very, very unlikely in the context of current economic realities. Those that are currently begrudgingly trusted have time and time again proven themselves so untrustworthy that debates on Hacker News are not about whether, but about how thoroughly, they should be able to fuck the end user over and justify it with "because money".
> With sufficient transparency and checks/balances, wouldn't a partially centralized internet be acceptable?
Finding an alternative seems like the path of least resistance. The people running the checks and balances have proven themselves interested only in the money of the untrustworthy parties above, and in their own intrusiveness.
While trust is unlikely given the current state of politics, there is a better reason to not trust anything: the same reason you give up unnecessary privileges in a server jail.
Limiting trust doesn't just protect against malicious or targeted attacks - it also helps with plain old bugs. In general, limiting access and trust to what is actually needed is a good engineering practice.
Or, simply: we gave up on /etc/hosts.equiv (and .rhosts) a long time ago, and it would be a bad idea to repeat that kind of design mistake.
I for one do not want to be lumbered with maintaining my family's collection of micro-services, making sure they are working OK, not compromised, fully patched security-wise, and so forth. It's bad enough being expected to disinfect their laptops every other time they fail to follow my advice on being careful about what they browse.
This is a completely specious argument. Of course there is no reason why we can't do a thing. We can do basically anything with computers. That does not mean that anything we do is a good idea.
People who don't understand how systems work often jump to decentralization as a solution to any problem. I'm not sure why. Maybe they've been burned in the past by a centralized service, and think the best solution is to remove what they perceived as their roadblock: the centralization.
Our current situation is nothing like traveling. You sit in your home, or in your car, or at your work, or in a park, or on an airplane, and you can access anything you want anywhere in the world, instantly. There's virtually no time between wanting to read your letter and receiving it. It's fast, it's easy, and it's uncomplicated. And you don't have to prove your identity if you've done it once in the past 30 days. Do you realize how that is possible? None of that is due to decentralized services. Every single service you use on the web is provided via centralized services. And they work beautifully.
Your wireless card listens for access point probes. It hears one from the AP it wants, and it connects. The AP begins the process of creating an encrypted connection. It checks that your credentials are valid, or passes the request up to a RADIUS server that holds all the valid credentials of all the users. It then establishes the connection. Your host polls for a DHCP server asking for a lease, and the DHCP server, diligently making sure no leases have a conflict, gives you all the local network information you'll need. Your client sends a DNS request to the DNS server, which resolves, caches, and returns your request by polling the closest possible upstream DNS server, giving you a result that's geographically closest to you. Your browser initiates a connection, across your AP, across the cable modem, across the DSLAM, across the POP, across the MAN, across the many myriad routers, switches, traffic filters, interlinks, to finally get to the load balancers that maintain a balance of requests between the web servers of your destination. The HTTPS session is established after you verify the server is who they say they are, and then you tell the server who you are over HTTP, finally asking for your letter, and receiving it. And each router along the way has its gateway configured and its routing tables filled with curated BGP tables so that they know exactly where to send your data to get you from a cable modem in Indiana to a datacenter in Brazil.
What would happen if we replaced every one of those centralized services with a decentralized model?
For one thing, you'd have less bandwidth and higher latency. All the devices on the internet would be constantly communicating, trying to find consensus, trying to update routes, trying to search distributed rings (my personal favorite topology) to find peers, connect services, authenticate trust relationships, query data, update indexes, etc. To say nothing of the potential for disruption of networks by various attacks or the difficulty of troubleshooting a random network failure in a massive decentralized network.
For another, storing data on your own host in your own home is taking 100 steps backward. As a very simple comparison, it's like taking your valuables out of a safe deposit box in an FDIC-insured bank and putting them on the front dash of your car. Even besides the security concerns, if you have a fire in your house, there goes your data! It's just a really dangerous idea to put data you care about in a place that's easily available to thieves and isn't backed up to an offsite location and maintained by professionals. That's what centralized services were meant to provide. It makes no sense to try and support those scenarios in a decentralized way, because the very idea of decentralized services is contrary to high-availability data.
Final thought: we put a man on the moon with centralized services. If we had done it using decentralized services, it would have taken a lot longer than it did with a lot more work, for exactly the same result. Sometimes it just makes more sense to be centralized.
I can't be arsed posting a 9 page rebuttal with endnotes (most of my comments of that nature have 1 point) so I'll just pick on a few things:
>[big description of how the internet works]
As has been pointed out elsewhere in this thread, the internet was designed from day 1 to be decentralized. None of your examples count as "centralization" at the society level. There are millions of DHCP servers etc. The exception is DNS, which was a centralized replacement for the original decentralized mechanism of hosts files - an understandable decision as software techniques for decentralized consensus were unknown at the time. Fortunately we can now correct this, with Namecoin or similar.
>All the devices on the internet would be constantly communicating, trying to find consensus, trying to update routes, trying to search distributed rings (my personal favorite topology) to find peers, connect services, authenticate trust relationships, query data, update indexes
You seem to have a very detailed idea of how a "decentralized" internet would work at the technical level. Sounds like you're describing a global meshnet, as imagined today. Obviously we do not currently have the tools to develop an efficient global meshnet. However, just because you cannot personally imagine one doesn't mean it isn't possible.
>FDIC-insured bank
Must have missed something - what's Facebook's FDIC equivalent? I wasn't aware that coughcloudcough services had a regulatory body that would compensate me if they failed to do their job.
>It's just a really dangerous idea to put data you care about in a place that's easily available to thieves
I agree. How exactly is a hard drive in my basement more accessible to thieves than a data center (a much juicier target) run by people of unknown competence and ethics? Of course, if you're paranoid about your house burning down, there's nothing stopping you from putting an encrypted, uuencoded zip of your files on pastebin.
>the very idea of decentralized services is contrary to high-availability data
Bittorrent: Faster, cheaper, and more robust than a central server.
>we put a man on the moon with centralized services
Really? I thought we did it by funding a whole bunch of aerospace companies to design and build different pieces. Sounds pretty decentralized. "But the funding came from the government!" The funding came from taxes, which came from millions of people, who would not have had that money were it not for the economy! Now there's a whopping great decentralized system for you. Shall we switch to a planned economy? Would that be more efficient?
The best way to make a system do what you want is design its ground rules from the get-go such that it self-organizes that way.
The only problem I see with https everywhere is the current CA system. I don't trust CAs, and I don't want to pay them. If we can get rid of them somehow there is nothing in the way of https everywhere anymore. I really like the http://convergence.io/ approach, but anything else that gets rid of a central authority I have to trust will do for me.
I don't trust CAs either but I use self-signed certificates when I want to collaborate securely online with people I know offline --- just give them a hardcopy of the certificate for them to verify and then add to their browser. It's unfortunate that newer browsers are making it harder to do this; I can see the justifications for this being in the name of "security", but can't help thinking that it's another way to keep the CAs happy in their monopoly over trust.
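In practice what goes on the hard copy is usually the certificate's fingerprint rather than the whole PEM blob, since that is what people actually compare. A small sketch of computing one (the file name is hypothetical):

```python
import hashlib
import ssl

# Hypothetical path to the self-signed certificate you handed out.
PEM_FILE = "my-server.crt"

with open(PEM_FILE) as f:
    der = ssl.PEM_cert_to_DER_cert(f.read())   # strip the PEM armour

fingerprint = hashlib.sha256(der).hexdigest()
# Print it in the familiar colon-separated form for reading over the phone
# or comparing against a printout.
print(":".join(fingerprint[i:i + 2]
               for i in range(0, len(fingerprint), 2)).upper())
```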
I'm wondering how well this scales. If you have thousands of upon thousands of people/companies that you trust, how do you ensure that they all remain trusted? What happens if you need to go through all of the certificates to ensure they can still be trusted?
You mean like https://www.startssl.com/? Their standard certificates are accepted by all major browsers released since 2010 (see http://en.wikipedia.org/wiki/Startssl#Trustedness) and are free for non-commercial use (I've heard claim of trouble with Windows Phone 7 devices but I've not had opportunity to check that myself).
http://www.cacert.org/ is another option, but their CA cert is generally not trusted by default, so it is no good for publicly targeted services.
The current implementation is far from perfect, but that is more than a tad strong.
And it need not be as bad as it is, we just need to find the right changes to the process and get everyone to agree on them...
Don't tar all PKI based solutions with the same overly large brush.
> You mean the one with a terrible user interface
I'll not disagree with you there. Though for free are you expecting perfection? It at least does the job.
> who charge 25$ to regenerate a cert?
Pro tip: backups.
If you properly protect your keys and certificates you only need to pay if you need the old cert revoked (this is annoying if something beyond your control makes you need a full re-sign rather than a reissue, but things like Heartbleed that necessitate this are hardly common).
Trust me, that is not an accusation I bandy about lightly.
<rant intensity="120%">
It's a scam in that SSL purports to verify identity, when really you're trusting the CA to handle that; some do more, some do less, there's no standard, no real accountability beyond major cockups when the community decides to disown you out of necessity for bad behavior (see Diginotar from not that long ago). Having a CA signed cert proves nothing beyond that you gave some person who is trusted some amount of money who ostensibly verified something.
It's a racket in its implementation. The PKI model used nowadays is rotten and exploitative. Renewals which exist as a pure profit vehicle for CAs (upwards of $100 to run some code and generate a hash? Fkn seriously?), the implementation in every browser which treats a self-signed certificate as coming from a known bad guy, or the slightest misconfiguration as same.
But every single user out there has it drilled into their head to look for the little lock (or nowadays, the much more expensive, by a factor of ten or so, green bar) before shopping, and every browser out there throws really scary-looking errors (Chrome's red screen of doom) for something so much as an expiration date. The CAs can effectively say "Gee, that's a nice commerce website you have there, would be a shame if something were to happen to it and all your customers got scared off since you didn't pay your protec^H^H^H^H renewal fee" - some basically do, especially around renewal time.
Charging to regenerate or revoke a certificate (again, completely automated processes with zero human interaction required) is just rent-seeking dick behavior of the highest order.
Why do I have to pay upwards of $100 for a wildcard certificate? They don't cost the CA any more to issue or handle; it's identical to any other certificate except the CN field is written slightly differently. No, the fuckers expect me to either pay for a massively overpriced bit of hashed code or pay lots of times for slightly different overpriced bits of hashed code. Either way, I'm getting screwed. All so people who visit my website don't get scary and misleading warnings.
> the implementation in every browser which treats a self-signed certificate as coming from a known bad guy
Ah, you are a "self-signed certificates should be accepted" guy. I very much don't agree there.
A self-signed certificate effectively gives none of the protection of a cert signed by a trusted CA, so the warnings are perfectly valid. Of course, self-signed certificates are perfectly valid in specific communities: a company might sign the certificates for all its internal apps itself and make sure its standard employee desktops & laptops trust the internal CA, or you could sign your own and have your friends and other contacts install your CA as a trusted one. But for public use they are simply not valid.
The commonly given comparison here is the way SSH works, but that is not comparing oranges to oranges. With SSH you also have some authentication credentials that have been given to you via another channel; with a self-signed certificate you do not. You can emulate SSH's "remember this server's key fingerprint and only tell me if it has changed" in most browsers by adding a permanent exception, so it'll only moan again if the self-signed certificate changes. Firefox does this.
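A rough sketch of that remember-the-fingerprint (trust-on-first-use) behaviour, outside of any browser; the host name and cache file are made up:

```python
import hashlib
import json
import socket
import ssl

PIN_FILE = "pins.json"    # hypothetical local store of remembered fingerprints

def current_fingerprint(host, port=443):
    # Skip CA validation on purpose: we only want whatever cert is presented.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

def check(host):
    try:
        with open(PIN_FILE) as f:
            pins = json.load(f)
    except FileNotFoundError:
        pins = {}
    fp = current_fingerprint(host)
    if host not in pins:
        pins[host] = fp                     # first visit: remember ("pin") it
        with open(PIN_FILE, "w") as f:
            json.dump(pins, f)
        return "pinned on first use"
    return "ok" if pins[host] == fp else "CERT CHANGED - investigate!"

# print(check("self-signed.example.org"))   # hypothetical self-signed host
```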
> Having a CA signed cert proves nothing beyond that you gave some person who is trusted some amount of money who ostensibly verified something.
That is true, but having a self-signed cert proves nothing at all. There is some process (a process that has been shown to have exploitable flaws, I'll grant you, but a process exists and is far from completely broken) to try to make sure that the CAs that are generally trusted are generally trustworthy.
> Renewals which exist as a pure profit vehicle for CAs
Renewals exist because certificates expire. Certificates expire because an infinitely valid certificate is potentially dangerous: the revocation process cannot be relied upon in many cases.
> ... for something so much as an expiration date ...
> Gee, that's a nice commerce website you have there, would be a shame if something were to happen ...
If your e-commerce site finds $7/year or the hassle of using StartSSL's interface for free (or $120/y or $70/y respectively for EV certs) a problem, then it isn't much of a profit making e-commerce site...
> Charging to regenerate or revoke a certificate is just rent-seeking dick behavior
I'll say again: keep secure copies of the relevant keys and you won't need to use the regeneration or revocation procedures.
> again, completely automated processes with zero human interaction required
Automated processes that rely on infrastructure that someone needs to create, monitor, and maintain. If you ignore the infrastructure creation and maintenance costs then yes, the process is zero effort and near zero cost (just a little electricity for the CPU time), but ignoring those factors is simply fallacious reasoning.
If you so strongly believe that this is a complete and utter rip-off and that you could do it so much cheaper, why not do so? If you can do it as well while charging less (or nothing) then people will flock to your service. One man's margin is another's opportunity.
> Why do I have to pay upwards of $100 for a wildcard certificate? They don't cost the CA any more to issue or handle
That I'll grant you, but artificial market segmentation exists everywhere for better or worse (usually for worse from the consumer's PoV) rather than being a property of this particular area.
> All so people who visit my website don't get scary and misleading warnings.
The scary warnings are not misleading in the worst cases. If a DNS hijack passed your bank's traffic through a server that has a self-signed certificate, would you want your browser to warn you, or just carry on because self-signed certificates are usually fine? Unfortunately there is no way to differentiate the bad and the fine situations, so we give the scary warning for both to make sure we give it when it is really needed.
If you can think of a better way then do let people know; everyone in the know knows the current arrangement isn't perfect, so a good idea well presented should get listened to if adequately explored and explained.
If what you are looking for when you refer to "people visiting my website" is simple anti-snooping protection (so random people on the same WAN can't see everything, for instance) rather than identity assurance, then there are already moves in that direction. HTTP/2, due to be submitted for approval in final form later this year, will use encrypted traffic in all cases in a way that ensures this level of protection (though I've not yet read into the details of these proposals myself, so I'll reserve judgement on how effective that will be until I have), meaning that is soon to become a sorted problem if things work out as expected. For e-commerce or anywhere else where greater trust is required, though, a CA-signed certificate will still be needed, because for those uses identity assurance is essential.
>You can emulate SSH's "remember this server's key fingerprint and only tell me if it has changed" in most browsers by adding a permanent exception, so it'll only moan again if the self-signed certificate changes. Firefox does this.
Doing this en masse would be a much better system than we have right now. Example: have a random internet user visit a site and (automatically) note down the details of the certificate they get. Repeat a few thousand times. Compare notes. Wait a sec, why do I have a different certificate than 99% of the other visitors? Hmm. Throw alerts.
The chances of the average user being the target of a MITM are utterly minuscule, so this system ensures that you as the bad guy either have to own the service provider directly by installing a certificate you have keys for, or own every single user that hits the site simultaneously to prevent the others from being alerted.
Pinning on steroids, basically. No CAs required.
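The decision rule at the end of that pipeline is simple enough to sketch. Assume some notary-style service has already collected fingerprint observations from many vantage points (the counts below are invented):

```python
from collections import Counter

def consensus_check(my_fingerprint, observations, threshold=0.99):
    """Flag my certificate if it differs from what nearly everyone else sees.

    observations: fingerprints reported by other users/notaries (invented here).
    """
    counts = Counter(observations)
    majority_fp, majority_n = counts.most_common(1)[0]
    if my_fingerprint == majority_fp:
        return "ok"
    if majority_n / len(observations) >= threshold:
        return "ALERT: you are seeing a different cert than ~everyone else"
    return "inconclusive: no strong consensus yet"

# Invented example: 9,990 users saw fingerprint "aa11", you see "ff99".
obs = ["aa11"] * 9990 + ["ff99"] * 10
print(consensus_check("ff99", obs))
```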
>but having a self-signed cert proves nothing at all.
...other than the connection being encrypted, the CN on the cert matching the domain name, and it not being expired. This is the same data that the lowest-level certificates from every CA provide.
I believe that the past decade or so has shown that CAs do not deserve the level of trust they're implicitly given. I certainly don't trust them. We know they can be coerced by the bad guys with guns to generate certs anyways. QED, they are not trustworthy.
>Renewals exist because certificates expire. Certificates expire because an infinitely valid certificate is potentially dangerous: the revocation process cannot be relied upon in many cases.
Then re-generate the bloody certificate and don't charge me money for the privilege of giving me exactly what I had before with a different date on it!
>If your e-commerce site finds $7/year or the hassle of using StartSSL's interface for free (or $120/y or $70/y respectively for EV certs) a problem, then it isn't much of a profit making e-commerce site...
Ah yes, the old "It's not that expensive, so why are you complaining?" canard. A dime here, a nickel there. It's nothing.
Why just a little of a bad thing remains bad is left as an exercise to the reader.
>Automated processes that rely on infrastructure that someone needs to create, monitor, and maintain. If you so strongly believe that this is a complete and utter rip-off and that you could do it so much cheaper, why not do so? If you can do it as well while charging less (or nothing) then people will flock to your service. One man's margin is another's opportunity.
Why are there no serious community CAs? Why is every single one of them a large corporation? Larger infrastructure projects exist that are community, free/donationware efforts, after all. Is it really that hard, or is there some other reason?
I'll wager that one of the reasons is that the entrenched players (the VeriSigns and Comodos and Thawtes of the world) get to set the standard for the market: audits (e.g. WebTrust) that cost tens to hundreds of thousands of dollars to complete.
De-facto regulatory capture, except the regulations are more of a consensus than top-down legislating.
>The scary warnings are not misleading in the worse cases. If a DNS hijack passes your bank's traffic through a server that has a self-signed certificate would you want your browser to warn you or just carry on because self-signed certificates are fine usually?
I have no way to prove this, but I would be willing to bet large and ridiculous amounts of money that 99 out of 100 SSL warnings an average computer user sees is going to be the result of either misconfiguration (a badly set CN is common) or expiration. The connection is still encrypted and the certificate was still generated by a "trusted" CA so we know that the identity is valid (what are the chances that the owner of bar.com doesn't know of the existence of foo.bar.com?) - yet we still cry wolf like something is very definitely wrong.
And I say "cry wolf" for a reason. With the majority of SSL warnings being bogus (bogus as in they fail part of some validation test, instead of bogus as in the user was in some actual danger), we're training users to override the annoying warning every time.
And you can't override parts of the validation - it's all or nothing. If I know a certain website has an expired cert, and everything else is still valid, why can't I just override the expiration check? Being N+1 days out of date doesn't reduce the security of anyone concerned. Instead I have to completely exempt the cert from all checks, and then I won't get warned if, say, the issuer changes or the CN doesn't match or some other data point which, taken together with the rest, might add up to a different whole.
> So perhaps this should start with a reduction in the cost of valid, "don't throw a security warning" certificates down to zero.
And then people will complain that the CA is not required to revoke certificates that it issued for free when their keys become compromised.
What do people think of DANE? DNSSEC adoption seems slow at the moment, but otherwise it appears to be a valid approach to this whole distributing-public-keys issue?
There's the obvious security/privacy point, but I think on a technical basis it would also be useful: the internet is full of pesky middleboxes which meddle with content, and this breaks things. For example, sometimes WebSockets can't connect unless run securely because somewhere a middlebox is looking at the content and changing it, expecting it to be HTTP traffic when really it's a websocket. Encryption removes the possibility for infrastructure-breaking middleboxes.
HTTP/2.0 is working on encrypting everything though NSA shills have managed to derail the working group by proposing "trusted proxies" that are allowed to decrypt traffic for nonsense optimization reasons.
But to stay on topic: encouraging this kind of major shift to SSL spreads a problem that is still there but is very little acknowledged or worked on -- revocations.
Certificate revocation checking uses either CRLs or OCSP. A CRL is a list of all the revoked certificates - the browser needs to download the whole file and then check whether the cert is on it. OCSP is an optimized protocol for more efficient per-certificate validation.
CRLs are slowly being phased out as they grow too fast; there are jokes going around that, thanks to Heartbleed, their size is becoming comparable to a blockchain.
But now the real issue: almost NONE of the mobile browsers check for revocation! Add to this that, by default, Chrome does not check for cert revocation either. So isn't this rendering the whole revocation mechanism almost useless against a real attack?
There is a reason it's not done on mobile (and in Chrome) -- it makes requests slower, especially on mobile. There is a timeout fallback in the browser: whenever the OCSP responder times out, the browser assumes the cert is valid. Which does not help when a real attack is executed.
To sum up: until there is a revocation mechanism that covers 99% of the browser market and does not slow the browser down, CA-based SSL certificates are too fragile to get us to the safe and encrypted future.
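The soft-fail problem boils down to the decision below. A toy sketch (no real OCSP traffic, statuses are simulated) showing why a timeout under soft fail lets an attacker who can block the OCSP responder keep using a revoked certificate:

```python
# Simulated OCSP responder outcomes; a real client would send an OCSP
# request to the responder URL named in the certificate.
GOOD, REVOKED, TIMEOUT = "good", "revoked", "timeout"

def accept_certificate(ocsp_status, hard_fail=False):
    if ocsp_status == GOOD:
        return True
    if ocsp_status == REVOKED:
        return False
    # Responder unreachable (or blocked by the attacker doing the MITM):
    # soft fail assumes the cert is fine, hard fail rejects it.
    return not hard_fail

print(accept_certificate(TIMEOUT, hard_fail=False))  # True  - attack succeeds
print(accept_certificate(TIMEOUT, hard_fail=True))   # False - connection refused
```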
However, it seems possible to explicitly use a secure browser (i.e. one that checks certificate revocations) if the need arises. You don’t need 99% browser-market coverage to allow people to securely connect to your site, nor do you need 99% browser-market coverage to securely connect to a given site. You only need that site to implement HTTPS, either with a self-signed cert and e.g. Certificate Patrol on your side or a CA-signed cert and a revocation-checking browser. In the latter case, you should also configure your browser to consider an OCSP failure an invalid certificate, not a valid one.
This all sounds good, but at the same time only Firefox has an OCSP hard-fail feature (OCSP failure == invalid cert). Please correct me if it's possible with others.
Also, a user must become rather paranoid in order to start using a mobile browser that supports hard-fail OCSP. Getting HTTPS everywhere shouldn't mean promoting a false sense of security.
Considering that most HTTPS attacks require a MITM, it seems to me that OCSP without hard fail leaves the attack vector wide open.
For as long as the general population remains as computer illiterate as it is today, no amount of encryption or decentralising will save the internet.
Despite its growing importance, the topic of computing is completely ignored at school.
They teach the kids how to click on a few icons in Microsoft Word and Excel and PowerPoint; the kids graduate, become adults, and think they know about computing.
We have trained people not to learn anything of substance about computers: just click on this button, and push that button ... easy. There's only so far that stuff goes.
When they teach you calculus, you don't turn around and tell the mathematicians "oh well ... just go make it easier for me this is not user-friendly".
But with software no one is willing to take a single step towards learning some basic principles.
So we end up making sacrifices because people can't be bothered with anything more complex than pushing this big green button right here.
A better internet requires a population that understands some basic principles and is therefore willing to put some effort in protecting themselves.
> You have to purchase TLS certificates from one of several certificate authorities
I'd like to stop this misconception. You don't buy certificates. You make your own certificate, which is unknown to the world, and _then_ you buy a stamp from a trusted third-party on your own certificate, so everyone can trust it.
The crux of the matter is whether you can actually trust these third-parties.
That's not accurate. You send them a 'Certificate Signing Request', which is a request for them to issue you with a certificate. It's not a complete certificate template just lacking a signature. In any case... semantically a certificate isn't a certificate until it's been certified, it's just meaningless paper.
In some ways this reminds me of the misconception about birth (and marriage) certificates among people new to genealogy. In the UK a birth certificate is just a piece of paper that contains markers of authentication to ensure people who read it can trust that it's a true likeness of a record held at Her Majesty's GRO. It doesn't provide any guarantees about the honesty of the data itself, or even that the person was ever born, in the same way that me reading an x509 doesn't guarantee the data it contains is factual or was issued to the right person. Even if a CA does these checks, there's no way to insert a proof of that into a certificate, therefore the certificate isn't such a proof...
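Coming back to the CSR itself: generating a key pair and a request looks roughly like this with recent versions of the third-party `cryptography` package (the common name is a placeholder; most people just use the openssl command-line tool instead). The private key never leaves your machine; only the CSR, your public key plus the name you are claiming, goes to the CA.

```python
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a private key; this stays with you and is never sent to the CA.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Build and self-sign the request (the signature proves you hold the key).
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "www.example.org"),  # placeholder
    ]))
    .sign(key, hashes.SHA256())
)

# This PEM blob is what you actually paste into the CA's form.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```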
The question is better posed as "Can encryption put the morals of 'you know who' back in place?"
As I see it, encryption works against your average trouble maker. But the current wave says that the data needs to be protected from even more actors: GOVERNMENT, CORPORATE, etc.
Traditionally the GOVT needed that info to make macro-level decisions, so it was trusted by most people. CORPS, on the other hand, could get that info, and the GOVT was supposed to regulate that process.
Now it seems that the GOVT has gone beyond its neutral stance, but wait! It is actually not the GOVT, but the individuals, at fault here. If you see this problem as something induced by corporate greed, then it's possible to see this situation from a tech-security-guy point of view as:
The GOVT needs to keep all the windows safe from such greed, and the CORPS just need a few windows to get inside, just like in computer security. Now join reality with this hypothesis, and ask yourself this:
How many politicians fail to draw a line between:
(1) cooperation (with CORPS) for "selfish" reasons
(2) cooperation (with CORPS) for "democratic" reasons
Now yes, historically many politicians have fallen prey to this type of thing, but to what extent? Do they succumb to it sometimes, or regularly, on a personal level? And yes, many corporate individuals/entities feel that they are at a disadvantage if they don't cheat slightly, but does it stay within the self-drawn line called "slightly/just a bit" or go beyond it?
At the end it becomes a chicken-and-egg problem. But is that really impossible to solve? Is encryption, alone or together with other things, the answer to this dilemma? And is it okay to sit at home, knowing you have the antidote to all the problems in the world, and let everyone outside "go to hell"?
PS: Seemingly, the traditional 4th party, elite evil hackers, seems to be keeping in line, thanks to a lot of people in the US security structure working against them.
Here's my quick write-up on why encryption won't get us anywhere without a proper revocation mechanism (yes, it's 100% broken in the mobile). https://news.ycombinator.com/item?id=7604641
No company has unlimited resources, but what company, in your opinion, has enough resources at their disposal as well as an incentive to crack a fully encrypted Internet?
Ah, the old "we can't stop everything so why bother trying" attack. (There's a better name for it but I cannot remember it)
But a few points:
- No company has unlimited resources. The universe doesn't have unlimited resources, AFAWK.
- Even quantum computers can only take the square root of the complexity of many algorithms (Grover's algorithm).
- Even if a company can, with concentrated effort, break the encryption used, it prevents (or at least drastically limits) mass surveillance. (If it costs your company a cent to decrypt a transmission, you'll decrypt a whole lot more than if it costs a million dollars per.)