Cloud Firewalls (digitalocean.com)
189 points by AYBABTME on June 6, 2017 | 112 comments


Features like this feel like table stakes for cloud hosting in 2017, so it's nice to see DigitalOcean on board.

It'll be interesting to see what the tooling support looks like for this. It looks like it's launching with API support on day one: https://developers.digitalocean.com/documentation/v2/ which is great. And they're already working to get it into Terraform: https://github.com/hashicorp/terraform/pull/15121 which is fantastic!
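
For anyone who wants to poke at it before the Terraform provider lands, here's a rough sketch of creating a firewall with Python and the requests library. The endpoint and payload shape are my reading of the v2 docs linked above (the token and droplet ID are placeholders), so double-check against the documentation before relying on the field names:

    import os
    import requests

    TOKEN = os.environ["DO_TOKEN"]          # personal access token (placeholder)
    HEADERS = {"Authorization": "Bearer " + TOKEN}

    # Allow SSH from one admin address and HTTP/HTTPS from anywhere,
    # allow all outbound TCP/UDP, and attach the firewall to one droplet.
    firewall = {
        "name": "web-example",
        "inbound_rules": [
            {"protocol": "tcp", "ports": "22",
             "sources": {"addresses": ["203.0.113.10"]}},
            {"protocol": "tcp", "ports": "80",
             "sources": {"addresses": ["0.0.0.0/0", "::/0"]}},
            {"protocol": "tcp", "ports": "443",
             "sources": {"addresses": ["0.0.0.0/0", "::/0"]}},
        ],
        "outbound_rules": [
            {"protocol": "tcp", "ports": "all",
             "destinations": {"addresses": ["0.0.0.0/0", "::/0"]}},
            {"protocol": "udp", "ports": "all",
             "destinations": {"addresses": ["0.0.0.0/0", "::/0"]}},
        ],
        "droplet_ids": [12345678],           # placeholder droplet ID
    }

    resp = requests.post("https://api.digitalocean.com/v2/firewalls",
                         json=firewall, headers=HEADERS)
    resp.raise_for_status()
    print(resp.json()["firewall"]["id"])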

I look forward to the day that I can automatically spin up a DigitalOcean set of Droplets running Kubernetes using this.


>I look forward to the day that I can automatically spin up a DigitalOcean set of Droplets running Kubernetes using this

Stackpoint.io can do this: you paste your DO API key and get a k8s cluster in a few minutes. Would be nice if DO built something like that in-house.


> Would be nice if DO built something like that in-house.

Curious, why? Isn't it much nicer to keep that separate from DO so that you can move away from DO easily should there be a reason to?

I'm not sure if I'm weird this way, but one main reason we use DO and not, say, AWS is because we're afraid of vendor lock-in. The more we depend on specialized services, the harder it gets to move somewhere. I wonder whether this is a common sentiment in 2017 or whether I'm just old-fashioned.


Hosted K8s isn't much lock-in...you could move to GKE or Azure.


Isn't this just doing the same exact thing as iptables, only worse, since the rules aren't visible to the operating system?

I've created bad firewall rules by mistake many times, and enforcing them outside the machine, where the OS can't see them, makes the issue almost impossible to debug and fix.

Of course I have the same gripe with AWS VPC setups I guess... I just think it's funny how the cloud keeps reinventing cloud versions of things that perform objectively worse than the original, but then everyone still uses them out of pure convenience or stupidity.


1. These are stack-neutral: they let "cloud orchestrator" software plug one service into another by starting up Instance B and then opening the firewall port on Existing Instance A to talk to it, without having any ability to talk to or manage Existing Instance A, let alone knowledge of what it would have to say. Existing Instance A might be a Windows Server instance, or some custom unikernel; this approach would still work.

2. Depending on how they've implemented this, traffic that hits their firewall and bounces off might not be counted toward your bandwidth bill (presuming there's any part of DO's services that bills for bandwidth.) Once the traffic is served to your instance, they can't know whether your instance's OS firewall has just thrown it away, so they have to assume it hasn't and bill you for that. SDN-level firewalls enable "automatic DDoS protection"-type services, where you receive (and get charged for!) regular traffic, but not malicious traffic.


Most people don't need anything more complex than this for their firewall needs, so iptables is overkill.

Not only that, but iptables is just terrible to use and it just makes you want to kill yourself.

I've deployed a pretty standard policy now in DO with a couple of clicks, and it works as expected.

(And before anyone jumps, you should be using a host firewall too; defence in depth)


> Not only that, but iptables is just terrible to use and it just makes you want to kill yourself.

I can't agree more. Luckily though, if you have some setup scripts that you reuse, you don't have to think about iptables... Until the moment that you need to make this harmless quick change that shouldn't cause any problems and you end up locking yourself out of the server somehow.


iptables is terrible, but nftables is great and mostly available.

I wrote a post about my nftables config a while back.

Plug: https://stosb.com/blog/explaining-my-configs-nftables/


Personally, I think it is more convenient to think about things like this (i.e. firewall rules) as data, which makes an API a convenient way to work with that data. The converse, in my mind, is that I'd have to configure each node and ensure a text representation of my firewall rules is correct. That opens the door to some thinking about concurrency that I thankfully get to avoid with an API like this.

That said, I can see your point that you are hiding some details from the OS that might be helpful such as what hosts you can talk to.

Fortunately, just because you might configure a firewall with an API rather than some Ansible plays, it doesn't mean that you can't continue to use Ansible to fill in the gaps. For example, if you did use Ansible to previously configure your iptables, you might change the playbook to call the API based on some YAML. You might use the same YAML to write some information on the host that your application can use to understand the firewall rules that are used.
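
As a rough sketch of that idea (the file name, keys, and tag are all hypothetical, and the API payload shape is my assumption from the docs): read the rules from YAML, push them to the API, and keep the same YAML around to drop onto hosts so applications can see what is supposed to be open.

    import os
    import yaml        # PyYAML
    import requests

    TOKEN = os.environ["DO_TOKEN"]

    # rules.yml is a hypothetical file you might already feed to Ansible, e.g.:
    #   name: app-servers
    #   tags: [app]
    #   inbound:
    #     - {protocol: tcp, ports: "443", sources: {addresses: ["0.0.0.0/0"]}}
    with open("rules.yml") as f:
        spec = yaml.safe_load(f)

    payload = {
        "name": spec["name"],
        "tags": spec.get("tags", []),
        "inbound_rules": spec.get("inbound", []),
        "outbound_rules": spec.get("outbound", []),
    }

    resp = requests.post("https://api.digitalocean.com/v2/firewalls",
                         json=payload,
                         headers={"Authorization": "Bearer " + TOKEN})
    resp.raise_for_status()

    # The same rules.yml can then be copied to each host (e.g. by Ansible) so
    # the application can introspect which ports are meant to be open.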

The point is that it is always good to remember these are not either/or decisions.

Lastly, I'll also speak up for those folks who don't know much about firewalls and iptables. I understand the principles, but I'm far from feeling confident managing that system myself. In my case, I'm really glad to have an option that gives me the benefits without forcing me to operate a system I'm not well equipped to run.


Many, many people work in environments where their machines are firewalled by a different team, so perhaps it's no worse to them. These sorts of services are never as flexible as iptables and friends, but they're still useful, especially for defense in depth (my planned usage).


Sometimes software like Docker (in certain network configurations) will use iptables for its own purposes, clobbering some of your own iptables rules. Having an external firewall for access control is super advantageous in cases like this.


Docker in _most_ (all?) situations on Linux will use iptables to allocate and administer its networks and interfaces.


Finally I can block all SSH access by default and only open it up to 1 IP address on demand, then remove the rule when finished. No jump hosts required.
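
That workflow scripts nicely against the API. A rough sketch, assuming the add/remove-rules endpoints work the way the v2 docs describe (the firewall ID and addresses are placeholders):

    import os
    import requests

    TOKEN = os.environ["DO_TOKEN"]
    FIREWALL_ID = "your-firewall-uuid"      # placeholder
    RULES_URL = ("https://api.digitalocean.com/v2/firewalls/"
                 + FIREWALL_ID + "/rules")
    HEADERS = {"Authorization": "Bearer " + TOKEN}

    def ssh_rule(ip):
        # One inbound SSH rule for a single /32, per the (assumed) rule shape.
        return {"inbound_rules": [
            {"protocol": "tcp", "ports": "22",
             "sources": {"addresses": [ip + "/32"]}},
        ]}

    def open_ssh(ip):
        requests.post(RULES_URL, json=ssh_rule(ip),
                      headers=HEADERS).raise_for_status()

    def close_ssh(ip):
        requests.delete(RULES_URL, json=ssh_rule(ip),
                        headers=HEADERS).raise_for_status()

    # open_ssh("198.51.100.7")    # before a maintenance session
    # close_ssh("198.51.100.7")   # when you're done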


DigitalOcean is killing it against Linode - I just migrated my last services off Linode because you still cannot attach arbitrary sized disks to your instances, something they've been promising as arriving "soon" for months. Go DO!


They both have their pros and cons. Linode offers more RAM for the same price. Linode's load balancers also support IPv6, DO's did not the last time I looked. And Google's Cloud Load Balancer beats both Linode and DigitalOcean hands down in terms of TLS handshakes per second.


Agreed, same exact situation here. Not having block storage in 2017 is nuts. I'm also about to finally move my last server over to DO.


As long as Linode's portal is written in ColdFusion, I wouldn't trust it.



Am I the only one who has constant issues with VMs in Linode? I feel like that platform is the worst out of all the ones I've tried so far. Unfortunately, our company is stuck with it for now. :(


Disclaimer: I work there.

What kind of issues are you running into? There are a lot of issues that can happen on a server, but many of them come down to insufficient resources or a misconfiguration.

Now, if you're seeing constant issues on the host your server is on...


The emails we get are generally ones saying it was an issue affecting the physical hardware that the VM is hosted on, yeah. :( I tend to just ignore them now though, lol. There are only a few VMs there that are absolutely critical. Everything else is configured for HA.


That's no good, I'd like to take a look and see what's happened - having multiple hardware issues isn't necessarily normal. If you have time, please shoot a ticket our way. Mention that you spoke to Soh on Ycombinator and that I requested this ticket to be opened so we can seek out any solutions.


Issues like what? I used to have my machines freak out and crash once in a while before moving it all to KVM. Haven't had trouble since.


It's mostly "Hardware issue" errors. We'll get an email from them with the name of the VM saying that an error was detected, and that the issue will be resolved soon. It is generally fixed pretty quickly. It has the side effect of forcing me to learn how to recover from these events, but it's not so fun when my master database's server goes out (that's happened twice in the two years I've been at my current employer). We also have everything moved over to KVM already.

We occasionally experience problems with %steal due to other VMs in the same host. It was a lot worse when we had a critical service hosted in Linode, but that was gutted and moved to actual hardware. Only a small, low-traffic bit still remains in Linode.


I use the San Jose facility for my Linodes. I haven't had a problem since the rolling power outages years ago where Hurricane Electric's backup power failed to kick in.


Once they double the RAM on all plans like Linode and Vultr did, I will move all of my servers back to DigitalOcean.

I love these features but double the RAM for the same price still outweighs them.


Vultr has the cheaper side down, I will admit, but their network hasn't been too great for me. It's fine for development work, but I wouldn't put anything production-level on it.

Linode is like the old king on the block. They've had security issues in the past (multiple), and since they do store your credit card information, that data did get exposed (if I recall). They're decent and they work. They're a bit slower than the other two in my opinion, but they work, and that's something I need.

DigitalOcean has been fairly solid and reliable. Yes, you can say you get more resources for the same spend elsewhere, but DigitalOcean has been more dependable than Vultr for me. I remember opening support tickets with Vultr and getting responses like "We took care of someone else on that node. Go ahead." It took me a solid 2 hours just to install an Ubuntu image on one of their storage nodes. DigitalOcean has never had to give me those kinds of responses, and the overall performance and composition of their nodes has been consistently good.

Yeah, Vultr and Linode are cheaper and you get more, but I really feel DigitalOcean is more solid and reliable than Vultr. Vultr's SLA is 100% uptime, but their credit return policy is to the effect of "we'll just give you your cheap money back". I don't care about credits; I just want my service online without having to worry about anything, and most of Vultr's answers have usually been "wait" or "we did it" (with no real long-term solution to the problem). DO has focused on providing long-term solutions to problems I've had, with no "bs" excuses. Linode has been solid as well, but doesn't have many of the features DO is starting to roll out (which, fair enough, is their decision). DO gets nothing but praise from me.


Vultr has been rock solid for me for years in the Sydney region.


I had so many problems with Linode. The company itself seems really poorly structured, too.

Do you consider this when you're talking about value? I know little about Vultr.

5 vs 10 vs 20 vs 40 dollars doesn't mean much, and Linode pricing starts to converge on DO's after the $80 price point.


> 5 vs 10 vs 20 vs 40 dollars doesn't mean much

This depends entirely on what you're doing with it. For a hobby project, it makes a big difference to me.


That's a good point. I forget, because I have a bunch of DO boxes: several, sometimes, for each client.


This is exactly why my next server will be on Vultr, too, although I still curse Linode for still not having block storage and forcing me to pay double price for a larger server even though I just need 20 more GB of disk space.


Not supporting Linode, but can't you use something like AWS for that?


Hmm, how do you mean? Mount an EC2 volume on a Linode server? The latency would be horrible.


Or S3, or another DO box, I guess. You're right that you'd never get excellent latency, but it could be good for storage.


Unfortunately it's a database that's gotten big, so I need low latency :/


Hmm! How big?


40 GB or so, enough that I needed a larger server :/


Really all we (the company I work for) need now is Block Storage in LON1!


We need it for AMS3.


We published the roll-out schedule for Block Storage a little while back. Both LON1 and AMS3 are planned for this year: https://blog.digitalocean.com/block-storage-comes-to-singapo...


Sweet. Is it possible to enable backups on block storage?


You can back up the volume via snapshots.


Can someone clarify whether the traffic between two droplets is "secure", i.e. other droplets cannot see it? On AWS, I can create a VPC and put two EC2 instances in it.


We are about 2 quarters into the VPC work, and we are looking to launch a full VPC solution in Q4 of 2017 or early Q1 of 2018. That, coupled with firewalls, will allow you to completely segment your traffic, including all private traffic.


I'm no expert on hypervisors, but is what you're saying even possible? One would think that dispatching the correct packets to the correct VM would be an integral part of the virtualization environment.


Not an expert either, but afaik customer isolation is quite easy when you put each customer on their own VLAN (or similar) and remove the VLAN tag only after the packet has reached the virtual NIC of the customer's VM.


It isn't by default, that's why DO calls it "shared private network" (their words, not mine).

To secure internal traffic on DO you have to encrypt it, using for example something like tinc.


Don't use Tinc if you care about performance. Tinc is implemented in userland, and is an order of magnitude slower than a kernel-negotiated encryption. IPsec and GRE tunnels are much better solutions.


I have no idea if something changed, but at least about year ago "Private Networking" on DO was shared between more than just your droplets and required additional firewall / encryption.


They seem to be headed towards being an "AWS light". Would be nice to have an alternative with reasonable egress costs. Still a long way to go, though. At a minimum, they would need a more configurable load balancer and some S3-type function.


I've recently done a moderately complex hybrid setup that used DO in conjunction with S3 and Route53.

My biggest takeaway from the experience was how much the simplicity and speed of DO's dashboard interface stood out - the AWS web interface just felt laggy by comparison. I know it sounds like a poor reason to favour a platform but DO was just a simple pleasure to navigate and use.


I think the takeaway here is that AWS is designed to be managed via its API and that shows when one tries to use the GUI (Source: Had a similar experience).


If you're managing your infrastructure via GUI it's probably not moderately complex.


I would be extremely happy with DO if they put out an S3 competitor. Right now, most of my servers are on DO; the only things I need AWS for are a single Windows server to run some Windows-only software, and S3 to store my database backups.


Object Storage is currently in internal-company-beta =]

After that we will be doing a customer beta followed by a GA release scheduled for Q3.

=]]


That makes me really happy to hear. Is there any good way to keep up on potential upcoming features you are working on? I feel like that would be good knowledge to have when making decisions.


We don't have a formal system for that but we do announce customer early request betas. Sometimes they are invite only, other times we open them more broadly. In those cases where it is more broad it's usually featured on our homepage and then it's on a first come, first serve basis in terms of getting access and limited by the number of invitations that we are accepting.

It's a bit of a fluid process, as it depends on the type of service, product, or feature we are rolling out.


Any chance I can get in on the beta for this? I've done the load balancer and block storage betas. <username>@gmail.com


We will make sure to add you once the beta is available.


Throwaway, but I would bet money that you'll see this pretty soon on DO - hang in there!


This is one of the big reasons I come to HN...insider intel.


Yes. File storage is an essential piece of most apps we build and keeps us tied to Amazon (or other competitors like Microsoft or Google who have storage).



B2 is nice, but the single location is something of a deal-breaker as far as being an S3 replacement.


Can you explain your use case? I know a lot of businesses solely using us-east-1


Well, for starters, all customers that are not in us-east, such as the US west coast, Europe, or Asia.

Besides latency, data protection issues also become a problem when you cross borders. My employer's (SAP) cloud platform advertises as a unique selling point that they have data centers in a wide variety of locations, so that a customer's data never has to leave their jurisdiction (which is important e.g. for government and its contractors).

And that's before we get to high-availability setups. Remember how AWS us-east was down just a few months ago?


A business needs their S3 component highly available. If they don't have a DR plan in place, and a massive S3 outage (like the one that happened a couple months ago) occurs, they're fucked.


My specific use case isn't really that interesting. Many use cases where a single AZ or region failure shouldn't stop business exist.


It is very, very rare from my professional ops/devops experience to see an org built to survive a region outage; the tools are there, the money/business case is not.


It doesn't have to be HA and replication. Sharding by customer would mean a region outage takes out only 1/N customers. That wouldn't have a higher cost, other than whatever dev effort is needed to work that way.


It's always smart to store your backups on another service anyway, for any doomsday scenarios.


We use Google Cloud Storage Coldline as our backup file storage. Synced using rclone (rsync for cloud) nightly. The cost is next to nothing. IIRC, ~3TB stored for $25/mo.


Oh, even if DO offered an S3 competitor, I'd still send backups to S3 as well. I'd just also send them to DO's solution, so restores would be MUCH faster and wouldn't cost an arm and a leg.


And double the single points of failure? Now your "stuff" is only alive if both the virtual machine(s) are up and the object store is accessible!


What do you mean? It's not as if your database will suddenly stop working just because the object store it sends its backups to becomes unavailable.


Indeed, but if you're hosting user-images on S3 and they go away your site is broken.

That might not be a big deal, or it might mean your site is 100% broken. I can't guess, but I'd assume since you went to the effort to setup a store you need it in some way.

(No backups? Of a database? That's one power-cut, or hardware failure away from complete data loss too!)


Feels like a losing battle. AWS CodeStar is essentially the DO/Heroku killer.


And yet I instantly prefer DO to AWS because of the experience, good customer support, and developer relations. It takes almost no time to get up to speed, while AWS feels like a better fit for large enterprises, a world I'm not in. They can CodeStar all they want; it's still AWS.


I don't necessarily think so. There's still a reasonable market for things like DO/Heroku; AWS inherently isn't trying to make things easier for people. I just wish I could have their image support and interface with dedicated hardware.


So awesome. It made me giddy today to remove all droplet-level iptables rules and convert them to network-level Cloud Firewall rules.

Love how DigitalOcean allows you to specify droplets as sources, groups of droplets (tags) as sources, or CIDR ranges.


How do you prevent one compromised host from spreading to the rest of your hosts? Shouldn't this be an added level of security rather than a replacement?


Is there a way to upload a list of IP addresses instead of having to paste, remove focus, re-focus over and over? I use the Cloud9 IDE, and just for 1 region there are 90 possible IPs that it could be using to SSH into my DO box.
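
Not through the UI as far as I can tell, but the API takes a whole list of source addresses in one rule, so you could script it. A sketch, assuming the v2 firewall payload shape (the file, names, and tag are placeholders):

    import os
    import requests

    TOKEN = os.environ["DO_TOKEN"]

    # cloud9-ips.txt: one IP or CIDR per line (a hypothetical file you maintain).
    with open("cloud9-ips.txt") as f:
        addresses = [line.strip() for line in f if line.strip()]

    firewall = {
        "name": "ssh-from-cloud9",
        "inbound_rules": [
            {"protocol": "tcp", "ports": "22",
             "sources": {"addresses": addresses}},   # all ~90 IPs in one rule
        ],
        "outbound_rules": [
            {"protocol": "tcp", "ports": "all",
             "destinations": {"addresses": ["0.0.0.0/0", "::/0"]}},
        ],
        "tags": ["dev"],   # apply to every droplet carrying this placeholder tag
    }

    resp = requests.post("https://api.digitalocean.com/v2/firewalls",
                         json=firewall,
                         headers={"Authorization": "Bearer " + TOKEN})
    resp.raise_for_status()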


I know this is not strictly related but how well does Digital Ocean hold under a DDoS nowadays? Are they closer to Hetzner who just blackholes your IP or OVH who can withstand virtually anything?


In my experience last year, Digital Ocean blackholes you for 24 hours, during which they don't answer support tickets.

In other words, you can take down any DigitalOcean site for 24 hours by paying $1 to a booter, unless it's behind CloudFlare or some other mitigation.


DigitalOcean automatically blackholed one of my droplets due to a DDoS attack last year. They notified me immediately via email and a human responded to my support ticket within 30 minutes with technical details.

I was able to provision a new droplet right away, so the downtime was minimal. I think the way DigitalOcean handled the incident was perfectly reasonable, and was a much better experience than I've had with other cloud providers in the past.


It's not clear what this offers over the usual iptables/firewalld + ansible solution. What am I missing?


Disclaimer: DO Support here

The traffic is blocked/allowed at our network layer before being routed to the droplet. The rules are easily configurable through the control panel and API. You can also specify Droplets (individual or tagged) and our recently launched Load Balancers as the targets.

You can also layer multiple firewalls on top of one another if you want to apply specific firewall rules to only a specific set of Droplets/LBs.

The Intro tutorial we have is great for details: https://www.digitalocean.com/community/tutorials/an-introduc...

Feel free to reach out to us if you have any more specific questions. :)


How many firewalls can a single user create, and how many rules can be in each firewall? How is the order of multiple firewalls applied to the same droplet determined? Where is there logging to show when a rule matched? Is there any future plan to support REJECTing packets rather than only DROPing? Will the user interface warn the user when they are about to block all traffic (including ssh) to a droplet?


(DO employee here)

The intro article answers many of these questions (and more): https://www.digitalocean.com/community/tutorials/an-introduc...

> How many firewalls can a single user create, and how many rules can be in each firewall?

100 firewalls, 50 rules per firewall

> How is the order of multiple firewalls applied to the same droplet determined?

The rules are all added together and applied at the same priority. Order doesn't matter.

> Where is there logging to show when a rule matched?

No logging is available.

> Is there any future plan to support REJECTing packets rather than only DROPing?

No plans for this.

> Will the user interface warn the user when they are about to block all traffic (including ssh) to a droplet?

There are no footgun warnings, no.

Let me know if you have more questions... thanks!


Just checking if I have this right...

> The rules are all added together and applied at the same priority. Order doesn't matter.

If I make several firewalls, and the order of the rules, when mixed, results in unexpected traffic flows compared to the firewalls being applied individually, then I have a bug that is hard to see, that only shows up in live traffic as "timeout" or "not a timeout", and that, because of the lack of logging, I have no way to troubleshoot other than rewriting all the firewalls to remove any possibility of conflict, or writing whole new firewalls.

If I understood correctly, it's basically unsafe to mix more than one firewall per droplet, and in general a pain to troubleshoot. This is in contrast to iptables, where you can have multiple tables and chains, they follow a prescribed order, and you can mix and match them with expected results. Not to mention you can add logging whenever you need it.


It's possible you're assuming the firewalls have more rule types than they actually do. Basically these firewalls default to dropping all packets, and any rules you add are to accept a port or port range. Adding such rules together is simple and doesn't depend on order.
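
Put differently, allow-only rules compose like a set union, so attaching several firewalls in any order gives the same result. A toy illustration of the idea (nothing to do with DO's actual implementation):

    # A "firewall" here is just a set of (protocol, ports, source) allow rules,
    # with an implicit default-deny for anything no rule matches.
    web = {("tcp", "80", "0.0.0.0/0"), ("tcp", "443", "0.0.0.0/0")}
    admin = {("tcp", "22", "203.0.113.10/32")}

    # Attaching both firewalls to a droplet is effectively a union of their
    # allow rules. Union is commutative, so the order can't change the result.
    assert web | admin == admin | web

    # Contrast with iptables, where ACCEPT/DROP/REJECT rules are evaluated in
    # sequence and reordering them can change which packets get through.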


Oh... I hadn't quite grasped the limitations. So these "firewalls" are basically two chains with a drop policy and rules with ACCEPT jump targets, and either a source or destination. This seems to be a port whitelist rather than a firewall.

It does seem that if you create one single firewall per role, this is a simple and effective means of applying really basic port access rules to a large number of droplets at once. But by calling it a "firewall", people actually believe it replaces a real modern firewall and have actually dropped real firewalls from their droplets, making overall security worse. Not to mention the many ways you could accidentally open up or restrict more than you wanted to.

Maybe I missed something again. It says your firewalls are stateful. Are the input rule targets really "NEW,ESTABLISHED" and the output rule targets really "ESTABLISHED,RELATED"? If they are doing connection tracking and verifying the 3-way handshake before passing on the connection, I suppose this is useful to prevent SYN floods that don't complete a handshake. I'd be interested to know what actual protection these firewalls give other than port whitelisting. (And yes, I see a generic ICMP type is included as well as TCP & UDP.)


The firewalls are stateful, with connection tracking.


Question: does DO log access to a client's VPS in some way if they do not have the Cloud Firewall product?

In other words, if a VPS has traffic coming in from all over to HTTP, HTTPS, SSH, FTP, and so on, does DO have logs that show that traffic, where it's coming from, and where on DO it's going to? Or does DO only know who has spun up and requisitioned the VPS?

(Same question for the firewall product as well).


We do not log any type of network access to a client Droplet/VPS, whether they use Cloud Firewall or not.

I hope that answers the question. If you have more or would like more information, feel free to reach out to our Support team!


The same thing that their load balancer project offers over HAProxy + Ansible - the ability to configure it via their API (or their Web UI) and have DigitalOcean own the implementation and management of everything. This includes making improvements under the hood, security updates, etc.

There's very little you can't do with enough Ansible and other utilities, but letting your cloud hosting provider handle it comes with a lot of benefits.


I've always liked Ansible/Puppet/file storage. I can quickly boot from Ethernet, populate a required Puppet key via Ansible, load Puppet with the appropriate config, and off I go.

No having to dork with servers one at a time. And with Ansible, if I change Puppet manifests, I can reload Puppet with Ansible instead of waiting out the default 30 minutes.


iptables will consume resources on your droplet (CPU/network) to filter out the traffic. Cloud Firewalls are applied directly by the DO network, before the traffic ever hits any of your droplets, so they use none of your resources.


It defines a firewall outside of your machine, so your iptables/firewalld configuration on individual droplets can be much simpler. (Assuming it works like security groups on AWS: create one, apply to many.)


For example, let all your Droplets talk to each other (and no one else) without having to explicitly track the list of IPs and regenerate firewall rules from that list on every host every time it changes.

Deny outbound access except through specific hosts (config management, internal package mirrors) so that even an attacker with root can't phone home.

The same reasons anyone uses firewall devices vs. host-based firewalls.
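
On the first point, tags make that concrete: one rule whose source is a droplet tag covers every current and future droplet carrying the tag, with no IP lists to maintain. A sketch, assuming the tag-as-source shape from the v2 docs (names are placeholders):

    import os
    import requests

    TOKEN = os.environ["DO_TOKEN"]

    # Droplets tagged "internal" may talk to each other on any TCP/UDP port;
    # nothing else gets in or out. New droplets only need the tag.
    firewall = {
        "name": "internal-only",
        "tags": ["internal"],   # droplets this firewall is applied to
        "inbound_rules": [
            {"protocol": "tcp", "ports": "all", "sources": {"tags": ["internal"]}},
            {"protocol": "udp", "ports": "all", "sources": {"tags": ["internal"]}},
        ],
        "outbound_rules": [
            {"protocol": "tcp", "ports": "all", "destinations": {"tags": ["internal"]}},
            {"protocol": "udp", "ports": "all", "destinations": {"tags": ["internal"]}},
        ],
    }

    requests.post("https://api.digitalocean.com/v2/firewalls", json=firewall,
                  headers={"Authorization": "Bearer " + TOKEN}).raise_for_status()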


Point and click configuration for those of us not so comfortable with text files, I guess


A GUI. Seriously though, they are probably using the technologies that you mentioned (or some equivalent) under the hood.


This just made my day. Running apps on FreeBSD is awesome, but dealing with IPFW config is a nightmare.


I'm trying to explain what these are to my grandmother. Any car analogies? ;)


A firewall is named for the structure in an automobile which keeps engine compartment fires out of the passenger cabin.

https://en.wikipedia.org/wiki/Firewall_(engine)


On top of that, there are holes (ports) in the firewall to let various wires through. It also keeps road grime, rocks, etc. that might be kicked up from entering the passenger compartment.


This is why I love Hacker News.


A physical firewall is "a wall or partition designed to inhibit or prevent the spread of fire", and a car typically has one between the engine and the passenger compartment. That way, if the engine catches fire, the passenger should have time to get out of the car before the fire reaches them.

Computer firewalls are similar in that they are designed to protect the computer (the passenger compartment) from security threats (fire). Just like the firewall in your car, a firewall on a computer probably won't completely prevent security threats from getting through but it will help.


They are providing strong walls with carefully planned entries and exits to the "highway". The point is to only allow access to the road from predetermined, controlled points. Although it's not 100% guaranteed to control access, it will mitigate much (if not most) of the unwanted traffic. The sanctioned entry and exit points themselves can still cause problems and still need to be monitored.


Think of a fire door :) it lets people in and out (intended purpose) but doesn't let fire in and out :)



