DigitalOcean VPC (digitalocean.com)
357 points by SudoAlex on April 28, 2020 | 164 comments



I'm glad they plugged their outbound network transfer fees compared to the others[1]. I was shocked and horrified when my AWS bill (which I pay myself) quadrupled due to outgoing network transfer fees. It's truly outrageous what they charge. I use Digital Ocean a lot now simply to avoid nasty surprises like that. I hope AWS and Google change that.

[1] https://blog.digitalocean.com/its-all-about-the-bandwidth-wh...


Guess whose favorite evil corp cloud is a magnitude cheaper than AWS on transfer pricing? Oracle, which is why Zoom just signed a deal.

https://www.lastweekinaws.com/blog/why-zoom-chose-oracle-clo...


“A magnitude cheaper” _today_. As soon as the focus shifts from acquiring cloud customers to making them profitable, expect the screws to tighten.

As long as Larry lives and breathes, Oracle gonna Oracle.


As long as you don't use vendor-specific tools, who cares? That's the point of Kubernetes: to abstract away the whole "cloud" provider. Let it become a commodity and a race to the bottom.


Zoom runs on AWS and Google too, so they're apparently cloud vendor agnostic. If Oracle starts acting up, hasta la vista.


Oracle cloud? The same technology that runs certain state unemployment systems and has been completely unable to scale, leaving hundreds of thousands of people with no income for the last five weeks?

Oracle really should remove its logo from the footers of all those collapsing web sites. It's embarrassing.


The national unemployment systems, generally built prior to cloud scale-out architecture, were hit with 25x their normal traffic load (5M claims vs. 200,000 in a normal week; even the 2008-2010 peak was 700,000 claims).

Dislike Oracle all you want; even many cloud-architected applications would fall over with a 25x traffic increase in a single week (remember Fail Whales?).


That stack was most likely legacy middleware plus a database backend: some combination of Fusion Middleware, Oracle Database, CRM, etc., running on physical hardware or virtualized. Not automated, very basic HA, not easily scalable.

Nothing to do with Oracle Cloud (although Oracle Cloud won't be on my list unless it's substantially cheaper than other cloud service providers).

BTW: Speaking of Oracle Cloud, my free-tier trial ended miserably. Two weeks after provisioning the VMs, Cockpit (which I run on my home NAS to manage/monitor a small group of cloud VPSes via its web UI) reported a connection failure, and I found that my account had been terminated without any warning or notification, along with my two free VMs in Phoenix. Luckily I hadn't put any actual workload on them (I'd just left them running, feeling something was going to happen). I contacted support and was told the account had been deleted, with no reason given, and was redirected to customer support (My Oracle Support, which I couldn't figure out, so I gave up). I still don't understand how Oracle Cloud login works.


Geez, thanks for sharing your experience. I run a number of things that I pay for myself, and my cloud bill each month is becoming non-negligible. Even though it kills me inside, I considered looking at Oracle, but this is enough to steer me away. Thank you :-)


Looks like you should have a good look at DigitalOcean for your personal side projects or fun stuff; it has a much simpler and more transparent billing model (capped, no nasty hidden costs).

I've been a long-time DO customer and overall happy for the past 7 years. I wrote up my personal experience with DO in another post [1].

[1]: https://news.ycombinator.com/item?id=23016669


I had the same experience on my trial, along with SUPER pushy sales people. In the end, after I gave them our setup on DO and they said they'd come up with a proposal for an equivalent setup in Oracle Cloud, they came back and said "can't do it" and their proposal was a little over 3x what our current bill was.


Yes, most gov systems are still not on the cloud. OCI is more modern and cheaper than AWS. (Disclaimer: I work at Oracle.)


Do we know if the underlying cloud failed or if it was an application issue? I strongly suspect the latter especially when it comes to government projects.


Especially government projects that were contracted out to Oracle. :)


Proof?



Oracle middleware and other software services are an abomination. They're up there with JIRA and Confluence in terms of how overwhelmingly complex and slow they are. Sure, you get reconfigurability, but if anything of Oracle's goes down, it's far more likely to be their software applications than their hosting solution.


> Oracle really should remove its logo from the footers of all those collapsing web sites.

Why remove the logo if you can't see the website? ;)


A single app doesn’t really have any relation to IaaS.


They offer worse services, so they had to offer lower prices, otherwise it wouldn't make any sense.


Why did zoom go with Oracle? Maybe they just wanted more security nightmares and zerodays.

Or maybe it's palliative care.


Well, there's got to be a reason why they're our favorite evil corp, doesn't there?


$0.01/GB is fantastic, but I have a bandwidth intensive ML media application and don't know how to monetize or sell it quickly enough to pay for my bandwidth costs.

Is there a cloud or dedicated server farm with even cheaper outbound bandwidth?

Edit: as much as I hate Oracle, their first 10TB is free, and each GB after that is $0.0085/GB. Better...
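For a rough feel of how the two rate cards compare, here's a back-of-envelope sketch in Python. It assumes the per-GB rates quoted in this thread apply flatly (Oracle: first 10 TB free, then $0.0085/GB; DO: $0.01/GB beyond a 1,000 GB included allowance) — real bills vary by region and tier, and the DO allowance is an assumption from the comments here, not authoritative pricing.

```python
# Back-of-envelope monthly egress costs at a few volumes, using the
# per-GB rates quoted in the thread. Not authoritative pricing.

def oracle_egress(gb):
    """Oracle: first 10 TB free, then $0.0085/GB (as quoted above)."""
    return max(0, gb - 10_000) * 0.0085

def do_egress(gb, included_gb=1_000):
    """DO: $0.01/GB beyond the droplet's included allowance."""
    return max(0, gb - included_gb) * 0.01

for tb in (5, 20, 100):
    gb = tb * 1_000
    print(f"{tb:>3} TB/mo: Oracle ${oracle_egress(gb):9.2f}  DO ${do_egress(gb):9.2f}")
```

At low volumes the free 10 TB dominates; at high volumes it's the $0.0085 vs. $0.01 per-GB difference that matters.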


If you’re okay with servers located in Germany, Hetzner is a provider I can vouch for and they offer additional egress at 1EUR/TB. 20TB included, too. (Billing has been rather painful, though.)


I have a dedicated server at Hetzner for 25 euro a month. I get an i7, 16 GB of RAM and 2x3 TB in RAID 1, which is nice since I'm hosting large media files.

I'm currently sitting at 4.2 TB of outbound traffic for the last 30 days, so I still have plenty of room to scale up my outbound traffic before I hit any limits. But most importantly my costs are fixed.


How reliable do you find Hetzner servers to be?


Quite. I did have some issues at the start of the year with my server becoming unreachable every couple of weeks, a couple of times. Not sure if it was a fault I caused or some kind of networking issue. I know I had tinkered with the server a bit earlier, but it seems to have resolved itself without me really doing anything, so it could be either way.

One problem, apparently, is that some Americans have really bad peering to my server. As a European, I can't really confirm whether this is the case, but it's what I've heard.


Thank you


They've generally changed to unlimited traffic for the standard 1Gbps uplink (with 1Gbps guaranteed bandwidth).

Not unlimited on the 10Gbps uplink, though; there it's free up to 20TB:

> Traffic usage is unlimited and free of charge. Please note that our unlimited traffic policy does not apply to servers that have the 10G uplink addon. In this special case, we will charge the usage over 20TB with € 1.00/TB. (The basis for calculation is for outgoing traffic only. Incoming and internal traffic is not calculated.) There is no bandwidth limitation.


Can also vouch for Hetzner. Used them at several companies and they've always been pleasant to deal with.

I've moved several people off AWS into Hetzner exactly because of their egress costs, in one case cutting their total hosting cost by 90% for that reason.

Even for people who stick with AWS and don't want to deal with any added complexity, even something as simple as putting a caching proxy in Hetzner and routing European customers to it can sometimes produce significant cost reductions.


Check out Time4vps, they have some excellent prices on storage servers and their egress is generous.


Never had issues with their billing. They allow usage alerts, give decent price previews and detailed invoices.


Besides Hetzner, I can also highly recommend netcup.eu, especially for hobby projects. Their prices are even lower and include more data volume. Their interface is not as nice as Hetzner's or DigitalOcean's, but I am fine with that. https://www.netcup.eu/vserver/vps.php Billing: if you have access to a European bank account, they offer SEPA direct debit, which works like a charm.


Have you looked at Time4vps? I have been using one of their 1TB storage servers, and I pay quarterly what netcup seems to charge monthly. It's OpenVZ instead of KVM, but I use it for backups with rsync and Borg. I also run a Calibre library on mine and have never had any issues.


Can you elaborate on why billing is a pain point? Thanks.


Could not set up auto-charging; I had to visit the billing portal once a month and manually initiate a PayPal or credit card transaction. Probably okay if you're a company, not so convenient for an individual with a side project (I prefer set-and-forget).

That was two years ago though, maybe it has improved.


This is only the case with PayPal; if you switch to a credit card, it auto-charges you.


Hmm, weird, I believe I switched from credit card to PayPal at some point and there was no auto-charging prior to that either. Anyway, happy to be corrected.


Hasn't been an issue for me. I pay via credit card, no trouble.


I also haven't had an issue paying by a credit card. The invoice is detailed enough too.


Hetzner also has some traffic flatrate servers, but after a specific threshold your bandwidth will be capped.


I'm transferring out ~5TB/day and pay no charges for it. I'm using scaleway (https://www.scaleway.com/en/pricing/)


How are you finding the reliability? I've been hosting my personal website on Scaleway, and there's been quite a bit of downtime (say 40 minutes every few weeks). Not a problem for my personal website, but I'm not sure I'd want to host production services on it.


This was indeed true earlier with the C2* series instances; I used to face this problem daily, since I also used NAS. They have deprecated that dedicated-box series now, and I'm currently using GP1-M, which is reliable.


Would you mind sharing how much you’re spending at Scaleway each month (a ballpark would be enough)? I’m just generally wary of claims of unmetered resources at oversubscribed cloud providers — I mean if I’m paying $5/mo and transferring 150TB they probably have every incentive to cut me off. A clearly defined quota with moderate overage fees actually gives me peace of mind.


Current monthly charges are around €300, running 3 servers in total.


  External outgoing traffic
  First step up to 75 GB: 0 € per GB
  From 75GB to 499TB: 0.01 € per GB/month
Am I missing something?


Yes, you're reading the object storage traffic section.

The servers themselves come with unlimited transfer.


That's object storage, not the servers. AFAIK, Scaleway has unlimited server traffic.


You might want to look for a provider who is a member of the "Bandwidth Alliance" [0].

---

[0]: https://www.cloudflare.com/bandwidth-alliance/


Digital Ocean's bandwidth pricing is pretty solid at $10/TB. It's pooled between droplets, too, so it's often cheaper to spin up a few droplets you're not using to get slightly better bandwidth prices if you use a lot. Sadly, I just missed out on being grandfathered in at free bandwidth, which would have been great for my PortableApps.com open source project. We'll be hitting 100 TB a month soon across all downloads.


PortableApps.com looks like an interesting project. Could you give me a quick technical explanation of what it does?


It allows you to use Windows apps without needing to install them into Windows, so you can sync it between machines in a cloud folder like Dropbox/Google Drive, carry it on an external flash/hard drive, or use it on a machine you may not have install rights to. You can also keep separate copies of the same app for work and personal on the same Windows account. It's packaged as an app manager with a start menu, app store, automatic software updater, backup/restore functionality, etc.

On the technical side, we make use of an app's ability to direct where it stores its settings, if it has one, and also move settings into/out of the registry and/or APPDATA on the local machine when needed. Our open source 'launcher' acts as a helper app to handle this for each app, so it doesn't mess up a local version that's already there, and so it adjusts paths if you move between PCs and the paths to your apps or documents change.


Very interesting, thank you!


Scaleway has 0 bandwidth fees and unlimited bandwidth.


OVH or Soyoustart (or even Kimsufi, if you’re happy with smaller instances, no SLA and only 100Mbps) has unmetered bandwidth.


I haven't been shopping around for a while, but the most I saw in Europe was "unlimited" bandwidth for smaller accounts. We use Glesys now.


This is precisely why we switched from AWS to DigitalOcean. It cut our monthly hosting costs by 80% and vastly simplified our entire setup. We also saw some solid performance gains in some areas (we wrote up our benchmarks: https://goldfirestudios.com/blog/150/Benchmarking-AWS-Digita...).


There's a link in that blog post (https://www.digitalocean.com/pricing/bandwidth/) which got me thinking about the $0.01 price and the bandwidth pool.

1,000 GB of data consumption is $10 (1,000 GB @ $0.01/GB), but adding a $5 droplet adds 1,000 GB to your pool.

So by adding cheap droplets you could lower the bandwidth cost by 50%?
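The trade-off can be sketched in a few lines of Python, assuming the numbers quoted in this thread ($0.01/GB overage; a $5 droplet adds 1,000 GB to the shared pool):

```python
# Sketch of the droplet-vs-overage trade-off, using the prices quoted
# in the thread (assumptions, not authoritative pricing).

OVERAGE_PER_GB = 0.01   # $/GB beyond the pooled allowance
DROPLET_PRICE = 5.00    # cheapest droplet
DROPLET_POOL_GB = 1000  # bandwidth it adds to the pool

def overage_cost(excess_gb):
    """Cost of just paying the per-GB overage."""
    return excess_gb * OVERAGE_PER_GB

def extra_droplet_cost(excess_gb):
    """Cost of covering the excess with enough $5 droplets instead."""
    droplets = -(-excess_gb // DROPLET_POOL_GB)  # ceiling division
    return droplets * DROPLET_PRICE

# 1,000 GB over the pool: $10 of overage vs. one $5 droplet.
print(overage_cost(1000), extra_droplet_cost(1000))
```

So yes, at these rates the idle-droplet trick halves the marginal bandwidth cost — with the caveats the replies below raise about whether the hassle is worth it.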


At a previous company, we literally did that, and also had to get our VM limit increased.

But it only gets you so far; it's a cost optimization at the low end, but eventually it isn't "worth it".

You end up checking your bandwidth use against the pool limit, spinning up a new VM when you get close to it, then scaling back down the next month so you aren't paying for unneeded bandwidth. Paying the extra $5 and not worrying about some custom price hack is a lot easier and less stressful.


We use Direct Connect to get traffic to edge nodes where we buy fixed 10Gbps links and pay a fraction of the AWS cost. AWS bandwidth costs are ridiculous.


Direct Connect still charges per GB out. The cheapest listed location is $0.02/GB.


Yes I said it’s a fraction of the cost. One fifth.


Still cheaper than normal outbound data transfer rates though!


I'm not familiar with Direct Connect. Does it work out as AWS giving reduced bandwidth fees for certain providers?


It’s a private link to an external provider and you pay much less for transit to that provider.


I hope they follow Linode (https://www.linode.com/docs/platform/billing-and-support/net...) and offer the ability to combine bandwidth from different Droplets.


Bandwidth is already pooled between all your Droplets... this was implemented sometime last year (I think). (DO employee here)


I cannot seem to find AWS bandwidth pricing. The only thing I found was some blog post from 2011. How do I check this out?


OK, it seems to show up at calculator.aws when you go to the "advanced" section.


Yeah, it's ridiculously obscured, which is partly why it surprised me so much. I hadn't seen the charge mentioned anywhere. That alone seems like it should send up giant red flags of bad business practice.


I find some of the limits weird https://www.digitalocean.com/docs/networking/vpc/

- VPC network ranges cannot overlap with the ranges of other networks in the same account. (Edit: Does this mean each VPC in the account has to have a non-overlapping subnet?)

- Resources do not currently support multiple private network interfaces and cannot be placed in multiple VPC networks.

- Not being able to change the VPC connected to stuff without taking a snapshot


Pretty standard? Taking AWS for example:

- You can do this, but it's highly discouraged since it means no VPC peering if you ever need that.

- Can't do this at all with network interfaces, it all is via VPC peering.

- Can't change the VPC after an instance has been created, you have to take a snapshot and relaunch it.


I basically have no clue.

I thought one would have a node in two VPCs, e.g. app and database, so it can speak to both, but the load balancers can't.

With peering one would have an app VPC and a database VPC and peer them?


In AWS you use a mix of private and public subnets within a single VPC.

Not sure best practice for DO since I haven't tried their VPC setup but it doesn't appear to have a way to let two VPCs interact yet.


Interesting, I didn’t know that about AWS. I'm more familiar with the Google Cloud version of VPC. Seems the DO implementation is more like the AWS version.


For what it's worth VPC ranges are allowed to overlap in GCP -- and do by default -- but then you aren't able to peer them. I kind of prefer the DO/AWS constraint.


Agreed, having paid the cost of a few VPC moves to separate ranges on AWS in order to gain peering.


No such constraint in AWS.


I misremembered. Thanks for the clarification.


Heh, no worries -- mostly the same deal there:

- You can do it, but it's probably not a great idea if you need to do VPC peering (or attach multiple VPCs to one VM, see next).

- Does actually work, but it does not work if the VPCs you're trying to attach to a single VM have overlapping CIDRs.

- Same deal, almost. You cannot add or remove network interfaces from an existing VM.


You cannot peer 2 VPCs that have an overlap, but you can have multiple VPCs with overlaps. It only matters for the 2 VPCs you want to peer.


But AWS tells you to not overlap them, and likely keeps that behavior for legacy reasons.


I could have missed it, but I've never seen a suggestion not to overlap network addresses unless you want to peer them.

If you're launching ephemeral networks for testing VMs / virtual appliances in their own, isolated networks, it can be totally feasible to have lots of them using the same addresses. You can only create 5 VPCs (at all) by default per AWS account, but they'll raise that limit for you if you request it.


> - VPC network ranges cannot overlap with the ranges of other networks in the same account. (Edit: Does this mean each VPC in the account has to have a non overlapping subnet?)

Overlapping subnets tend to be a mistake and will bite you in the behind whenever you want to peer them. What'd be your reason to want overlapping subnets in the first place?


Got to love DO.

Simple pricing, nothing hidden, not the most feature rich ecosystem, but I get no billing surprises.

Source: customer for 3 years.


Over 7 years, 100s of VMs atm. Tried to get into AWS and GCP a few times over the years, just for the resume-building and fun, and didn't see the point. While I can understand they fit some use cases, I think a good rule of thumb would be: if you don't know why exactly you need AWS/GCP/Azure, go with DigitalOcean. I worked at 2 companies that, if they had gone with AWS as they'd planned, would've gone under three times over, mostly because of the egress. If you're a startup and DO's services mostly cover your use case, it's antithetical for you to go with AWS/GCP.


Just switched to AWS from DO after 5 years. DO was working great, but I needed a more managed solution (I'm very familiar with the sysadmin side, but just wanted to reduce workload), so I went with AWS Fargate and am happy with it so far.


Same here, happy customer for 4 years. Currently we have ~60 VMs of different sizes (down from ~100 before COVID lockdown).

My main wish at this point is cross data center load balancers.


What do you use all the VMs for, if you don't mind me asking?


It's not a personal project, but for work. Basically to run our mobile backend (nginx, spring boot, mongodb, rabbitmq, etc), staging and production. And all that needs redundancy of course.

We manage them using Ansible.


Me too, but they recently let me down with their managed Redis. They clearly mention that their offering has daily backups, but it actually doesn't (I had to contact support to find out, though). Had to migrate away from them because of that.


Hey, Kamal from DigitalOcean here. I'm sorry that happened to you! You're right, managed Redis Databases do not support backups[0] currently. I found the page on the website that says they do and let the team know. They will correct it asap.

[0]: https://www.digitalocean.com/docs/databases/redis/#redis-lim...


Hey Kamal, good to see you here. I'm sorry to hijack this thread, but I'm hoping someone from DO could provide an official response to this often-cited post on HN regarding security issues on your K8S offering: https://news.ycombinator.com/item?id=22490390

Is there a chance you could poke someone into looking into this?


Hello,

I'm the tech lead for Kubernetes at DO. Just wanted to jump in and provide some clarification around the security issues you brought up.

The blog post you're referring to came out in December 2018, shortly after we released DOKS as a Limited Availability offering. By the time we announced our General Availability release in May 2019, we had done the following:

1. Changed our node bootstrapping process so that etcd information is no longer necessary in the metadata API, and removed said etcd information from metadata.

2. Firewalled off etcd so that it's accessible only inside the cluster.

3. Shifted how we run the CSI controller component so that a DO API token no longer needs to be stored as a secret in the cluster.

4. Switched from Flannel to Cilium as the CNI plugin, which allows users to configure network policies. We don't configure any network policies by default, but the option is there for users who want to use them.

These changes fix the vulnerabilities explained in the blog post. We do have further hardening measures planned, including limiting the scope of API tokens (one of the suggestions from the blog post, and also an often-requested feature from DO customers), but that's a big project so we can't provide a firm timeline for it at this point.

Hope this clarifies the current situation. If you or anyone else finds new security issues with DOKS (or other DO products) we would love to know about it. Our security team is always accepting vulnerability reports via their disclosure program: https://www.digitalocean.com/legal/contact-security/


It does, thank you for the in-depth response! I'll refer to this comment if I ever see that post brought up again.


Hey, I ran this by the DOKS team and they confirmed that this was taken care of a while back. Just to clarify, that issue existed while the product was in Limited Availability (think alpha). Nodes are now bootstrapped in a different way that eliminates the need to expose sensitive info in metadata or anywhere within the cluster itself.


Thank you!


I am a happy DO customer but wish they would have the ability to pay yearly.

Corporate prefers paying yearly to paying monthly, and for that reason work uses Linode. (Which is not bad either, IMO)


I've been a DO client since the beginning. Can anyone tell me how they compare to Linode?


DO has so far been very good at keeping my CC details safe. Can't say the same for Linode (https://news.ycombinator.com/item?id=5552756).


As a Linode user, DO has many more features and makes me want to switch: Managed K8S in my region, Managed Postgres, private networking and now VPC...


Do they talk at all about what they're using to provide the VPC overlay? I have a DO k8s cluster and it uses Cilium for the CNI, which turns out to be quite useful, so I guess I'm wondering if they're also using Cilium for this.

(Over in AWS land, they wrote a CNI for their own VPC networking. It turns out to have many strange limitations. For example, you can only run 17 pods on a certain type of node, because that node is only allowed to have 19 VPC addresses. I was quite surprised when pods stopped scheduling even though CPU and memory were available. Turns out internal IP addresses are a resource, too. DigitalOcean has the advantage of starting fresh, so might be able to use something open source that can be played with in a dev environment and extended with open source projects.)


A better way of doing natively addressable pods is to assign whole subnets (like a /25) as a secondary interface and distribute that to pods via CNI. I think GKE's pod network works that way. Not sure why EKS decided 17 pods is OK, lol.


[flagged]


The parent poster sounds more credible. Maybe tone it down a bit; this is a forum for discussion, not a LAN party.


Thanks, I’m aware that AWS hacked the shit out of their inflexible legacy design to support this (as well as hid docs on GitHub and continues to charge you for those ENIs). What else is new?


[flagged]


Hm, I think you are right on the price (at least I don’t see it anymore). The fact that it’s a ridiculously complex feature remains, though. We ended up just running a regular overlay, since messing around and planning for ENIs is not worth it (I suppose that's not an option for EKS nodes).


It looks like each physical server in EC2 can have 750 IPs, so if your VM is 1/Nth of the server you also get 1/Nth of the IPs.


It is actually based on the EC2 instance type you boot; generally, the bigger the instance, the more ENIs you can attach. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-en...

The EC2 k8s network driver they wrote essentially attaches/detaches extra ENIs on the fly and pre-allocates IP addresses on your EC2 host to allow for fast pod spin-up/down.

I found this article pretty helpful to explain some of the AWS differences: https://www.contino.io/insights/kubernetes-is-hard-why-eks-m...
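The "17 pods" number mentioned upthread falls out of the pod-capacity formula documented in the amazon-vpc-cni-k8s project: max pods = ENIs × (IPv4 addresses per ENI − 1) + 2. A quick sketch; the t3.medium ENI/IP counts are taken from AWS's per-instance-type table:

```python
# Pod capacity under the AWS VPC CNI: each ENI's primary IP is reserved
# for the node, and 2 is added for host-networked system pods (per the
# formula in the amazon-vpc-cni-k8s docs).

def max_pods(enis: int, ips_per_eni: int) -> int:
    return enis * (ips_per_eni - 1) + 2

# t3.medium: 3 ENIs x 6 IPv4 addresses each
print(max_pods(3, 6))  # 17
```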


> Turns out internal IP addresses are a resource, too.

That's not what is happening in AWS. IP addresses are resources (duh), but that's not the issue. With their CNI plugin, each pod gets an IP on an Elastic Network Interface. ENIs aren't just virtio virtual networking; they could be ENA (100Gbps) or Intel VF (10Gbps). It's a hardware limitation of Amazon's virtualization stack, starting with previous-generation instances.

> I was quite surprised when pods stopped scheduling even though CPU and memory were available.

This is well documented here: https://github.com/aws/amazon-vpc-cni-k8s


Why don’t most VPC providers offer IPv6? Is there some kind of implementation issue with it, or is it just that you don’t need it?


When you're using a private network, v4 address exhaustion doesn't matter much, and the simplicity of only 4 octets helps with IP memorability. I would still prefer a v6 option, though, as keeping private networks on v4 might be contributing to the slow adoption of v6.


Life sure would be easier if "cloud native" meant IPv6-only (except the load balancer) with non-overlapping unique addresses everywhere. 10/8 doesn't go far if you give each VM a /24 and each k8s cluster a /16.
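To make that concrete, here's the prefix arithmetic (a quick sketch; the /24-per-VM and /16-per-cluster sizes are just the ones from the comment above):

```python
# How quickly 10/8 runs out under the allocation scheme described above:
# a /24 per VM and a /16 per k8s cluster.

def subnets_in(prefix_len, within=8):
    # Number of /prefix_len networks that fit inside a /within.
    return 2 ** (prefix_len - within)

print(subnets_in(24))  # 65536 -- that many /24-sized VM allocations, total
print(subnets_in(16))  # 256   -- only 256 /16-sized k8s clusters
```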


What network are you running where you're giving each virtual machine a /24? That's insane.

10/8 should go very far if you do it correctly, hence why it's in use in almost all internal networks.


Kubernetes, because for whatever reason people are suspicious of using the DHCP provided by the VPC.


Where have you experienced shortages? Even with a /16, you are talking about millions of unique IPs.


A /16 has ~65k unique IPs.


> When you're using a private network v4 address exhaustion doesn't matter much

Until you start trying to connect to enterprise networks...

This is seriously ridiculous. It's 2020, and Google and now DO have no IPv6 in their cloud networks.


DO has IPv6 on the public side of their network.

Couldn't tell you why they've chosen not to use IPv6 for VPC networks, though. Probably just for management simplicity.


DO sort of has IPv6. Their load balancers don't do IPv6, and you can't use IPv6 in their managed k8s offering.

(My guess, honestly, is that nobody asks for it. IPv6 is a problem for Some Other Day.)


They also only give you 16 addresses. 16! Most providers give you an entire /64 or sometimes even a /48.


Wow. In France, home ISPs generally give you a /64 (Free, for example, known for Online/Scaleway). I think 80% of their fiber deployments come with IPv6 enabled now, and in a few years it should be 100%.


When I was a customer, Free gave you eight (or was it 16?) /64s (as they appeared in the UI[0]; I seem to recall they may not all have been contiguous), 7 of which you could use for prefix delegation (I can't recall why using the first one caused issues; IIRC it was special-cased in some way as the "main" one that the Freebox wants to manage). Also, it was 6rd, not purely native IPv6. Hopefully they've changed that, because latency was bad enough that devices would often pick IPv4 through Happy Eyeballs, and throughput was a half to a tenth of the v4 path.

After that, I had a whole native /56 at Red by SFR.

Currently I'm at Sosh by Orange and I have a native /56.

[0]: https://lafibre.info/remplacer-freebox/freebox-erl-et-ipv6/?...


As a (former) network security guy, I shudder at the thought that you WANT to expose your internal routing details. That's a recipe for security disaster.

It is also an administration problem - you've given out an internal IP network to someone else -- and now your ability to move it to a different IP address for whatever reason depends on the change processes at the other enterprise, which can take months -- or on the ability of your own network infrastructure to NAT it to the new location.

It tends to work much better all around if you have a well defined "external" interface, that gets properly monitored and filtered with a network/app firewall, and gets routed to the right server -- and then internal change for your organization are decoupled from change processes in the others.

And for that, v4 exhaustion really doesn't matter.


I don't get this. IPv4's address space is so small I can trivially scan any internal network, and there are a ton of ways to gather that data covertly from behind the firewall via maliciously crafted web pages or apps, abuse of any number of P2P apps or protocols (VoIP, video chat, WebRTC, etc.), and so on. IPv6 actually makes scanning harder since the address space is massive. E.g. if I have a /64 routed internally I have to scan 2^64 addresses to find internal hosts!
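The asymmetry in rough numbers (a quick sketch; the /16 is a stand-in for a typical internal v4 range):

```python
# Address-space sizes a scanner has to cover: a typical internal v4
# range vs. a single routed v6 /64.

v4_internal = 2 ** (32 - 16)  # a /16: 65,536 addresses
v6_subnet = 2 ** 64           # one /64: ~1.8e19 addresses

print(v4_internal)            # 65536 -- feasible even from a browser
print(f"{v6_subnet:.1e}")     # 1.8e+19 -- not brute-forceable
```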

If knowing internal IP addresses is a security risk, then IMHO you have serious security problems. I used to do netsec too and the cornerstone was internal scans for unpatched or rogue systems and services and keeping systems patched and locked down. A network is only as secure as what is connected to it! We also had smart switches and APs where you could lock port to MAC and IP and thus could prevent rogues.

My personal rule was: any system that would not be safe to directly connect to the Internet without a firewall is insecure and needs to be fixed. The only exception is backplanes for things like internal databases/services or testing/dev, and those were separate networks for that purpose only. Separation was either physical or virtual/cryptographic. Back then we didn't have stuff like ZeroTier so we did that with IPSec and it was ugly, but we did it. Those nets could sometimes access the Internet (with restrictions) but could not even see the controlled internal LAN. They accessed the net via a port to outside the DMZ.

Next up was auditing software installed on internal systems. Next up was monitoring network traffic to detect anomalous activity. Firewalls are always the last line of defense. NAT is not a security feature at all.

I never once worried about keeping internal IPs secret (why?) and we ran IPv6 internally without NAT because IPv6 NAT is dumb.

We had two incidents when I was there. Both were the result of phishing to get malware onto personal PCs or phones.

My very strong personal opinion is that security people worry about the wrong things. They worry about network security and firewalls when what should really terrify them is phishing, auto-updating software made by who-knows-who, popular apps and SaaS services that are invisible security dumpster fires (Zoom anyone?), and of course barbarous demonic evocations like "npm install ...". Your firewall will do very little to save you from any of that, and NAT won't do crap because once again NAT is not a security feature.


Mostly agree, but do want to clarify:

Obscurity is NOT security. But obscurity as one layer in a larger defense-in-depth setup IS helpful.

Do note that scanning IPv4 through a phishing page is still about a million times harder (literally) than targeting a known address.
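Rough arithmetic behind that claim (my numbers, not the parent's): a drive-by scanner probing the RFC 1918 private ranges blindly has tens of millions of candidates versus exactly one known address:

```python
# Total size of the RFC 1918 private IPv4 space a blind scanner
# would have to sweep, vs. 1 known internal address
import ipaddress

ranges = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
total = sum(ipaddress.ip_network(r).num_addresses for r in ranges)
print(total)  # 17891328
```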

And NAT is not security, but in some contexts it is still helpful as one layer in a defense-in-depth setup - you can't directly attack something that's not routable.

Security is not binary; there are costs and there are benefits to various setups. My point was that the benefits provided by being able to provide an internal IPv6 address to an external entity are dwarfed by both Netsec and netadmin costs.

Also, if you can so easily scan my internal network with malicious web pages, you can probably passively listen for the v6 addresses. On the networks I managed, browsing happened through VNC to a browser on a tightly controlled host that could only connect outside, and only through a proxy. How do your phishing pages counter this?


My approach is and was always edge-first. Security begins at connected devices; everything else is an afterthought. The only time you try to secure something primarily at the network level rather than the device level is when it's legacy junk you can't secure otherwise and that you must use.

I am not opposed to network firewalls and such, but they're just defense in depth. If the whole network wouldn't remain secure if it were connected to the Internet with no firewall, it's not secure.

Given that these things are afterthoughts, I am not willing to prioritize them much over efficiency, complexity reduction, and user experience. Afterthoughts should be sacrificed to complexity reduction because complexity negatively impacts security a lot more. Inefficiency and poor UI/UX also have security implications. They increase the amount of "shadow IT" type activity and also seem to make phishing easier. If you secure something in ways that prevent people from getting their work done, they will get their work done insecurely.

Treating NAT as a must-have or should-have rather than the ugly hack you don't want to have increases complexity and harms UI/UX by making P2P stuff not work and making people work harder to do simple things. If removing NAT makes you insecure, you were insecure to begin with.

Needless to say I am a fan of the BeyondCorp/deperimeterization approach. Ideally physical networks should be dumb pipes and everything should be virtual. The LAN itself is legacy baggage.


I do not disagree.

If you look at my original message, I was not suggesting NAT was useful - on the contrary, I was cautioning against relying on your internal NAT as a mitigation of the other enterprise's change processes. My whole post was about complexity reduction (as it relates to inter-enterprise connections)...

> Needless to say I am a fan of the BeyondCorp/deperimeterization approach. Ideally physical networks should be dumb pipes and everything should be virtual. The LAN itself is legacy baggage.

I also like it. But the post I was originally replying to implied a server->server connection between two enterprises, which is afaik not at all addressed by BeyondCorp or any of the projects it inspired - specifically, you need to treat the other corp like a Google user at home, rather than a Google employee in a hotel because you cannot enforce trusted hardware, inventory tracking, or any of the other things that make BeyondCorp as useful as it is.


DigitalOcean seem to be slowly but surely becoming a "cloud provider", rather than a "VPS provider" - it's really great to see some attractively priced alternatives to Azure/AWS/GCP!

I was wondering if DO publish some kind of roadmap? I'd really like to know what else they plan on delivering over the next year or so?


Not being able to reassign, delete, or change the CIDR of the default VPC is going to be a problem for most folks. Looking forward to the next release where this is fixed. The fact that we have day-1 support for Terraform is awesome!
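For anyone curious what the Terraform side looks like, a minimal sketch using the DigitalOcean provider (resource and attribute names per the provider docs; region, names, and CIDR here are placeholders):

```hcl
# Create a VPC and attach a droplet to it (sketch; values are placeholders)
resource "digitalocean_vpc" "example" {
  name     = "example-vpc"
  region   = "nyc3"
  ip_range = "10.10.10.0/24"
}

resource "digitalocean_droplet" "web" {
  name     = "web-1"
  size     = "s-1vcpu-1gb"
  image    = "ubuntu-20-04-x64"
  region   = "nyc3"
  vpc_uuid = digitalocean_vpc.example.id
}
```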


> day 1 support for Terraform

VPC support on DigitalOcean was soft-launched almost a month ago:

https://www.digitalocean.com/docs/networking/vpc/quickstart/

https://www.reddit.com/r/digital_ocean/comments/g1hkhu/digit...


All I'm missing now is the ability to provision a droplet without a public IP. Sure, I can disable the interface, but in a VPC I really don't want publicly accessible resources except well-defined entry points.


Aside: I got curious about their web video player. Turns out it's hosted using a service called Wistia. Their 'about us' video is fantastic. https://wistia.com/about-wistia


They must be great, my servers are constantly receiving hack attempts from Digital Ocean IPs.


Does this mean that previously to this change, without a software firewall running you'd be vulnerable to attacks on the private network from other customers? (I've never used DO).


No. The private network was originally shared across all accounts, but later on they changed it to be isolated per account. It's been that way for a couple of years.

The introduction of VPC just means you can isolate within the same account.


Yes, on both Digital Ocean and its 'brother from another mother' Linode. I have a client with a few Linode VPSs and their biggest attacks by far come from the 'private' network.


Yes.

They will also automatically enable a private network interface for you if you use their Floating IP feature. This caught me by surprise; I found out the hard way :)


That sounds suspiciously like an anchor IP address and not an actual private network interface:

https://www.digitalocean.com/docs/networking/floating-ips/


Ahh you are completely correct. It caused issues for me as it added a new interface that my firewalls knew nothing about.


Texts too: trace the IPs behind the phishing URLs in them and you find they're Digital Ocean VMs.

The worst part is here in Canada we have to pay for incoming texts.


What plan/provider requires you to pay for incoming texts?


Sounds like you are on a pre-pay plan?


Just a regular monthly plan.

Some plans will claim to be "unlimited texting" plans, but really the charges are still there, just hidden. Maybe the provider has an agreement with other mobile phone providers to reimburse them for texts their customers send.

My last provider refunded the cost if I forwarded the message to their spam department text number.

We have pretty terrible mobile phone plans and rules here in Canada.


I live in Canada (Ontario) and have never heard of any plan doing something like this. I am on prepaid myself and have unlimited texting.


That sounds very generous for a pre-paid plan. Is it unlimited for all incoming phone numbers or do you choose specific phone numbers?

I've seen plans where you choose friends or pay more, but either way you're paying extra for the "free" part, a bit like insurance. Maybe plans with unlimited texting are more common than regular plans, so the extra cost is seen as normal?

Hopefully it's changing or maybe texting is becoming less of a thing due to more mobile Internet access.


Oh this is really cool!! I've been wanting them to do this for a few years, glad they finally did. It has some quirks (have to clone to add to an existing VM) but it's at least a great start!

One thing I want to do is set up a VPN tunnel from my home network and lock everything else down. Wasn't possible before, but it is now with this.
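For the tunnel part, a minimal sketch with WireGuard (just one option, the parent doesn't name a tool; all keys and addresses here are placeholders):

```ini
# /etc/wireguard/wg0.conf on the droplet (sketch; keys/IPs are placeholders)
[Interface]
Address    = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <droplet-private-key>

[Peer]
# Home router/NAS peer
PublicKey  = <home-public-key>
AllowedIPs = 10.8.0.2/32
```

With the tunnel up, a DO Cloud Firewall rule can then drop everything on the public interface except UDP 51820.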


I did this with Tailscale and it was super slick


This is nice, but Kubernetes already does enough in that department for our needs.

Given that now “Security and customer trust are at the core of what we do”, it would be nice if they could fix the massive oversight in their Spaces offering where every API key has full access to all spaces/buckets.


Didn't know that... I'm running a personal project on DO, so don't have many keys, but that's good to know.

I also wish they'd add the ability to turn DO Spaces into a static site host, like most other cloud providers offer.


I didn't realise they offered Kubernetes as a managed service. Will seriously evaluate when our GCS credits are getting closer to running out. VPC, Kube and managed DB is all we need (and Terraform providers).


According to [0] there were serious security problems with their managed Kubernetes in the early days. May since have been fixed.

[0] https://news.ycombinator.com/item?id=22490390


Yikes - only 60 days ago! Thanks.


When are you going to have a datacenter in Brazil? We don't mind if we have to pay more than your listed prices for other locations. We know Brazil is more expensive. Just do it already.


Can confirm, existing cloud providers I worked with in Brazil were not very good. My clients insisted on using them because of their billing setup with local payment processors (pagseguro).


Yes. That is the only reason we don't try them at work. The latency is too high for our case.


I'd have been more interested if it could be made to work across regions... I also thought private network addresses had been available on DO for a while now.


Yes, they had private networking. Now with VPCs you basically get multiple 'private networks' within one account, as mentioned in the article.


https://console.hetzner.cloud/ has had a free VPC for a while ... great alternative to the aws offering ... looking forward to changing my scale up/down devops code currently on aws to work for any private network ... trying to avoid cloud vendor lock in


This is great - any word on supporting internal IP load balancers on Kubernetes? From what I've read, unlike GKE, AKS etc, all kubernetes services exposed via load balancer gets a public IP. I'd like to keep internal services locked to internal only networks like what you're proposing with this VPC feature.
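For comparison, this is the kind of thing GKE supported via a Service annotation at the time (annotation key per GKE's docs; names here are placeholders; the point is DO had no equivalent):

```yaml
# GKE-style internal load balancer (for comparison only)
apiVersion: v1
kind: Service
metadata:
  name: internal-svc
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: internal-app
  ports:
    - port: 80
      targetPort: 8080
```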


I'm just learning here about the cheaper outbound network fees - I'm always afraid of outbound costs due to a spike in traffic.

Could anyone here share some other benefits of using DO? Or any particular must-haves on a particular cloud provider?


Cloud Firewall, VPC, glad to see useful features added.

Personal experience with DO: I've been a happy DO customer for the past 7 years [1]. The Linux VM uptime record [2] has been amazing for my personal use case.

This week I migrated the droplet hosting my personal website ($5/mo) from DigitalOcean to Amazon Lightsail ($3.5/mo plan). The trigger was the Ubuntu LTS upgrade to 20.04 once again failing to boot on the first few attempts (wasted quite some time chrooting in trying to fix it, to no avail without access to the hypervisor - IaaS...), mainly because of the way DO's flavour of KVM (hypervisor) works (I am not the only one). My other VPSs (e.g. 123Systems - KVM) worked well and never had the same problem, let alone Xen-powered VMs (EC2, self-hosted XenServer, etc. - I know hypervisors well because I worked for XenSource/Citrix on XenServer for several years).

Customer (technical) support quality has dropped over the last few years; I can tell the difference by comparing my last 2 support tickets. I don't want to guess the root cause, sigh...

Finally I have had enough (4th time down with an upgrade); it's time to move on to something better without paying more. Migration was made easy by the way the workloads are deployed (mostly containerized, thanks to Docker/Docker Compose). Lightsail, in addition to the AWS name/brand, has the advantage that Lightsail VMs can be moved into AWS EC2 instances to leverage full-fledged AWS infra (e.g. VPC, etc.) seamlessly.

Over the years, low-end VPS competition has become much tougher (DO, Linode, Vultr, Amazon Lightsail late to the game but a powerful entrant, etc.). DO has lost its bang-for-the-buck edge, without offering a $2.5~3.5/mo plan on par with competitors.

Last but not least, I'll still definitely consider DO as an option when cloud infrastructure is needed ;-)

BTW: On Oracle, my Oracle Cloud free tier trial ended miserably. 2 weeks after provisioning the VMs, Cockpit (I run it on my home NAS to manage/monitor a small group of cloud VPSs via the web UI) reported a failed connection, only for me to find that my account had been terminated without any warning or notification, along with my 2 free VMs based in Phoenix. Lucky that I didn't actually put any workload on them (left them running only - had a feeling something was gonna happen...). I contacted support and was told the account was deleted, no reason given, and was redirected to customer support (My Oracle Support, which I couldn't figure out, so I gave up...). I still don't understand how Oracle Cloud login works...

[1]: https://pbs.twimg.com/media/EWhuECEUEAEJ5gV?format=jpg

[2]: https://pbs.twimg.com/media/EVSbMKmU0AAEg58?format=jpg


Very happy DigitalOcean customer for 4+ years. Great to see this, too.


The value of digital ocean used to be simplicity.

VPC isn’t simple.



