I'm glad they plugged their outbound network transfer fees compared to the others[1]. I was shocked and horrified when my AWS bill (which I pay myself) quadrupled due to outgoing network transfer fees. It's truly outrageous what they charge. I use Digital Ocean a lot now simply to avoid nasty surprises like that. I hope AWS and Google change that.
As long as you don't use vendor-specific tools, who cares? That's the point of Kubernetes: to abstract away the whole "cloud" provider. Let it become a commodity and a race to the bottom.
Oracle cloud? The same technology that runs certain state unemployment systems and has been completely unable to scale, leaving hundreds of thousands of people with no income for the last five weeks?
Oracle really should remove its logo from the footers of all those collapsing web sites. It's embarrassing.
Taken together, the national unemployment systems, generally built prior to cloud scale-out architecture, had to absorb 25x the normal traffic load (5M claims vs. 200,000 in a normal week; even the 2008-2010 peak was 700,000 claims).
Dislike Oracle all you want, even many cloud architected applications would fall over with a 25x traffic increase in a single week (remember Fail Whales?).
That stack was most likely legacy middleware + a database backend: a combination of Fusion Middleware, Oracle Database, CRM, etc., running on physical or virtualized hardware. Not automated, only very basic HA, not easily scalable.
Nothing to do with Oracle Cloud (although Oracle Cloud won't be on my list unless it's marginally cheaper than other cloud service providers).
BTW, talking about Oracle Cloud: my free tier trial ended miserably. Two weeks after provisioning the VMs, Cockpit (which I run on my home NAS to manage/monitor a small group of cloud VPSes via the web UI) reported a failed connection, only for me to find that my account had been terminated without any warning or notification, along with my 2 free VMs based in Phoenix. Lucky that I hadn't actually put any workload on them (I'd left them running idle, feeling something was going to happen...). I contacted support and was told the account was deleted, no reason given, and was redirected to customer support (My Oracle Support; I couldn't figure out how that works, so I gave up...). I still don't understand how Oracle Cloud login works...
Geez, thanks for sharing your experience. I run a number of things that I pay for myself and my cloud bill each month is becoming non-negligible. Even tho it kills me inside I considered looking at Oracle, but this is enough to steer me away. Thank you :-)
Sounds like you should take a good look at DigitalOcean for your personal side projects or fun stuff; it has a much simpler and more transparent billing model (capped, no nasty hidden costs).
I've been a long-time DO customer and overall happy for the past 7 years. I've written about my personal experience [1] with DO in another post.
I had the same experience on my trial, along with SUPER pushy sales people. In the end, after I gave them our setup on DO and they said they'd come up with a proposal for an equivalent setup in Oracle Cloud, they came back and said "can't do it" and their proposal was a little over 3x what our current bill was.
Do we know if the underlying cloud failed or if it was an application issue? I strongly suspect the latter especially when it comes to government projects.
Oracle middleware and other software services are an abomination. They're up there with JIRA and Confluence in terms of how overwhelmingly complex and slow they run. Sure, you get reconfigurability, but if anything of Oracle's goes down it's far more likely to be their software applications rather than their hosting solution.
$0.01/GB is fantastic, but I have a bandwidth intensive ML media application and don't know how to monetize or sell it quickly enough to pay for my bandwidth costs.
Is there a cloud or dedicated server farm with even cheaper outbound bandwidth?
Edit: as much as I hate Oracle, their first 10TB is free, and after that it's $0.0085/GB. Better...
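A back-of-the-envelope comparison of those two price points, using the per-GB rates quoted above. The monthly volumes are hypothetical, and included/pooled allowances on the flat-rate side are ignored:

```python
# Back-of-the-envelope egress comparison using the per-GB prices quoted above.
# The monthly volumes are hypothetical; included/pooled allowances are ignored.
GB_PER_TB = 1000  # providers bill in decimal units

def flat_001(tb):      # $0.01/GB flat, as in the parent comment
    return tb * GB_PER_TB * 0.01

def oracle_style(tb):  # first 10 TB free, then $0.0085/GB, as quoted in the edit
    return max(tb - 10, 0) * GB_PER_TB * 0.0085

for tb in (10, 50, 100):
    print(f"{tb:>4} TB/mo   $0.01/GB: ${flat_001(tb):8.2f}   Oracle-style: ${oracle_style(tb):8.2f}")
```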
If you’re okay with servers located in Germany, Hetzner is a provider I can vouch for and they offer additional egress at 1EUR/TB. 20TB included, too. (Billing has been rather painful, though.)
I have a dedicated server at Hetzner for 25 euro a month. I get an i7, 16 GB of RAM and 2x3 TB in RAID 1, which is nice since I'm hosting large media files.
I'm currently sitting at 4.2 TB of outbound traffic for the last 30 days, so I still have plenty of room to scale up my outbound traffic before I hit any limits. But most importantly my costs are fixed.
Quite. I did have an issue a couple of times at the start of the year where my server became unreachable every couple of weeks. Not sure if it was a fault that I caused or if there was some kind of networking issue. I know I had tinkered with the server a bit earlier, but it seems to have resolved itself without me really doing anything, so it could really be either way.
Another problem, apparently, is that some Americans have really bad peering to my server. As a European, I can't really confirm whether this is the case, but it's what I've heard.
They've generally changed to unlimited traffic for the standard 1gbps uplink (with 1gbps guaranteed bandwidth).
Not unlimited with the 10Gbps uplink, but free up to 20TB:
> Traffic usage is unlimited and free of charge.
> Please note that our unlimited traffic policy does not apply to servers that have the 10G uplink addon. In this special case, we will charge the usage over 20TB with € 1.00/TB. (The basis for calculation is for outgoing traffic only. Incoming and internal traffic is not calculated.) There is no bandwidth limitation.
Can also vouch for Hetzner. Used them at several companies and they've always been pleasant to deal with.
I've moved several people off AWS into Hetzner exactly because of their egress costs, in one case cutting their total hosting cost by 90% for that reason.
Even for people who stick with AWS and don't want to deal with any added complexity, even something as simple as putting a caching proxy in Hetzner and routing European customers to it can sometimes produce significant cost reductions.
Besides Hetzner, I can also highly recommend netcup.eu, especially for hobby projects. Their prices are even lower and include more data volume. Their interface is not as nice as Hetzner's or DigitalOcean's, but I am fine with that.
https://www.netcup.eu/vserver/vps.php
Billing: If you have access to a European bank account, they offer SEPA direct debit, which works like a charm.
Have you looked at Time4vps? I have been using one of their 1TB storage servers, and I pay quarterly what netcup seems to charge monthly.
It's OpenVZ instead of KVM, but I use it for backups with rsync and Borg. I also run a Calibre library on mine and I've never had any issues.
Could not set up auto-charging, had to visit the billing portal once a month and manually initiate a Paypal or credit card transaction. Probably okay if you’re a company, not so convenient for an individual with a side project (at least I prefer set and forget).
That was two years ago though, maybe it has improved.
Hmm, weird, I believe I switched from credit card to PayPal at some point and there was no auto-charging prior to that either. Anyway, happy to be corrected.
How are you finding the reliability? I've been hosting my personal website on Scaleway, and there's been quite a bit of downtime (say 40 minutes every few weeks). Not a problem for my personal website, but I'm not sure I'd want to host production services on it.
This was indeed true earlier with the C2* series instances. I used to face this problem daily, since I also used NAS. They have deprecated that dedicated-box series now; I'm currently using GP1-M, which has been reliable.
Would you mind sharing how much you’re spending at Scaleway each month (a ballpark would be enough)? I’m just generally wary of claims of unmetered resources at oversubscribed cloud providers — I mean if I’m paying $5/mo and transferring 150TB they probably have every incentive to cut me off. A clearly defined quota with moderate overage fees actually gives me peace of mind.
Digital Ocean's bandwidth pricing is pretty solid at $10/TB. It's pooled between droplets, too, so it's often cheaper to spool up a few droplets you're not using to get slightly better bandwidth prices if you use a lot. Sadly, I just missed out on being grandfathered in at free bandwidth, which would have been great for my PortableApps.com open source project. We'll be hitting 100 TB a month soon across all downloads.
It allows you to use Windows apps without needing to install them into Windows, so you can sync it between machines in a cloud folder like Dropbox/Google Drive, carry it on an external flash/hard drive, or use it on a machine you may not have install rights to. You can also keep separate copies of the same app for work and personal on the same Windows account. It's packaged as an app manager with a start menu, app store, automatic software updater, backup/restore functionality, etc.
On the technical side, we make use of an app's ability to redirect where it stores its settings, if it has one, and we also move settings into/out of the registry and/or APPDATA on the local machine when needed. Our open source 'launcher' acts as a helper app that handles this for each app, so it doesn't mess up a local version that's already there, and it adjusts paths if you move between PCs and the paths to your apps or documents change.
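For the curious, the path-fixup part is conceptually simple. Here's a toy Python sketch of the idea only; it is not the actual launcher (which isn't written in Python), and the config format and paths are made up:

```python
# Toy illustration of the "fix up paths when the portable drive letter/mount changes" idea.
# This is NOT the PortableApps launcher; just the concept: settings saved last run contain
# absolute paths rooted at the old portable location, and on launch they get rewritten to
# the current location before the app starts.
def fix_paths(config_text: str, old_root: str, new_root: str) -> str:
    """Rewrite absolute paths that pointed at the old portable root."""
    return config_text.replace(old_root.rstrip("\\/"), new_root.rstrip("\\/"))

if __name__ == "__main__":
    saved = r"LastFile=E:\PortableApps\SomeApp\Data\notes.txt"
    print(fix_paths(saved, r"E:\PortableApps\SomeApp", r"F:\PortableApps\SomeApp"))
    # -> LastFile=F:\PortableApps\SomeApp\Data\notes.txt
```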
This is precisely why we switched from AWS to DigitalOcean. It cut our monthly hosting costs by 80% and vastly simplified our entire setup. We also saw some solid performance gains in some areas (we wrote up our benchmarks: https://goldfirestudios.com/blog/150/Benchmarking-AWS-Digita...).
At a previous company, we literally did that, and also had to get our VM limit increased.
But it only gets you so far; it's a cost optimization at the low end, and eventually it isn't "worth it".
You end up checking your bandwidth use against the pool limit, spinning up a new VM when you get close to it, then scaling back down the next month so you aren't paying for unneeded bandwidth. Paying the extra $5 and not worrying about some custom pricing hack is a lot easier and less stressful.
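The arithmetic behind that hack, as a quick sketch. The assumptions here (smallest droplet at $5/mo adding 1 TB to the pooled allowance, overage at $0.01/GB) are taken from this thread and should be checked against current DO pricing:

```python
# Rough math behind the "spin up cheap droplets for pooled transfer" hack.
# Assumptions (double-check current DO pricing): the smallest droplet is $5/mo
# and adds 1 TB to the account's pooled transfer; overage is $0.01/GB ($10/TB).
import math

DROPLET_PRICE = 5.0      # $/month for the smallest droplet (assumed)
DROPLET_POOL_TB = 1.0    # TB of transfer it adds to the pool (assumed)
OVERAGE_PER_TB = 10.0    # $ per TB of overage at $0.01/GB

def compare(extra_tb_needed: float) -> str:
    droplets = math.ceil(extra_tb_needed / DROPLET_POOL_TB)
    via_droplets = droplets * DROPLET_PRICE
    via_overage = extra_tb_needed * OVERAGE_PER_TB
    return (f"{extra_tb_needed:>5} TB extra: {droplets} droplet(s) ${via_droplets:.2f} "
            f"vs overage ${via_overage:.2f}")

for tb in (0.5, 2, 10):
    print(compare(tb))
```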
We use Direct Connect to get traffic to edge nodes where we buy fixed 10Gbps links and pay a fraction of the AWS cost. AWS bandwidth costs are ridiculous.
Yeah, it's ridiculously obscured, which is partly why it surprised me so much. I hadn't seen the charge mentioned anywhere. That alone seems like it should send up giant red flags of bad business practice.
- VPC network ranges cannot overlap with the ranges of other networks in the same account. (Edit: Does this mean each VPC in the account has to have a non overlapping subnet?)
- Resources do not currently support multiple private network interfaces and cannot be placed in multiple VPC networks.
- Not being able to change the VPC connected to stuff without taking a snapshot
Interesting, didn't know that about AWS. I'm more familiar with the Google Cloud version of VPC. Seems the DO implementation is more like the AWS version.
For what it's worth VPC ranges are allowed to overlap in GCP -- and do by default -- but then you aren't able to peer them. I kind of prefer the DO/AWS constraint.
I could have missed it, but I've never seen a suggestion not to overlap network addresses unless you want to peer them.
If you're launching ephemeral networks for testing VMs / virtual appliances in their own, isolated networks, it can be totally feasible to have lots of them using the same addresses. You can only create 5 VPCs (at all) by default per AWS account, but they'll raise that limit for you if you request it.
> - VPC network ranges cannot overlap with the ranges of other networks in the same account. (Edit: Does this mean each VPC in the account has to have a non overlapping subnet?)
Overlapping subnets tend to be a mistake and will bite you in the behind whenever you want to peer them. What'd be your reason to want overlapping subnets in the first place?
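For what it's worth, checking a candidate range against existing VPCs before creating or peering it is a one-liner with the standard library. A minimal sketch (the CIDRs are made-up examples):

```python
# Minimal check for overlapping VPC ranges before creating a new one or peering.
# The CIDRs below are made-up examples.
from ipaddress import ip_network

existing_vpcs = [ip_network("10.10.0.0/16"), ip_network("10.20.0.0/16")]
candidate = ip_network("10.10.128.0/20")

clashes = [net for net in existing_vpcs if net.overlaps(candidate)]
if clashes:
    print(f"{candidate} overlaps with: {', '.join(map(str, clashes))}")  # -> overlaps 10.10.0.0/16
else:
    print(f"{candidate} is safe to use")
```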
Over 7 years, 100s of VMs atm. Tried to get into AWS and GCP a few times over the years just for the resume-building and fun, didn't see the point. While I can understand they fit some use cases, I think a good rule of thumb would be: if you don't know why exactly you need AWS/GCP/Azure, go with DigitalOcean. I worked at 2 companies that, if they had gone with AWS as they'd planned, would've gone under three times over, mostly because of the egress. If you're a startup and DO's services mostly cover your use case, it's antithetical for you to go with AWS/GCP.
Just switched to AWS from DO after 5 years. DO was working great, but I needed a more managed solution (I'm very familiar with the sysadmin side, but just wanted to reduce workload), so I went with AWS Fargate and I'm happy with it so far.
It's not a personal project, but for work. Basically to run our mobile backend (nginx, spring boot, mongodb, rabbitmq, etc), staging and production. And all that needs redundancy of course.
Me too, but they recently let me down with their managed Redis. They clearly state that the offering has daily backups, but it actually doesn't (I had to contact support to find out, though). Had to migrate away from them because of that.
Hey, Kamal from DigitalOcean here. I'm sorry that happened to you! You're right, managed Redis Databases do not support backups[0] currently. I found the page on the website that says they do and let the team know. They will correct it asap.
Hey Kamal, good to see you here. I'm sorry to hijack this thread, but I'm hoping someone from DO could provide an official response to this often-cited post on HN regarding security issues on your K8S offering: https://news.ycombinator.com/item?id=22490390
Is there a chance you could poke someone into looking into this?
I'm the tech lead for Kubernetes at DO. Just wanted to jump in and provide some clarification around the security issues you brought up.
The blog post you're referring to came out in December 2018, shortly after we released DOKS as a Limited Availability offering. By the time we announced our General Availability release in May 2019, we had done the following:
1. Changed our node bootstrapping process so that etcd information is no longer necessary in the metadata API, and removed said etcd information from metadata.
2. Firewalled off etcd so that it's accessible only inside the cluster.
3. Shifted how we run the CSI controller component so that a DO API token no longer needs to be stored as a secret in the cluster.
4. Switched from Flannel to Cilium as the CNI plugin, which allows users to configure network policies. We don't configure any network policies by default, but the option is there for users who want to use them (see the example sketched just below this comment).
These changes fix the vulnerabilities explained in the blog post. We do have further hardening measures planned, including limiting the scope of API tokens (one of the suggestions from the blog post, and also an often-requested feature from DO customers), but that's a big project so we can't provide a firm timeline for it at this point.
Hope this clarifies the current situation. If you or anyone else finds new security issues with DOKS (or other DO products) we would love to know about it. Our security team is always accepting vulnerability reports via their disclosure program: https://www.digitalocean.com/legal/contact-security/
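A minimal example of such a policy: a default-deny-ingress rule created via the official Kubernetes Python client. The namespace and policy name are placeholders, and Cilium, as the CNI, is what enforces it; this is an illustration, not DO-specific code.

```python
# Minimal default-deny-ingress NetworkPolicy, created with the official Kubernetes
# Python client. The namespace and policy name are placeholders.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig for the DOKS cluster is present

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny-ingress", namespace="default"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),  # empty selector = every pod in the namespace
        policy_types=["Ingress"],               # no ingress rules listed, so all ingress is blocked
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(namespace="default", body=policy)
```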
Hey, I ran this by the DOKS team and they confirmed that this was taken care of a while back. Just to clarify, that issue existed while the product was in Limited Availability (think alpha). Nodes are now bootstrapped in a different way that eliminates the need to expose sensitive info in metadata or anywhere within the cluster itself.
Do they talk at all about what they're using to provide the VPC overlay? I have a DO k8s cluster and it uses Cilium for the CNI, which turns out to be quite useful, so I guess I'm wondering if they're also using Cilium for this.
(Over in AWS land, they wrote a CNI for their own VPC networking. It turns out to have many strange limitations. For example, you can only run 17 pods on a certain type of node, because that node is only allowed to have 19 VPC addresses. I was quite surprised when pods stopped scheduling even though CPU and memory were available. Turns out internal IP addresses are a resource, too. DigitalOcean has the advantage of starting fresh, so might be able to use something open source that can be played with in a dev environment and extended with open source projects.)
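If I remember the AWS VPC CNI docs correctly, that per-node pod cap falls out of a formula over the instance type's ENI count and IPs-per-ENI limit. The sketch below uses illustrative numbers rather than any specific instance type:

```python
# Rough illustration of how the AWS VPC CNI pod cap falls out of ENI limits.
# Formula (roughly, from the VPC CNI docs): pods = ENIs * (IPs per ENI - 1) + 2,
# since one IP per ENI goes to the ENI itself and two host-network pods don't
# consume VPC IPs. The per-instance-type limits below are illustrative examples.
def max_pods(enis: int, ips_per_eni: int) -> int:
    return enis * (ips_per_eni - 1) + 2

examples = {
    "small instance (3 ENIs x 6 IPs)": (3, 6),     # -> 17 pods
    "larger instance (3 ENIs x 10 IPs)": (3, 10),  # -> 29 pods
}
for name, (enis, ips) in examples.items():
    print(f"{name}: {max_pods(enis, ips)} pods")
```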
A better way of doing natively addressable pods is to assign whole subnets (like a /25) to a secondary interface and distribute those addresses to pods via the CNI. I think the GKE pod network works that way. Not sure why EKS decided 17 pods is OK lol
Thanks, I'm aware that AWS hacked the shit out of their inflexible legacy design to support this (as well as hid the docs on GitHub and continue to charge you for those ENIs). What else is new?
Hm, I think you are right on the price (at least I don't see it anymore). The fact that it's a ridiculously complex feature remains, though. We ended up just running a regular overlay, since messing around and planning for ENIs is not worth it (I suppose that's not an option for EKS nodes).
The EC2 k8s network driver they wrote essentially attaches/detaches extra ENIs on the fly and pre-allocates IP addresses on your EC2 host to allow for fast pod spin-up/down.
> Turns out internal IP addresses are a resource, too.
That's not what is happening in AWS. IP addresses are resources (duh), but that's not the issue. With their CNI plugin each pod gets its own Elastic Network Interface. ENIs aren't just virtio's virtual network; they can be ENA (100Gbps) or Intel VF (10Gbps). It's a hardware limitation of Amazon's virtualization stack starting with the previous generation of instances.
> I was quite surprised when pods stopped scheduling even though CPU and memory were available.
When you're using a private network, v4 address exhaustion doesn't matter much, and having only 4 octets helps with IP memorability and simplicity. I would still prefer a v6 option though, as keeping private networks on v4 might be contributing to the slow adoption of v6.
Life sure would be easier if "cloud native" meant IPv6-only (except the load balancer) with non-overlapping unique addresses everywhere. 10/8 doesn't go far if you give each VM a /24 and each k8s cluster a /16.
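Quick arithmetic on how fast 10/8 runs out with that allocation scheme:

```python
# How far 10.0.0.0/8 stretches if each VM gets a /24 and each k8s cluster a /16.
from ipaddress import ip_network

ten_slash_eight = ip_network("10.0.0.0/8")
vm_slots = 2 ** (24 - 8)        # number of /24s inside a /8 -> 65,536 VMs
cluster_slots = 2 ** (16 - 8)   # number of /16s inside a /8 -> 256 clusters
print(f"{ten_slash_eight}: room for {vm_slots} /24s or {cluster_slots} /16s")
```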
Wow. In France, home ISPs generally give you a /64 (Free, for example, known for Online/Scaleway).
I think 80% of their fiber deployments come with IPv6 enabled now. And in a few years, it should be 100%.
When I was a customer, Free gave you eight (or was it 16?) /64s (as they appeared in the UI[0]; I seem to recall they may not all have been contiguous), 7 of which you could use for prefix delegation (I can't recall why using the first one caused issues; IIRC it was special-cased in some way as the "main" one that the Freebox wants to manage). Also, it was 6rd, not purely native IPv6. Hopefully they've changed that, because latency was bad enough that devices would often pick IPv4 through Happy Eyeballs, and throughput was a half to a tenth of the v4 path.
After that, I had a whole native /56 at Red by SFR.
Currently I'm at Sosh by Orange and I have a native /56.
As a (former) network security guy, I shudder at the thought that you WANT to expose your internal routing details. That's a recipe for security disaster.
It is also an administration problem - you've given out an internal IP network to someone else -- and now your ability to move it to a different IP address for whatever reason depends on the change processes at the other enterprise, which can take months -- or on the ability of your own network infrastructure to NAT it to the new location.
It tends to work much better all around if you have a well defined "external" interface, that gets properly monitored and filtered with a network/app firewall, and gets routed to the right server -- and then internal change for your organization are decoupled from change processes in the others.
And for that, v4 exhaustion really doesn't matter.
I don't get this. IPv4's address space is so small I can trivially scan any internal network, and there are a ton of ways to gather that data covertly from behind the firewall via maliciously crafted web pages or apps, abuse of any number of P2P apps or protocols (VoIP, video chat, WebRTC, etc.), and so on. IPv6 actually makes scanning harder since the address space is massive. E.g. if I have a /64 routed internally I have to scan 2^64 addresses to find internal hosts!
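To put rough numbers on that (the probe rate is an arbitrary assumption):

```python
# Scale of exhaustively scanning a /64 vs. a typical IPv4 internal subnet,
# at an (optimistic) 1 million probes per second from inside the network.
probes_per_second = 1_000_000

ipv4_24 = 2 ** 8    # hosts in a /24 internal subnet
ipv6_64 = 2 ** 64   # addresses in a /64

print(f"/24:  {ipv4_24 / probes_per_second:.6f} seconds")
print(f"/64:  {ipv6_64 / probes_per_second / 3600 / 24 / 365:,.0f} years")  # ~585,000 years
```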
If knowing internal IP addresses is a security risk, then IMHO you have serious security problems. I used to do netsec too and the cornerstone was internal scans for unpatched or rogue systems and services and keeping systems patched and locked down. A network is only as secure as what is connected to it! We also had smart switches and APs where you could lock port to MAC and IP and thus could prevent rogues.
My personal rule was: any system that would not be safe to directly connect to the Internet without a firewall is insecure and needs to be fixed. The only exception is backplanes for things like internal databases/services or testing/dev, and those were separate networks for that purpose only. Separation was either physical or virtual/cryptographic. Back then we didn't have stuff like ZeroTier so we did that with IPSec and it was ugly, but we did it. Those nets could sometimes access the Internet (with restrictions) but could not even see the controlled internal LAN. They accessed the net via a port to outside the DMZ.
Next up was auditing software installed on internal systems. Next up was monitoring network traffic to detect anomalous activity. Firewalls are always the last line of defense. NAT is not a security feature at all.
I never once worried about keeping internal IPs secret (why?) and we ran IPv6 internally without NAT because IPv6 NAT is dumb.
We had two incidents when I was there. Both were the result of phishing to get malware onto personal PCs or phones.
My very strong personal opinion is that security people worry about the wrong things. They worry about network security and firewalls when what should really terrify them is phishing, auto-updating software made by who-knows-who, popular apps and SaaS services that are invisible security dumpster fires (Zoom anyone?), and of course barbarous demonic evocations like "npm install ...". Your firewall will do very little to save you from any of that, and NAT won't do crap because once again NAT is not a security feature.
Obscurity is NOT security. But obscurity as one layer in a larger defense-in-depth setup IS helpful.
Do note that scanning IPv4 through a phishing page is still about a million times harder (literally) than targeting a known address.
And NAT is not security, but in some contexts it is still helpful as one layer in a defense-in-depth setup: you can't directly attack something that's not routable.
Security is not binary; there are costs and there are benefits to various setups. My point was that the benefits provided by being able to provide an internal IPv6 address to an external entity are dwarfed by both Netsec and netadmin costs.
Also, if you can so easily scan my internal network with malicious web pages, you can probably passively listen for the v6 addresses. On the networks I managed, browsing happened through VNC to a browser on a tightly controlled host that could only connect outward, and only through a proxy. How do your phishing pages counter this?
My approach is and was always edge-first. Security begins at connected devices; everything else is an afterthought. The only time you try to secure something primarily at the network level rather than the device level is when it's legacy junk you can't secure otherwise and that you must use.
I am not opposed to network firewalls and such, but they're just defense in depth. If the whole network wouldn't remain secure if it were connected to the Internet with no firewall, it's not secure.
Given that these things are afterthoughts, I am not willing to prioritize them much over efficiency, complexity reduction, and user experience. Afterthoughts should be sacrificed to complexity reduction because complexity negatively impacts security a lot more. Inefficiency and poor UI/UX also have security implications. They increase the amount of "shadow IT" type activity and also seem to make phishing easier. If you secure something in ways that prevent people from getting their work done, they will get their work done insecurely.
Treating NAT as a must-have or should-have, rather than the ugly hack you don't want, increases complexity and harms UI/UX by making P2P stuff not work and making people work harder to do simple things. If removing NAT makes you insecure, you were insecure to begin with.
Needless to say I am a fan of the BeyondCorp/deperimeterization approach. Ideally physical networks should be dumb pipes and everything should be virtual. The LAN itself is legacy baggage.
If you look at my original message, I was not suggesting NAT was useful; on the contrary, I was cautioning against relying on your internal NAT as a mitigation for the other enterprise's change processes. My whole post was about complexity reduction (as it relates to inter-enterprise connections)...
> Needless to say I am a fan of the BeyondCorp/deperimeterization approach. Ideally physical networks should be dumb pipes and everything should be virtual. The LAN itself is legacy baggage.
I also like it. But the post I was originally replying to implied a server->server connection between two enterprises, which is afaik not at all addressed by BeyondCorp or any of the projects it inspired - specifically, you need to treat the other corp like a Google user at home, rather than a Google employee in a hotel because you cannot enforce trusted hardware, inventory tracking, or any of the other things that make BeyondCorp as useful as it is.
DigitalOcean seem to be slowly but surely becoming a "cloud provider", rather than a "VPS provider" - it's really great to see some attractively priced alternatives to Azure/AWS/GCP!
I was wondering, does DO publish some kind of roadmap? I'd really like to know what else they plan on delivering over the next year or so.
Not being able to reassign, delete, or change the cidr of the default VPC is going to be a problem for most folks. Looking forward to the next release where this is fixed, and the fact that we have day 1 support for Terraform is awesome!
All I'm missing now is the ability to provision a droplet without a public IP. Sure, I can disable the interface, but in a VPC I really don't want publicly accessible resources except at well-defined entry points.
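On the scripting side, the new VPCs appear to be manageable through the v2 API. Here's a hedged sketch using requests; the endpoint and field names are from my reading of the API docs, so verify them, and the token variable, VPC name, region, and CIDR are placeholders:

```python
# Hedged sketch: creating a VPC through DigitalOcean's v2 API with `requests`.
# Endpoint and field names are from memory of the API docs; verify before use.
# The token env var, VPC name, region, and CIDR are placeholders.
import os
import requests

resp = requests.post(
    "https://api.digitalocean.com/v2/vpcs",
    headers={"Authorization": f"Bearer {os.environ['DO_TOKEN']}"},
    json={"name": "example-vpc", "region": "nyc1", "ip_range": "10.116.0.0/20"},
)
resp.raise_for_status()
print(resp.json()["vpc"]["id"])
```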
Aside: I got curious about their web video player. Turns out it's hosted using a service called Wistia. Their 'about us' video is fantastic. https://wistia.com/about-wistia
Does this mean that, prior to this change, without a software firewall running you'd have been vulnerable to attacks on the private network from other customers? (I've never used DO.)
No. The private network was originally shared across all accounts, but later on they changed it to be isolated per account. It's been that way for a couple of years.
The introduction of VPC just means you can isolate within the same account.
Yes, on both Digital Ocean and its 'brother from another mother' Linode. I have a client with a few Linode VPSs and their biggest attacks by far come from the 'private' network.
They also will automatically enable a private network interface for you if you use their Floating IP feature. This caught me by surprise when I found out the hard way :)
Some plans will claim it's an "unlimited texting plan" but really the charges are still there, hidden. Maybe the provider has an agreement with other mobile phone providers to reimburse them for texts their customers send.
My last provider refunded the cost if I forwarded the message to their spam department text number.
We have pretty terrible mobile phone plans and rules here in Canada.
That sounds very generous for a pre-paid plan. Is it unlimited for all incoming phone numbers or do you choose specific phone numbers?
I've seen plans where you choose specific friends or pay more, but in some way you're paying extra for the 'free' part, a bit like insurance. Maybe plans with unlimited texting are more common than regular plans, so the extra cost is seen as normal?
Hopefully it's changing or maybe texting is becoming less of a thing due to more mobile Internet access.
Oh, this is really cool!! I've been wanting them to do this for a few years; glad they finally did. It has some quirks (you have to clone to add it to an existing VM) but it's at least a great start!
One thing I want to do is set up a VPN tunnel from my home network and lock everything else down. That wasn't possible before, but it is now with this.
This is nice, but Kubernetes already does enough in that department for our needs.
Given that now “Security and customer trust are at the core of what we do”, it would be nice if they could fix the massive oversight in their Spaces offering where every API key has full access to all spaces/buckets.
I didn't realise they offered Kubernetes as a managed service. Will seriously evaluate when our GCS credits are getting closer to running out. VPC, Kube and managed DB is all we need (and Terraform providers).
When are you going to have a datacenter in Brazil? We don't mind if we have to pay more than your listed prices for other locations. We know Brazil is more expensive. Just do it already.
Can confirm, existing cloud providers I worked with in Brazil were not very good. My clients insisted on using them because of their billing setup with local payment processors (pagseguro).
I'd have been more interested if it could be made to work across regions... I also thought private network addresses had been available on DO for a while now.
https://console.hetzner.cloud/ has had a free VPC for a while ... great alternative to the AWS offering ... looking forward to changing my scale up/down devops code currently on AWS to work with any private network ... trying to avoid cloud vendor lock-in
This is great - any word on supporting internal IP load balancers on Kubernetes? From what I've read, unlike GKE, AKS, etc., all Kubernetes services exposed via a load balancer get a public IP. I'd like to keep internal services locked to internal-only networks like what you're proposing with this VPC feature.
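For comparison, this is roughly how GKE handles it: the Service stays type LoadBalancer but carries GKE's internal-LB annotation. A minimal sketch with the Kubernetes Python client; the selector, ports, and names are placeholders, and the annotation shown is GKE's, not something DO supports per the comment above:

```python
# How GKE exposes a Service on an internal-only load balancer: a LoadBalancer
# Service carrying GKE's "Internal" annotation. Selector, ports, and names are
# placeholders; this is an illustration of the GKE approach, not a DO feature.
from kubernetes import client, config

config.load_kube_config()

svc = client.V1Service(
    metadata=client.V1ObjectMeta(
        name="internal-api",
        annotations={"cloud.google.com/load-balancer-type": "Internal"},
    ),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "api"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=svc)
```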
Cloud Firewall, VPC, glad to see useful features added.
Personal experience with DO: I've been a happy DO customer for the past 7 years (1). The Linux VM uptime record (2) has been amazing for my personal use case.
This week I migrated the droplet hosting my personal website ($5/mo) from DigitalOcean to Amazon Lightsail (the $3.50/mo plan). The trigger was the Ubuntu LTS upgrade to 20.04 once again failing to boot on the first few attempts (I wasted quite some time in a chroot trying to fix it, to no avail, without access to the hypervisor - IaaS...), mainly because of the way DO's flavour of KVM (hypervisor) works (I am not the only one). My other VPSes (e.g. 123Systems - KVM) worked well and never had the same problem, let alone Xen-powered VMs (EC2, self-hosted XenServer, etc.). I know hypervisors well because I worked for XenSource/Citrix on XenServer for several years.
Customer (technical) support quality has dropped over the last few years; I can tell the difference by comparing my last 2 support tickets. I don't want to guess the root cause, sigh...
Finally, I'd had enough (the 4th time down due to an upgrade); it was time to move on to something better without paying more. Migration was made easy by the way the workloads are deployed (mostly containerized, thanks to Docker/Docker Compose). Lightsail, in addition to the AWS name/brand, has the advantage that you can move Lightsail VMs into AWS EC2 instances and seamlessly leverage the full-fledged AWS infra (e.g. VPC, etc.).
Over the years, low-end VPS competition has become much tougher (DO, Linode, Vultr, Amazon Lightsail late to the game but a powerful entrant, etc.). DO has lost its key bang-for-the-buck advantage, without offering a $2.50~$3.50/mo plan on par with competitors.
Last but not least, I'll definitely still consider DO as an option when cloud infrastructure is needed ;-)
[1] https://blog.digitalocean.com/its-all-about-the-bandwidth-wh...