Farewell EC2-Classic, it’s been swell (allthingsdistributed.com)
284 points by alexbilbie on Sept 1, 2023 | 180 comments



EC2-Classic was magical. It felt like we were finally living in the future: a world where software was in charge of networking, and all of the legacy cruft that came from having to build networks out of cable could be forgotten.

Rather than care about legacy IP "subnets", the cloud cared about "security groups", which were missing only a couple features (such as "hierarchy") to entirely replace the role of a subnet in traditional networking.

Having spent a lot of time working with EC2-Classic, it made network engineering fun and easy. The new "VPC" mechanism is demoralizing in its complexity, and doesn't seem to allow anything you couldn't express using security groups.

I've written about this before--in more detail or with more rhetoric, depending on my mood--but the big feeling I get from the transition away from EC2-Classic is the frustration that comes when other people make something worse.

https://news.ycombinator.com/item?id=36829190

https://news.ycombinator.com/item?id=33569889

https://news.ycombinator.com/item?id=27990847

https://news.ycombinator.com/item?id=25988915


If you create a new account, it will work like classic EC2. They will set up the VPC for you behind the scenes. Until you "break the glass" and try to configure a VPC, it will work just like old classic did.


They recently launched VPC Lattice.

Which is basically "EC2 Classic networking" but now you pay for it

https://aws.amazon.com/vpc/lattice/

Though it goes a bit further than just security groups on a flat network. Each HTTP endpoint automatically becomes an IAM resource, and you can treat all your services as if they're native to AWS and use the same IAM policies. It's pretty dope!


VPC absolutely allows you to do something that classic did not - add a single entry to your on-prem route table to AWS without transiting the public internet. A shared flat network is problematic for this.


One of the first startups I worked for was all on EC2-Classic. I did enjoy its simplicity. I can understand the need for VPC when integrating with on-premises networks, VPNs, etc. However, you often run into cases where a VPC is simply not necessary and overcomplicates things.


Off topic, but I'm confused about the IPv4 notation in this quote:

> When we launched EC2 in 2006, it was one giant network of 10.2.0.0/8.

In my understanding, /8 means the first octet would be fixed (10.0.0.0-10.255.255.255) and I'm having trouble understanding the significance of the 2 here. If the 2 is significant, wouldn't you write it as /16? Given the source and my confidence on this notation, I feel like there must be some meaning an expert can glean from this. If so, I'm curious to learn. If it's just a typo, then that's fine and I apologize if this is nit-picky.


You are right, it was a typo. It should have been 10.0.0.0/8. Updated that in the blog, also resolved a confusion about the memory in a P3dn.24xlarge.

Thanks for your critical reading!


@brianshaler not only read TFA, but found the bug in TFA, and TFA was by Werner Vogels.

I don't think there's any higher honor/role model as an HN community member. Inspiring.


I’m guessing it’s a typo, but maybe it was a 10.0.0.0-10.255.255.255 subnet with EC2-Classic machines allocating purely from 10.2.0.0-10.2.255.255, and 10.x.0.0-10.x.255.255 was allocated for other services?


I think it must be a typo, because 10.2.0.0/8 is not a valid subnet mask. The largest subnet you can make starting with 10.2.0.0 is 10.2.0.0/15.


It's still a valid subnet notation, the 2 just doesn't mean anything.

When you take the logical AND of the IP and the expanded mask, you'll just end up with 10.0.0.0 as the network address and 10.255.255.255 as the broadcast address.
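If you want to see the masking happen, Python's stdlib ipaddress module does exactly this (quick sketch):

    import ipaddress

    # strict=False tolerates the stray host bits in "10.2.0.0/8"
    net = ipaddress.ip_network("10.2.0.0/8", strict=False)
    print(net)                    # 10.0.0.0/8
    print(net.network_address)    # 10.0.0.0
    print(net.broadcast_address)  # 10.255.255.255

    # the default strict=True rejects it instead:
    # ipaddress.ip_network("10.2.0.0/8")  -> ValueError: ... has host bits set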


You are right. The /8 is 10.0.0.0/8.

I guess it's not a typo, but a weird way to write 10.0.0.0/8 excluding 10.1.0.0/16.


It doesn't exclude 10.1.0.0/16, or any of the rest of 10/8.


My thought was that 10.2.0.0/16 is part of the 10.0.0.0/8 address range which is by default a private network address range (i.e. not routable by any other machines). I interpreted what the author wrote to say that your machine would be assigned an address in 10.2/16 but would still be able to route to other AWS services / machines in other 10/8 subnets (e.g. 10.1/16)


Formally it should be 10.0.0.0/8, but you'll often encounter CIDRs written less formally by including set bits outside of the CIDR prefix length. Often it is shorthand for "the subnet that includes this IP address", so 192.254.33.12/16, for instance. Or it might be a typo! ;)


I would interpret 192.254.33.12/16 to mean 'host 192.254.33.12 in a /16 subnet'.

That's also the notation `ip` on Linux supports.
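Same idea in Python, for what it's worth - the stdlib calls that combination an "interface" (a host plus the subnet it sits in):

    import ipaddress

    iface = ipaddress.ip_interface("192.254.33.12/16")
    print(iface.ip)       # 192.254.33.12  (the host)
    print(iface.network)  # 192.254.0.0/16 (the subnet it belongs to)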


> The complexities of managing infrastructure, buying new hardware, upgrading software, replacing failed disks — had been abstracted away

...and replaced by the complexities of AWS. I mean, even in the pre-AWS days, it's not like you had to buy the hardware or replace the failed disks yourself, web hosters did that for you.


> it's not like you had to buy the hardware or replace the failed disks yourself, web hosters did that for you.

That’s an oversimplification. Yes, you didn’t go into the data center and replace the disk yourself, but it was a very different process than what you get with AWS.

There are still plenty of web hosts that operate the old fashioned way if people want to use them. There is a reason people prefer services like AWS though.


Sure, but it often required a) you to detect and diagnose it on your own and b) a couple hours to a couple of weeks for them to agree and swap it out. Versus the ability to click a few buttons and have a brand new server on separate hardware.


AWS doesn’t have to be complex if those aren’t your requirements, you can use something like Lightsail. If all you need is a VPS, get a VPS. (Though if that’s what you need, I personally like Hetzner more).

But AWS needs to be complex to handle the needs of huge organizations.

I will also point out that the AWS console EC2 launching interface has come such a long way. So much of the complexity is handled for you.


> I mean, even in the pre-AWS days, it's not like you had to buy the hardware or replace the failed disks yourself, web hosters did that for you.

Given no single web hoster has grown to the size of AWS / Azure, it is safe to assume just which complexities the industry is willing to tolerate. I mean, Oracle still rakes in billions, despite everything.


"Retiring services isn’t something we do at AWS. It’s quite rare."

I read this while I was taking a break from working on an epic to migrate our stuff off of OpsWorks before it gets shut down in May.


Bloody shame, OpsWorks was a great service in my experience. I built a few clusters with it before Kubernetes and terraform were a thing.

That said, I heard from folks at AWS that it was not well maintained and a bit of a mess behind the scenes. I can't say I'm surprised it's being shut down given where the technology landscape has shifted since the service was originally offered.

RIP OpsWorks.


OpsWorks was based on a really old fork of the Chef code. I did quite a bit of Chef in my day, but it really only made sense in a physical hardware/VMware virtual instance kind of environment, where you had these "pets" that you needed to keep configured the right way.

Once you got up to the levels of AWS CAFO-style "cattle" instances, it stopped making so much sense. With autoscaling, you need your configuration to be baked into the AMI before it boots, otherwise you're going to be in a world of hurt as you try to autoscale to keep up with the load but then you spend the first thirty minutes of the instance lifetime doing all the configuration after the autoscale event.
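(For the unfamiliar: "baking" just means configuring an instance once, snapshotting it into an AMI, and pointing the autoscaling group at that AMI. Roughly, with boto3 - the instance ID and names here are made up:)

    import boto3

    ec2 = boto3.client("ec2")

    # Snapshot a fully configured instance into an AMI (instance ID is hypothetical)
    image = ec2.create_image(
        InstanceId="i-0123456789abcdef0",
        Name="web-baked-2023-09-01",
        Description="App and config baked in so autoscaled instances boot ready to serve",
    )
    print(image["ImageId"])  # reference this AMI in the launch template / ASG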

A wise Chef once told me that "auto scaling before configuration equals a sad panda", or something to that effect.

Chef did try to come up with a software solution that would work better in an AWS Lambda/Kubernetes style environment, and I was involved with that community for a while, but I don't know what ever became of that. I probably haven't logged into those Slack channels since 2017.

IMO, there are much better tools for managing your systems on AWS. CDK FTW!


That was jarring for me, I wasn't quite sure what the article wanted to say at first. Title "we're doing X", body "we don't do X here".


Chef has been dying for a long time and was beaten to death when the company was bought by some VC and everyone was fired a couple of years ago. I can understand why the service is going away as Chef and Puppet are not exactly gaining marketshare.

AWS rarely retires services and when they do they pretty much give months/year(s) worth of notice before forcing you to migrate which is very nice.


Compare this to Azure and you'll understand.


and I, while dealing with the fallout of migrating something from AWS Data Pipeline (it entered "Maintenance Mode" and they removed console access earlier this year.)


Data Pipeline was also a perennial shit show behind the scenes. Turns out integration is a lot of complexity, even if the service is "simple" in concept and doesn't need to be so real time.

I feel like data pipelines and swf have been replaced by step functions+event bus+lambdas/fargate. We've furthered abstractions over time, and that's a good thing.

Edit: that said, no idea how they scale in comparison.


> Ten years ago, during my 2013 keynote at re:Invent, I told you that we wanted to “support today’s workloads as well as tomorrow’s,” and our commitment to Classic is the best evidence of that. It’s not lost on me, the amount of work that goes into an effort like this — but it is exactly this type of work that builds trust, and I’m proud of the way it has been handled. To me, this embodies what it means to be customer obsessed. The EC2 team kept Classic running (and running well) until every instance was shut down or migrated.

This is why businesses trust AWS.


I still remember when I worked at Serif and we needed web hosting for our new social media website that would be integrated into our scrapbooking desktop software. There were a lot of shiny new technologies around at the time (as there always are). One of them was Microsoft Silverlight, and we implemented its "Deep Zoom" into the website so our users could easily zoom around in the scrapbooks that got published and enjoy the details.

Another was AWS. I think EC2 had just launched, and we happened to be re-evaluating where we hosted our web properties, because we thought the social media website would get a bit more traffic than our older "web components" offering. It was pretty exciting that we could just click a button and spin up an instance in the US, or in Ireland, or Amsterdam. And if the instance died, just click a button again to spin up another one.

As it is today, so it was then: the simple UI hid quite some complexity. There were the different kinds of storage to learn about, and which ones were persistent, and which weren't. If I remember rightly those early EC2 instances weren't as reliable as they seem to be today either, we actually lost one or two completely and had to rebuild them, so we snapshotted a lot.

There was no infrastructure as code or DevOps, but we did implement our own highly available cluster. One of the engineers I worked with actually wrote it from scratch, in C++ (we were primarily a C++ company). It would monitor the databases in the different EC2 instances, and ensure the database cluster stayed alive.

We didn't really know what we were doing, what was smart, there weren't any cloud architects around back then. But the technology worked really well. Once we got past the initial hiccups, we built a pretty active internet community. The website itself was built in PHP with MySQL, and used XSLT to transform XML data (that in turn was built from relational data from SQL) to generate the HTML. There wasn't a great deal of JavaScript, just some jQuery (another technology I just randomly stumbled on while working on this project that also changed quite a few things). Progressive enhancement, and server-side rendering.

I'm trying to remember how we deployed it. I think we used FileZilla, and then the C++ cluster software would clone the uploaded files to the other EC2 instances in the other AZ's.

I can't remember how much peak traffic we had, but in retrospect, we probably didn't need a server in 3 different AZ's. But damn, it was fun to work on, and it gave me my first introduction to AWS.

Good times.


I remember those days, there was no database layer at AWS so we used instances with Elastic Block store for durability.


Yeah, that was it! It was so confusing initially trying to figure out what should go in the EC2 instances' local disk storage, what should go into EBS, and what should go into S3. It all seems so obvious now of course...


It was also confusing to the users. I remember seeing a post every other week on HN about a service going down or losing data because they accidentally used the ephemeral devices for their databases. Haven't seen one of those in ages.


Yup, and for performance, you would RAID Stripe a bunch of EBS volumes together. But sometimes you'd get a volume on a noisy neighbor, and its performance would be terrible.

Where I worked at the time, I wrote some Chef tooling that would hdperf the volumes it provisioned, and if some weren't any good it would provision more until it got 8 good ones. Only then would it RAID them together, then deprovision the bad ones.

Now they have provisioned iops, and I haven't seen a slow EBS volume in a decade.
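These days you just ask for the IOPS up front instead of striping; roughly something like this with boto3 (the AZ, size, and IOPS numbers are arbitrary):

    import boto3

    ec2 = boto3.client("ec2")

    # A provisioned-IOPS volume: no striping, no noisy-neighbor roulette
    vol = ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=500,           # GiB
        VolumeType="io2",
        Iops=16000,         # guaranteed rather than best-effort
    )
    print(vol["VolumeId"])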


Now all they need to do is actually allow a fixed budget instead of possible runaway credit card charges.

Microsoft do it, and IIRC Google, so why not Amazon?

I've been waiting for this for about 20 years too :D


It would kill the ecosystem of highly paid "cloud cost optimization" consultants who do little more than flip a few switches in the AWS console.


Because it would make them less money and their customers aren't so price sensitive.


I know it’s not the same, but you can set up billing alarms and budget alerts.


Personally I wish they would have just bit the bullet and continued Classic with IPv6.

Ironically I think post-classic was a regression — it looks a lot more like the infrastructure we had to deal with pre-cloud.

My new stuff is mostly on Cloudflare, so now I’m really not thinking about subnets, VPCs, etc.


It turns out the way to grow the ec2 business is to get existing companies into the cloud - not just brand new startups. They all have to integrate with on prem networks, which necessitates VPC.


So if I'm understanding correctly, all the classic instances were migrated to more modern types with no intervention from the account holder?

Did they suffer a reboot during that migration, or was it done via some live-migration process (it's hard to live-migrate off a virtualization platform that was never designed with that in mind!).

What about the original network setup? Is that still emulated, or might some customer applications have broken?


No, migrating did involve intervention from the account holder. More information here: https://aws.amazon.com/blogs/aws/ec2-classic-is-retiring-her...

It seems like AWS spent time, people and money to migrate customers off EC2 classic. They made a fairly good effort to automate the process and make it less painful for customers. For example: https://repost.aws/knowledge-center/ssm-migrate-ec2classic-v...

The original network went from an everyone-on-the-same-subnet model to a you-get-your-own-subnet model, so yes, customer applications could break in the process. People do all sorts of non-smart things for good reasons, like hardcoding an IP address in /etc/hosts when a nameserver is down. And then they forget to change it back. To do these sorts of migrations well requires a sort of stick-and-carrot approach. The stick, which is we want to shut down this service and will eventually refuse you service, and the carrot, which includes automation, reminders that people need maintenance windows for their applications, clear directions, and above all, willingness to deal with people and actually talk to them.


Looking at that blog post, I think AWS could have done the migration for most users with no involvement of the user themselves.

In the ideal world, they would have written software to live-migrate VM's to the new platform and emulate the old networking.

Emulating old stuff should be pretty easy, because hardware moves on, and an instance back in 2006 probably had far lower performance expectations - and therefore even a fairly poor performance emulation will be sufficient to meet user needs.


"emulate the old networking" is something that can't be done per customer, and the new platform makes networking per customer.

Let's say I have my aws account "account1", and my friend has their account "account2", both running classic. We could have both talked to each other's instances by their _private IPs_ even though they're in different accounts. AWS has no way of knowing those two instances are related, other than that they're both in classic.

Sure, AWS could make a global cross-account emulated flat network, but at that point, it's probably cheaper to just use the real thing, which was already built and functions... and at that point, you're not migrating them to "the new platform", but rather to "ec2 classic 2"


If there is a small number of classic users, a single special case in the code to have all classic users connected to a single network of an admin account seems very doable...

I wonder if perhaps part of the reason for not doing this was they were worried about malware spreading across that shared internal network from one VM without security patches to the next VM without security patches.

Even if that were the case, they could monitor all VMs on the classic network, and any VM which doesn't contact another user's VMs for ~1 month would have that ability blocked.


I wonder why they didn't do that in the 14 years since VPCs were introduced?


They had to be restarted, but not only that, they had to have their networks reconfigured.

But they gave people YEARS to do that, and tracked down every user to help them if necessary.


> had to have their networks reconfigured.

I don't see why every user couldn't be auto-created a virtual network with the same 10.x.x.x IP addresses as their original machine had - and therefore there is no need to do any reconfiguration on the users side.


Because there's more than just the local IP address to worry about.

Remember, all of EC2 Classic was in a single /8 of private IPs. You could communicate with EC2 instances in another account via their private IP address.

If you have two instances in different accounts that need to communicate, upgrading from EC2 Classic to VPC couldn't be done automatically.


Because people could have that same network on-premises on the other side of the VPN (I have).


But that isn't a new problem if the same user already uses that address - you're just leaving them with the same issue they already had.


It's not clear, but my interpretation is that they contacted every account holder, somehow convinced them to migrate (perhaps with discounts and/or threats of termination) and then shut down once everyone migrated.

Would be very interesting to learn how that was possible, it seems surprising to me that there wasn't even one instance that the owner forgot about or just was unwilling to do any work on.

It's possible that credit card expiration was the key, as that may have automatically disabled almost all forgotten accounts.


They don't need to threaten. Their SLAs don't offer to run VMs indefinitely. AWS will send you an email about shutting down your VM if, e.g., they need to rotate the disk used for storing the VM image, etc. It's somewhere there in the contract, and it's a usual process for someone who keeps long-running VMs in EC2.


Do they really not do live migration or at least auto-restart (if configured) in those cases?


They have live migration now (for many years) but they didn’t back in the early era. I’m not sure they set it up for the classic environment, but I think they must have - in the early 2010s you could get notices that your VM’s host had failed or was about to fail and you needed to launch a new one, but I think that stopped by around 2015. I had servers running for a deprecated project which were finally shut down this decade, and it seems like rather good luck not to have any failures in 7+ years on old hardware.


I received the last such email about a year ago for a VM in either the free tier or something like the cheapest one available. This probably has to do with the VM flavor you choose as well as with the time you created it.


Definitely possible - I only had about ten old servers but that’s a nice stretch without a hardware failure.


It's your responsibility to make your systems resistant to failure.


I don't know the details of this particular migration, but I used to have a VM in some low-price tier that was running for a long time (a few years), and eventually AWS sent me an email saying they were going to shut it down for maintenance reasons.

Guess this was something similar. VMs, if not specifically configured to be able to move, cannot really be moved automatically. Think about, e.g., randomness of ordering on the PCIe bus (i.e. after moving, the devices may not come up in the same order as before), or various machine IDs, like the MAC address -- if you don't make sure the VM isn't affected by these changes, it's likely that it will be, if moved.


> Think about, e.g., randomness of ordering on the PCIe bus (i.e. after moving, the devices may not come up in the same order as before), or various machine IDs, like the MAC address -- if you don't make sure the VM isn't affected by these changes, it's likely that it will be, if moved.

QEMU/KVM/libvirt/... are idempotent when it comes to hardware the VM sees - the exception is the CPU model, that one can't be changed on the fly without at least rebooting the VM in question, and hardware in passthrough mode like GPUs.

All the VM sees from a live migration is a few seconds of "lost" time, as if someone had stopped the CPU clock.


The clouds have figured this out: https://dl.acm.org/doi/pdf/10.1145/3186411.3186415


That's a very bold claim :)

If you prepare for migration, then it will work. If you don't -- it might or might not work, and it depends on way too many things to be confident that it will.

For example, in our VM deployment process we heavily rely on PXE boot and our code that runs during initramfs and also after pivot. So, even if whatever you have in the hypervisor and the virtualized OS has somehow managed to solve the moving problem, our own (screwy) code might not be so smart.

In general, the more of the underlying infrastructure you touch, the harder it will be to move you, unless you specifically prepare to move. Eg. what if you set up BMC for various components on your system? What if your code relies on knowing particular details of hardware to which you were given pass-through access from your VM, like, eg. you record somewhere the serial number of your disk in order to be able to identify it on next boot, but suddenly that disk is gone?

Even simpler: MIG flag on NVidia's GPUs is stored in the persistent writable memory of the GPU itself. Suppose your VM has moved to a different location and is connected to a different (but compatible) GPU -- you need to run the code that sets the flag again and then reboot the system in order to start working with the GPU, but the host may not be even aware of the fact that you need that particular setting.

The guest side of things needs to be prepared to move, to mitigate these problems.


I'm saying that at least GCP has supported VM migration for a while and it's generally not something people are currently worried too much about these days given they have attempted to mitigate issues like you've pointed out.


This is a level of support all software companies should aspire to. It's also what the enterprise likes to see and will give you money for.


I continue to choose AWS for the companies I work at not because their offerings are better, but because their support is so far superior.


I had an interview somewhere that was using Google Cloud. My responsibility would be to own all of that (the CTO was the one interviewing me), but I took the interview since I wanted to understand why and whether or not I actually had the power to change that (given I would own it).

I didn't, so I didn't take the job, and Google Cloud was a major reason for it. I did not want my job at the risk of Google's decisions. I just don't trust them.

AWS may not be perfect but I don't worry as much that a decision on their part is going to really screw me over.


> If you had launched an instance in 2006, an m1.small, you would have gotten a virtual CPU the equivalent of a 1.7 GHz Xeon processor with 1.75 GB of RAM, 160 GB of local disk, and 250 Mb/second of network bandwidth. And it would have cost just $0.10 per clocked hour.

> It’s quite incredible where cloud computing has gone since then, with a P3dn.24xlarge providing 100 Gbps of network throughput, 96 vCPUs, 8 NVIDIA v100 Tensor Core GPUs, 32 GiB of memory, and 1.8 TB of local SSD storage, not to mention an EFA to accelerate ML workloads.

They decided to omit the price but I think it's relatively good - around 31 USD per hour. Just remember to turn it off after you're done otherwise it will cost you over 7 grand.


Classic sales maneuver, give numbers but make your numbers as incomparable as possible.


I really wish pricing pages included both per-hour and per-month (or per-30-days) pricing.


Minor nitpick: 10.2.0.0/8 is not a valid subnet mask. The largest subnet you can make starting with 10.2.0.0 is 10.2.0.0/15. As far as I can tell everything was done in 10.0.0.0/8, but Amazon has taken down the EC2-Classic documentation.

https://web.archive.org/web/20150302235811/https://docs.aws....


I believe it is valid, the /8 masks it so the 2 might as well be a 0.


Was wondering when they'd finally shut that box down, I was keeping it on just to troll them.


> one giant network of 10.2.0.0/8

Huh? Do they mean 10.0.0.0/8, or 10.2.0.0/16 (/15 would also work), or is this a new CIDR notation? Something specific to AWS, maybe?


Technically, it doesn't matter what number you stick after the first octet in a /8 network. Might as well write it as 10.X.X.X/8; it's all the same. I'm not sure where I've seen this done, but I'm >90% certain some of the ip commands output this kind of CIDR by simply concatenating the IP of the node you are querying and the size of the network.


You encounter that all the time in the networking world - formally the bits outside of the prefix length should be 0, but informally you'll often see set bits. Often it is a way to say "the subnet that includes this ip address". So 192.254.33.12/16 or something. Could just be a typo, too. Regardless, it doesn't really matter what is in the bits outside of the prefix length, because they get zeroed out when used.


Yeah, but that makes sense -- it's a shorthand for "the IP address 192.254.33.12 which is in the /16 subnet".


I haven't spun up anything in the cloud in a while. I used to dabble in both AWS and Digital Ocean. Usually went with DO because it seemed simpler and my projects were super small (and they had a lot of relevant guides to spinning up X, Y, and Z service on their systems). What is the difference between EC2-classic and whatever is available today? Are you still able to just get a linux box with a public IP address?


Yes, but the reality is that EC2 is optimized for a target audience who wants to customize the deployment beyond a linux box with a public IP.

But iirc following the "Launch Instance" wizard and choosing defaults for everything, as well as a (default) public subnet gives the exact thing you're asking for.


I think there's Lightsail for such use now - https://aws.amazon.com/free/compute/lightsail-vs-ec2/


And I think that's been around for at least 8 years - I remember looking at it in 2015 - or maybe 2016(?). Was working with a small startup that needed a small bit of hosting, and they were trying to do it all at AWS because... name cachet. They were getting lost in trying to cobble together lambda, s3 and some other stuff to basically host some JS and images. I suggested Linode (had a working example) but "that's not enterprise" (this was a small pre-revenue startup testing ideas). I then suggested (and setup) a lightsail instance for $5/month doing what was needed, and it too was rebuffed as not enterprise(y) enough. Even being 'at AWS' wasn't enough to start with - they wanted to hit the ground running with lambdas, microservices, multi-region load balancing and complex IAM stuff.

Stated justification was "If we don't bake this in now, it'll be harder to do later", but it was mostly that a couple of folks in charge wanted to learn new stuff (someone said that out loud later, confirming my concerns about resume-driven development).

EDIT: FWIW, my couple small experiences with lightsail itself were fine. Seems like a decent onramp to ease in to the AWS ecosystem, if that's on your roadmap.


You have to create at minimum a vpc, internet gateway and at least one subnet/route table pointing to the igw
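For the curious, that scaffolding looks roughly like this in boto3 (the CIDRs are arbitrary):

    import boto3

    ec2 = boto3.client("ec2")

    # Bare-minimum hand-built network for a public EC2 instance
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
    igw = ec2.create_internet_gateway()["InternetGateway"]
    ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"], VpcId=vpc["VpcId"])

    subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24")["Subnet"]

    rt = ec2.create_route_table(VpcId=vpc["VpcId"])["RouteTable"]
    ec2.create_route(RouteTableId=rt["RouteTableId"],
                     DestinationCidrBlock="0.0.0.0/0",
                     GatewayId=igw["InternetGatewayId"])
    ec2.associate_route_table(RouteTableId=rt["RouteTableId"], SubnetId=subnet["SubnetId"])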


Oh man, I don't know any of these concepts (rather I've heard of all of them, but don't know how they apply in practice). I'm probably not the target customer for AWS any more haha. Been so long since I did anything web related. Time to find some networking for dummies youtube channels I guess.


Nah, it's just EC2 you are not a target of. Check out AWS Lightsail.


> You have to create at minimum a vpc

A default VPC is created automatically. For simple projects, that's good enough.

> internet gateway and at least one subnet/route table pointing to the igw

Only if you have a private subnet.


> So, I wouldn’t blame you if you were wondering what makes an EC2 instance “Classic”. Put simply, it’s the network architecture. When we launched EC2 in 2006, it was one giant network of 10.2.0.0/8.

And they kept this running for a decade after people agreed there were much better options, with an announced two-year sunset period.


I remember being hired for a short consulting gig, and my customer asked that the back end be hosted on an EC2 classic instance, I think while it was still in beta. The project was a simple App running on what I think was some sort of Google TV prototype box (it was a long time ago and it was a one or two day project so I really don’t remember the details).

I used to love AWS but after I worked a while at Google in 2013, I switched all my personal projects to GCP for nostalgia (but for work, just used whatever platform customers wanted).


EC2 Classic was dirt simple. You basically were on a big public LAN, it felt like. If I remember right there were some issues with folks running scans / abuse originating inside AWS that felt like they got a touch slow of a response - then at some point it all cleared up? I remember hardening internal systems as if they were public, which was a good practice even as VPC arrived. They might have gotten a default public IP as well?

I used Google App Engine, which was orphaned (at least the version I tried), so that was a clear contrast with AWS.


[parent comment was edited, it originally mentioned SimpleDB]

Speaking of SimpleDB, we still use it. It's amusing how it's basically swept under a rug at AWS. It's never mentioned, barely documented, but continues to work. It's a pretty good product for what it is - a very simple key/value store where you don't need/want to manage provisioned throughput, costs, keys, etc.
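It's even still reachable through boto3 as "sdb"; a minimal sketch, with made-up domain/item names:

    import boto3

    sdb = boto3.client("sdb")

    # Domain, item, and attribute names here are hypothetical
    sdb.create_domain(DomainName="users")
    sdb.put_attributes(
        DomainName="users",
        ItemName="user-42",
        Attributes=[{"Name": "plan", "Value": "pro", "Replace": True}],
    )
    resp = sdb.get_attributes(DomainName="users", ItemName="user-42", ConsistentRead=True)
    print(resp.get("Attributes"))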

The way they handle SimpleDB makes me respect AWS and feel more comfortable on some other services we also rely on that seem close to abandoned (like ElasticBeanstalk).

However, as a counter-point, they are killing OpsWorks with what feels like a fairly short notice, so I'm also a bit cautious about how long they'll maintain services.


Yeah, I didn’t want to distract, but in my use case SimpleDB was even a fit after the first DynamoDB release, for a reason I forget. Even more, they took it totally off marketing after deprecating it, but the hammer never dropped! Absolutely love this. The EC2 Classic termination was actually a bit surprising in that context.


They added Python 3.11 last month. Why do you call it close to abandoned?


Re: ElasticBeanstalk - it's just a feeling, and hopefully not correct - it just doesn't feel like it's one of their primary focuses, and it seems suspiciously stable overall. There's nothing in particular I want them to add, though, so maybe it's "perfect".

I love it, though - it's been a great boon for our small team - allowing for a painless hands-off deployment strategy that's worked great (largely unchanged) for almost a decade.


The public IPs were the big part: if you had the default 0.0.0.0/0 rule allowing SSH, you’d see brute force attacks within a few seconds of launching a new instance.

VPCs gave a little more room to prevent that but the big thing was really better tooling - the average developer still doesn’t think about security enough to be trusted with the EC2 or GCP launch wizard.
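The fix is (and was) one rule: scope SSH to a CIDR you trust instead of the whole internet. A rough boto3 sketch - the group ID and CIDR are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Allow SSH only from a known range instead of 0.0.0.0/0 (IDs/CIDR are placeholders)
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "office VPN"}],
        }],
    )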


I remember this. I don’t remember if it was cloud-init or something else like a pre-hardened AMI, but basically that - you got hammered within seconds of starting in the default config, so it was good to take some steps right on launch.


Yeah, the official AWS AMIs have had password auth disabled for a very long time but I’m pretty sure I remember some third parties learning the hard way that setting a default password and telling people to change it isn’t good enough.


Note my one complaint was that it would have been nice to wrap the resource finder script into the gui / web interface


I was overbilled for this service in 2011 and Amazon insists to this day that they do not have records about this even if I forward them the emails :)


I setup an AWS account in college before I had ever worked and dealt with enterprise security. Didn’t use the account for anything. No 2FA and a weak password and all of a sudden I have a $15,000 bill from a crypto mining script. Worked with Amazon, got everything cleaned up, turned on 2FA and was only charged $100. Pretty generous considering it was entirely my fault


Someone else's fraud isn't your fault.

Amazon can damned well run password strength / compromise tests and validations.



Kudos to the AWS team for pulling off the migration. Also it's nice to see a CTO of such a big company trying out the tech and doing hands-on work.


It's likely a move to reorient their price points. Apparently, according to The Economist, AWS is Amazon's sole breadwinner.


This is something Google Cloud should learn from. It doesn't matter if product XX still makes money or fits in your business model. There are people who rely on it. And they'll remember if the vendor kept it running trouble free for years. They'll also remember if it was arbitrarily shut down or the price was suddenly increased by 4x and they had to spend many man hours migrating to another option. (Google maps api?)

Then in the future, when that same person is responsible for choosing a vendor for a new project, they'll remember.


Google Cloud officially deprecated its EC2 Classic equivalent, legacy networks, years ago but they're still running just fine:

https://cloud.google.com/vpc/docs/legacy



I'm experiencing a bizarre sense of unreality reading comments on a story about Amazon killing EC2-Classic, where the first line is "Retiring services isn’t something we do", while having just read yet another email from Amazon warning me about all my files stored in the defunct Amazon Drive, and somehow 90% of the discussion is about a completely different company discontinuing products.


We had some classic EC2. They gave plenty of warning and migration was easy. I think Google is being brought up so much because that isn't the typical Google experience when they kill things off.

I get your point; if AI were training on this content it might conclude that Google killed off its EC2 product.


AWS is pretty separate from consumer Amazon.


Yup was going to say. AWS treats its customers a bit differently than retail Amazon. The latter cuts services faster than Google.


If I'm not mistaken, Amazon Drive is not part of AWS's services. Certainly Amazon's consumer cloud services have had some big changes over time.


Sounds to me like a rug pull is imminent


Google Reader. Never forget.


I miss Google Search being good and useful and not covered in intrusive ads much more than I miss something easily replaced with local software or Feedly.

Almost everything Google does outside of GCP, Maps, Search, and YouTube could evaporate for all I care. Google's problem is not that they cancel stuff, it's the everpresent need to grow revenue and embed annoying ads into more and more of everyone's daily lives. I'd love for them to cancel Gmail with a very short notice period.

The endgame for Google is every Google user loaded full of energy drinks watching ads continuously for 20 hours a day. Every lifestyle that's less profitable than that is something Google will eventually try to engineer away.


It's not really the ads that killed Google search for me. It's more the overaggressive, minmaxed SEO that makes most of the results garbage.


For me it's when they stopped showing results that include your keywords and instead a random smattering of anything broadly related to the topic being searched for.



I’ve had google searches disregard my quoted query.


even quotes don't work 100% of the time- if there is strong signal to show another result, it will be shown even if it isn't a quote-match. Previous user clicks are the top ranking signal.

IIUC "verbatim mode" is supposed to do something like this. https://search.googleblog.com/2011/11/search-using-your-term...


When I use Google search-which is rare these days-I exclusively use verbatim mode.


This shifted from "can" to "absolutely have to" a long time ago for my use cases.


Often it's not even related at all.

For example, Google often prioritizes obscure music bands or albums even for generic or well-known terms.


Can you explain "minimaxed" in this context, please?


Not bad enough to be outright spam with just enough relevance to be shown in the top ten search results. Try finding product reviews, or product comparison articles. It will likely be LLM garbage that doesn't say anything, but uses the right keywords and enough coherency to be indexed.


It's obvious LLM tripe when you click into the article and it begins with several worthless paragraphs describing why someone would be interested in the topic and how things can sometimes go wrong...

Yes, I already knew that. That's why I'm here.

It might be an attempt to copy customer service 'empathy' but it has the opposite effect: angering me because my time is wasted with this crap, and I have to scroll several screens to find what I need.


Tangent: your use of the word "tripe" here is spot-on.

The secondary definition (nonsense) works, but I'm talking about the apt metaphor of its primary definition: offal that's technically edible but comes from low-quality parts of the digestive system which are literally filled with feces.

LLM-generated SEO spam should always be called "tripe".


Honestly, a lot of this isn't LLM stuff. It's "people being paid pittance".


Good point; maybe all clickbait and SEO spam, regardless of human provenance, counts as "tripe".


> It will likely be LLM garbage that doesn't say anything, but uses the right keywords and enough coherency to be indexed.

Or some site that "aggregates" Stackoverflow, Quora and whatnot. Pure hell and I wish everything bad possible on this planet to the people who have implemented this kind of scam.


I block them using uBlacklist in Firefox. My blacklist is growing.


And the content itself is just a reflection of the search. The whole web is turning to rot, such that the few remaining great sites don't even need indexing because, well, there are so few left.


Not OP.

I believe "minimaxed" in this context refers to optimizing profits while trying to keep search results useful to users.

The term comes from game theory where a player tries to maximize their gains while minimizing their losses:

https://en.wikipedia.org/wiki/Minimax
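As a toy illustration of the game-theory sense (the values and tree are made up):

    # Toy minimax over a tiny hand-made game tree; leaf numbers are payoffs
    def minimax(node, maximizing):
        if isinstance(node, (int, float)):  # leaf
            return node
        children = [minimax(child, not maximizing) for child in node]
        return max(children) if maximizing else min(children)

    # The maximizer picks the branch whose worst case is best
    print(minimax([[3, 5], [2, 9]], maximizing=True))  # -> 3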


But Reader _wasn't_ easily replaced by local software. They centralised social usage of RSS, then killed it. Yes, you can still run an RSS reader, but RSS and blogs as a model of social networking? Never came back.


That wasn't Reader, that was apps. Nobody uses the web anymore except as API transport for native apps. Blame Android and Apple for that.


I started to use Kagi instead of Google search. Since then, it's difficult to go back to Google; the results are so much worse.


What's an example search that's better on Kagi?


"Radiator". I just picked a word at random. All the other ones are better too.


Funny enough, I just tried it and I agree. I have pinned Wikipedia in Kagi, so that came out on top rather than a link to Autozone at Google. Google's "Places" results were also (significantly) farther away yet no more relevant than Kagi's.

So...yes. Radiator.


> everything Google does outside of GCP, Maps, Search, and YouTube could evaporate for all I care

Honestly, even YouTube could evaporate and it really wouldn't make a difference. 99.9999% of YouTube is just mindless entertainment, which is 100% fungible with every other form of entertainment. The amount of actually unique, insightful, worthwhile content on YouTube is a rounding error, and will find other places to live.


AFAIK more than 95% of all email traffic is spam. I still consider email an absolutely vital tool, despite an efficiency below that of a steam engine.

Same with YouTube: the small sliver of content I care about is important enough for me to pay for YouTube premium.

Sometimes people compare something to a gold mine, to emphasize how rich that is. A typical gold mine extracts several grams of gold per tonne of rock, that is, a few parts per million.

Don't cry about Sturgeon's law; embrace it and celebrate what you can extract.


YouTube isn't analogous to email, it's analogous to Gmail. We were hosting video in the 90s, and the cost of storage, bandwidth, and compute have become orders of magnitude cheaper since then. Video hosting is not magic that only YouTube can pull off.


Unlike Gmail, YouTube is a large public repository of media.

Pulling your email account from Gmail affects you. Pulling a video from YouTube affects potentially huge numbers of people.


If the useful YouTube content scattered to multiple alternatives, it would immediately become far less useful. Discovery is a big part of YouTube’s value proposition for me.


What ADs? Install uBlock.


I am fucking pissed at how Google created tons of make work for me by killing old Universal Analytics and replacing it with an inferior product.

OTOH, I successfully skipped the whole AMP saga because I could tell from the start that it was bullshit.


Universal Analytics used third-party cookies. Support for them has been removed from all major browsers due to privacy issues.


So you adapt to that. Adapting to changes does not require any of the crap in GA4.


I see Reader as Google's attack on RSS' popularity. The product was killed off when its job was accomplished.


Apparently what actually happened is Reader was put together by a team that really cared about it, but always had to fight the corporation to keep it alive. They finally lost the political game to Google Plus, which stole a lot of their key people before it imploded:

https://www.theverge.com/23778253/google-reader-death-2013-r...

The article above is actually very interesting, because the story it tells is that Google's two highest-profile failures are actually one failure. Facebook freaked them out so much that they scrambled to build something comparable with Google Plus. Google Plus stole most of the company's mind share but was executed so poorly that it never went anywhere. The company got major egg on their face from suffocating Reader to make Plus, then again when Plus died after having been pushed so hard.


Three greatest failures. Google+ started the trend of having Google product strategy set by executives who were accountable to the CEO/CFO rather than users/customers, vs. the previous bottom-up culture of engineers who passionately wanted to serve the user. That culture change is the root of the issues we're talking about in this thread.


No, because if anything (the effect might have been small) it reduced the power of the open web and many websites (which Google tied together) and encouraged people to go to walled gardens (Facebook, Instagram, Twitter, etc.) which are controlled by other companies.


A lot of heavyweight bloggers and aggregators used Reader in their toolchains - it was very good at surfacing trending content. I don't think the effect on the blogging ecosystem was small.

I could believe they were clearing a path for G+ and Discover on Android.


What's frustrating is that Reader would have been complementary to Google+. It could have served as a huge funnel by which users could discover content to share on Google+. That's how I used to use Reader (though at the time I found content that I shared other places, such as Facebook, Digg, or Reddit).


I think it just was never going to get the mass adoption it needed to justify the upkeep.


The upkeep was apparently 12 engineers and they had tens of millions of users. It doesn't take that many users to justify 12 engineers plus infra, so it sounds like it was more that Google doesn't care to operate any product unless it will have users in the hundreds of millions.


By the end it had an upkeep of one dude's 20% time. It was very reliable and the Google infrastructure didn't require much ongoing maintenance at that time, the servers just kept on trucking.


It doesn't take that many <revenue generating> users to justify 12 engineers plus infra,

Otherwise, those 12 engineers and infra are pure negative on the balance sheet


Having basically all their users (well, as much as for any of their revenue generating products anyway) be revenue generating would require practically no effort for google specifically. It fit their revenue model perfectly. It is trivial to put the exact same ads in there as they already had on search and gmail, and it is stickier than search or gmail.


That would seem a little rude wouldn't it?

Taking content from another site, maybe that has ads, and then displaying it in Reader next to your own ads instead?

I think it would have caused some issues in the RSS world that such a big player was using everyone's content for themselves.

I used Reader daily back then. It's strange to think my current method of manually checking sites for updates was solved so long ago.


LOL. the Googs worrying about being rude is charmingly funny in a webcomic kind of way. they removed their tag line of don't be evil. you think they are concerned about being rude?

However, this is precisely what Reddit was complaining about 3rd party apps doing to their content. Everyone was trying to pivot to blaming AI scrapers, but that's just FUD.


Broadly, I think some business models are better suited for smaller companies than Google for sure.

Reader probably didn't have much B2B potential and was maybe profitable but yeah, they tend to swing for larger audiences.


I don't think they ever put ads on it, really wonder what the decision process on that was.


Many blogs have ads on them. So you'd be stripping ads and replacing them with your own - not cool.


Well, sure, but they could just not show ads for blogs that didn't use google ads.

The ones that do... Well, why not show ads for them as originally intended?


Obligatory reminder that RSS is alive and well. It remains popular. I both subscribe to RSS, and support it on my blog. You can too.


Still use it - I even read HN via RSS - but I've never found a tool with the critical mass of users you need to show trending content the way GR did.


That's true. I use Newsblur. It has social features, but the community is so small and the social aspect is so limited that it has little value. With that said, the people who engage with it tend to be authentic users interested in high-quality content and respectful discussion. Discovery of other users is terrible, though. You basically have to stumble upon them if they're interacting with an article already in your RSS feeds.


theoldreader, same problem.

Idle musing 1: A mechanism for all these small readers to federate their trending content? Out of many comes one?

Idle musing 2: These tools were mostly written a decade ago. It might be possible, with the current state of the art, to extract a more useful signal out of a smaller pool of users.


I agree with everything you wrote, except for the very last sentence. This is just wishful thinking. First of all "when the same person is responsible" is a very big if, and then no two buying decisions are the same and even if the person with decision-making power remembers it'll be just one point on a long list.


OTOH, I have a feeling that "nobody ever got fired for choosing AWS" will become the new "nobody ever got fired for buying IBM", so...


Sure, but if you ask online, those people will be yelling "don't pick <company>, they will pull the rug from under you" at every occasion...


Google clearly doesn't care about their reputation. Their shutdown of Pixel Pass before users got a chance to upgrade was just ridiculous[1]. It's hard to quantify the impact that this poor reputation has on their business. Their revenue is still growing, but it's definitely not where it could be.

[1]: https://www.theverge.com/2023/8/30/23851107/google-graveyard...


That's a good point. But I am curious how special this Classic offering was, compared to what came after. Would migration have been hard?


It's not so much that EC2 Classic offered any features that were difficult to live without. It's just that migraine away from it means migrating, period. You need to move all of your systems, including any data stored on those systems (in instance store or EBS), to effectively a new data center. Migrating a live production environment can be a pain and/or cause downtime.


It's also worth remembering that infrastructure stuck on EC2 Classic would have been built so long ago it may predate modern cloud tooling and even modern best practices around reproducibility and CI/CD.

(EC2 user since 2010)


This. Unless a written guarantee could be offered by the cloud provider that migration will be absolutely trouble free, why wouldn't customers just stick to what they know works?

And if they're not willing to provide a written guarantee, then that says a lot.


In this scenario, it's nothing to do with the cloud provider. Migrating a live production system is inherently difficult. You can make a reasonable analogy to moving houses – say, with two kids who are in school, and while you and your spouse are both working. No matter what guarantees you're given regarding the condition of the new house, simply moving all of your stuff (while you are using it) is a big hassle.


In this example, it wouldn't be the guarantees for the new house, which presumably would have been examined and accepted well beforehand, it would be guarantees for the moving process itself.


Exactly - and with fewer managed services back then they’d also be more likely to have hand-rolled servers doing things which you’d now try to hand off to a managed service. I remember entire servers running small tasks which you’d now have, say, CloudFront or an ALB invoking Lambdas or at least sending it to a container.


> It's just that migraine away from it means migrating, period.

Freudian slip?


Software engineering at Google even had a law for their

https://medium.com/se-101-software-engineering/what-is-the-h...



