
> Maybe AWS should put their dashboards on GCP

Then the status page would be almost entirely useless ...


> Yes but only if you initiate a claim and follow their steps. Check out these onerous terms:

There's most likely a reason for this.

Like, maybe in the past AWS customers have tried claiming for SLA credits for incidents that didn't impact them, in order to reduce their bill.


This is backwards thinking. Why require customers to file a claim for what are obvious outages? Instead, AWS should automatically apply credits to those accounts that have paid for guaranteed uptime without requiring this whole silly claims process.

The mechanism can be really simple. If AWS itself posts an outage to its status page, and/or some third-party service reports one, credits are immediately applied to the affected services for customers who paid for a high-level uptime guarantee, with no claims process required. It could easily be done that way if they wanted to.
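
As a rough illustration of how little machinery that needs, here's a hypothetical bash sketch; the incident feed format, the accounts_for lookup and the credit output are all made up for illustration, not anything AWS actually exposes:

    #!/usr/bin/env bash
    # Hypothetical sketch: apply SLA credits automatically from the provider's own
    # incident feed instead of making customers file claims. The feed format, the
    # billing lookup and the account IDs are invented for illustration.

    accounts_for() {    # stub standing in for a billing-system lookup of SLA customers
      echo "acct-123 acct-456"
    }

    # incident feed: service <TAB> region <TAB> impaired minutes
    incidents=$'dynamodb\tus-east-1\t95\nec2\tus-east-1\t60'

    while IFS=$'\t' read -r service region minutes; do
      for account in $(accounts_for "$service" "$region"); do
        echo "credit: account=$account service=$service region=$region (impaired ${minutes}m)"
      done
    done <<< "$incidents"

The hard part is obviously the billing-system lookup and deciding the credit size, not the plumbing above.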

Of course from a business perspective I understand why they're doing it the way that they are. If they can make customers jump through hoops, then only those who really care will follow through. Meanwhile the uptime guarantee can continue as an empty promise.


When my ISP was unable to provide connectivity for an extended period, it automatically compensated me. I didn't have to do anything. The relevant system was being monitored, the ISP knew exactly when it was out of service, and I was credited accordingly with an apology and a note on my next bill showing the reduction. It doesn't seem unreasonable to expect the biggest name in the cloud to do something similar to support its customers when it screws up.


It's much more likely that the reason is someone going "well what if people want to abuse this?" without any evidence that they would.

Also: requiring your customers to ask for their money back when you know you didn't deliver the service promised, while all other billing is automated... come on.


> It is kind of perplexing that AWS dogfoods its own status page.

> You'd think they would have learned from that.

They did.

The page has been updated numerous times since the start of this incident.


From the status page:

> This issue has also affected our ability to post updates to the Service Health Dashboard.

Just seems so ridiculous that they have trouble reporting the impaired status of their system due to... the impaired status of that same system.


It was 1.5 hours before the first service was put on yellow.


> instead of spending 10x on (taxpaying) staff and employees within the UK, in order to create a new technology that may be useful and solve other problems, they are spending 1x on Amazon.

You imply that Amazon/AWS doesn't employ any staff in the UK, which is wrong.

> Amazon will not pay any tax within the UK

Amazon certainly pays some tax in the UK; at the very least, tax ends up being paid on Amazon share price increases when employees' shares vest.


> For my own startup, I built a small cluster of 17 servers for just beneath $55K, and that had a month-to-month expense of $600 placed in a co-lo. In comparison, the same setup at AWS would be $96K per month.

Why would you build exactly the same setup in AWS as for on-prem, unless your objective is to (dishonestly) show that on-prem is cheaper?

Lift-and-shift-to-the-cloud is known to be more expensive, because you aren't taking advantage of the features available to you which would allow you to reduce your costs.


> Why would you build exactly the same setup in AWS as for on-prem...

It was far better to invest a little up front and run my operations at $600 a month than to pay $96K a month for the same thing, that's why.

I never "lifted and shifted", I built and deployed, with physical servers, a 3-way duplicated environment that flew like a hot rod. At a fraction of cloud's expense.


I think the point GP was making is that you could likely have started off much cheaper, e.g. with $2k/month of AWS costs before needing to "simply" scale at, say, 12 months, especially if using managed services and not just bare EC2 instances.

I personally think there's room for both, and I think hybrids between on-prem and cloud are ideal for long-running apps: you size your on-prem infrastructure to handle 99% of the load, and scale to the cloud for that one-off peak.

That's still pretty complicated due to different types of vendor lock-in (or lock-out in some cases). Google has invested in k8s to give people some value in moving away from AWS.


My application had (and still would have) very high CPU requirements, and $2k/month would have had me spending more money than necessary. When I started I bought one server with the capacity I needed and put it in a co-lo for $75 a month. That little puppy was equal to $10K a month at AWS, so why would I want to use AWS again? Just do the math: even one server outperforms the AWS equivalent and is exponentially less expensive. The cloud has the majority of engineers looking like morons from a financial literacy perspective.


Are you claiming that you knew exactly how powerful you needed your machines to be, before you launched? Or are your machines running at 25% utilization which AWS would charge substantially less for?


I'm not making any such claim. I'm saying I built a 24/7-available physical 17-server cluster to cover my startup's needs. I had more capacity than I needed, but at the same expense through AWS I'd not have had enough to operate. For less than the expense of one AWS month, I owned my entire environment outright. How is that difficult to understand?


> When determining what to use for development of my SaaS, I did a comparison of what you actually get from providers. The full article is at https://jan.rychter.com/enblog/cloud-server-cpu-performance-...

Your results (e.g. that z1d.xlarge with 4 vCPUs is only 10% slower than z1d.2xlarge with 8 vCPUs) show that the "performance" you were testing was disk IO throughput (probably dominated by disk latency), not vCPU performance.

> My takeaways were that many cloud provider offerings make no sense whatsoever, and that Xeon processors are mostly great if you are a cloud provider and want to offer overbooked "vCPUs".

> I haven't tested those specific setups, but I strongly suspect a dedicated server from OVH is much faster than a 4.16xlarge from AWS.

You seem to be implying that AWS/EC2 does CPU over-provisioning on all instance types; this is incorrect: only T-family instance types use CPU over-provisioning.


> the "performance" you were testing was disk IO throughput

In part, yes, but not entirely. I was very clear that my load isn't embarrassingly parallel, so it is not expected to scale linearly with the number of processors.

> You seem to be implying that AWS/EC2 does CPU over-provisioning on all instance types; this is incorrect, only T-family instance types use CPU over-provisioning.

If you think you are getting a Xeon core when paying for a "vCPU" at AWS, I have a bridge to sell you.


> Is this because ISP's can see DNS traffic? as it's in the clear over UDP...

Not necessarily.

It could very easily be done via IP address matching (think BGP communities that advertise specific subnets between one part of a network and another, as are typically used for optimal CDN routing etc.).
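
As a rough illustration, here's a bash sketch of that kind of prefix matching; the prefixes (documentation ranges) and every name in it are made up, and a real ISP would do this in router/dataplane policy rather than a script:

    #!/usr/bin/env bash
    # Rough sketch: classify a destination IP by matching it against prefixes the
    # ISP has learned for a partner CDN (e.g. routes tagged with an agreed BGP
    # community). Prefixes below are documentation ranges; all names are invented.

    ip_to_int() {              # dotted-quad IPv4 -> 32-bit integer
      local IFS=. a b c d
      read -r a b c d <<< "$1"
      echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
    }

    in_prefix() {              # usage: in_prefix IP NETWORK/LEN
      local ip net len mask
      ip=$(ip_to_int "$1")
      net=$(ip_to_int "${2%/*}")
      len=${2#*/}
      mask=$(( (0xFFFFFFFF << (32 - len)) & 0xFFFFFFFF ))
      (( (ip & mask) == (net & mask) ))
    }

    CDN_PREFIXES=("203.0.113.0/24" "198.51.100.0/22")   # stand-ins for community-tagged routes

    classify() {
      local p
      for p in "${CDN_PREFIXES[@]}"; do
        if in_prefix "$1" "$p"; then
          echo "$1 -> on-net CDN ($p): e.g. zero-rated / preferred path"
          return
        fi
      done
      echo "$1 -> off-net: normal handling"
    }

    classify 203.0.113.42    # inside the /24
    classify 192.0.2.7       # matches nothing

No DNS snooping needed: once the routes carry the right community, the ISP already knows which destination addresses belong to the service.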


That's impossible.

In my country, our anti-ISP media claims that we have the most expensive internet in the world, but 10/5Mbps FTTH (in most cases with free upgrades to 20/2Mbps for the duration of COVID-19 WFH guidance mandated by the government) is quite commonly available at < $20-$25/month, and where it isn't, 10/1Mbps ADSL is available for $25-$30/month (including POTS voice service).

Thus, it is impossible that you, in the country of ~$20 1Gbps service, have worse and more expensive service.

</sarcasm>


In my opinion, there is really only one valid complaint in the article:

> We need more choices for our ISPs

If you fix this, e.g. by requiring all last-mile owners to offer last-mile access at or below the (audited, sufficiently profitable) input cost used for their own retail products, most of the remaining problems would sort themselves out, without any micro-managing of ISP features.

Unless you are going to start regulating OTTs in what features/value they can provide, I think it's unfair on (non-monopoly) ISPs to prevent them from providing innovative features because of "net neutrality should trump all" opinions.


There are two sides to this. One is the ISP basically becoming a monopoly (or duopoly in many cases). Your idea could have the effect of creating a third one. I think it would need to be something where all the ISPs have some sort of financial stake in making that company/gov org work correctly. That could, however, create more regulatory capture (which we already have).

The other side is the giant walled gardens these companies have created. Fixing that would mean each of the companies would have to want to be federated, and I do not see that happening. These are the same companies that are heavily filtering everyone, even though it was the ISPs everyone said would do it. What is the point of having a 'freedom ISP' if the other end is not free? We have to have the whole chain working.


> Also what can bash do that zsh can’t?

I have noticed that a lot of the features listed in https://www.gnu.org/software/bash/manual/bash.html#Major-Dif... aren't present in zsh, but I'm not sure exactly which ones are missing.

Ones that I have used in bash that aren't in zsh (there may be many more; I stopped using zsh in many scenarios because of some of these), with a quick demo after the list:

* Some of https://www.gnu.org/software/bash/manual/bash.html#Shell-Par... (e.g. at least ${LOGNAME^^}, `(FOO=BAR;echo ${FOO,,})`)

* -p option to read for the prompt, e.g. `read -s -p "Enter the DB password: " PW`
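
For anyone curious what these look like in practice, here's a minimal bash sketch (variable names are just examples) of the two bash-only bits above; zsh spells the case changes as ${(U)VAR}/${(L)VAR} and, as far as I recall, uses read 'PW?prompt' instead of -p:

    #!/usr/bin/env bash
    # Minimal demo of the bash-only features mentioned above (names are examples).

    # Case-modification parameter expansions (bash 4+):
    echo "${LOGNAME^^}"     # uppercase the current login name
    FOO=BAR
    echo "${FOO,,}"         # prints "bar"

    # Prompt straight from read; in zsh, read's -p means "read from the coprocess".
    read -r -s -p "Enter the DB password: " PW
    echo                    # newline after the silent read
    echo "password length: ${#PW}"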

