Hacker News | tekno45's comments

how does this improve the driving experience?

Is "i wonder how hard they're braking" really a problem?

People drive too close to each other already


When you're on a highway at 75-80 mph in Phoenix, driving toward the evening sun, and you see a couple of brake lights ahead, it helps to know the difference between needing to slam on my own brakes and just taking my foot off the accelerator.

Batteries don't dump half their capacity overnight because they're filled with the gas that doesn't want to be contained.

But a 1-ton full battery weighs the same as a 1-ton empty battery.

putting stuff on the internet is dangerous. if you're not prepared to secure public endpoints stop creating them.

Putting stuff on the internet is dangerous, but the absence of hard caps is a choice, and it just looks like another massive tech company optimizing for their own benefit. Another example of this is smartphone games for children: it's easier for a child to spend $2,000 than it is for a parent to enforce a $20/month spending limit.

Are you really comparing a software developer provisioning an online service to a child buying tokens for loot boxes?

More the "dark pattern" of empowering unlimited spending and then what keeps on unfolding from it.

No it isn't. There are many ways to put stuff on the internet with guaranteed max costs.

blaming the victim? stay classy.

intentionally allowing huge billing by default is scummy, period.


Yes, you as a developer should know something about how the service works before you use it. I first opened the AWS console in 2016 and by then I had read about the possible gotchas.

Well, people get informed by reading these stories. So let's keep informing people to avoid AWS.

Yes I’m sure large corporations and even startups are going to leave AWS because a few junior devs didn’t do their research.

You do know that large corporations and startups employ junior devs as well, right?

All else being equal, would you rather choose the platform where a junior dev can accidentally incur a $1M bill (which would already bankrupt early startups), or the platform where that same junior dev gets a "usage limits exceeded - click here to upgrade" email?


Well, first I wouldn’t give a junior dev with no experience admin rights to an AWS account, and I would have tight guardrails around what they can do - like I’ve done with over a dozen client implementations during my five years in consulting and the four years before that as an architect for product companies.

I also wouldn’t give a junior dev access to production databases.

Also, from working with AWS both on the inside (Professional Services) and on the outside at third-party consulting companies, I know how aggressive AWS is about keeping startups, and they would never risk losing the continuing revenue of a company like that.


> All else being equal, would you rather choose the platform where a junior dev can accidentally incur a $1M bill

If a junior dev has the access to do that, then there is a big failure (probably more than one) by someone who isn't a junior dev after choosing AWS that was necessary to enable that.


stop putting stuff on the internet you don't understand.

it's usually more complicated than that.

Repairing becomes a different kind of nightmare.

https://www.youtube.com/watch?v=z-wQnWUhX5Y


it's now just a way for people to say all their favorite racist things but "it's just robots"

cligger and rosa sparks are the two that made it obvious it's not just "haha fuck robots" anymore.


what goes through my mind is the fact that plants aren't low maintenance; the land has to be tended.

growing the fuel plant is probably easy.

How do you get it OUT of the plant?

Solar panels just sit there (they do need cleaning, I admit) and produce electricity that we can already manipulate very cheaply.

What machine collects diesel from plants? Can you safely dispose of the plant matter?


Biodiesel is an oil plus an alcohol (usually 80% vegetable oil + 20% methanol) reacted using an alkaline catalyst like lye.

Methanol is also known as "wood alcohol", and can be made at ~40% yield by cooking down wood ("destructive distillation") in a specific fashion, or made from too-cheap-to-meter natural gas if you've got it. Anything you can do with natural gas can also be done with anaerobically fermented methane. You can also use ethanol (fermented from any carbohydrate crops) instead of methanol, creating a biodiesel with slightly different but still usable properties.
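For a rough sense of those proportions, the stoichiometry can be sketched in a few lines (my own illustrative numbers: the molecular weights are approximations, and real oils vary by fatty-acid profile):

```python
# Transesterification: triglyceride + 3 CH3OH -> 3 FAME (biodiesel) + glycerol
# Assumed, approximate molecular weights - real vegetable oils vary.

MW_TRIGLYCERIDE = 880.0  # g/mol, typical vegetable oil (assumption)
MW_METHANOL = 32.04      # g/mol

def methanol_for_oil(oil_kg: float, molar_excess: float = 2.0) -> float:
    """Methanol mass (kg) needed for a given oil mass.

    molar_excess=2.0 gives the common 6:1 methanol:oil molar ratio
    (methanol run in excess to push the reaction toward completion).
    """
    moles_oil = oil_kg * 1000 / MW_TRIGLYCERIDE
    moles_meoh = 3 * molar_excess * moles_oil
    return moles_meoh * MW_METHANOL / 1000

# For 10 kg of oil at a 6:1 molar ratio:
print(round(methanol_for_oil(10.0), 2))  # ~2.18 kg of methanol
```

That works out to roughly 18% methanol by mass of the combined mixture, which lines up with the "80% vegetable oil + 20% methanol" rule of thumb above once the excess is included.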

...

Sunflower, rapeseed, and soybean oil have very well-established agricultural workflows which require very little labor input.

Palm oil is substantially higher yield, but more labor intensive and is associated with tropical rainforest destruction.

...

You don't necessarily even need to react your vegetable oil. The original Diesel Cycle demonstration engines ran on straight peanut oil, and there are some truck engines out there (like the 12 valve Cummins) that will happily run on filtered waste fryer oil all day long. It's just a matter of tuning, viscosity, compression ratios, seal materials, and the like, being slightly different from petrochemical diesel fuel. Reacting vegetable oils into fatty acid esters ("biodiesel") does attain some modest engine benefits, but mostly it's to match compatibility with petrochemical diesel grades so that you don't, eg, need to replace your fuel lines & pumps with different diameter fuel lines & pumps.


Thanks! Very interesting space that I barely understand, lol. Hope it didn't come off as know-it-all, just questions.


we wouldn't call anyone else who cheated at their job an incredible professional.

but if you illegally slurp up data and make tons of money, you're the best at what?


To host it in an orchestrator your cluster has to be more available than your DB.

You want three 9s of availability for your DBs, maybe more.

Then you need 4 9s for your cluster/orchestrator.

If your team can make that cluster, it makes more sense to put it all under one roof than to develop a whole new infrastructure with the same level of reliability or more.


This is a persistent myth that is just flat-out wrong. Your k8s cluster orchestrator does not need to be online very often at all. The kube-proxies will gladly continue proxying traffic as best they last knew. Your containers will continue to run. Hiccups, or outright outages, in the kube API server do not cause downtime, unless you are using certain terrible, awful, no good, very bad proxies within the cluster (istio, linkerd).


Your CONTROL PLANE doesn't immediately cause outages if it goes down.

But if your workloads stop and can't be restarted on the same node, you've got a degradation if not an outage.


What alternatives do you have? No matter which system you are using, database failovers will require external coordination. We are talking about PostgreSQL, so that normally means something like Patroni with an external service (unless you mean something manual). I find it easier to manage just one such service, Kubernetes, and using it for both running the database process as well as coordinating failovers via Patroni.


Yes, but that's workloads || operator, not workloads && operator - you don't need four nines for your control plane just to keep your workloads alive. Your control plane can be significantly less reliable than your workloads, and the workloads will keep serving fine.

In real practice, it's so cheap to keep your operator running redundantly that it's probably going to have more nines than your workloads, but it doesn't need to.
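The arithmetic behind "||, not &&" is easy to sketch (illustrative numbers of my own, not from the thread):

```python
# Serving availability when the control plane is a separate failure domain
# rather than a serial dependency. Numbers below are made up for illustration.

def serial(*avails: float) -> float:
    """Availability if every component must be up at once (workloads && operator)."""
    p = 1.0
    for a in avails:
        p *= a
    return p

workloads = 0.999      # three 9s for the DB / serving plane
control_plane = 0.99   # a deliberately less reliable control plane

# If the control plane were a hard serial dependency, overall availability
# would drop below the workloads' own number (~0.989 here):
print(serial(workloads, control_plane))

# But already-running pods keep serving through API-server outages, so
# steady-state serving availability stays at ~workloads; the control plane
# only matters for *changes* (scaling, rescheduling, updates):
print(workloads)
```

The point of the sketch is just that multiplying the two numbers is the wrong model when the control plane isn't in the serving path.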


You're assuming a static cluster.

In my world scaling is required. Meaning new nodes and new pods. Meaning you need a control plane.

Even in development, no control plane means no updates.

In production, no scaling means I'm going to have a user-facing issue at the next traffic spike.


I am 100% certain I live more in that world than you; You can check my resume if you want to get into a dick waving contest.

What I'm saying is that the two probabilities are independent, possibly correlated, but not dependent. You need some number of nines in your control plane for scaling operations. You need some number of nines in your control plane for updates. These are very few, and they don't overly affect the serving plane, so long as the serving plane is itself resilient to the errors that happen even when a control plane is running, like sudden node failure.

Proper modeling of these failure conditions is not as simple as multiplying probabilities. The chance of a failure in your serving path goes up as the gaps between control-plane readiness windows get longer. You calculate (really, only ever guesstimate, but you can get good information for those guesses) the probability of a failure in the serving plane (including increases in traffic to the point of overload) before the control plane has had a chance to take actions again, and you worry about the control plane's MTTF and MTTR more than its "reliability" - you can have a control plane with 75% or less "uptime" by action failure rate that still takes actions on a regular cadence, and never notice.
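A toy version of that guesstimate, assuming Poisson-distributed serving-plane events and entirely made-up rates:

```python
# Toy model: what matters is the chance that the serving plane needs a
# control-plane action *while* the control plane is down - driven by MTTR,
# not by raw "uptime". Rates below are assumptions for illustration only.
import math

def p_event_during_outage(event_rate_per_hr: float, mttr_hr: float) -> float:
    """P(at least one serving-plane event during one control-plane outage),
    assuming events arrive as a Poisson process."""
    return 1 - math.exp(-event_rate_per_hr * mttr_hr)

# Suppose fleet-wide node failures / overload events ~ 0.1 per hour (assumed):
rate = 0.1

print(p_event_during_outage(rate, mttr_hr=0.05))  # 3-minute recovery: tiny risk
print(p_event_during_outage(rate, mttr_hr=8.0))   # 8-hour recovery: large risk
```

Same event rate, wildly different risk - which is why MTTR dominates the calculation.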

You can build reliable infrastructure out of unreliable components. The control plane itself is an unreliable component, and you can serve traffic at massive scale with control planes faulty or down completely - without affecting serving traffic. You don't need more nines in your control plane than your serving cluster - that is the only point I am addressing/contesting. You can have many, many fewer and still be doing right fine.


people always want manufacturing jobs that manufacture widgets.

If we spent time expanding and heck just properly maintaining our infrastructure we'd have plenty of blue collar jobs that pay well.

but infrastructure is an unspoken sin in modern america.

