Hacker News | VirusNewbie's comments

Microsoft is one of the least likely large companies to benefit from an AI boom. They don't have the capacity to support OpenAI and their own foundational models, they aren't providing a compelling story for wrapping OpenAI, Windows continues to suck…

OpenAI signed an agreement with GCP; that should say a lot.


OpenAI also signed an agreement with AWS. And Anthropic has signed on with Microsoft as well as GCP.

The underlying dynamic is that no single cloud provider has the capacity required to host all the demand, so the frontier AI labs have no choice but to diversify for their infrastructure needs.


Can you elaborate on this? Diversifying compute doesn't create more compute - is it that the different LLM vendors have different peak times, so spreading themselves over more compute vendors spreads peak load?

Huh? If I need 10 bananas and my local shop only has 5 bananas available I need to go to multiple stores to satisfy my ravenous banana craving.

Yes, but if there are three banana shops around and five banana-addicted people living nearby, the number of bananas available on average per person is not 15.

In other words, if all AI companies need more compute than a single provider can provide, then there's just not enough of it. So the question "why does everyone partner with everyone" must have a different answer.


It's not really "creating more compute" it's just a natural outcome of everyone desperately grabbing whatever becomes available. The dynamics make sense for all parties involved.

Firstly, it's very clear now that everyone is seriously crunched for capacity (like, each of the hyperscalers' backlogs -- i.e. capacity for which payment is committed, but as yet unsatisfied -- are in the double-digit billions.)

So as the compute providers bring more capacity online, everyone with demand wants to get a slice of that. Like, why would anyone NOT dive in and try to secure some capacity for themselves? Especially when the rate of capacity growth is constrained by the availability of GPUs and energy and data center buildouts, which is measured in years.

On the flip side, why would the compute providers NOT want multiple customers? It creates competition and drives prices up.

There are likely other forces at play too. For one, none of the parties - the model providers and the compute providers, with some of them like Google being both -- wants to get too dependent on any of the other parties, but they also want to secure a slice of each others' future growth, so they're all partnering with each other. Obviously, Google wants Gemini to win and Microsoft wants Copilot to win, but as a hedge, they'll be happy hosting their competitors' products and taking a cut.

This is partly the origin of the "circular investments" concerns. The scale at which this industry is growing, all these players have enormous mountains of money that they must invest to secure their future, but they are also the only players that can operate at this scale, and so the only place they can invest that money in is each other.


Once we are able to run language models on any consumer hardware with good T/s, Windows will become an absolute powerhouse, just like it did in gaming with DirectX. Any application will be able to be AI-infused, and the API to do so will be consistent and free to the business offering it.
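A rough way to reason about what "good T/s" means in practice is to work backwards from response latency. The numbers below (response length, tokens-per-word ratio, latency budget) are illustrative assumptions on my part, not figures from the thread:

```python
# Back-of-envelope estimate of the decode throughput a local model needs
# to feel interactive. All parameters are illustrative assumptions.

def tokens_per_second_needed(words=250, tokens_per_word=1.3, budget_s=10):
    """Decode speed required to emit a ~250-word reply within ~10 seconds."""
    return words * tokens_per_word / budget_s

# Roughly 32.5 tokens/s under these assumptions; streaming output makes
# even lower rates feel acceptable since the user reads as it generates.
print(tokens_per_second_needed())
```

By this sketch, hardware sustaining a few tens of tokens per second is already in "usable assistant" territory, which is why consumer-hardware inference matters so much for the platform argument above.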

>checking for Y chromosomes doesn't do it

Lol why does this not do it?



I am going to try to keep my response apolitical to avoid fanning a culture war. That Wiki is the exact reason we are in this situation: we are bringing up points for 1 in 20,000, or 0.005%, of the population. Any system designed around 0.005% edge cases is going to be so complex that it is functionally impossible to apply in practice. That is why one side says the solution is "obvious": we have a simple rule that covers 99.9% of cases, and the other 0.1% is unfortunately effectively barred from high-level competition. Note that high-level competition already bars 99.9% of people. Even though the opposing side is correct in pointing out these edge cases, it does nothing to advance an actual solution.

There are statistically around 15 women AFAB with XY chromosomes in the NCAA by those numbers (assuming no correlation between Swyer syndrome and athletic performance).

There are currently around 10 openly transgender women in the NCAA.

Small numbers either way.
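The back-of-envelope arithmetic behind those estimates can be sketched as follows. The NCAA participation figure is my assumption for illustration, not a number from the thread; the 1-in-20,000 prevalence is the figure cited above:

```python
# Illustrative prevalence arithmetic. The population size is a
# hypothetical round number, not an official NCAA statistic.

ncaa_women = 300_000      # assumed number of women competing in the NCAA
per_how_many = 20_000     # cited prevalence: 1 in 20,000 (0.005%)

expected_cases = ncaa_women / per_how_many
print(expected_cases)     # about 15 expected cases under these assumptions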


Sure, it covers 99.9% of cases, but top elite athletes are the genetic exceptions, they are the genetic freaks. They are the top 0.0001%. You don't get to compete at the most elite levels without your body being exceptionally gifted and almost specifically shaped for the relevant sport, which inevitably means funky genetic traits and disorders, higher testosterone levels etc.

I mean the word freak in the most loving and caring way possible, mind you.

What does fairness mean in that context?


I am not sure what point you are trying to make. When it comes to the Olympics, it was decided a long time ago that having both men's and women's events was beneficial for societal progress, so that both sexes were represented. This was at a time when sex = gender. Now we recognize the difference between sex and gender, but one side thinks the split of events was always based on gender, whereas it was almost surely based on sex. This ruling confirms that viewpoint.

Except I proposed a solution, which you ignored (I'm assuming here that I'm your "opposing side".)

Also, there are a significant number of these sorts of arguments in high-level sports, probably precisely because these "0.1%" cases are exactly the ones that result in exceptional ability relative to norms. It's also curious that there is such obsession about naturally occurring genetic outliers with respect to females or gender but absolute silence about naturally occurring genetic outliers among men unrelated to gender. And surprise surprise the top athletes often have such outlier genetics!

If you're drawing a distinction between natural genetic difference related to only gender and no other factors then sadly it's exactly a culture war, not a war based in science or fairness.


> naturally occurring genetic outliers among men unrelated to gender

This is just not true. Many sports are categorized by weight for the most obvious example.


Yes, which is what I proposed for all differences. Note that classifying by weight is not banning athletes, as is happening in the Olympics.

Heavy weight boxers are banned from competing against feather weight boxers.

This isn't a novel problem.

Because in a small minority of the population it disagrees with the gender assigned at birth, for obvious reasons. There are plenty of resources you could read on intersex conditions instead of lol-ing at something you don't understand.

I think your overall point is correct, but the specifics about Twitter are backwards. Twitter would have been way better off using Cassandra (but it wasn't built yet!); instead, you're 100% right, they did their own bespoke stack, trying to replicate what Google had internally, and they didn't have the $$$ to do it.

It was Facebook/Meta that later open sourced Cassandra and a buncha other great open source stuff.


Someone else suggested I was misremembering the Twitter Cassandra timeline, so I went back and checked. No, Twitter dropped MySQL for Cassandra in 2010 [1], after Facebook open-sourced it in 2008 [2].

Twitter claimed they were using Cassandra (or at least planned to) for storing tweets [3], but had rolled out something else entirely (called Manhattan) by 2014 [4][5].

So yes it was originally released by Facebook but it was Twitter who spent a massive effort trying to make it work in production. And failed.

[1]: https://www.informationweek.com/it-infrastructure/twitter-dr...

[2]: https://en.wikipedia.org/wiki/Apache_Cassandra

[3]: https://highscalability.com/so-why-is-twitter-really-not-usi...

[4]: https://blog.x.com/engineering/en_us/a/2014/manhattan-our-re...

[5]: https://news.ycombinator.com/item?id=7515995


Certainly not true of FAANG engineers; most people want to stay through their initial vest.

I think 3-4 years is a much better signal.


My advice is to earnestly try your best without compromising your health or mental health, and then mercilessly advertise doing so.

What I mean by that is, try to peek at emails/chat when you can, and send a message. Let folks know when you're having a rough day but ALSO when you're going to power through and show up to a meeting.

Try to do a good job and go overboard on advertising you are working to make up the times you can't be around due to your medical treatments.

You want people to root for you, both for your health, and for your success at the company. Sometimes it doesn't take much for a boss to recognize "oh, they're trying to do a little extra".


Anyone Google has hired in the last ~8 years was hired onto a team that is growing and has a culture of shipping and producing. Google regularly weeds out low performers, be it new grads or long timers who started doing the rest and vest thing.

Now, I don't think most people at Google are literally driving to the office or sleeping there most of the time; you'll certainly have more WLB than at xAI.

I'd even say, Google is much better at calibrating the right amount to push people than some other companies.


GenAI at fault, and nothing to do with Amazon laying off 30k people and having an overall shitty culture where people mostly don't want to stay?


> GenAI at fault, and nothing to do with amazon laying off 30k people

GenAI is literally the direct reason they gave for laying off 30k people.

> “As we roll out more Generative AI and agents, it should change the way our work is done. We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs,” [Amazon CEO Andy Jassy] bluntly admitted.


It's not, and in the latest round of 14k people laid off they were more transparent that it was a result of previously having overhired.


> previously having overhired

Funny way of saying that Jassy told people he doesn't like the culture of a larger Amazon.

Also, if we overhired in 2020-2022, why the hell are we still correcting it in 2026? Did none of the layoffs in 2023 on do the job?

Just an all around failure of leadership with no ownership.


> Did none of the layoffs in 2023 on do the job?

No, because the calculus of layoffs shifted. Briefly, there is always a natural attrition rate A%, but whenever companies do an X% layoff they expect a smaller Y% additional attrition (due to morale etc.) So they expect an overall (A + X + Y)% reduction in headcount within a few months of the layoffs.

However, the job market swung so rapidly from pro-employee to pro-employer in that timeframe that the Y% never happened, and in fact there was even a drop in A%. And so companies still ended up with more employees than planned and had to scramble to achieve their headcount goals using other means (RTO mandates, shifting headcount offshore, further layoffs with AI washing, etc.)
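That (A + X + Y)% model can be sketched with made-up numbers; the percentages below are purely illustrative, not any company's actual figures:

```python
# Illustrative post-layoff headcount model.
# a_pct = natural attrition, x_pct = layoff size, y_pct = extra attrition
# the company expects on top of the layoff. All values are hypothetical.

def expected_headcount(start, a_pct, x_pct, y_pct):
    """Headcount a few months after an X% layoff, per the (A + X + Y)% model."""
    reduction = (a_pct + x_pct + y_pct) / 100
    return start * (1 - reduction)

# Plan: 100,000 people, 5% natural attrition, 10% layoff, 3% extra churn.
planned = expected_headcount(100_000, 5, 10, 3)   # about 82,000

# What actually happened: the market flipped, the Y% never materialized,
# and even natural attrition dropped, leaving the company above plan.
actual = expected_headcount(100_000, 3, 10, 0)    # about 87,000

print(round(planned), round(actual))
```

The gap between those two numbers is the headcount overhang that then gets chased with RTO mandates, offshoring, and further layoffs.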

A bit more detail on the calculus in this comment: https://news.ycombinator.com/item?id=46142948


Oh believe me, I am not defending the prick.

We are in a thread about Amazon holding engineering meetings after AI-related outages, after laying off 30k people.

If anything this highlights gross incompetence of a moronic leadership. It should be them being laid off.

If overhiring indeed happened, it is also a failure of leadership. Hiring too many people and then firing a bunch of people causes friction, loss of knowledge, decreased morale, etc.


But like, what if we did the layoffs bit by bit and told people each time that there will be more, stay tuned? Surely that's a sign of strong leadership. Just like "muscle confusion" for workouts! Can't let people feel too safe or stable.


There is a long history of people blaming AI for things it had nothing to do with, which is totally unfair, and I do believe quite a lot of probably somewhat older ML practitioners are seriously tired of that constantly happening. Amazon is prioritizing investment into data center expansion over paying employees. And ML ... is present in the building, and about as involved in the firings as the cleaning staff is; only people are scared of AI, and so it gets blamed for everything. The firings are driven by, imho, misguided financial engineering, and it sure as hell is not being done by ML.

But what is reported? Management firing people? ML. Engineering screwing up the uptime? ML. Someone screws up their job? ML.

Don't you know? ML is killing people in Iran today. Not mullahs. Not the military. ML. Obviously that's where the responsibility lies ...

Usually blaming ML means suddenly coming up with conspiracy theories, like here, or suddenly adding impossible requirements, and usually utterly ridiculous ones (like criticizing Deep Blue for not being able to play poker; yes, I realize I'm old, but it's a bit like criticizing the very best competition canoe on the planet for its disappointing spaceflight capabilities).

Like here: large blast radius AI-assisted outages ... we've all written software, and we all know the problem here: THEIR TESTS SUCK. Probably because they fired all the good SREs for insisting software teams spend time on tests, or demanding integration test failures are fixed before shipping the software.

By the way: I'd like to point out that in most/all industries where jobs are lost on a large scale, the situation is like the Amazon situation: ML is not even remotely involved. So while I get the criticism, it doesn't work like that. The auto industry first got blasted with very traditional engineering, which worked and depended on very old-style mathematics. What's happening in factory automation is 99.9% 3D geometry (to the point that ML is actually a simplification of the problem). Then the auto industry got blasted with what every industry got blasted with: being stuck in demand-limited markets. Every car company can easily build 10x more cars next year, but there's no point: nobody will buy them. So the only thing worth doing for these companies is to produce more cheaply ... and that means getting rid of people (when end-to-end taxes on income in Europe are 60-85% and actually rising). With only a few exceptions, these companies find ML too expensive for projects.

So while I understand "we're defending our jobs", it's misguided ... the big job losses in the West have nothing to do with ML. MAYBE those are coming, but large job losses have been predicted in the last 50 AI "revolutions", and 49 times that was wrong. And the actual problem is really a return to 99.9% of history: when it comes to doing what is needed to keep society going, 10%, maybe even 1%, of people can do it. That means you need something for the other 90% or 99% to do.

The solution is the only thing that has helped in the past: having the government put on huge public works. From building the pyramids to the Sagrada Familia (and yes, wars, but let's please not do that), to ridiculous engineering projects like Europe's and America's rail networks. There's a stable in the Italian Alps that has a private rail connection. So fix the problem. I don't know: build a large cathedral in Washington or something. Hell, hire people to make sure it has a depiction of the Last Supper where every square micrometer of the painting was designed by an AI with a 1000-member engineering team, so people can spend their entire life looking at the painting with a microscope and find new details every day. Let's do something "great", in the sense of an enormous effort. Fly 100 missions to Alpha Centauri. Fix the demand-limited issue the economy has. "Do more with more". And stop blaming ML.

Hell, I'm currently in an old European city filled with 200-year-old buildings. Quaint. Cool. Except ... not really. 90% of these buildings suck. Can we just rebuild 95% of ... all European capitals? Every building that is way too old and has no reason whatsoever to be preserved, other than that keeping it is currently slightly cheaper ... can we please just rebuild them better? Do stuff like that.


Also, managers are incentivised to force AI onto the remaining staff to “boost productivity” but of course they won’t accept any of the responsibility or blame for that decision.


Just tell the employees to make AI fully adopted in the SDLC and make it secure and reliable. Don't make mistakes.

If it works for models, why not humans? /s


Absolutely correct. Now let's drop another few billion to make AI better and avoid such mistakes in the future. And we might lay off some more folks to make room in the budget for more AI.


Those two facts are not mutually exclusive. Laying off 30K people and pushing the remaining engineers to use the ensloppenator for everything, this is the expected result.


Maybe both, and possibly other causes too, but allow us a moment to revel in the schadenfreude of AI code slop at hyperscale, will you?


Definitely an interesting idea, given how much parties operate in lockstep. Ostensibly representatives should be, uh, representing local interests over party, but we've seen this isn't the case.

So if one side is claiming something, yes, I would absolutely love to see how much of their net worth is tied up in betting that that something would come to fruition (like lowering inflation, or medical care, or increasing jobs, etc.)


I don't have a CS degree. I was born in 1982. Work at Google now as a L5 SWE.


Yes, it's a lot of fun. I'm working on a core team at Google, and honestly I'd keep doing it even if I had 10M stashed away.

It is also very stressful and frustrating at times.

However, early in my career I had challenging and stressful jobs with shitty managers who always tried to crack the whip, where I made 80k a year.

Now at least my stressful job has me pampered between stresses and mostly my boss telling me I did a good job (with occasional critical feedback on how to improve).

It's not an easy job though. I see a fair amount of coworkers counting their pennies so they can quit. Honestly I think you have to be a little bit crazy and enjoy tangly stressful problems to like this job.

If you don't like messing with tech, digging in, feeling confused and lost for hours and days before an 'aha' moment, then it's going to seem like a slog.

