A few things from the Dwarkesh interview with Satya:

* He sees data center leases getting cheaper in the near future due to everyone building

* He’s investing big in AI, but believes that in every year there needs to be a rough balance between capacity and the need for capacity

* He doesn’t necessarily believe in fast takeoff AGI (just yet)

* Even if there is fast takeoff AGI, he thinks human messiness will slow implementation

* It won’t be a winner-take-all market and there will be plenty of time to scale up capacity and still be a major player




> Even if there is fast takeoff AGI, he thinks human messiness will slow implementation

Five years ago during covid, all these clunky legacy businesses couldn't figure out a CSV, let alone APIs, but suddenly with AGI they are going to become tech masters who know how to automate and processize their entire operations?

I found it very amusing that at the turn of the decade "digitalisation" was a buzzword as Amazon was approaching its 25th anniversary.

Meanwhile, huge orgs like the NHS run on fax and were crippled by Excel row limits. Software made a very slow dent in these old, important, slow-moving orgs. AI might speed up the transition, but I don't see it happening overnight. Maybe 5 years, if we pretend smartphone adoption is indicative of AGI and humanoid robot rollout.


I think you don’t have widespread adoption until AI takes the form of a plug-and-play “remote employee”.

You click a button on Microsoft Teams and hire “Bob” who joins your team org, gets an account like any other employee, interacts over email, chat, video calls, joins meetings, can use all your software in whatever state it’s currently in.

It has to be a brownfield solution because most of the world is brownfield.


Completely unusable in any bank, or almost any organization dealing with data secrecy. You have complex, often mandatory processes to onboard folks. Sure, these can be improved but hiring some 'Bob' would be far from smooth sailing.

Big enough corps will eventually have their own trusted 'Bobs', just like they have their own massive cluster farms (no, AWS et al. is not a panacea, and it's far from a cheap and good solution).

Giving some remote code any form of access to the internal network of a company? Opsec guys would never ack that; there is and always will be malice coming from potentially all angles.


> Giving some remote code any form of access to the internal network of a company? Opsec guys would never ack that

SolarWinds.

Local agent machines for cloud CI/CD pipelines.

Devs using npm and PyPI and such.

To be a bit reductive, `apt-get update` or equivalent.


I have worked at a place with serious opsec and none of that was allowed. Everything pointed at private mirrors containing vetted packages. Very few people had the permissions to write to those repos.
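
For concreteness, a hedged sketch of what that pinning can look like for Python packages (hostname hypothetical): clients can only reach the vetted internal index, and nothing falls through to the public one.

    # /etc/pip.conf on dev machines and CI runners (hostname hypothetical)
    [global]
    index-url = https://pypi.mirror.internal.example/simple
    # deliberately no extra-index-url, so nothing resolves from public PyPI

The same pattern applies to apt sources, npm registries, and container registries.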


When serious money is on the line, the previously hard rules can become soft quite fast. One major escalation away, in my experience.


Not to mention that if Bob works the way the current overhyped technologies do, it will be possible to bribe him by asking him to translate the promise of a bajillion dollars into another language and then repeat it back to himself before deciding on his next steps.


> I think you don’t have widespread adoption until AI takes the form of a plug-and-play “remote employee”.

Exactly. The problem with the AGI-changes-everything argument is that it indirectly requires "plug-and-play" quality AGI to happen before / at the same time as specialized AGI. (Otherwise, specialized AGI will be adopted first)

I can't think of a single instance in which a polished, fully-integrated-with-everything version of a new technology has existed before a capable but specialized one.

E.g. computers, cell phones, etc.

And if anyone points at the "G" and says "That means we aren't talking about specialized in any way," then you start seeing the technological unlikeliness of the dominoes falling as they'd need to for an AGI fast ramp.


Honestly, I think the mode that will actually occur is that incumbent businesses never successfully adopt AI, but are just outcompeted by their AI-native competitors.


Yes, this is exactly how I see it happening - just like how Amazon and Google were computer-native companies.


And Sears had all the opportunity to be Amazon


Sears also did everything it could to annihilate itself while dot-com was happening.

Their CEO was a believer in making his departments compete for resources, leading to a brutal, dysfunctional clusterfuck: rent-seeking behavior on the inside as well as the outside.


Sounds kinda like Amazon..


And some, both new and old, will collapse after severely misjudging what LLMs can safely and effectively be used for.


It looks like a variant of Planck's principle: https://en.wikipedia.org/wiki/Planck%27s_principle


Hah, that isn’t a brownfield solution.

These orgs could hire someone who could solve these issues right now (and for the last decade) if they would allow those people to exist in their org.

The challenge is precisely that those people (or people with that capability) aren’t allowed to exist.


"Bob" in this example is just some other random individual contributor, not some master of the universe. E.g. they would have the title "associate procurement specialist @ NHS" and join and interact on zoom calls with other people with that title in order to do that specific job.


Right, but these jobs are inefficient mostly because of checks and balances. So unless you have a bunch of AIs checking one another's work (and I'm not sure I can see that getting signed off) doesn't it just move the problem slightly along the road?

There's an argument here something like: if you can replace each role with an AI, and you can replace multiple roles with a single AI, why not replace the whole structure with a single person?

And the answer is typically that someone has deemed it significant and necessary that decision-making in this scenario be distributed.


Yup. If we ignore all the ‘people’ issues (like fraud, embezzlement, gaming-the-system, incompetence when inputting data, weird edge cases people invent, staff in other departments who are incompetent, corruption, etc), most bureaucracies would boil down to a web form and a few scripts, and probably one database.

Better hope that coder doesn't decide to just take all the money and run to a non-extradition jurisdiction, though, or that the credentials to that DB get leaked.


It's weird edge cases all the way down.

Just look at names. Firstname, lastname? Hehehe, no.

Treating them as constants? laughs in married woman.

If you can absolutely, 100% cast-iron guarantee that one identity field exactly identifies one living person (never more, never less), these problems are trivial.

If not? Then their complexity might be beyond the grasp of the average DOGE agent (who, coincidentally, is a male in his early 20s with a name conforming to a basic Anglo schema).

And that's just the NAME field.
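
A minimal sketch of the point in Python, with made-up names (none from any real dataset): the basic Anglo schema fails on the first non-conforming record.

    # Assumes exactly one given name and one family name per person.
    def naive_split(full_name: str):
        first, last = full_name.split(" ")  # ValueError on anything else
        return first, last

    for name in ["Ada Lovelace", "Teller", "Gabriel García Márquez"]:
        try:
            print(naive_split(name))
        except ValueError:
            print(f"schema fails for: {name}")  # mononym, multiple surnames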


> those people (or people with that capability) aren’t allowed to exist

I'm not sure what personal characteristics or capabilities you're referring to, FWIW.


The ability to use that tech effectively to optimize the organization's internal processes, or to do the job of a person without actually being a person with a name who can be held accountable.

Most of those orgs have people in key positions for whom changing these things isn't desirable, or are structurally set up in such a way that it isn't.


There are hardly any plug-and-play human employees.


As a first order of business, a sufficiently advanced AGI would recommend that we stop restructuring and changing to a new ERP every time an acquisition is made or the CFO changes, and stop allowing everyone to have their own version of the truth in Excel.

As long as we have complex manual processes whose reasons for existing even the people following them can barely remember, we will never be able to get AI to smooth things over. It is horrendously difficult for real people to figure out what to put in a TPS report. The systems that you refer to need to be engineered out of organisations first. You don't need AI for that, but getting rid of millions of Excel files is needed before AI can work.


I don't know that getting rid of those wacky Excel sheets is a prerequisite to having AI work. We already have companies like Automation Anywhere watching people hand-carve their TPS reports so that they can be recreated mechanistically. It's a short step from feeding the steps to a task runner to feeding them to an AI agent, as sketched below.
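
A minimal sketch of that step, with a hypothetical recorded-step format: the same captured steps can drive either a mechanistic replayer or an agent asked to accomplish them.

    # Hypothetical format for steps captured by watching a user work.
    steps = [
        {"action": "open",  "target": "tps_report_template.xlsx"},
        {"action": "paste", "source": "weekly_totals", "dest": "B2"},
        {"action": "email", "to": "reports@corp.example"},
    ]

    def replay(step):    # RPA-style: repeat exactly what was recorded
        print("replaying:", step)

    def delegate(step):  # agent-style: state the intent, let it adapt to UI drift
        print("asking agent to accomplish:", step)

    for step in steps:
        replay(step)     # swapping in delegate(step) changes little else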

Paradigm shifts in the technology do not generally occur simultaneously with changes in how we organize the work to be done. It takes a few years before the new paradigm works its way into the workflow and changes it. Lift-and-shift was the path for several years before cloud native became a thing, for example. People used iPhones to call taxi companies, etc.

It would be a shame to not take the opportunity to tear down some old processes, but, equally, sometimes Chesterton's fence is there for good reason.


But why are these sorts of orgs slow and useless? I don't think it is because they have made a conscious decision to do so - I think it is more that they do not have the resources to do anything else. They can't afford to hire in huge teams of engineers and product managers and researchers to modernize their systems.

If suddenly the NHS had a team of "free", genuinely PhD-level AGI engineers working 24/7, they'd make a bunch of progress on the low-hanging fruit and modernize and fix a whole bunch of stuff pretty rapidly, I expect.

Of course the devil here is the requirements and integrations (human and otherwise). AGI engineers might be able to churn out fantastic code (some day at least), but we still need to work out the requirements and someone still needs to make decisions on how things are done. Decision making is often the worst/slowest thing in large orgs (especially public sector).


It's not a resource problem; no one inside the system has a real incentive to do anything innovative. Improving something incrementally is more likely to be seen as extra work by your colleagues and to be detrimental to the person who implemented it.

What's more likely is that a significantly better system is introduced somewhere, the NHS can't keep up, and it is rebuilt by an external. (Or, more likely, it becomes an inferior system of a lesser nation as the UK continues its decline.)


I think this is where the AGI employee succeeds where other automation doesn’t. The AGI employee doesn’t require the organization to change. It’s an agent that functions like a regular employee in the same exact system with all of its inefficiencies, except that it can do way more inefficient work for a fraction of the cost of a human.


Assuming we get to AGI and companies are willing to employ them in lieu of a human employee, why would they stop at only replacing small pieces of the org rather than replacing it entirely with AGI?

AGI, by most definitions at least, would be better than most people at most things. Especially if you take OpenAI's definition, which boils it down only to economic value, a company would seemingly always be better off replacing everything with AGI.

Maybe, more likely, AGI would just create superior businesses from scratch and put human companies out of business.


Extrapolating this, I cannot help but imagine a dystopian universe in which humans' reason for existence is to be some uber AI's pets.


This is a huge selling point, and it will really differentiate the orgs that adopt it from those who don’t. Eventually the whole organization will become as inscrutable as the agents that operate it. From the executive point of view this is indistinguishable from having human knowledge workers. It’s going to be interesting to see what happens to an organization like that when it faces disruptive threats that require rapid changes to its operating model. Many human orgs fall apart faced by this kind of challenge. How will an extra jumbo pattern matcher do?


Don't forget that the executive is also a (human) employee. If AGI is working so well, why would the major stakeholders need a human CEO?


What you are describing is science fiction and is not worthy of serious discussion.


IMO it comes from inertia. People at the top are not digital-native. And they're definitely not AI-native.

So you're retrofitting a solution onto a legacy org. No one will have the will to go far enough fast enough. And if they didn't have the resources to engineer all these software migrations, who will help them lead all these AI migrations?

Are they going to go hands off the wheel? Who is going to debug the inevitable fires in the black box that has now replaced all the humans?


And many of the users/consumers are not digital-native either. My dad is not going to go to a website to make appointments or otherwise interact with the healthcare system.


In fact, most industries out there are still slow and inefficient. Some physicians only accept phone calls for making appointments. Many primary schools only take phone calls, and an email could go either way, just not their way.

It's just we programmers who want to automate everything.


Today I spent 55 minutes in a physical store trying to get a replacement for two Hue pendant lights that stopped working. The lights had been handed in a month ago and diagnosed as "can't be repaired" two weeks ago. All my waiting time today was spent watching different employees punching a ridiculous amount of keys on their keyboards, and having to get manual approval from a supervisor (in person) three times. I am now successfully able to wait 2-6 weeks for the replacement lights to arrive, maybe.

When people say AI is going to put people out of work, I always laugh. The people I interacted with today could have been put out of work by a well-formulated shell script 20 years ago.


Nonsense. They wouldn't be out of work; their jobs would just be easier and more pleasant. And your experience as a customer would be better. But clearly their employer and your choice of store aren't sufficiently incentivized to care; otherwise they would have done the software competently.

The hilarious thing is there is absolutely no improvement AI could possibly make in the experience you've described. It would just be a slower, costlier, more error prone version of what can easily be solved with a SQL database and a web app.
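
To make that concrete, a minimal sketch (table and column names hypothetical) of how a SQL database plus a thin web app would handle the scenario above: one row, one status update, no triple sign-off.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE repairs (
        id INTEGER PRIMARY KEY,
        item TEXT,
        status TEXT)""")  # 'received' -> 'unrepairable' -> 'replacement_ordered'
    db.execute("INSERT INTO repairs (item, status) VALUES (?, ?)",
               ("Hue pendant light", "unrepairable"))

    # The entire 55-minute counter visit, as one statement:
    db.execute("UPDATE repairs SET status = 'replacement_ordered' WHERE item = ?",
               ("Hue pendant light",))
    print(db.execute("SELECT item, status FROM repairs").fetchall())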


Yeah. I hope software engineers slow down a bit. We are good enough. There is no need to push ourselves out of jobs.


In some ways, we're always putting ourselves out of a job: any time we write some code and then abstract it away in a reusable form.


"Only takes phone calls for appointments" is a huge selling point for a physician's office. People are very tired of apps.


I’d far prefer a well-done app. It's so frustrating doing a back-and-forth over dates when trying to make an appointment.


Fair, but I'd prefer phone over a poorly done app.

And given the state of most apps...


You obviously don't live in the UK, where there was a mad dash at 8:00am on the dot to try to secure an appointment, and the line would be busy until 8:30am, when they ran out of appointment slots, if you were unlucky in the re-dial/hang-up rodeo.

Apps (actually a PWA) mean I can choose an appointment at any time of day and know that I have a full palette of options over the next few days. The same app (PWA) allows me to click through to my NHS login, where I can see my booked appointments or test results.


Yeah, maybe we programmers should start doing that too. Why do we use Teams, Slack, or even email?

We should submit our code to a mainframe and everyone is going to improve their skills too.


On punched cards.


Given how bad some of the apps and websites are, I am not sure phone calls are any worse! They are also less prone to data breaches and the like.


This. Thank you.

People have stopped caring about optimising or improving stuff. Even programmers are guilty of it: I haven't changed my vimrc in over 5 years.


> Five years ago during covid, all these clunky legacy businesses couldn't figure out a CSV, let alone APIs, but suddenly with AGI they are going to become tech masters who know how to automate and processize their entire operations?

The whole point of an AGI is that you don't need to be a tech master (or even someone of basic competence) to use it; it can just figure it all out.


Technical people don't just write code; they (along with product people) specify things exhaustively. While in theory a super AGI can do everything (thereby deprecating all of humanity), in reality I suspect that, given the existing pattern in orgs of layers of managers who don't like to wade into the details and specify to the nth degree, there will still be a need for lots of SMEs. AI will probably be a leaky abstraction, and you'll still need technical people to guide the automation efforts.


Random non-technical people dealing with CSVs is probably one of the best use cases for current AI tools, too.


> Meanwhile huge orgs like the NHS run on fax

I thought this was a German-only thing?


Not convinced.

In 2018:

https://www.gov.uk/government/news/health-and-social-care-se...

> Matt Hancock has banned the NHS from buying fax machines and has ordered a complete phase-out by April 2020.

The NHS is quite federated; hell, many parts of it are independent companies. Some trusts have decent modern systems, though. I had to go for a test just before Christmas: I'd phoned up my GP in the morning and got an appointment for half an hour later; he ordered a test and said go to one of these 8 centres, so I went to one about half an hour away (I live a fair way from a major town). I had the test, and by the time I'd had lunch and driven back home I had another call from the GP asking me to come in that evening. The appointment was created by the GP and read seconds later at the hospital; the test was done there and the results reported back, again at the click of a button at the GP.

But that's just my local trust. Go 10 miles west and it's another trust with different systems. And I had to go to one of the test centres in my trust; I couldn't go to one in a neighbouring trust, as they have different systems and there are no or limited interconnects.


It's all pretty much a central system as of this year; your trust will have localised PWAs or apps, but it all connects via provider IDs and transfer codes to the same NHS backend.


The healthcare system I largely use in the US was bought up by another larger system. They are now primarily on an electronic system (which for me is welcome) but some of the offices still use faxes as well.


I had a stint in healthcare software and can confirm the fax thing is real. That said, I had a hunch it was all cloud-hosted fax-sending systems delivering to cloud-hosted fax-to-PDF systems. I challenged my colleagues to show me an actual fax machine and never had any takers.


Counterpoint: the PC.

The minicomputer revolution was meant to replace all the big iron in the basement with computers shared by teams. This lasted a decade, until we got computers meant for one person, and we've used that format of computation for nearly 50 years now.

Everyone is talking about how AI will replace people.

It won't.

AI without HI is useless. However, HI with AI is much better at handling all the bullshit office work that needs to get done—anyone who doesn't understand this is either an idiot, a troglodyte, or already has an expendable army of minions doing their work.

The surge in AI demand won't be top-down but bottom-up. That stupid office birthday wish card being passed around? I can Shakespeare the motherfucker. And that email to the big boss telling him the project is doomed and that he needs a compass to find his own ass? Let's ask R1.

>My boss is a moron who can't find his ass without a compass. His latest tech initiative will cost us millions and make the best staff leave. Write a professional letter that covers those points without being harsh.

---

Subject: Request for Review of Upcoming [Tech Initiative]

Dear [Recipient Name/Team Name],

I hope this message finds you well. I wanted to share some thoughts regarding our upcoming [Tech Initiative Name], as I believe open dialogue will ensure its success while aligning with our team’s strengths and long-term goals.

While the initiative’s intent to innovate is commendable, I’ve observed several potential challenges that may impact its effectiveness:

Financial Impact: Based on preliminary analysis, the projected costs appear to be significantly higher than initially estimated. A detailed cost-benefit review or phased rollout might help mitigate financial risk while allowing us to test feasibility.

Team Retention Concerns: Many of our top performers have expressed reservations about the initiative’s current structure, particularly around [specific pain point, e.g., workflow disruption, lack of clarity]. Retaining their expertise will be critical to execution, and their insights could refine the plan to better address on-the-ground needs.

To ensure alignment with both our strategic objectives and team capacity, I respectfully suggest:

Conducting a collaborative risk assessment with department leads.

Piloting the initiative in a controlled environment to gather feedback.

Hosting a forum for staff to voice concerns/solutions pre-launch.

I’m confident that with adjustments, this project can achieve its goals while preserving morale and resources. Thank you for considering this perspective—I’m eager to support any steps toward a sustainable path forward.

Best regards,


To be honest, that kind of sounds like a dystopian hell: ChatGPT writing memos because we can't be arsed, and then reading the same memos because neither can the recipient. Why even bother with it?


It is heaven.

With a well-working RAG system you can find the reason why any decision was made, so long as it was documented at some point somewhere. The old SharePoint drives with a billion unstructured Word documents going back to the 1990s are now an asset.
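
A minimal sketch of the retrieval half of that, assuming the old documents were embedded at index time; the vectors and filenames are made up.

    import math

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norms

    # Pretend each 1990s Word doc got an embedding when it was indexed.
    corpus = {
        "1997_vendor_decision.doc": [0.9, 0.1, 0.0],
        "2003_reorg_minutes.doc":   [0.1, 0.8, 0.2],
    }

    def why(query_vec, top_k=1):
        ranked = sorted(corpus.items(),
                        key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
        return ranked[:top_k]  # top hits get stuffed into the LLM's prompt

    print(why([0.85, 0.2, 0.05]))  # -> the 1997 vendor-decision memo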


Since I don't have an expendable army, I must be either an idiot or a troglodyte. Where my understanding falters is finding a domain where accuracy and truth aren't relevant. In your example you said nothing about a "phased rollout", is that even germane to this scenario? Is there appreciable "financial risk?" Are you personally qualified to make that judgement? You put your name at the bottom of this letter and provided absolutely no evidence backing the claim, so you'd best be ready for the shitstorm. I don't think HR will smile kindly on the "uh idk chatgpt did it" excuse.


"Here is the project description: [project description] Help me think of ways this could go wrong."

Copy some of the results. Be surprised that you didn't think of some of them.

"Rewrite the email succinctly and in a more casual tone. Mention [results from previous prompt + your own thoughts]. Reference these pricing pages: [URL 1], [URL 2]. The recipient does not appreciate obvious brown nosing."


If I were sending it as a business email I'd edit it before sending it off. But the first draft saved me between 10 and 30 minutes of trying to get out of the headspace of "this is a fucking disaster and I need to look for a new job" and into speaking the corporatese that MBAs can understand.

This is where the HI comes into the equation.


Is that really a problem most people have in business communication? I can't recall a time when sending a professionally appropriate email was actually hard. Also, consider the email's recipient. How would you feel if you had to wade through paragraph upon paragraph of vacuous bullshit that boils down to "Hey [name], I think the thing we're doing could be done a little differently and that would make it a lot less expensive, would you like to discuss?"


This is a very refreshing take.

Our current intellectual milieu is largely incapable of nuance—everything is black or white, on or off, good or evil. Too many people believe that the AI question is as bipolar as every other topic is today: Will AI be godlike or will it be worthless? Will it be a force for absolute good or a force for absolute evil?

It's nice to see someone in the inner circles of the hype finally acknowledge that AI, like just about everything else, will almost certainly not exist at the extremes. Neither God nor Satan, neither omnipotent nor worthless: useful but not humanity's final apotheosis.


Whoa there, hold yer horses ;) Let's wait and see until we have something like an answer to "will it be intelligent?" Then we might be ready to start trying to answer "will it be useful?"

I agree completely though, it's nice to see a glimmer of sanity.

EDIT: My best hope is that this bubble bursts catastrophically and puts the dotcom crash to shame. Then we might get some sensible regulation over the absolute deluge of fraudulent, George Jetson spacecamp bullshit these clowndick companies spew on a daily basis.


> human messiness will slow implementation

If by "messiness" he means "the general public having massive problems with a technology that makes the human experience both philosophically meaningless and economically worthless", then yeah, I could absolutely see that slowing implementation.


Picking on "philosophically meaningless" a bit...

I'm a hobby musician. There are better musicians. I don't stop.

I like to cook. There are better cooks. I don't stop.

If you are Einstein or Michael Jordan or some other best-of-the-best, enjoyment of life and the finding of worth/meaning can be tied to absolute mastery. For the rest of us, tending to our own gardens is a much better path. Finding "meaning" at work is a tall order and it always has been.

As for "economically worthless," yes if you want to get PAID for writing code, that's a different story, and I worry about how that will play out in the scope of individual lifetimes. Long term I think we'll figure it out.


Is this the transcript of the interview (podcast) with Dwarkesh?

https://www.dwarkeshpatel.com/p/satya-nadella

Because if so,

> He doesn’t necessarily believe in fast takeoff AGI (just yet)

the term "fast takeoff AGI" does not appear in the transcript.


Some paraphrasing on my part, but what he does say is at odds with fast takeoff AGI. He describes AI as a tool that gradually changes human work at a speed that humans adapt to over time.


Given that you invented the term in your paraphrase, what do you mean by "fast takeoff AGI"?


I expect data centers will become more expensive precisely because everyone is building at the same time. Supply chain crunch.


Temporary. During their operating and depreciating long-tail phase, the oversupply will drive down costs for users. Like fiber cables.


> * He doesn’t necessarily believe in fast takeoff AGI (just yet)

This is so based... I would probably have given slow takeoff a 1% chance of happening 10 years ago, but today I'd put it somewhere around 30%.


In fact, I don't think he believes in the takeoff of AGI at all; he just can't say it plainly. Microsoft Research is one of the top industry labs, and one of its functions (besides PR and creating the aura of "innovation") is exactly this: to separate the horseshit from the things that actually have realistic potential.



