Hacker News

I don't disagree exactly, but the AI that fully replaces all the programmers is essentially a superhuman one. It's matching human output, but will obviously be able to do some tasks like calculations much faster, and won't need a lunch break.

At that point it's less "programmers will be out of work" as "most work may cease to exist".



Not sure about this. Coding has some unique characteristics that may make it easier to automate, even if from a human perspective it requires real skill:

- The cost of failure is low: most domains (physical, compliance, etc.) don't have this luxury; where the cost of failure is high, the validator has more value.

- The cost to retry or run multiple simulations is low: you can perform many experiments at once and pick the one with the best results. If the AI hallucinates or generates something that doesn't work, the agent/tool can take that error and retry with multiple high-probability attempts until it passes. Things like unit tests and compiler errors make this easier.

- There are many right answers to a problem: good-enough software is good enough for many domains (e.g. a CRUD web app). Not all software is like this, but many domains in software are.
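The retry loop in the second bullet can be sketched as below. This is a minimal illustration, not anyone's actual agent: the function names are made up, and `generate_candidate` is a stub standing in for a real model call.

```python
def generate_candidate(prompt: str, attempt: int) -> str:
    # Hypothetical stand-in for a model call; returns a trivially
    # correct snippet here just so the sketch runs end to end.
    return "def add(a, b):\n    return a + b\n"

def passes_tests(candidate: str) -> bool:
    # Cheap validator: execute the candidate and apply a unit test.
    ns = {}
    try:
        exec(candidate, ns)          # syntax/runtime errors surface here
        return ns["add"](2, 3) == 5  # the "unit test" gate
    except Exception:
        return False

def retry_until_green(prompt: str, max_attempts: int = 5):
    # Because a failed attempt costs almost nothing, just discard it
    # and try again until the validator passes (or the budget runs out).
    for attempt in range(max_attempts):
        candidate = generate_candidate(prompt, attempt)
        if passes_tests(candidate):
            return candidate
    return None
```

The point of the sketch is the economics, not the code: the validator (tests, compiler) is cheap and automatic, so failed candidates are nearly free to throw away.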

What makes something hard to disrupt won't be intellectual difficulty (e.g. software being harder than compliance analysis, as a made-up example); it will be other bottlenecks like the physical world (energy, material costs, etc.) and regulation (the job isn't entirely about utility/output).


> The cost of failure is low: most domains (physical, compliance, etc.) don't have this luxury; where the cost of failure is high, the validator has more value.

This isn't entirely right: some code touches the physical and compliance worlds. Airports, airplanes, hospitals, cranes, water systems, the military: they all use code to varying degrees. It's true that they can perhaps afford to run experiments on landing pages, but I don't think they can simply disrupt their workers and clients on a regular basis.


I did say "not all software is like this, but many domains are". So I agree with you.

Also note that, unlike physical domains where it's expensive to "tear down", until you commit and deploy (i.e. while the code is being worked on) you can try/iterate/refine via your IDE, shell, or whatever. It's just text files, after all; in the end you are accountable for the final verification step before it is published. I never said we don't need a verification step, or a gate before code reaches production systems. I'm saying it's easier to throw away "hallucinations" that don't work, and you can work around gaps in the model with iterations, retries, and multiple versions until the user is happy.

Conversely, I couldn't have an AI build a house, decide I don't like it, have it demolish the house and build a slightly different one, and so on until I say "I'm happy with this product, please proceed". The sheer amount of resources and time wasted in doing so would be enormous. I can perhaps simulate and generate plans with AI, but nothing beats seeing the physical thing for some products, especially when there isn't the budget to retry or change.

TL;DR: the greater the cost of iteration/failure, the less you can use iteration to cover up gaps in your statistical model (i.e. tail risks are more likely to bite and harder to mitigate).



