I think our jobs are threatened not because the LLMs will be much better than us, but because dirt cheap tooling will be developed on top of them that will make things which are “good enough” and are a fraction of the price.
I think outstanding software will still require well-paid, competent people orchestrating and developing a lot of complex systems for a while yet… But there’s a ton of bad software out there that will be able to be maintained for far less, and I suspect a lot of companies will be drawn to creating cookie cutter products generated by LLMs.
Just as people have turned to stores and blogs generated on templated systems, I think all of that and more will continue but with even more of it handled by LLM-based tooling.
I don’t think it’ll be next week, but I suspect it’ll be less than 10 years.
Some people expect that’ll lead to more software existing, which will inevitably require more developers to oversee it; but if that’s the case, I suspect those developers will be paid a lot less. I also expect that once AI tools are sophisticated enough to do this, they will largely make that level of oversight redundant.
Soon they could patch bugs in the software they generate by watching an error tracker like Sentry: automatically trying candidate fixes and running fuzz tests until one sticks. It would be way cheaper than a human being and it would never need to stop working.
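The loop described above can be sketched in a few lines. This is purely illustrative: every function here (`fetch_open_issues`, `propose_patch`, `fuzz_passes`) is a hypothetical stub standing in for a real error-tracker API, an LLM call, and a fuzz/regression suite respectively, not any actual Sentry or LLM interface.

```python
# Hypothetical sketch of an automated bug-patching loop.
# All names below are illustrative stubs, not real APIs.

def fetch_open_issues():
    # Stub: a real system would poll an error tracker (e.g. Sentry)
    # for unresolved issues and their stack traces.
    return [{"id": "BUG-1", "stacktrace": "ZeroDivisionError in price()"}]

def propose_patch(issue):
    # Stub: an LLM would generate a candidate fix from the stack trace.
    return f"guard against division by zero for {issue['id']}"

def fuzz_passes(patch):
    # Stub: run the fuzz tests / regression suite against the candidate.
    return True

def autopatch_cycle():
    # One pass of the loop: for each open issue, propose a fix and
    # keep it only if the automated tests pass.
    applied = []
    for issue in fetch_open_issues():
        patch = propose_patch(issue)
        if fuzz_passes(patch):
            applied.append((issue["id"], patch))
    return applied
```

Run on a schedule (or on each new error event), a cycle like this never clocks off, which is the cost argument being made above.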