I'm not sure what the answer is, but I will say that LLMs do help me wrangle with essential complexity and real-world issues too.
Most problems businesses face have been seen by other businesses before; perhaps that knowledge is in the training set, or perhaps some problems are easy enough to reason through that an LLM can do the "reasoning" more or less from first principles and your problem description.
I'm speculating that AI will help with both sides of the No Silver Bullet dichotomy (accidental as well as essential complexity)?
So, in other words, it's helping you race to the median. It gives your business the advantage of always moving in a direction that's average and uninteresting. Nobody will need to lead anymore, so nobody will have the skill of a leader anymore.
It sounds to me like the corporate equivalent of a drug-fueled rager: they want everything good now while deferring all the expenses to tomorrow.
Yeah, I give it about two years until we get to: "Hey AI, what should we do today?"

"Hi, I've noticed an increase in users struggling with transactions across individual accounts that they own. It appears some form of multitenancy would be warmly received by a significant fraction of our userbase. I have compiled a report on the different approaches taken by medium and large tech companies in this regard, and created a summary of the user feedback I've found on each. Based on this, and on the nuances of our industry, our current userbase, the future markets we want to explore, and the fit with our existing infrastructure, I have boiled it down to three options. Here are detailed design docs for each, covering all downstream services affected and all data-schema changes, listing any concerns about backwards compatibility and user-interface nuances, and specifying all the new operational and adoption metrics we will want to monitor. Please read these through and let me know which one to start; if you have any questions or suggestions, I'll be more than happy to take them.

For the first option, I've already prepared a list of PRs that I'm ready to commit and deploy in the designated order, tested end-to-end across all affected services; it's up and running in a test cluster right now if you'd like to explore it. It will take me a couple of hours to do the same with the other two options if you'd like. If I get the green light today, I can sequence the deployments so that they don't conflict with other projects and have it in production by the end of the week, along with communication and optional training for the users I feel would find the feature most useful. Of course, any of this can be changed, postponed, or dropped if you have concerns, would like to take a different approach, or think the feature should not be pursued."
Yeah, PM, data science, compliance, accounting... all largely automatable. You just need a few directors to call the shots on big risks. But even that goes away at some point, because within a few months it'll have implemented everything you were thinking about doing for the next ten years, and it'll simply run out of stuff for humans to do.
What happens after that, I have no idea.
Seems like OpenAI (or whoever wins) could easily start taking over whole industries at that point, or at least the ones that are mostly tech-based, since it can replicate anything they do, but cheaper. By that point, probably the only tech jobs left will be building safeguards so that AI doesn't destroy the planet.
Which sounds niche, but it could conceivably be a real, thriving industry. Once AI outruns us, there'll probably be a huge catastrophe at some point, after which we'll realize we need to "dumb down" AI in order to preserve our own species. It will serve almost as a physical resource, or maybe like a giant nuclear reactor: something we tap as needed but don't let run unfettered. Coordinating that balance, extracting maximal economic growth without blowing everything up, could end up being the primary function of human intelligence in the AI age.
Whether something like that can be sustained, in a world with ten billion different opinions on how to do so, remains to be seen.
Well, after some more thought, I realized that doesn't account for things that are truly innovative. I imagine humans will have a lock on that for a while.
But it does raise the question: how much of our work is true innovation? When I reflect on my previous projects, most of them were just copying some feature that other services already had. For those, an AI may soon be even better than humans, with less appetite to put a personal stamp on the feature and perfectly happy to copy a boring but predictable standard. How much of tech-industry employment does that type of work account for? Because it's going to disappear.
There were other projects that were maybe slightly innovative, but the innovation could have been summed up in a couple of sentences, with everything else deriving from it. I imagine that's about what the future of these kinds of projects will look like: sum up your innovation in a couple of sentences and let the AI figure out the rest. And slowly, the AI can start proposing slightly-innovative things of its own, based on what users want or what it understands about the world, and start running those end-to-end as well.
I've only worked on a couple of projects that I'd call cutting-edge. But even those were really a combination of new technology, new trends, new products, and new user needs all coming together, in a way that makes me ask: is it truly innovative? Or, once you're aware of all those things, is the "innovation" obvious? And if so, can an AI be made to maintain awareness of such things and identify the "obvious" innovations?
I guess that's the trillion-dollar question. And we'll probably find out the answer sooner than we'd like.
You're right, but I think we will be among the first to take the hit: we don't have the regulatory protections that many doctors, accountants, and lawyers have.