Like what economic changes? You can make a case people are 10% more productive in very specific fields (programming, perhaps consultancy etc). That's not really an earthquake; the internet/web was probably way more significant.
The LLMs are quite widely distributed already, they're just not that impactful. My wife is an accountant at a big 4 and they're all using them (everyone on Microsoft Office is probably using them, which is a lot of people). It's just not the earth-shattering tech change CEOs make it out to be, at least not yet. We need order-of-magnitude improvements in things like reliability, factuality and memory for the real economic efficiencies to come, and it's unclear to me when that's gonna happen.
Not necessarily; workflows just need to be adapted to work with it, rather than it being slotted into existing workflows. It's something that happens during each industrial revolution.
Originally, electric motors merely replaced steam engines and brought no additional productivity gains; that only changed when factories redesigned the rest of their processes around them.
I don't get this.
What workflow can tolerate occasional catastrophic lapses of reasoning, non-factuality, no memory, hallucinations, etc.? Even in something like customer support this is a no-go imo.
As long as these very major problems aren't improved (by a lot) the tools will remain very limited.
We are at the precipice of a new era. LLMs are only part of the story. Neural net architecture and tooling have matured to the point where building things like LLMs is possible. LLMs are important and will forever change "the interface" for both developers and users, but it's only the beginning. The Internet changed everything slowly, then quickly, then slowly. I expect that to repeat.
So you're just doing Delphic oracle prophecy. Mysticism is not actually that helpful or useful in most discussions, even if some mystical prediction accidentally ends up correct.
Observations and expectations are not prophecy, but thanks for replying to dismiss my thoughts. I've been working on a ML project outside of the LLM domain, and I am blown away by the power of the tooling compared to a few years ago.
> What workflow can have occasional catastrophic lapses of reasoning, non factuality, no memory and hallucinations etc?
LLMs might enable some completely new things to be automated that made no sense to automate before, even if it’s necessary to error correct with humans / computers.
There's a lot of productivity gains from things like customer support. It can draft a response and the human merely validates it. Hallucination rates are falling, and even minor savings add up in areas with large scale, productivity targets and strict SLAs, such as call centres. It's not a reach to say it could already do a lot of business process outsourcing (BPO) type work.
I use LLMs 20-30 times a day, and while they feel invaluable for personal use, where I can interpret the responses at my own discretion, they still hallucinate enough, and have enough lapses in logic, that I would never feel confident incorporating them into some critical system.
Think of having a secretary, or ten. These secretaries are not as good as an average human at most tasks, but they're good enough for tasks that are easy to double check. You can give them an immense amount of drudgery that would burn out a human.
As one example, LLMs are great at summarizing, or writing or brainstorming outlines of things. They won't display world-class creativity, but as long as they're not hallucinating, their output is quite usable.
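"Easy to double check" can be made concrete: accept model output only when a cheap deterministic check passes. A minimal sketch, with a stubbed `extract_emails` standing in for a model call (all names here are illustrative assumptions):

```python
# Sketch of cheap-to-verify delegation: the model proposes, a trivial
# deterministic validator disposes. `extract_emails` is a stub standing
# in for an LLM extraction call.
import re

EMAIL = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$")

def extract_emails(text: str) -> list[str]:
    """Stub: a real implementation would ask an LLM to extract emails."""
    return ["alice@example.com", "not-an-email"]

def validated(candidates: list[str], source: str) -> list[str]:
    # Keep only well-formed emails that literally appear in the source,
    # discarding anything malformed or hallucinated.
    return [c for c in candidates if EMAIL.match(c) and c in source]

source = "Contact alice@example.com or call the office."
emails = validated(extract_emails(source), source)  # → ["alice@example.com"]
```

The validator is what makes the drudgery safe to delegate: even if the model invents an address, it never survives the check.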
Using them to replace core competencies will probably remain forbidden by professional ethics (writing court documents, diagnosing patients, building bridges). However, there are ways for LLMs to assist people without doing their jobs for them.
Law firms are already using LLMs to deal with large amounts of discovery materials. Doctors and researchers probably use it to summarize papers they want to be familiar with but don't have the energy to read themselves. Engineers might eventually be able to use AI to do a rough design, then do all the regulatory and finite element analysis necessary to prove that it's up to code, just like they'd have to do anyway.
I don't have a high-level LLM subscription, but I think with the right tooling, even existing LLMs might already be pretty good at managing schedules and providing reminders.