Around the time bitcoin started to get serious public attention, late 2017, I remember feeling super hyped about it, and yet everyone told me that money spent on bitcoin was wasted money. I really believed that bitcoin, or at least cryptocurrency as a whole, would fundamentally change how banking and currencies work. Now, almost 10 years later, I would say it did not live up to my belief that it would "fundamentally" change currencies and banking. It made some minor changes, sure, but if it weren't for the value of bitcoin, it would still be a nerdy topic about as well known as perlin noise. I did make quite a lot of money from it, though I sold out way too soon.
As a research engineer in the field of AI, I am again getting this feeling. People keep doubting that AI will have any kind of impact, and I'm absolutely certain that it will. A few years ago people said "AI art is terrible" and "LLMs are just autocomplete" or the famous "AI is just if-else". By now it should be pretty obvious to everyone in the tech community that AI, and LLMs in particular, are extremely useful and already have a huge impact on tech.
Is it going to fulfill all the promises made by billionaire tech CEOs? No, of course not, at least not on the time scale that they're projecting. But these are incredibly useful tools that can enhance the efficiency of almost any job that involves sitting behind a computer. Even something as simple as Copilot autocomplete, or talking with an LLM about a refactor you're planning, is often incredibly useful. And the amount of "intelligence" you can get from a model that can actually run on your laptop is also improving very quickly.
The way I see it, either the AI hype will end up like cryptocurrency: forever a part of our world, never quite living up to its promises, but I made a lot of money in the meantime. Or the AI hype will live up to its promises, though likely over a much longer period of time, and we'll have to see whether we can live with that. Personally I'm all for a fully automated luxury communism model for government, but I don't see that happening in the "better dead than red" US. It might become reality in Europe though, who knows.
Crypto is a really interesting point, because even the subset of people who have invested in it don't use it on a day-to-day basis. The entire valuation is based on speculative use cases.
> By now it should be pretty obvious to everyone in the tech community that AI, and LLMs in particular, are extremely useful and already have a huge impact on tech.
Enough to cause the next financial crash, or to drive a steady rise to 10% global unemployment over the next decade, at worst?
Something can be useful and massively overhyped at the same time.
LLMs are good productivity tools. I've been using them for coding, and they're massively helpful; they really speed things up. There are a few asterisks there, though:
1) They do generate bullshit, and this is an unavoidable part of what LLMs are. The ratio of bullshit seems to come down with reasoning layers on top, but it will always be there.
2) LLMs, for obvious reasons, tend to be more useful the more mainstream the languages and libraries I'm working with are. The more obscure the stack, the less useful they get. This may have a chilling effect on technological advancement: new, improved tools see less use because LLMs are bad at them for lack of available material, and so the new things shrivel and die on the vine without a chance at organic growth.
3) The economics of it are super unclear. With the massive hype there's a lot of money sloshing around AI, but those models seem obscenely expensive to create and even to run. It is very unclear how things will look when the appetite for losing money on this wanes.
All that said, AI is multiple breakthroughs away from replacing humans, which does not mean LLMs are not useful assistants. An increase in productivity can lead to lower demand for labor, which leads to higher unemployment. Even modest unemployment rates can have grim societal effects.
On a side note, I do worry about the energy consumption of AI. I'll admit that, like the Silicon Valley tech bros, there is a part of me that hopes AI will allow researchers to invent a solution to that -- something like fusion, or switching to quantum-computing AI models, or whatever. But if that doesn't happen, it's probably the biggest problem related to AI. More so even than alignment, perhaps.