Meta's language models, GitHub Copilot, real-life car autopilot. When it fails, it fails big. And the "we were 10+ years early to market" line is just a big lie that bought them plenty of VC money. Good for them.
At least partially over. It's one of those things, though: when you first see what's possible with neural networks, it gets your hopes up. When you later realize the limitations, it's hard to walk back your old claims. Even Elon Musk has to realize by now that FSD is never going to happen. Google, with all their learning and training data, still can't correctly find and smudge license plates or faces on Google Maps. If that much processing power can't correctly identify two classes of objects, what chance do these cars have of classifying tons more object types, plus adapting how they steer based on that information, in real time?
The internet was a hype cycle, which ended when the dot-com bubble burst. But some of the companies that came out of the bubble came out strong. AI has had multiple hype cycles (remember every washing machine advertising "fuzzy logic" in the 90s?). They usually end, but they also usually leave us with more than we had. This AI hype cycle is ending now, and we have seen a lot of progress on image detection, video editing, etc., but the highest targets haven't been reached.
It's kind of like the explore-exploit dichotomy. You have some new technology (the internet); in the first few years you have exploration and all the low-hanging fruit gets implemented, then everybody just starts iterating on similar ideas, which leads to less and less gain. The "Uber/Airbnb/Amazon for X" pitches. If you hear those, you're in the late phase of the hype cycle, because "Y for X" just means it's not really a new idea, and plenty of people have already thought of it.
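The explore-exploit dichotomy comes from bandit algorithms. A minimal sketch, using a hypothetical epsilon-greedy bandit with made-up payoffs (nothing here is from the thread, just an illustration of the dynamic): early pulls are spread across the arms, then nearly everything converges on the arm with the best observed payoff, the way late-stage pitches converge on proven ideas.

```python
import random

def epsilon_greedy(arm_rewards, epsilon=0.1, steps=10000, seed=0):
    """Toy epsilon-greedy bandit: with probability epsilon pick a random
    arm (explore); otherwise pick the arm with the best observed average
    reward so far (exploit). Returns how often each arm was pulled."""
    rng = random.Random(seed)
    counts = [0] * len(arm_rewards)
    totals = [0.0] * len(arm_rewards)
    for _ in range(steps):
        if rng.random() < epsilon or not any(counts):
            arm = rng.randrange(len(arm_rewards))  # explore
        else:
            # exploit: highest observed mean reward (0.0 if never pulled)
            arm = max(range(len(arm_rewards)),
                      key=lambda a: totals[a] / counts[a] if counts[a] else 0.0)
        # observed reward = the arm's (hypothetical) true mean plus noise
        reward = arm_rewards[arm] + rng.gauss(0, 0.1)
        counts[arm] += 1
        totals[arm] += reward
    return counts

# Three "ideas" with true payoffs 0.2, 0.5, 0.8: after a burst of
# exploration, almost all pulls go to the best-paying arm.
pulls = epsilon_greedy([0.2, 0.5, 0.8])
```

The tension in the analogy: once exploitation dominates, each extra pull on the winning arm teaches you less, which is the diminishing-returns phase of a hype cycle.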
Similarly, you take some new technology like fuzzy logic, and some people think of good applications for it. But because the hype train was running, it got put everywhere, even where it didn't make sense.
Or deep learning, which was the first approach to make image processing genuinely useful. Now most research is tuning some parameters, adding compute, and hoping for better results.
But in the end we'll be left with some technological advances, and maybe in ten or twenty years somebody will have a new idea that beats deep learning in learning efficiency.
The Google Maps smudging point is an interesting one and definitely worth considering, but the incentives at play are very different. While I'm sure they want to be seen as making an effort, Google isn't rewarded in any way for achieving high accuracy in their smudging. It just has to be "good enough" that they aren't getting in trouble for deliberately neglecting it. For this reason, I'd imagine the resources they devote to it are quite limited. It's not having billions poured into it like self-driving AI is; while I have no inside knowledge, I'd guess the budget is orders of magnitude smaller.
> I'd imagine the resources they devote to it are quite limited
That's a problem inherent to AI and neural networks. We cannot spend our way out of these problems; they're baked into the technology itself. Nobody knows what "doesn't work" when a neural network does something unexpected. AI is not a technology we invented; it's a technology we copied from nature. Nobody really knows what goes on inside, and that's why we're not getting anywhere.
Google could spend their entire budget on that smudge-bot and it still wouldn't get any better using AI; they would have to go back to classical image analysis to make any improvements at this point. Google has trained it to be so good at finding faces that it started smudging faces on billboards, ads, and shop windows; but when those get flagged as errors, it starts leaving real faces in shop windows un-smudged. The problem is that we have no idea what's going on inside, so all we can do is add new layers to the neural network or give it more training data, and neither reliably gives an accurate result, which is what makes it impossible to use for self-driving cars and the like.
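The billboard-versus-shop-window failure mode described above is essentially a precision/recall tradeoff on the detector's confidence threshold. A toy sketch with made-up scores and labels (purely illustrative, not Google's actual system): lowering the threshold smudges billboards too, raising it lets real faces through un-smudged, and no single threshold fixes both.

```python
def precision_recall(scores, labels, threshold):
    """Precision and recall of a hypothetical face detector at a given
    confidence threshold. A detection is anything scoring >= threshold;
    labels mark which items are faces we actually want smudged."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy data: True = real face that should be smudged,
# False = a face on a billboard/ad that should be left alone.
scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.85, 0.75, 0.65]
labels = [True, True, True, True, True, False, False, False]

# Low threshold: everything gets smudged, billboards included (precision drops).
low_t = precision_recall(scores, labels, 0.5)   # (0.625, 1.0)
# High threshold: billboards are spared, but real faces go un-smudged (recall drops).
high_t = precision_recall(scores, labels, 0.88)  # (1.0, 0.4)
```

When the two classes' scores overlap like this, retraining just slides errors from one side to the other, which matches the flip-flop behavior described above.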