> Now, basically every new "AI" feature feels like a hack on top of yet another LLM.
LLM user here with no experience of ML besides fine-tuning existing models for image classification.
What are the exciting AI fields outside of LLMs? Are there pending breakthroughs that could change the field? Does it look like LLMs are a local maximum, with other approaches winning through - even if just for other areas?
Personally I'm looking forward to someone solving 3D model generation as I suck at CAD but would 3D print stuff if I didn't have to draw it. And better image segmentation/classification models. There's gotta be other stuff that LLMs aren't the answer to?
Well, one of the inherent issues is assuming that text is the optimal modality for everything we try to use an LLM for. LLMs are statistical engines designed to predict the most likely next token in a sequence of words. Any 'understanding' they do is ultimately incidental to that goal, and once you look at them that way, a lot of the shortcomings we see become more intuitive.
There are a lot of problems LLMs are really useful for, because generating text is what you want to do. But there are tons of problems where we'd want some sort of intelligent, learning behaviour that doesn't map to language at all. There are also a lot of problems that can "sort of" be mapped to a language problem, but doing so makes pretty wasteful use of resources compared to an (existing or potential) domain-specific solution. For purposes of AGI, you could argue that trying to express "general intelligence" via language alone is fundamentally flawed altogether -- although that quickly becomes a debate about what actually counts as intelligence.
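To make the 'statistical engine' framing concrete, here's a toy Python sketch (a made-up bigram lookup table with invented counts, nothing like a real transformer): generation is just repeatedly picking the statistically most likely next token.

    # Toy bigram "model": invented counts of which token follows which.
    # Real LLMs use learned neural nets, not lookup tables, but the
    # decoding loop has the same shape: predict, append, repeat.
    bigram_counts = {
        "the": {"cat": 3, "dog": 2},
        "cat": {"sat": 4, "ran": 1},
        "sat": {"down": 5},
    }

    def next_token(token):
        # Greedy decoding: return the most frequent continuation.
        counts = bigram_counts[token]
        return max(counts, key=counts.get)

    tokens = ["the"]
    while tokens[-1] in bigram_counts:
        tokens.append(next_token(tokens[-1]))

    print(" ".join(tokens))  # -> the cat sat down

Nothing in that loop 'understands' cats or sitting; it just surfaces the statistics of the training data, which is the point above.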
I pay less attention to this space lately, so I'm probably not the most informed. Everyone seems so hyped about LLMs that I feel like a lot of other progress gets buried, but I'm sure it's happening. There are some problem domains that are obviously solved better with other paradigms currently: self-driving tech, recommendation systems, robotics, game AIs, etc. Some of the exciting stuff that can likely solve some problems better in the future is the work on world models, graph neural nets, multimodality, reinforcement learning, alternatives to gradient descent, etc. I think it's debatable whether or not LLMs are a local maximum, but many of the leading AI researchers seem to think so -- Yann LeCun, for example, recently said LLMs 'are not a path to human-level AI'.
> They’ve dropped the ball over the past five years. Part of me thinks it was the war in Ukraine that did them in.
I've also been a subscriber for over a decade, and came here to say the same thing. I don't know how their teams were distributed across eastern Europe and Russia, but the war is when I'd pinpoint the quality decline starting.
I've kept my subscription for now, as nothing comes close for PHP and Symfony, but I'm actively looking to move away.
I have not worked with Polars, but I would imagine any incompatibility with existing libraries (e.g. plotting libraries like plotnine or bokeh) would quickly put me off.
It is a curse, I know. I would also choose a better interface. Performance is meh to me; I use SQL if I want to do something at scale that involves row/column data.
This is a non-issue with the Polars DataFrame's to_pandas() method. You get all the performance of Polars for cleaning large datasets, and to_pandas() gives you backwards compatibility with other libraries. In fact, plotnine is completely compatible with Polars DataFrame objects as-is.
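A minimal sketch of that workflow, assuming a local data.csv with date/value columns and matplotlib plus pyarrow installed (read_csv, filter, and to_pandas are real Polars APIs):

    import polars as pl

    # Do the heavy lifting (loading/cleaning) in Polars...
    df = pl.read_csv("data.csv")
    df = df.filter(pl.col("value").is_not_null())

    # ...then hand a pandas DataFrame to pandas-only libraries.
    pdf = df.to_pandas()
    pdf.plot(x="date", y="value")  # pandas plotting, via matplotlib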
> if they call their employees monkeys, certainly.
It seems to have decreased in the last 10 years, but calling us code-monkeys was a common derogatory reference to the software department. I didn't like being compared to a monkey randomly bashing a typewriter, but that's how things were.
"code-monkey" is a bit different, I've seen people use it to call themselves that in a positive way. Maybe Andrew meant "code-monkey" in a more positive way instead of "monkey"? But i just re-read it and to me it sounds like an insult to their intelligence, to mean as if it was one of those studies where they train a monkey to hit keys to see what happens? Like they were so dumb it was the equivalent of monkeys hitting keyboards and accidentally creating something that works?
Either way, can we at least agree that it is an insult to those people at a personal level, one that attacks who they are instead of what they did?
Like I mentioned, I've had myself and coworkers compared to monkeys in the same way. I didn't think much of it at first, but coworkers were really demoralized and kept mentioning it, and it coincided with all sorts of other hostilities from people in power.
My whole goal here wasn't to demonstrate some internet rage, but to do my part in making sure other people don't get treated like crap, especially in their workplace. If this were at my work, I'd probably just quietly look for other places to work, because I'd be afraid for my job. In this case, it's not like Microsoft employees can publicly respond in kind to Andrew without losing their jobs either. I see someone with some level of authority and a public figure abusing that to harass others.
There is no asshole badge granted to people when they achieve positions of authority, a louder voice, or great success in life. Those of us who can mount some sort of adverse response to this behavior must.