This is what I've been talking about all year. It is such a relief to see it actually happen.
In summary: the search for AGI is dead. Intelligence was already here, and more general than we realized, the whole time. Humans are not special as far as intelligence goes. Just look at how often people predict that an AI cannot do X, Y, or Z; then, when an AI does one of those things, they say, "well, it still cannot do A, B, or C."
What is next: this trend will accelerate as people realize that AI's power lies not in replacing human tasks with AI agents, but in letting AI operate in latent spaces and domains we never even thought of trying.
Generated content is worthless without filtering and validation.
I predict that some form of testing, validation, or ranking will be developed to filter generated content. Each domain has its own rules: code and math need executable checks, text needs fact-checking, problem solving benefits from contrasting the results of multiple independent solutions, and art calls for aesthetic scoring.
But validation is probably harder than learning to generate in the first place, a situation likely similar to closing the last percent in self-driving.
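One of the cheapest validators mentioned above, contrasting the results of multiple solutions, can be sketched as a majority vote over independently generated answers. The function name, the normalization step, and the sample answers below are all illustrative assumptions, not part of any real pipeline:

```python
from collections import Counter

def filter_by_consensus(candidates, normalize=lambda s: s.strip()):
    """Keep only candidates whose normalized answer matches the majority.

    With no ground truth available, agreement across independent
    generations serves as a cheap validation signal: answers that
    disagree with the consensus are discarded.
    """
    if not candidates:
        return []
    counts = Counter(normalize(c) for c in candidates)
    winner, _ = counts.most_common(1)[0]
    return [c for c in candidates if normalize(c) == winner]

# Hypothetical example: five generated answers to the same problem.
answers = ["42", " 42", "41", "42 ", "17"]
print(filter_by_consensus(answers))  # keeps the three answers agreeing on "42"
```

This only filters out inconsistency, not shared mistakes: if most generations converge on the same wrong answer, consensus confidently passes it through, which is exactly why the harder, domain-specific validators still matter.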