> I think the chance they're going to create a "superintelligence" is extremely small.
I'd say the chance that we never create a superintelligence is extremely small. You either have to believe the human brain has somehow achieved the maximum intelligence possible, or that progress on AI will just stop for some reason.
Most forecasters on prediction markets are predicting AGI within a decade.
Why are you so sure that progress won't just fizzle out at 1/1000 of the performance we would classify as superintelligence?
> that progress on AI will just stop for some reason
Yeah, it might. I mean, I'm not blind and deaf, there's been tremendous progress in AI over the last decade, but it's still a long way from here to anything superintelligent. If incremental improvement of the current state of the art won't get us to superintelligence, can we be sure the fundamental discoveries required will ever be made? Sometimes important paradigm shifts and discoveries take a hundred years just because nobody made the right connection.
Is it certain that every mystery will be solved eventually?
Aren't we already past 1/1000th of the performance we would classify as superintelligence?
There isn't an official precise definition of superintelligence, but it's usually vaguely defined as smarter than humans. Twice as smart would be sufficient by most definitions. We can be more conservative and say we'll only consider superintelligence achieved when it gets to 10x human intelligence. Under that conservative definition, 1/1000th of the performance of superintelligence would be 1% as smart as a human.
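Spelling the arithmetic out, under that assumed 10x bar (my conservative threshold, not an official one):

10x human / 1000 = 0.01x human, i.e. 1% of human-level intelligence.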
We don't have a great way to compare intelligences. ChatGPT already beats humans on several benchmarks. It does better than college students on college-level questions. One study found it gets higher grades on essays than college students. It's not as good as humans on long, complex reasoning tasks. Overall, I'd say it's smarter than a dumb human in most ways, and smarter than a smart human in a few ways.
I'm not certain we'll ever create superintelligence. I just don't see why you think the odds are "extremely small".