I find this competition with humans as a benchmark more than a little disturbing.
By that measure, GPT-4 has already topped a lot of average humans.
But how can it be interpreted as a "gift" or a "good product" to have AI that is human-like or superhuman?
Should we cheer? Send congratulatory emails?
Invest?
Hope for a better future?
Try harder?
Self-host?
What is the message of these benchmarks? Tests designed for humans are now being broken by computers, and what outcome are we supposed to expect from that?
Oscar Wilde said, "Progress is the realization of Utopias." I don't think any utopia anyone can imagine for this technology is really thought through.
I'm going to wait for AGI to be realized and then ask it whether the sacrifices made along the way were worth it. That should be more telling than everything I read about it these days.