I don’t think we can declare a plateau just based on this. Actually, given that we have nothing but benchmarks and cherry-picked examples, I would not be so quick to believe GPT-4V has been bested. PaLM 2 was generally useless and plagued by hallucinations in my experience with Bard. It’ll be several months till Gemini Pro is even available. We also don’t know basic facts like the number of parameters or training set size.

I think the real story is that Google is badly lagging their competitors in this space and keeps issuing press releases claiming they are pulling ahead. In reality they are getting very little traction vs. OpenAI.

I’ll be very interested to see how LLMs continue to evolve over the next year. I suspect we are close to a model that will outperform 80% of human experts across 80% of cognitive tasks.




> It’ll be several months till Gemini Pro is even available.

Pro is available now - Ultra will take a few months to arrive.


How could you possibly believe this when the improvement curve has been flattening? The biggest jump was GPT-2 to GPT-3, and everything after that has been steady but marginal improvement. What you’re suggesting is like people in the 60s seeing us land on the moon and then thinking Star Trek warp drive must be 5 years away. Then again, people back in the day thought we’d all be driving flying cars by now. I guess people just have fantastical ideas about tech.


It is hard to quantify, but subjectively (and certainly in terms of public perception), each GPT release has been a massive leap over the previous model. Maybe GPT-2 to GPT-3 was the largest, but I'm not sure how you're judging that a field is stagnating based on one improvement in a series of revolutionary improvements being somewhat more significant than the others. I think most would agree the jump from GPT-3 to GPT-4 was not marginal, and I expect to be borne out when the jump from GPT-4 to GPT-5 isn't either. There may be a wall, but I don't see a good argument that we've hit it yet. If GPT-5 releases and is only marginally better, that will be evidence in that direction, but I'm pretty confident that won't happen.

Your analogy is odd because you're just posing a situation that is analogous to what things would look like if you turned out to be right. From the recent rate of improvement, I'd say we're more at the first-flight-test stage. Yes, the jump from a vehicle that can't fly to one that can is in some sense a 'bigger leap' than the others in the development cycle, but we still eventually got to the moon.


I hope you're right, because it would be far more entertaining. More realistically, if you look at people's past predictions of the future... well, you already know. AI researchers in the 60s also thought AGI was just around the corner, especially when their programs started playing chess and other games. Maybe we're no better than they were at predicting things, but every generation thinks they're right.



