Davidzheng's comments

Capability today and capability next year will probably be very different in terms of reliability.


As someone who uses LLMs to write code every day, I haven't seen huge progress since last year, so I'm also not that sure about next year.


It's a bit strange to talk about being stuck when the most recent breakthrough is less than a year old.


I’m not sure what you mean by breakthrough, but if you’re talking about Deepseek, it’s more of an incremental improvement than a breakthrough.


I feel like Google intentionally doesn't want people to be as excited as they should be. This is a very good model, definitely the best available model today.


My initial impression is that this might be the first AI model to be reliably helpful as a research assistant in pure mathematics (o3-mini-high can be helpful, but is more prone to hallucinations).


Have you tried o1-pro?


Honestly, someone should scrape the algebraic topology Discord for AI training; it'd make a nice training set.


Also, there's no clear way to verify the solution: there could easily be multiple rules that work on the same examples.
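A minimal sketch of that ambiguity (toy rules invented for illustration, not taken from any benchmark): two rules that agree on every given example but disagree on an unseen input, so passing the examples can't confirm which rule is the intended solution.

    # Two made-up rules that both fit the given (input, output) examples
    # exactly, yet diverge on a new input, so the examples alone can't
    # distinguish between them.
    examples = [(1, 2), (2, 4), (3, 6)]

    rule_a = lambda x: 2 * x                                # "double the input"
    rule_b = lambda x: 2 * x + (x - 1) * (x - 2) * (x - 3)  # also fits x = 1, 2, 3

    assert all(rule_a(i) == o and rule_b(i) == o for i, o in examples)
    print(rule_a(4), rule_b(4))  # 8 vs 14: same examples, different "solutions"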


OpenAI will probably be >60% within three months, if not immediately, at this $1000/question level of compute (which is the right approach, tbh; we should throw compute at problems whenever possible, since that's the main advantage of silicon intelligence).


Their own admission that intelligence is a meaningless metric without a bound on compute is one of the main reasons AI will overpower human intelligence soon: simple scaling is very effective.


Tbh, such a big jump from current capability would already be ASI.


Disagree. Now that great open models are available, I think there's less need for huge amounts of training data; you can just post-train.


I mean, they have a verifier, so couldn't they get to 90% just by having the net generate candidates at random and testing them against the verifier until one is numerically correct? I think the final solve rate is less important; the generality of the approach matters more.


No, they specifically test for this (the "RL" case). In particular, they cannot do this with random generation, which is very interesting.


But it depends on how many attempts you let it generate. The right comparison is to spend the test-time RL compute on plain generation and compare success rates (if you generate for long enough, you will eventually hit the answer by chance).
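As a rough sketch of the baseline I have in mind (all names here are hypothetical stand-ins, not the actual model or benchmark API): sample candidate answers and accept the first one the verifier passes, with the attempt budget matched to the test-time RL compute so the success rates are comparable.

    import random

    def sample_until_verified(generate_candidate, verifier, budget):
        """Draw candidates until the verifier accepts one or the budget runs out.

        Returns (answer, attempts_used), with answer None on failure.
        """
        for attempt in range(1, budget + 1):
            candidate = generate_candidate()
            if verifier(candidate):
                return candidate, attempt
        return None, budget

    # Toy usage: the "task" is to hit a hidden integer, and the verifier checks it.
    # In the real comparison, budget would be chosen so this loop costs roughly as
    # much compute as the test-time RL run it is being compared against.
    secret = 42
    answer, attempts = sample_until_verified(
        generate_candidate=lambda: random.randint(0, 99),
        verifier=lambda x: x == secret,
        budget=1000,
    )
    print(answer, attempts)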

