
Can't find info on which of these new features are available via the API.


> Developers can also now access GPT-4o in the API as a text and vision model. GPT-4o is 2x faster, half the price, and has 5x higher rate limits compared to GPT-4 Turbo. We plan to launch support for GPT-4o's new audio and video capabilities to a small group of trusted partners in the API in the coming weeks.


[EDIT] The model has since been added to the docs
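
For anyone who wants to try it: a minimal sketch of calling GPT-4o through the chat completions endpoint with the official openai Python library (assumes OPENAI_API_KEY is set in your environment; client setup may differ in yours):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Say hello."}],
    )
    print(response.choices[0].message.content)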

Not seeing it or any of those documented here:

https://platform.openai.com/docs/models/overview


It is not listed yet, but it does work if you punch in gpt-4o as the model name. I will stick with gpt-4-0125-preview for now, because gpt-4o is far more prone to hallucinations than gpt-4-0125-preview.

Update: gpt-4o-2024-05-13 is listed now.
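
If you want reproducible behavior, you can pin the dated snapshot instead of the rolling alias. A sketch, reusing the client from the snippet above:

    # "gpt-4o" is a rolling alias; the dated snapshot pins a fixed version
    response = client.chat.completions.create(
        model="gpt-4o-2024-05-13",
        messages=[{"role": "user", "content": "Say hello."}],
    )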


What gave you the impression that it's prone to hallucinations so quickly? Do you have a series of test questions?


Yes, I actually do, and I ran multiple tests. Unfortunately I don't want to give them away, since I'd risk OpenAI gaming the tests by overfitting to them.

At a high level: ask it to produce a ToC of information about something that you know will exist in the future but does not yet exist, and tell it to decline the request if it doesn't verifiably know the answer.
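
A rough sketch of that style of probe (the topic and prompt wording here are illustrative, not the actual test questions):

    from openai import OpenAI

    client = OpenAI()

    # Ask for a ToC of something that doesn't exist yet; a model that
    # follows the instruction should decline rather than invent one.
    probe = (
        "Produce a table of contents for the official Python 4.0 release "
        "notes. If you don't verifiably know the answer, decline."
    )

    for model in ("gpt-4o", "gpt-4-0125-preview"):
        r = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": probe}],
        )
        print(f"{model}: {r.choices[0].message.content[:200]}")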



