
Even if we had a 100% private ChatGPT instance, it wouldn't fully cover our internal use case.

There is far more context to our business than can fit in 4/8/32k tokens. Even if we could fit within the 32k-token budget, running that much context on every call, 24/7, would be very expensive. Fine-tuning a base model is the only practical and affordable path for us.



You can retrieve information on demand based on what the user is asking, like this: https://github.com/openai/chatgpt-retrieval-plugin
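The core idea is to embed your documents once, then at query time rank them against the question and inject only the top matches into the prompt. A minimal sketch, using toy bag-of-words vectors in place of a real embedding model (production systems like the retrieval plugin use a vector store and a learned embedding model; the document strings and query here are made up for illustration):

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Hypothetical stand-in for a real embedding model:
    # a bag-of-words term-frequency vector.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# Example corpus (invented for illustration).
docs = [
    "Refund policy: customers may return items within 30 days.",
    "Shipping: orders ship within 2 business days.",
    "Security: all data is encrypted at rest.",
]

# Only the relevant chunk goes into the prompt, not the whole corpus.
context = retrieve("what is the refund policy", docs, k=1)
prompt = "Answer using this context:\n" + "\n".join(context)
```

This keeps the per-request token count bounded by k chunks rather than the whole knowledge base, which is the trade-off being argued about above.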



