Even if we had a 100% private ChatGPT instance, it wouldn't fully cover our internal use case.
There is far more context to our business than can fit in 4/8/32k tokens. Even if we could squeeze into the 32k-token budget, running at that context length 24/7 would be very expensive. Fine-tuning a base model is the only practical and affordable path for us.