Hacker News

You can just feed that output into another call and have the next call continue it, since you have more than 28k of extra context. Output is generated faster per token anyway, right? So speed isn't an issue. It's just slightly more dev work (really only a couple of lines of code).
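The chaining loop is roughly this. This is a toy sketch, not any vendor's real API: `complete()` here is a hypothetical stand-in that "continues" a fixed target string in fixed-size chunks and reports whether it hit its per-call output limit, which is the signal a real client would use to decide whether to call again.

```python
# Toy demo of chaining completion calls: each call continues from the
# accumulated text, so the caller stitches chunks into one long output.
TARGET = "the quick brown fox jumps over the lazy dog " * 5
CHUNK = 10  # pretend per-call output limit (stands in for e.g. max_tokens)

def complete(text_so_far):
    # Hypothetical stand-in for a real completions API call: continues
    # TARGET from wherever the input leaves off, up to CHUNK characters,
    # and reports whether it reached the end (i.e. stopped naturally
    # rather than hitting the output limit).
    start = len(text_so_far)
    chunk = TARGET[start:start + CHUNK]
    finished = start + CHUNK >= len(TARGET)
    return chunk, finished

def generate_long(prompt):
    # Keep calling until the model stops on its own, feeding back the
    # prompt plus everything generated so far.
    output = prompt
    finished = False
    while not finished:
        chunk, finished = complete(output)
        output += chunk
    return output
```

With a real API, the "feed it back" step means appending each call's output to the conversation (or resending it as an incomplete assistant message) before the next request.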


How do you know it will have the same state of mind? And how much does that cost?


Because the state of mind is derived from the input tokens.


Is there a study, or any guarantee, that feeding an incomplete assistant response back as input makes the API pick up in exactly the same way from the same position?


That's how LLMs work: they are effectively recursive at inference time. After each token is sampled, it's fed back in, so you end up with the same model state (sampling noise aside) as if that text had been the original input prompt.
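The claim above can be illustrated with a toy deterministic "model" (think temperature-0 / greedy decoding, where the next token is a pure function of the sequence so far; none of these names come from any real library). Generating 8 tokens in one run, versus generating 3 and then resuming from that prefix, gives identical output:

```python
import hashlib

def next_token(tokens):
    # Toy deterministic "model": the next token is a pure function of
    # the entire input sequence, like greedy (temperature-0) decoding.
    h = hashlib.sha256(" ".join(tokens).encode()).hexdigest()
    return f"t{int(h[:4], 16) % 100}"

def generate(tokens, n):
    # Autoregressive loop: sample a token, append it, feed everything
    # back in, repeat.
    tokens = list(tokens)
    for _ in range(n):
        tokens.append(next_token(tokens))
    return tokens

prompt = ["hello", "world"]
full = generate(prompt, 8)                   # one uninterrupted run
resumed = generate(generate(prompt, 3), 5)   # stop after 3, then resume
```

With real APIs the caveat is the "(sampling noise aside)" part: at temperature > 0 the continuation is drawn from the same distribution but isn't guaranteed to be token-for-token identical.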


For LLMs in general, sure. My question is whether it's the same in practice for LLMs behind said API; as far as I can tell, there's no official documentation that we'll get exactly the same result.

And no one here has touched on how high a multiple the cost is, so I assume it's pretty high.



