
The words it autocompleted were simply the most likely tokens (chains of word fragments) to come next in a hypothetical conversation, given the question as context and everything else it was trained on through the end of 2021.
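A toy sketch of what "most likely next token given the context" means, with a made-up three-word vocabulary and hand-picked scores standing in for what a real model's network would compute over ~100k subword tokens:

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def most_likely_next(vocab_scores):
    """Pick the highest-probability continuation token."""
    tokens = list(vocab_scores)
    probs = softmax([vocab_scores[t] for t in tokens])
    return max(zip(tokens, probs), key=lambda tp: tp[1])

# Hypothetical scores a "model" might assign to candidate next tokens
# after seeing the context "The capital of France is".
scores = {"Paris": 4.1, "London": 1.3, "banana": -2.0}
token, p = most_likely_next(scores)
print(token, round(p, 3))  # the single most probable continuation
```

An actual LLM repeats this step autoregressively: append the chosen token to the context, re-score, and pick again, which is all "autocomplete" amounts to here.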


I wonder what an LLM would say if it had, say, two orders of magnitude more parameters and 46 years of training like I do, and if it were trained up to the millisecond with new inputs from millions of individual sensors. It might say:

> I wonder what an LLM would say if it had...



