Hacker News

Is pre-training in FP8 new?

Also, 10M input token context is insane!

EDIT: https://huggingface.co/meta-llama/Llama-3.1-405B is BF16, so yes, it seems pre-training in FP8 is new.



DeepSeek V3 was already pre-trained in FP8.
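For anyone wondering what FP8 actually buys/costs you: with only 3 mantissa bits (in the common E4M3 variant), values snap to a very coarse grid. A rough sketch of the quantization step — this is just an illustration of the E4M3 number format, not DeepSeek's or Meta's actual training recipe:

```python
import math

def quantize_e4m3(x: float) -> float:
    """Round x to the nearest FP8 E4M3 value (1 sign, 4 exponent,
    3 mantissa bits, bias 7, max normal 448). Illustration only."""
    if x == 0.0 or math.isnan(x):
        return x
    sign = math.copysign(1.0, x)
    mag = min(abs(x), 448.0)       # saturate at E4M3's largest finite value
    e = math.floor(math.log2(mag))
    e = max(e, -6)                 # below 2^-6 the format goes subnormal
    step = 2.0 ** (e - 3)          # 3 mantissa bits => 8 steps per binade
    q = round(mag / step) * step
    return sign * min(q, 448.0)
```

So e.g. 1.1 rounds to 1.125 and anything above 448 saturates — which is why FP8 training runs typically keep per-tensor scaling factors and a higher-precision master copy of the weights.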



