
FTR, LLM embeddings are a lot less efficient than ASCII for communication and are likely to stay that way.


Absolutely not. Transformer layers already communicate using embeddings, and ASCII would be far less efficient there.
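
A minimal toy sketch of that point (shapes are illustrative, not any real model's code): each layer consumes and produces a (seq_len, d_model) block of embeddings, so the inter-layer "communication" is already in embedding space, never text.

    import numpy as np

    # Toy residual stream: every layer maps a (seq_len, d_model)
    # block of embeddings to another block of the same shape.
    seq_len, d_model = 8, 64        # GPT-3 davinci uses d_model = 12288

    def toy_layer(x):
        # Stand-in for attention + MLP: any function that returns
        # an embedding-shaped residual stream.
        return x + np.tanh(x)

    x = np.random.randn(seq_len, d_model).astype(np.float32)
    x = toy_layer(toy_layer(x))     # layers "talk" in embedding space
    print(x.shape)                  # (8, 64)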


And how many bits are in an embedding vector?


12k for GPT-3 (12,288 dimensions).


Those aren't bits, though; they're weights.


So somehow ASCII is less information-dense than 12k 32-bit floats per token?
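
A rough back-of-the-envelope version of that comparison, assuming GPT-3's 12,288-dimensional embeddings in fp32 (as the parent comment says) and ~4 ASCII characters per token on average (the 4-chars figure is an assumption). This counts raw storage bits, not actual information content:

    # Raw bits per token as an embedding vector
    EMBED_DIM = 12288                 # GPT-3 davinci hidden size
    BITS_PER_FLOAT = 32               # fp32, per the parent comment
    embedding_bits = EMBED_DIM * BITS_PER_FLOAT

    # Raw bits per token as ASCII text
    CHARS_PER_TOKEN = 4               # rough average; an assumption
    BITS_PER_CHAR = 8                 # one byte per ASCII character
    ascii_bits = CHARS_PER_TOKEN * BITS_PER_CHAR

    print(embedding_bits)             # 393216
    print(ascii_bits)                 # 32
    print(embedding_bits // ascii_bits)  # 12288x more raw bits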



