Hacker News
Der_Einzige | 65 days ago | on: Lossless LLM compression for efficient GPU inferen...
4-bit quants of DeepSeek or Llama 3 405B already fit on those GPUs and are purported to have almost zero loss compared to the full model. Doesn't seem like that big of a deal given this.
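The claim above comes down to simple memory arithmetic. A minimal sketch of the footprint comparison (the parameter count is the model's nominal size; the per-GPU figures and the `weight_memory_gb` helper are illustrative assumptions, not from the thread, and runtime overheads like the KV cache are ignored):

```python
# Back-of-the-envelope weight-storage math for a 405B-parameter model.
# Hypothetical helper for illustration; 1 GB = 1e9 bytes here.
def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate memory needed to store the weights, in GB."""
    return n_params * bits_per_weight / 8 / 1e9

params = 405e9                          # Llama 3 405B
fp16 = weight_memory_gb(params, 16)     # ~810 GB: well beyond a single node of 80 GB GPUs
int4 = weight_memory_gb(params, 4)      # ~202.5 GB: fits across a few 80 GB GPUs
```

This 4x shrink is why 4-bit quantization makes such large models practical on commodity multi-GPU servers, provided the accuracy loss is as small as reported.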