Hacker News
LLM Inference at the Memory Wall (lamini.ai)
1 point by gdiamos on Sept 7, 2024 | past
How to evaluate performance of LLM inference frameworks (lamini.ai)
18 points by matt_d on Sept 7, 2024 | past | 2 comments
Lamini Memory Tuning: 10x Fewer Hallucinations (lamini.ai)
128 points by galeos on June 13, 2024 | past | 57 comments
Faster LLM Inference: Lamini Inference with 52x more RPM than vLLM (lamini.ai)
1 point by dpflan on May 23, 2024 | past | 1 comment
LLMs hallucinate in critical enterprise scenarios (lamini.ai)
6 points by gdiamos on March 28, 2024 | past | 1 comment
Training LLMs in the Wild Wild West – AMD multi-node (lamini.ai)
2 points by gdiamos on March 15, 2024 | past
Lamini LLM Finetuning on AMD ROCm: A Technical Recipe (lamini.ai)
2 points by mariuz on Oct 28, 2023 | past
Lamini LLM Finetuning on AMD ROCm: A Technical Recipe (lamini.ai)
6 points by dhruvdh on Oct 25, 2023 | past | 4 comments
Lamini and AMD: Paving the Road to GPU-Rich Enterprise LLMs (lamini.ai)
2 points by danzheng on Oct 12, 2023 | past
Lamini and AMD: Paving the Road to GPU-Rich Enterprise LLMs (lamini.ai)
1 point by hasheddan on Sept 28, 2023 | past
1.109B times faster serving of finetuned LLMs (lamini.ai)
2 points by gdiamos on Aug 17, 2023 | past | 1 comment
Lamini – LLM Engine – Connect Your Enterprise Data Warehouse (lamini.ai)
1 point by rinesh on April 29, 2023 | past
Launch Lamini: The LLM Engine for Rapidly Customizing Models as Good as ChatGPT (lamini.ai)
123 points by sharonzhou on April 28, 2023 | past | 70 comments
