On: TScale – Distributed training on consumer GPUs

happyPersonR | 13 days ago
Pretty sure llama.cpp can already do that.
TYMorningCoffee | 13 days ago
I forgot to clarify: I meant dealing with the network bottleneck.
moralestapia | 12 days ago
Just my two cents from experience: any sufficiently advanced LLM training or inference pipeline eventually figures out that the real bottleneck is the network!
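
To make that point concrete, here is a minimal back-of-envelope sketch in Python. It assumes plain data-parallel training with a standard ring all-reduce; the model size, worker count, link speed, and per-step compute time are illustrative assumptions, not figures from TScale or llama.cpp.

    # Back-of-envelope: why the network dominates data-parallel training
    # over consumer links. All numbers below are illustrative assumptions.

    def ring_allreduce_seconds(param_bytes: float, workers: int, link_gbps: float) -> float:
        """Time to all-reduce a gradient buffer with the ring algorithm.

        Each worker sends and receives 2 * (n - 1) / n of the payload.
        """
        payload = 2 * (workers - 1) / workers * param_bytes
        return payload / (link_gbps * 1e9 / 8)  # bits/s -> bytes/s

    # Assumed setup: 7B parameters, fp16 gradients, 8 consumer GPUs
    # connected over 1 Gbit/s Ethernet (typical home networking).
    params = 7e9
    grad_bytes = params * 2               # fp16 = 2 bytes per parameter
    comm = ring_allreduce_seconds(grad_bytes, workers=8, link_gbps=1.0)

    compute = 2.0                         # assumed compute seconds per step

    print(f"comm per step:    {comm:6.0f} s")   # ~196 s with these numbers
    print(f"compute per step: {compute:6.0f} s")
    print(f"comm / compute:   {comm / compute:6.0f}x")

With those assumptions, synchronizing gradients takes minutes per step while the compute takes seconds, which is why distributed-training schemes aimed at consumer hardware lean on tricks like gradient compression or less frequent synchronization.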