accrual 3 months ago | on: Show HN: Shoggoth Mini – A soft tentacle robot pow...
I wondered the same. Perhaps a local model cached on a 16GB or 24GB graphics card would perform well too. It would have to be a quantized/distilled model, but that might be sufficient, especially with some additional training, as you mentioned.
jszymborski 3 months ago
If Qwen 0.6B is suitable, then it could fit in 576MB of VRAM[0].
[0] https://huggingface.co/unsloth/Qwen3-0.6B-unsloth-bnb-4bit
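For anyone curious, a minimal sketch of loading that checkpoint with Hugging Face transformers, assuming bitsandbytes is installed (the 4-bit quantization config ships inside the checkpoint itself, so no extra quantization setup is needed):

    # Rough sketch: load the pre-quantized 4-bit Unsloth checkpoint.
    # Requires: transformers, bitsandbytes, a CUDA-capable GPU.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "unsloth/Qwen3-0.6B-unsloth-bnb-4bit"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",           # place weights on the available GPU
        torch_dtype=torch.bfloat16,  # compute dtype for non-quantized layers
    )

    # Sanity-check the ~576MB weight figure cited above.
    print(f"{model.get_memory_footprint() / 1024**2:.0f} MiB of weights")

Note the footprint reported here covers weights only; the KV cache and activations add some overhead on top, scaling with context length.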
numpad0 3 months ago
Or on a single Axera AX630C module:
https://www.youtube.com/watch?v=cMF6OfktIGg&t=25s
otabdeveloper4 3 months ago
16GB is *way* overkill for this.