Hacker News

I have a few private “vibe check” questions and the 4-bit QAT 27B model answered them all correctly. I’m kind of shocked at the information density locked in just 13 GB of weights. If anyone at DeepMind is reading this — Gemma 3 27B is the single most impressive open source model I have ever used. Well done!
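The 13 GB figure is consistent with the quantization: at 4 bits per weight, 27 billion parameters come to roughly 13.5 GB, before accounting for things like embedding tables that may be stored at higher precision. A quick back-of-the-envelope check (the numbers here are the ones from the comment, not measured file sizes):

```python
# Rough size estimate for a 4-bit quantized 27B-parameter model.
params = 27e9            # 27 billion weights
bits_per_weight = 4      # 4-bit QAT quantization
size_bytes = params * bits_per_weight / 8
size_gb = size_bytes / 1e9
print(f"{size_gb:.1f} GB")  # ~13.5 GB, close to the observed 13 GB download
```

Quantization block metadata (scales, zero points) adds a little on top, which is why real GGUF files land slightly above the raw bit count.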



I tried to use the -it models for translation, but it completely failed at translating adult content.

I think this means I either have to do my own instruction tuning on the -pt model or use another provider :(


Try mradermacher/amoral-gemma3-27B-v2-qat-GGUF




Have you tried Mistral Small 24b?


My current architecture is an on-device model for fast translation, which then gets replaced by a slower, higher-quality translation (via an API call) when it's ready.

24b would be too big to run on device, and I'm trying to keep my cloud costs low (meaning I can't afford to host even a small 24b 24/7).



