Hacker News

Hey, Gemma engineer here. Can you please share reports on the type of prompts and the implementation you used?


Hello! I tried showing it not-yet-released Redis code (llama.cpp 4-bit quants and the official web interface), and V3 can reason about the design tradeoffs, but (very understandably) Gemma 3 can't. I also tried having it write a simple tic-tac-toe Monte Carlo program, and it didn't account for ties, while SOTA models consistently do.
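For what it's worth, accounting for ties is just one extra branch in the rollout. A minimal sketch of the kind of program I mean (the structure and names here are my own illustration, not what any of the models produced):

```python
import random

# Row, column, and diagonal index triples on a 9-cell board.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def rollout(board, to_move):
    """Play random moves to the end; return 'X', 'O', or 'tie'."""
    board = board[:]
    while True:
        w = winner(board)
        if w:
            return w
        empties = [i for i, cell in enumerate(board) if cell == "."]
        if not empties:
            # Full board, no winner: this is the tie case the
            # weaker models tend to drop.
            return "tie"
        board[random.choice(empties)] = to_move
        to_move = "O" if to_move == "X" else "X"

def evaluate(board, to_move, n=1000):
    """Monte Carlo estimate: outcome counts over n random playouts."""
    counts = {"X": 0, "O": 0, "tie": 0}
    for _ in range(n):
        counts[rollout(board, to_move)] += 1
    return counts
```

If the tie branch is missing, drawn playouts either loop forever or get silently credited to one player, which skews the evaluation.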


Can you share all the recommended settings for running this LLM? It is clear that performance is very good when running on AI Studio. If possible, I'd like to use the same settings (temp, top-k, top-p, etc.) on Ollama. AI Studio only shows temperature, top-p, and output length.
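For context, this is the kind of thing I'd want to pin down in an Ollama Modelfile once the recommended values are known (the model tag and numbers below are placeholders, not the actual recommendations):

```
FROM gemma3
PARAMETER temperature 1.0
PARAMETER top_k 64
PARAMETER top_p 0.95
```

Then `ollama create my-gemma3 -f Modelfile` would bake those sampling settings into a local model tag, so every run uses them without passing options each time.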


vibe testing, vibe model engineering...



