Hello! I tried showing it Redis code that hasn't been released yet (llama.cpp 4-bit quants and the official web interface), and V3 can reason about the design tradeoffs, but (very understandably) Gemma 3 can't. I also tried to make it write a simple Monte Carlo tic-tac-toe program, and it didn't account for ties, while SOTA models consistently do.
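For concreteness, the tie handling I expected looks something like the minimal sketch below (my own code, not the model's output; pure random playouts scored win = +1, loss = -1, tie = 0, with the board as a flat 9-element list):

```python
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def playout(board, player):
    """Play random moves to the end; return the winner, or None for a tie."""
    board = board[:]
    while True:
        w = winner(board)
        if w is not None:
            return w
        empties = [i for i, v in enumerate(board) if v is None]
        if not empties:          # board full with no winner: a tie, not a win
            return None
        board[random.choice(empties)] = player
        player = 'O' if player == 'X' else 'X'

def best_move(board, player, playouts=500):
    """Score each legal move by average playout result; ties count as 0."""
    opponent = 'O' if player == 'X' else 'X'
    scores = {}
    for move in (i for i, v in enumerate(board) if v is None):
        total = 0
        for _ in range(playouts):
            b = board[:]
            b[move] = player
            w = playout(b, opponent)
            if w == player:
                total += 1
            elif w is not None:  # opponent won; w is None (tie) adds nothing
                total -= 1
        scores[move] = total / playouts
    return max(scores, key=scores.get)

if __name__ == '__main__':
    print(best_move([None] * 9, 'X'))   # usually picks the centre (4)
```

The bug I kept getting was a playout loop that only terminated on a win, so a full board either hung or was miscounted as a loss.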
Can you share all the recommended settings to run this LLM? The performance is clearly very good when running on AI Studio. If possible, I'd like to use all the same settings (temperature, top-k, top-p, etc.) on Ollama, but AI Studio only shows temperature, top-p, and output length.
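For what it's worth, here's how I'd pass those knobs to Ollama once the recommended values are known (a sketch using the `ollama` Python package; the model tag and every value below are placeholders I made up, not official recommendations):

```python
# Sketch: mirror AI Studio's sampling settings when calling Ollama.
# Assumes `pip install ollama` and a locally pulled model; all values
# and the tag are placeholders to be replaced with the real settings.
import ollama

response = ollama.chat(
    model='gemma3:27b',                   # example tag; use whatever you pulled
    messages=[{'role': 'user', 'content': 'Write a haiku about Redis.'}],
    options={
        'temperature': 1.0,               # AI Studio's "Temperature"
        'top_p': 0.95,                    # AI Studio's "Top P"
        'num_predict': 8192,              # AI Studio's "Output length"
        'top_k': 64,                      # not shown in AI Studio; placeholder
    },
)
print(response['message']['content'])
```

The same option names also work as `PARAMETER` lines in a Modelfile, or interactively via `/set parameter` in the `ollama run` REPL.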