How does this benchmark against Reflection, which was fine-tuned to do the same thing: provide a detailed Chain of Thought with self-corrections, then write out a final answer?
Pretty sure Reflection-70B was a complete scam. They did the ol' bait and switch: the model they actually uploaded underperformed their own published benchmarks, and the "secret API" turned out to be just a GPT-4 and Claude wrapper.
I'm aware of the issue with their purported benchmarks; in fact, some independent testing had Reflection-70B performing a bit worse than plain Llama-3.1 70B. Does G1 do any better?
They do seem to have a fine-tune of Llama 3 70B available for download, so it's "real" in that sense. A fine-tune ought to be better behaved than a pure system-prompt approach.
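For anyone who hasn't looked at what the system-prompt approach amounts to: it's basically instructing the model to reason in steps, second-guess itself, and then emit a tagged final answer, with the client parsing that tag out. A minimal sketch along those lines (not g1's actual prompt or code; the endpoint, model name, and prompt wording below are placeholders, assuming an OpenAI-compatible server):

```python
# Sketch of the "pure system prompt" approach: ask the model to reason step by
# step, flag and correct its own mistakes, then emit a tagged final answer.
# The base_url, api_key, and model name are placeholders, not g1's real config.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

SYSTEM_PROMPT = (
    "You are an expert problem solver. Think through the problem step by step. "
    "After each step, check your work; if you spot an error, say 'Wait, that is "
    "wrong because...' and redo that step. When you are confident, write a line "
    "starting with 'Final answer:' followed by the answer alone."
)

def solve(question: str) -> str:
    response = client.chat.completions.create(
        model="llama-3-70b-instruct",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0.2,
    )
    full = response.choices[0].message.content
    # The chain of thought stays in `full` for inspection; return only the answer.
    for line in reversed(full.splitlines()):
        if line.lower().startswith("final answer:"):
            return line.split(":", 1)[1].strip()
    return full  # model ignored the format; fall back to the whole output

if __name__ == "__main__":
    print(solve("How many times does the letter 'r' appear in 'strawberry'?"))
```

Nothing stops a base instruct model from ignoring or botching the format, which is exactly why a fine-tune that bakes the behavior into the weights should, in principle, be more reliable.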