Hey, feel free to draw your own conclusions; AI quality is ultimately a subjective topic. For what it's worth, though, Google's open models have benchmarked competitively with GPT-3 for several years now: https://blog.research.google/2021/10/introducing-flan-more-g...
The Flan quantizations are also still pretty close to SOTA for text transformers. Their quality-to-size ratio is much better than that of a 7B Llama finetune, and they appear to be what Apple based its new autocorrect on.
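If you want to kick the tires yourself, here's a minimal sketch of loading a quantized Flan checkpoint with the standard Hugging Face transformers + bitsandbytes stack. The model id and flags are my assumptions about a reasonable local setup, nothing to do with Apple's deployment:

    # Sketch: run Flan-T5 in 8-bit locally (assumes transformers,
    # accelerate, and bitsandbytes are installed; GPU recommended).
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    model_id = "google/flan-t5-xl"  # ~3B params; far smaller than a 7B Llama
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSeq2SeqLM.from_pretrained(
        model_id,
        device_map="auto",   # let accelerate place weights on GPU/CPU
        load_in_8bit=True,   # 8-bit quantization via bitsandbytes
    )

    inputs = tokenizer(
        "Translate to German: How are you?", return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

At 8-bit the xl checkpoint fits in a few GB of VRAM, which is the size class where the quality-to-size comparison against 7B finetunes gets interesting.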