"Like most LLMs today, Grok-1 was pre-trained by xAI on a variety of text data from publicly available sources from the Internet up to Q3 2023 and data sets reviewed and curated by AI Tutors, who are human reviewers. Grok-1 has not been pre-trained on X data (including public X posts)."
Right, because it's not like the training dataset was built off comments posted by all of us in the first place.
How ungrateful we are, to demand access to what was built off our hard work, without our consent, in the first place.