It's an open source language model with seven billion parameters (a measure of its complexity) and a longer-than-typical sequence length of 8K tokens, which lets you provide more context when querying the model. For example, you can better generate text in someone else's voice by feeding it a longer sample of their work.
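To make the long-context point concrete, here's a minimal sketch of what "a longer sample of their work" looks like in practice, assuming a Hugging Face-style checkpoint; the model name and sample file are placeholders, not a specific recommendation.

    # Minimal sketch: stuff a long "voice sample" into the prompt of a 7B,
    # 8K-context causal LM via Hugging Face transformers. The checkpoint name
    # and sample file below are hypothetical placeholders.
    from transformers import AutoTokenizer, AutoModelForCausalLM

    model_name = "some-org/open-7b-8k"  # hypothetical 7B model with an 8K context window
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    voice_sample = open("author_sample.txt").read()  # several thousand tokens of the author's prose
    prompt = voice_sample + "\n\nContinue in the same voice:\n"

    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
    # Decode only the newly generated tokens, not the echoed prompt.
    print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))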
The number of parameters seems meaningless when the training sets are dogshit and hardened old gum chipped from the shoes of Gregslist and FleaBay.
OTOH, there ought to be a web app that can pinch out nonrepetitive, coherent ~100-page trashy romance novels in the style of any named author, given open source or specific texts or transcripts with enough original input volume: Churchill, the Unabomber, a psychotically happy kindergarten child-development IEP manual writer, The Dude, Walter (aggro gun nut), Bob Ross, Grace Hopper, Ayn Rand, LBJ, the Dalai Lama, Hitler, Kanye (Ye), Bhad Bhabie, and the King James Bible. Ethical and generation-safety features be damned; it'd be generating fucking^2 art for hilarious entertainment purposes. How does one stretch limited training input, maybe with human or automated validation of the output, so the pipeline discards generations that get stuck on repetitive nonsense (sketched below)?
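On that last question, one crude automated-validation pass is to score each generated chapter for verbatim n-gram repetition and regenerate anything that looks degenerate. The function names and thresholds below are assumptions to tune, not a published method:

    # Crude output-validation sketch: flag generations that loop on themselves.
    from collections import Counter

    def repetition_score(text: str, n: int = 4) -> float:
        """Fraction of n-grams that are verbatim repeats (0 = fresh text, near 1 = stuck in a loop)."""
        words = text.split()
        if len(words) < n:
            return 0.0
        ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
        counts = Counter(ngrams)
        repeated = sum(c - 1 for c in counts.values() if c > 1)
        return repeated / len(ngrams)

    def looks_degenerate(chapter: str, threshold: float = 0.15) -> bool:
        # Reject and regenerate any chapter where too many 4-grams repeat verbatim.
        return repetition_score(chapter) > threshold

Pairing a filter like that with sampling-side knobs (a repetition penalty or a no-repeat-n-gram constraint at generation time) is the usual way to keep 100 pages from collapsing into the same paragraph over and over.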
https://en.wikipedia.org/wiki/Foundation_models