Surely it's a question of prompting with some context (in UI mode), or with the additional kicker of temperature (if using the API)?
At the very least, wouldn't a setup prompt such as "Give me 5 scenarios for a text adventure game" break the sameness?
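For what it's worth, here's a rough sketch of the temperature kicker I mean, using the OpenAI Python client (the model name is just a placeholder; the point is only that the same prompt at different temperatures should, in theory, diverge):

```python
# Sketch: same prompt at different temperatures, to see how much the
# outputs actually diverge. Model name is a placeholder/assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Give me 5 scenarios for a text adventure game."

for temperature in (0.2, 0.8, 1.2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",          # placeholder model
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,      # higher = more variety, in theory
    )
    print(f"--- temperature={temperature} ---")
    print(response.choices[0].message.content)
```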
There have long been theories that OpenAI and other LLM providers cache some responses - that could be another hypothesis.