LLMs should only be used where hallucinations can be tolerated: either the stakes are low, like in a video game, or competent human review exists. What you describe is neither. You may be memorizing something that isn't low stakes (and if it is low stakes, why bother memorizing? Just make it up as you go), and if you're using study tools you're probably not (yet) competent enough to catch the mistakes yourself.
Yeah, my immediate thought when reading this was: it's great they've replaced a formula with a better formula, but couldn't it be replaced by something smarter? An LLM could in theory be content-aware, taking into account that this card reinforces these associations, which makes this other card easier to recall but that third card harder. It might even be able to say: this card relates to a more central concept, so it should get priority, because that will pay off in the longer run.
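Roughly the shape of what I mean, as a toy sketch: score each card not just by how overdue it is, but also by how central its concept is and how much reviewing it spills over onto related cards. The relation graph, centrality values, and weights below are all invented for illustration; a real content-aware scheduler would have to estimate them from the card content.

    from dataclasses import dataclass, field

    @dataclass
    class Card:
        name: str
        days_overdue: float
        related: list = field(default_factory=list)  # names of associated cards
        centrality: float = 0.0  # how central the concept is (made-up scale 0..1)

    def priority(card, by_name, w_overdue=1.0, w_central=1.5, w_spill=0.3):
        # Reviewing this card also nudges its neighbours, so count how overdue
        # the related cards are as a "spillover" benefit.
        spillover = sum(by_name[r].days_overdue for r in card.related if r in by_name)
        return w_overdue * card.days_overdue + w_central * card.centrality + w_spill * spillover

    cards = [
        Card("ohm's law", 2.0, related=["resistance"], centrality=0.9),
        Card("resistance", 0.5, related=["ohm's law"], centrality=0.4),
        Card("obscure constant", 3.0, centrality=0.1),
    ]
    by_name = {c.name: c for c in cards}
    for c in sorted(cards, key=lambda c: priority(c, by_name), reverse=True):
        print(f"{c.name}: {priority(c, by_name):.2f}")

With these made-up numbers the central card ("ohm's law") jumps ahead of a more overdue but isolated one, which is the kind of trade-off a plain per-card formula can't express.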
In the long run, the appeal of machine learning to me is that we can move up the stack of what we optimize for: instead of targeting a proxy metric, say a certain recall ratio, we might be able to directly target the thing we actually care about, say the ability to use this knowledge in practice. Moving upwards in the teleological hierarchy.