I agree. Rather than (what I assume is) E2E text -> video/audio generation, training a model to use the community fork [1] of manim, the animation library 3blue1brown created for his videos, would likely produce a better result.
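For context, this is the kind of output such a model would be trained to emit: a short, declarative scene that the Manim Community renderer turns into video deterministically. A minimal sketch, assuming the Manim Community API (scene and file names are hypothetical):

    # Hypothetical scene a model could emit; uses real Manim Community API.
    from manim import Scene, MathTex, Write

    class PythagorasIntro(Scene):
        def construct(self):
            # Typeset a formula and animate it being written out
            eq = MathTex(r"a^2 + b^2 = c^2")
            self.play(Write(eq))
            self.wait()

    # Render with: manim -pql scene.py PythagorasIntro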
[1] https://github.com/ManimCommunity/manim/