Several years ago I implemented some very basic rules outlining distance metrics between two chords and ran some multiobjective evolutionary algorithms to generate, say, a 16-bar progression while trying to minimize the distance of each jump between subsequent chords. I added two or three other objective functions for judging the progressions against my idea of structure (e.g., starting and ending on the same chord), and found the results to be very promising.
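To give a flavor of the approach, here's a minimal sketch of what that kind of setup might look like. Everything here is an assumption for illustration: the original distance metric and objectives aren't specified, so I'm using circle-of-fifths distance between chord roots as a stand-in metric, a simple start-equals-end structure penalty, and a weighted sum instead of a true Pareto-based multiobjective search.

```python
import random

# Hypothetical setup: chords reduced to pitch-class roots 0-11, with distance
# measured along the circle of fifths (a stand-in for the original metric).
CHORDS = list(range(12))
FIFTHS = [(i * 7) % 12 for i in range(12)]  # circle-of-fifths ordering

def chord_distance(a, b):
    """Steps between two roots along the circle of fifths."""
    ia, ib = FIFTHS.index(a), FIFTHS.index(b)
    d = abs(ia - ib)
    return min(d, 12 - d)

def smoothness(prog):
    """Objective 1: total distance across subsequent chord jumps."""
    return sum(chord_distance(a, b) for a, b in zip(prog, prog[1:]))

def structure(prog):
    """Objective 2: penalize not starting and ending on the same chord."""
    return 0 if prog[0] == prog[-1] else 1

def evolve(bars=16, pop_size=50, generations=200, seed=0):
    """Evolve a progression; weighted sum used here instead of a Pareto front."""
    rng = random.Random(seed)
    pop = [[rng.choice(CHORDS) for _ in range(bars)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: smoothness(p) + 10 * structure(p))
        survivors = pop[: pop_size // 2]     # keep the better half (elitism)
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(bars)] = rng.choice(CHORDS)  # point mutation
            children.append(child)
        pop = survivors + children
    return pop[0]

best = evolve()
```

A real multiobjective run (e.g., NSGA-II) would return a whole Pareto front of progressions rather than a single weighted-sum winner, which matches the hand-picking-from-a-result-set workflow described below.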
With a sophisticated enough rule system (which could be built from existing music) an AI should be able to optimize for tension building or storytelling quite easily. Of course it will only optimize for the definition of tension building or storytelling that it understands, whether via statistical methods or by being told explicitly by the programmer. In the latter case the programmer is just doing one of the things composers have always done, while in the former, the generated content is interesting - or not - in much the same way (IMHO) as the output of transformer-based language generators like GPT-3.
Unfortunately I don't have the full working codebase anymore. I think I have enough to recreate it, but it was pretty rudimentary, and I've always intended to revisit the idea one day and flesh it out as at least a blog post or something (at which point I will submit it here). I only have a couple of the progressions, but they don't mean much on their own since I hand-picked them out of the result set based on personal taste rather than any formal criterion.