If anyone's interested in more examples, I spent last month doing a "thing a day" series of procedural music experiments with Web Audio. A couple of the ones that seemed to work:
Thank you! I wish I could say there was a cohesive system with tunable parameters inside those demos, but basically each of them is its own ad-hoc blob of code for choosing scales, rhythmic patterns, etc.
The modular bit, which is reused across all the demos, is the bit that actually plays noises through Web Audio, and might be worth checking out: https://github.com/andyhall/soundgen
(If it's not obvious, there aren't any pre-made audio samples in the demos, all the noises are built at runtime out of Web Audio oscillators and filters and so on, inside that library.)
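For anyone who hasn't tried runtime synthesis, the general shape of it can be sketched in plain JS. This is a hypothetical helper, not soundgen's actual API: a note stays as plain numbers until the last moment, when (in a browser) those numbers would drive standard Web Audio nodes.

```javascript
// Hypothetical sketch of runtime synthesis -- NOT soundgen's actual API.
// A "note" here is just parameters; in a browser they would configure
// real Web Audio nodes (OscillatorNode -> BiquadFilterNode -> GainNode).

// Standard equal-temperament conversion: MIDI note 69 = A4 = 440 Hz.
function midiToFreq(midi) {
  return 440 * Math.pow(2, (midi - 69) / 12);
}

// Describe a note entirely from numbers -- no pre-made audio samples.
function makeNote(midi, { type = "sawtooth", attack = 0.01, release = 0.3 } = {}) {
  return { type, freq: midiToFreq(midi), attack, release };
}

// In a browser, playing the note would look roughly like:
//   const osc = ctx.createOscillator();
//   osc.type = note.type;
//   osc.frequency.value = note.freq;
//   osc.connect(gain).connect(ctx.destination);
//   osc.start();

console.log(makeNote(69));      // A4, 440 Hz
console.log(makeNote(81).freq); // one octave up: 880 Hz
```

The point is that the entire sound is derived from a handful of parameters at play time, which is why no sample files ship with the demos.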
Thanks! You are spot on about day 4 - most of the demos began with trying to imitate an existing track, especially structural stuff like how many voices there are, how often they enter and leave, how often the chords change, etc.
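That structural-imitation idea can be made concrete with a sketch (hypothetical plain JS, not code from any of the demos): capture a reference track's skeleton, such as how often the chords change and when each voice enters, then generate bars against that skeleton.

```javascript
// Hypothetical sketch of "structural imitation": copy a reference
// track's shape (chord-change rate, voice entry points) without
// copying any of its actual notes.
const structure = {
  barsPerChord: 2,                             // chords change every 2 bars
  voiceEntries: { pad: 0, bass: 4, lead: 8 },  // bar at which each voice enters
};

// The progression itself would be chosen by the generator; fixed here.
const PROGRESSION = ["Am", "F", "C", "G"];

// For each bar, decide the active chord and which voices are playing.
function planBar(bar) {
  const chord =
    PROGRESSION[Math.floor(bar / structure.barsPerChord) % PROGRESSION.length];
  const voices = Object.keys(structure.voiceEntries)
    .filter((v) => bar >= structure.voiceEntries[v]);
  return { bar, chord, voices };
}

console.log(planBar(0)); // { bar: 0, chord: 'Am', voices: ['pad'] }
console.log(planBar(9)); // { bar: 9, chord: 'Am', voices: ['pad', 'bass', 'lead'] }
```

Swapping in a different `structure` object changes the feel of the piece without touching the note-level generation, which is roughly the separation described above.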
I've recently been messing with very simple generative music with sonic-pi[0], a Ruby DSL which makes things very easy. It's been great fun, and you can get up and running very quickly. Here's the source code for some pretty listenable (IMO) generative music of infinite length[1]. Being able to read source-code comments to something while listening to it is strangely satisfying.
"I'm yet to see a system that produces music that I would call 'good'."
This is really in the eye of the beholder. One man's trash is another man's treasure.
There's plenty of incredibly popular music created by humans that I absolutely detest, compared to which computer-generated music (as boring, random, or predictable as it might be) can't possibly be any worse.
For my own personal taste, it's rare that completely computer-generated music sounds "good" to me, but it happens. Here's an example, one of the outputs of David Cope's EMI (Experiments in Musical Intelligence): [1]
When we depart from strictly and completely computer-generated music to music that contains some generative elements which are "curated" or guided by humans, to my taste the music gets much better. Then we get into the realm of what could be termed human-computer collaboration.
I like Brian Eno's analogy of "composer as gardener", in which the composer may plant musical seeds and then guide the music to fruition, while not controlling or being the "author" of every nuance as happens with more traditional views of composition.[2]
I actually have a musical project I call Intercal which is based on the idea of obfuscating well known musical works algorithmically. It generally takes a lot of attempts to find something that works, but I've had some promising results.
Eno’s analogy is good PR, but makes no sense at all when you consider the rigidity of the technology involved, and the fact that it’s literally incapable of evolving new forms.[1]
In fact algorithmic music is exactly what it appears to be - the design of relatively trivial mechanistic automata that generate notes and/or sound.
[1] Yes, genetic algos have been used in algorithmic music. No, they haven't been any more successful aesthetically than other approaches - possibly because all algo approaches seem to have a simplistic view of the domain, and if you're stuck in that local minimum it doesn't matter which algos you use.
I felt like this for a long time, but I've come to appreciate it as background/zoning-out music, especially when I'm familiar with the algorithm. I'm often quite amazed at how a small amount of PRNG can generate interesting melodies, and things which feel emotionally salient.
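The "small amount of PRNG" point is easy to demonstrate with a sketch (hypothetical plain JS, not from any of the linked demos): a tiny seeded generator doing a random walk over a pentatonic scale already sounds far more intentional than its ingredients suggest.

```javascript
// Minimal sketch: a tiny seeded PRNG is enough to generate a melody.
// (Hypothetical example, not code from any of the linked projects.)

// Small linear congruential generator so runs are reproducible.
function lcg(seed) {
  let state = seed >>> 0;
  return () => {
    state = (state * 1664525 + 1013904223) >>> 0;
    return state / 4294967296; // uniform in [0, 1)
  };
}

// C major pentatonic (MIDI note numbers). Constraining choices to a
// scale is most of what makes random output sound musical.
const SCALE = [60, 62, 64, 67, 69];

// Random walk over scale degrees: mostly small steps, which reads as
// "melody" far more than independent uniform picks would.
function melody(seed, length) {
  const rand = lcg(seed);
  let degree = 0;
  const notes = [];
  for (let i = 0; i < length; i++) {
    const step = Math.floor(rand() * 3) - 1; // -1, 0, or +1
    degree = Math.min(SCALE.length - 1, Math.max(0, degree + step));
    notes.push(SCALE[degree]);
  }
  return notes;
}

console.log(melody(42, 8));
```

Because the PRNG is seeded, the same "piece" can be regenerated on demand, which is also what makes it pleasant to listen to while reading the source: the output and the code genuinely correspond.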
I enjoyed the presentation but genuinely hated the website. Like, on a deep and personal level. I'm a little bit mad about how bad the experience of using this website is. There were several pages where I had no idea if I was waiting for something to finish loading, or if I was supposed to hit space to continue (I'm on a slow/unreliable 4G connection... I often have this problem, but I can usually figure out what's happening based on surrounding content and commonly used affordances that hint at what actions are available and when). I know it's one-key navigation, very simple and seemingly impossible to get wrong, but because it's frequently loading large files, I was frequently in limbo about what to do (though by the end I stopped waiting and just started trusting it would do something if it had something to do... this ended up skipping at least a couple of things that I think would have been musical or interactive or something had I waited for them).
If this subject weren't really of interest to me, I would have noped out within a couple of pages.
In addition to what you said, I find the prezi-style panning and movement between pages extremely distracting and jarring, to the point where I did nope out.
Yeah, I generally dislike this style of site. I say it all the time: "Ease of use is often just what you're used to." And, nobody is used to a site working like this.
It's slides for a presentation that the author gave at an event. It's not meant to work like a regular website.
Also: sites on the front page of HN sometimes load slowly. Sure it can be frustrating, but hating the site on a deep and personal level over it seems a bit much.
I hate slides like this too. As I said in my previous comment, the movement is jarring and distracting. It breaks my concentration and focus. The worst offenders often leave me feeling almost dizzy. In my personal opinion, Prezi and the like have set presentations back quite a bit in terms of readability and comprehension. In a live presentation, it's even worse, because the slides often distract me enough that I miss some of what the speaker is saying too. Maybe my attention span isn't that good, but I'm sure I'm not the only one.
The website was unusable for me. I find the topic super interesting, but it's not worth the extra time to parse. Not least because, on mobile, the font sizes were strange and the text was poorly placed. Edit: typos.
Aside: I'm glad to see that (so far) no one's referred to it as Krell music.
This is a reference to the 1956 science fiction movie 'Forbidden Planet' (https://en.wikipedia.org/wiki/Forbidden_Planet), which was the first film to use an entirely electronic score, created by husband-and-wife team Louis and Bebe Barron.
The timbres ('electronic tonalities') were generated using rather fascinating vacuum tube circuits. Bebe Barron used these as the basis for her compositions.
The soundtrack superficially appears to consist of weird, unstructured, synthetic sounds. The film's music and sound effects intermingle. But there certainly is a musical structure; it's not just randomly generated sound.
Unfortunately, some electronic musicians generate random sequences or noises and call it 'Krell' music. It's a fundamental (lazy?) misunderstanding of what the Barrons achieved without the use of synthesizers.
* tries to create a semi-randomized fugue: http://aphall.com/2017/12/advent-18/
* jazz combo improvising over the title theme from Metroid: http://aphall.com/2017/12/advent-16/
* generates a chiptune over the progression of Autumn Leaves: http://aphall.com/2017/12/advent-17/
Caveat, they're likely too CPU-intensive for mobile devices.