
This is a great project, well done. Algorithmic music is fascinating to me; I feel like there's a lot of untapped potential in the concept, particularly with more 'complex' genres where higher levels of composition and structural continuity are required. This tends to work much better with loop-based genres, but I'd personally love to do a side project on something that could write melodies from some sort of generative grammar, for example. A lot of these exist, but very often the results are mixed.

For me the interesting part is not just in getting a computer to do all the work but in using it as a compositional aid. You can think of it as writing music at a higher level of abstraction: rather than writing a score, you write the rules of the score and use a stochastic process to spit out permutations. For example, imagine writing a program that could spit out Erik Satie-esque melodies.
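To make that concrete, here's a minimal sketch of the rules-not-scores idea: a toy first-order Markov chain over scale degrees that spits out permutations of a motif. Everything here, the seed motif included, is invented for illustration and has nothing to do with actual Satie:

    import random

    # Seed motif as scale degrees; the 'rules' are just the observed transitions.
    seed = [0, 2, 4, 2, 0, -1, 0, 4, 7, 4, 2, 0]

    transitions = {}
    for a, b in zip(seed, seed[1:]):
        transitions.setdefault(a, []).append(b)

    def generate(length=16, start=0):
        """Walk the chain to produce one permutation consistent with the rules."""
        note, out = start, [start]
        for _ in range(length - 1):
            note = random.choice(transitions.get(note, seed))
            out.append(note)
        return out

    print(generate())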



As someone who writes mainly loop-based genres I agree, and have thought about much the same stack of abstractions. Acid techno is a great starting point as it's not known for complexity, being traditionally produced with a 909+303+303, devices that only support a one-bar loop (the genre's origins predate the wide availability of digital audio workstations). But like yourself I'm interested in more complex genres.

One example of more general EDM levels of abstraction for me might go

1a. this tune needs to play with space (stereo/reverb) more
1b. we need some interesting background sounds

therefore

2. we need an interesting, wide background sound to start at bar 32 and end at bar 64

Dropping in something from a sample library to achieve that wouldn't satisfy me, as every step of constructing that sound follows other rules of abstraction, e.g. how should it fit with the existing tonal balance? With the existing rhythm? But at the same time the process is 1% inspiration, 99% perspiration - it's following unwritten almost-rules which would be fascinating to capture in an algorithm if I could. They wouldn't have to be completely general rules, just my own personal ones.
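To give a flavour of what capturing one of those personal almost-rules might look like, here's a hypothetical sketch (all the field names are invented) where the rule is just data that a generator could later act on, with the tonal-balance and rhythm constraints written down explicitly:

    # Hypothetical encoding of the 'interesting, wide background sound' rule.
    # A generator (not shown) would be responsible for satisfying it.
    background_rule = {
        "role": "background pad",
        "qualities": {"stereo_width": "wide", "reverb": "long tail"},
        "start_bar": 32,
        "end_bar": 64,
        "constraints": [
            "leave room in the frequency ranges the lead already occupies",
            "avoid onsets that clash with the existing kick pattern",
        ],
    }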

A related (light hearted) thing I wrote on EDM abstractions a while back https://omnisplore.wordpress.com/2017/11/25/kolmogorov-compl...


> 909+303+303, devices that support only a 1 bar loop

Ha, no, you can input a great deal more than a single bar into them. They were designed to let users store a variety of patterns, which can then be assembled into a traditional intro-verse-chorus song structure, if you so wish.


Even with a single pattern, you can add a lot of variation just by changing the pattern lengths to do polyrhythms. Say you do:

    909: 16 steps
    303 bass: 5 steps
    303 lead: 3 steps
Now you have a sequence that only repeats every 240 steps, or every 15 bars, even though each device never uses more than a single 16-step pattern.
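For anyone who wants to check the arithmetic, the combined repeat length is just the least common multiple of the pattern lengths:

    from math import lcm  # Python 3.9+

    patterns = {"909": 16, "303 bass": 5, "303 lead": 3}
    steps = lcm(*patterns.values())              # lcm(16, 5, 3) = 240
    print(steps, "steps =", steps / 16, "bars")  # 240 steps = 15.0 bars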


Doh I stand corrected :) Though still nowhere near as much potential for refining the end result as you get with a DAW. That's not to say live performance doesn't have value of course!


Yep I've done it. Suuuper tedious. Which is why I suspect even though you can do 2 or more bar phrases, single bar loops (vamps?) are pretty typical, instead varying the filter envelope and distortion over time.


Off topic, but I found my preferred method of 'live' performance (for dnb) was to write a tune in the DAW with the intention that a few of the lead parts be played live, then switch off those parts and render what's effectively a backing track. This meant I could sort the mixing/mastering beforehand and stay in the flow of rocking out on synths when it came to the gig. I did use Ableton, but only to auto-load synth patches/effect chains when each backing track was triggered.

I tried many other approaches, involving 5 people driving a live looping rig, but the above worked best for me, which I guess made my act a 2010 equivalent of the guy wearing the burgundy velvet suit in the hotel bar ;-)


I get why people want machine-aided composition. But I'd love to see a project like this go in the opposite direction.

Maybe put a row of buttons at the top: "I am [Loving it] [Grooving] [Fine] [A Bit Bored] [About to Quit]". Then use that (plus browser data and window-close data) to train ML models to maximize engagement. Or perhaps to hook it up to facial expression recognition.

When I watch friends DJ, there's this great feedback loop between how the crowd's behaving and what they're putting on. They're clearly using the music to achieve certain mental states among listeners. I'd love to see how well that can be done algorithmically.


That would be a fun project for sure!

So many of the selections that great DJs make are curveballs and surprising choices. This is less true in more samey genres, but true pioneering selectors will tend to surprise you with their taste and juxtapositions in a way that's hard or maybe even impossible to really capture in an algorithm.

Put another way, even in something as simple as song selection, the weirdness and contingency in human creative decision-making are a feature, not a bug.

The efficiency and cleverness of algorithms can create great, powerful recommendation engines, but there's nothing like the idiosyncrasies of a person as a curator.


I'm not entirely convinced by your argument. Some years ago there was an episode of the Gadget Show where they were testing an app for suggesting food ideas. I can't remember exactly how it worked - it was too long ago - but it had something to do with having profiles of different food types programmed into it, then using some sort of algorithm to compare taste profiles and come up with combinations that should work.

The combination it came up with that they then tried selling on a food stall was chocolate on pizza. It wasn't to everybody's taste but some liked it, with one describing it as "weirdly delicious".

It doesn't seem completely ridiculous that an AI trained to recognise patterns of music that work well together could analyse a large corpus of recorded music and come up with surprising mixes that you wouldn't think would work, but do.


I don't doubt it's possible for an AI DJ to come out of leftfield, but I still wonder if it could be a tastemaker like a great DJ. I don't just want great individual mixes, I want consistent, surprising yet tasteful selections that define an idiosyncratic style.


Chocolate on pizza doesn't seem very novel.

Many restaurant chains sell chocolate on their pizza base as a dessert; chocolate is also known to complement many savoury dishes as a 'secret ingredient'.


There's a restaurant in London that incorporates cacao in some form or other in every dish. It's a bit of a gimmick in that in a lot of them it's not really noticeable, or just used very sparingly. It's probably easier to do with the constraint of "some form of cacao" than chocolate, though.


For sure! I'll always appreciate the deeper stuff that only humans can do. I just think it would be interesting to see how much of it we could automate. E.g., could it get to the level of decent background music?

Of course, there's always the possibility algorithmic composers could do even better than humans. Arthur C. Clarke wrote a story about that in 1957: https://en.wikipedia.org/wiki/The_Ultimate_Melody

Text here: https://archive.org/details/1957-02_IF/page/n71/mode/2up?vie...


Put a camera on the crowd, and motion-track the bouncing of heads. That'll give you two banger metrics: one, how far they're all bouncing/jumping/nodding, and two, how consistently they're doing it.

That would be enough to construct a robot DJ that could track the variations it was making, and rate them for 'better/worse'. Then you just make a cool-looking puppet to store the camera in, that can move around and possibly wave an actuator like a proper DJ, and the rest is machine learning.
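A crude way to sketch that loop is an epsilon-greedy bandit over a handful of musical 'moves', with the camera's bounce metric as the reward. All of this is invented for illustration; get_bounce_score() is just a stand-in for the motion tracking described above:

    import random

    def get_bounce_score():
        # placeholder for (bounce amplitude * bounce consistency) from the camera
        return random.random()

    variations = ["add hats", "drop the bass", "filter sweep", "halve the tempo feel"]
    scores = {v: 0.0 for v in variations}
    counts = {v: 0 for v in variations}

    for _ in range(100):
        # mostly exploit the best-rated move so far, occasionally explore
        if random.random() < 0.1:
            move = random.choice(variations)
        else:
            move = max(variations, key=lambda v: scores[v])
        reward = get_bounce_score()
        counts[move] += 1
        scores[move] += (reward - scores[move]) / counts[move]  # running mean

    print("crowd favourite:", max(scores, key=scores.get))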


The loudness war has ended, prepare yourself for the bobbing war.

There was an interesting YouTube video I watched, "All my homies hate Skrillex", which said that the evolution of aggressive Skrillex-style "dubstep" over the older, more introspective dubstep was a result of smoking being banned indoors in UK clubs.

This made the smokers have to leave every now and then, which forced the DJs to play more aggressive, instantly impactful music to draw their attention when they returned.

https://www.youtube.com/watch?v=-hLlVVKRwk0

I guess the video maker knows what he's talking about: he's the one who was at the clubs and clearly has great knowledge of the genre.

He said that he didn't care either way about the smoke, but personally I'd be much more likely to stay in a club with clear air than one filled with horrible smoke.


I love clubs, but hate it when I smell like an ashtray afterwards, despite not smoking.

Another theory about the shift to more aggressive beats: maybe the DJs themselves were affected and got angry having to wait for their next cigarette?


I wonder if you could do the feedback mechanism in a more subtle way, for example by pairing the musical composition component to an interactive activity like playing an action game, where movements and so on in the game trigger a response in the music.

There are lots of games that do this kind of thing at a rhythm level, like Necrodancer and more recently BPM [1], but I think there'd be the potential to do a lot more with it than just rewarding beat-alignment with a pre-cooked solo that blazes over top of the base track.

[1]: https://www.youtube.com/watch?v=V684o6wBaSQ


Agreed, but I'd like to take it a step further. Rather than fitting new compositional ideas into the patterns and structures of our existing music theories, I'd like to use computation to create new theories.

As an example, I've become really interested in the fact that tempo and meter are taken as constant, grid-like structures in pretty much all existing music practices (even within, say, Gamelan, though there you do get a lot of pushing and pulling). Typically, you have polyrhythms and polymeters to fit more interesting patterns into the fixed grid, and accelerando and ritardando to adjust the rate of grid traversal, but that's about it. Hardly anyone applies algorithmic/geometric thinking to the grid itself — likely because more complex rhythmic foundations would make human performance nigh impossible.

To explore this space, I created my own little generative music system that takes a handful of simple motifs (a la Riley or Reich) and stacks them into a recursive temporal structure, which is then pushed logarithmically toward a tempo of 0 or infinity. There are some rules so that the "performers" only play at comprehensible scales, and that pitch modulation keeps the piece interesting to a listener. You can listen to it here, if you'd like: https://ivanish.ca/diminished-fifth/
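In case it helps make the tempo-warping idea concrete, here's my own loose sketch (not the code behind the piece) of nested layers under an exponential accelerando, where the beat rate doubles every T seconds:

    from math import log, log2

    # If the rate (beats/sec) is r0 * 2**(t/T), then beat n lands at
    #   t_n = T * log2(1 + n*ln(2) / (r0*T))
    r0, T = 2.0, 30.0  # starting rate and doubling period, both arbitrary

    def beat_time(n):
        return T * log2(1 + n * log(2) / (r0 * T))

    # Layer k plays its motif once every 2**k beats, so as the tempo climbs each
    # slower layer drifts into the perceptual range the faster one just left.
    for k in range(4):
        onsets = [round(beat_time(n), 2) for n in range(0, 64, 2 ** k)][:8]
        print(f"layer {k}:", onsets)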

I'd love to see more folks working on tools to make these sorts of theory-stretching ideas easier to access and explore. For instance, I've really been struggling with how to hand-compose music that can fit within a nonlinear/recursive structure of time. Existing tools like Tidal or Max/Pd were built to support the existing theory. I think we need new tools that allow you to design the theory itself.


People create new theories every time they make a piece of music. Computer music tools allow them to work with those theories. Still lots of work to do. The idea of there being "a music theory" is perpetuated by music schools obsessed with the music of a handful of dead white guys.

> tempo and meter are taken as constant, grid-like structures in pretty much all existing music practices

I don't think that's remotely true.


Looks like a fascinating project. Worth sharing as a HN thread in and of itself.


> I feel like there's a lot of untapped potential in the concept particularly with more 'complex' genres

Well, you're in good company. This goes back many decades. Here is a summary of many attempts over that time: https://www.amazon.com/Algorithmic-Composition-Paradigms-Aut.... Really convincing composition only started to appear recently, using transformers, e.g. https://openai.com/blog/musenet/. The present solution is rather primitive in comparison.


Awesome, thanks. Yeah, I'm vaguely familiar with the history but certainly not the state of the art. Will take a look at this OpenAI project.


Wholeheartedly agree that using computers as a compositional aid is a fantastic thing; it's really quite satisfying to define some rules by which the program can modify the inputs (melodies, chords, drum patterns) and have it output interesting results.

I coded up some jungle music not long ago: https://www.youtube.com/watch?v=GPan4gRSwZs&t=79s

I'm working on a procedure to modify the drum breaks in a conventional way, meaning I have to think less about keeping them interesting while live coding.
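Not my actual procedure, but as a rough sketch of the general trick: chop the break into slices and permute them, keeping the downbeat slice pinned so the variation still reads as the same break:

    import random

    def vary_break(slices, keep_first=True):
        """Return a shuffled copy of the break, optionally pinning the downbeat slice."""
        head, tail = (slices[:1], list(slices[1:])) if keep_first else ([], list(slices))
        random.shuffle(tail)
        return head + tail

    amen_ish = ["kick", "hat", "snare", "hat", "kick", "kick", "snare", "hat"]
    print(vary_break(amen_ish))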


Well you just inspired me to switch on my monitors :)


Autechre have been doing that for quite a while now using Max et al., including computer-based composition. Irlite [0] and Bladelores both feature some form of compositional change throughout the track.

0: https://www.youtube.com/watch?v=SmwW5kBfltk&list=PL1yYEMwtFZ...


This kind of algorithmic music has been a thing for decades. I have a funny memory of going to a university open day as a child in the late 80s/early 90s and a presentation by some boffin with a synthesizer was what made me want to grow up and become a computer programmer.

Back in the early days, I think CSound[0] was the big software, not because it can make especially interesting sounds compared to more "musical" software synths, but because it's a proper programming language which gives people the freedom to do these higher level abstractions.

In the hardware world, a lot of this developed out of arpeggiators and analog sequencers. I remember years ago I had a "P3" sequencer [1] which implemented a lot of these sorts of algorithmic pattern generation tools - you could do things like quantize to a scale, then set a sequence of percentages that impacted the likelihood of a particular note being played. I see there is a new version of this sequencer [2] too.

Lots of analog sequencers provide similar features, and if you have a modular setup you can ramp the tempo way down and have them control chord progressions or swells instead of 16th note patterns. Pretty sure this is how a bunch of live ambient music was done back in the day.

That was a fairly niche corner of electronic composition, but even in the mainstream of 20 years ago stuff like the Emu Proteus sound modules featured pretty advanced programmable arpeggiators where you could essentially write the skeleton of a musical sequence and then modify what notes of it actually ended up getting played by deciding which original notes to start from. I always came at this more from that minimal/techno/sequence-based side, but then I went to a wedding and saw a wedding singer playing an "accompaniment" keyboard which showed the other side - entire chord sequences and backing instrumentation getting generated in real-time based just on what chord the keyboard player chose to hit with their left hand. It's surely only a small step forward from there to being able to input a higher level algorithm that could develop a whole song.
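As a rough sketch of the scale-quantise-plus-probability trick the P3 does (my own loose interpretation, not a model of the actual hardware):

    import random

    C_MINOR = [0, 2, 3, 5, 7, 8, 10]

    def quantize(note, scale=C_MINOR):
        # snap a MIDI note down to the nearest degree of the scale
        octave, pitch = divmod(note, 12)
        return octave * 12 + max(d for d in scale if d <= pitch)

    steps  = [60, 63, 67, 70, 72, 70, 67, 63]            # raw 8-step sequence
    chance = [1.0, 0.5, 0.9, 0.25, 1.0, 0.5, 0.75, 0.3]  # per-step trigger probability

    pattern = [quantize(n) if random.random() < p else None  # None = step skipped
               for n, p in zip(steps, chance)]
    print(pattern)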

[0] https://csound.com/

[1] https://www.soundonsound.com/reviews/sequentix-p3

[2] https://www.sequentix.com/shop/cirklon-hardware-sequencer


In the same vein as this, David Bowie created an algorithm that generated lyrics and gave him ideas for songs https://www.vice.com/en/article/xygxpn/the-verbasizer-was-da...


Excellent project; it also serves as a quick audio latency test for devices. Quality of playback seems to vary between browsers and devices (DDG/FF on Android); I presume it's consistent and better on iOS devices, as iOS is known for better audio latency (don't have one nearby to test).

If the author is reading this, I'd love to hear about the quirks related to audio playback in browsers/devices that you've found so far in developing this.


This reminds me of the "generative design" trend in architecture, which has been attempted numerous times with typically underwhelming results. See here for an interesting essay (with nice illustrations) arguing that it's a doomed effort:

https://www.danieldavis.com/generative-design-doomed-to-fail...


The question is whether the use of AI can be elevated above being a crutch, like autotune, to being used as an instrument, like a synthesizer. In other words: is it being used to amplify the ability of the musician or to compensate for their lack of skill?



