I've just added a wav export feature. Currently it only exports with the knob positions as they are when the pattern first generates. You can choose how long the exported audio is.
It's a bit of a hack that re-opens the app in an iframe in the background using an offline audio context.
I'll come back to it at some point and make the export pick up the knob positions but I don't have time right now.
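For anyone curious, the offline render part looks roughly like this (a minimal sketch rather than the app's actual code; the WAV encoding step is left out and the duration/sample rate are just example values):

```js
// Render the node graph into an OfflineAudioContext instead of the live one.
const seconds = 8;
const offline = new OfflineAudioContext(2, 44100 * seconds, 44100);

// ...rebuild the synth/drum graph on `offline` here...
const osc = offline.createOscillator();
osc.connect(offline.destination);
osc.start(0);
osc.stop(seconds);

offline.startRendering().then((rendered) => {
  // `rendered` is an AudioBuffer; its channel data then gets encoded into a
  // 16-bit PCM WAV (RIFF header + samples) and offered as a download.
});
```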
Thank you! It's been a few years so I can't remember exactly without reading through the code but it's something like this:
It uses notes from the selected scale and octave (from the dropdowns).
If the pattern is of an even length, say 16, it will split it into 4 chunks of 4, then randomly decide if it should generate new data for the chunk or copy the previous chunk. It uses the repeat slider for the probability on this.
It randomly applies the 303 modifiers (up, down, accent, slide) using probability set with the sliders on the pattern tab.
There's also an 'empty' slider which sets the probability of an empty note appearing in a chunk.
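Very roughly, the idea is something like this (a sketch from memory rather than the actual code; the names and default probabilities are all made up):

```js
// Split the pattern into chunks, then either copy the previous chunk or fill
// the chunk with random notes from the scale plus random 303-style modifiers.
function generatePattern(scale, root, length = 16, chunkSize = 4,
                         repeatProb = 0.5, emptyProb = 0.2,
                         upProb = 0.1, downProb = 0.1,
                         accentProb = 0.3, slideProb = 0.3) {
  const steps = [];
  for (let start = 0; start < length; start += chunkSize) {
    // Repeat slider: chance of copying the previous chunk instead of new data.
    if (start > 0 && Math.random() < repeatProb) {
      for (let i = 0; i < chunkSize; i++) {
        const prev = steps[start - chunkSize + i];
        steps.push(prev ? { ...prev } : null);
      }
      continue;
    }
    for (let i = 0; i < chunkSize; i++) {
      // Empty slider: chance of a rest instead of a note.
      if (Math.random() < emptyProb) { steps.push(null); continue; }
      steps.push({
        note: root + scale[Math.floor(Math.random() * scale.length)],
        up: Math.random() < upProb,        // octave up
        down: Math.random() < downProb,    // octave down
        accent: Math.random() < accentProb,
        slide: Math.random() < slideProb,
      });
    }
  }
  return steps;
}
```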
Awesome, love it! You could consider adding some randomness from random.org so that natural electromagnetic phenomena (or a supreme being) influence the output - for the pro service perhaps ;-)
For a while I have been curious about the intended uses for xAtTime functions (like cancelAndHoldAtTime) in Web Audio. As far as I understand it, calls to them suffer from lag due to main JavaScript thread and audio thread communication, which makes sample precision unachievable—and precision is quite important in music.
Is it mostly for emulating slow-moving changes on fixed timelines, a la automation tracks in traditional DAWs like Logic and Ableton? Is the design rationale documented somewhere?
Those methods are sub-sample accurate, provided you call them a bit in advance to account for the cross-thread communication, as you say. But yes, in general this was designed (prior to me becoming an editor) with scheduling in mind, not with low-latency interactivity. That said, it goes quite far.
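For example, scheduling a little ahead of `currentTime` looks something like this (the 50 ms of headroom is just an illustrative value):

```js
const ctx = new AudioContext();
const osc = ctx.createOscillator();
const filter = ctx.createBiquadFilter();
osc.connect(filter).connect(ctx.destination);
osc.start();

// Schedule slightly in the future so the rendering thread has the events in
// hand; it then applies them with sub-sample accuracy at the requested time.
const headroom = 0.05;                                // seconds of lead time
const t = ctx.currentTime + headroom;
filter.frequency.cancelAndHoldAtTime(t);              // freeze any running ramp
filter.frequency.setValueAtTime(200, t);              // start a new sweep at 200 Hz
filter.frequency.exponentialRampToValueAtTime(2000, t + 1);
```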
Other systems go further: Web Audio Modules (which builds on top of AudioWorklet) implements sample-accurate parameter changes from within the rendering thread, using wait-free ring buffers. That requires `SharedArrayBuffer` but works great, and is the lowest latency possible (since it uses atomic loads and stores from e.g. the main thread to the rendering thread).
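The shape of it is roughly this (a minimal sketch of the pattern, not the Web Audio Modules code; the buffer layout, names, and missing overflow handling are all simplifications):

```js
// --- processor.js (runs in the AudioWorkletGlobalScope) ---
// Drain parameter values out of a wait-free SPSC ring buffer each block.
class RingParamProcessor extends AudioWorkletProcessor {
  constructor(options) {
    super();
    const sab = options.processorOptions.sab;        // SharedArrayBuffer from main thread
    this.indices = new Int32Array(sab, 0, 2);        // [0] write index, [1] read index
    this.slots = new Float32Array(sab, 8, 128);      // 128 value slots
    this.gain = 1;
  }
  process(inputs, outputs) {
    const write = Atomics.load(this.indices, 0);     // acquire: published values are visible
    let read = Atomics.load(this.indices, 1);
    while (read !== write) {
      this.gain = this.slots[read % 128];
      read++;
    }
    Atomics.store(this.indices, 1, read);
    const input = inputs[0], output = outputs[0];
    for (let ch = 0; ch < output.length; ch++) {
      const out = output[ch], inp = input[ch];
      for (let i = 0; i < out.length; i++) out[i] = (inp ? inp[i] : 0) * this.gain;
    }
    return true;
  }
}
registerProcessor('ring-param', RingParamProcessor);

// --- main thread (needs cross-origin isolation for SharedArrayBuffer) ---
const sab = new SharedArrayBuffer(8 + 128 * 4);
const indices = new Int32Array(sab, 0, 2);
const slots = new Float32Array(sab, 8, 128);

await ctx.audioWorklet.addModule('processor.js');
const node = new AudioWorkletNode(ctx, 'ring-param', { processorOptions: { sab } });

function setGain(value) {
  const write = Atomics.load(indices, 0);
  slots[write % 128] = value;
  Atomics.store(indices, 0, write + 1);   // release: processor can now read the slot
}
```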
This is really lush. Instantly it brightened up my evening. This kind of experimentation is always amazing to see.
As many seem to have mentioned below, it brings back memories of Rebirth in some ways. What it also reminds me of is the beautiful results you could have by plugging some simple modules together to create soundscapes. The limits are the things that provide some semblance of freedom and this is no different. Greetings from a fellow UK acid (techno) head! :P
I've just updated this to make it a little bit easier to use on a phone.
The knobs are now a bit chunkier and should respond better to touch and the instruments sit vertically instead of horizontally.
this is awesome. would suggest not randomizing the tempo on regenerate, and keeping playback going if it was already playing when you hit regenerate. that would make it easy to quickly audition loops at a given tempo with a single click
A scale is randomly selected at the start and then notes are randomly selected from that scale in the pattern generation, plus the root note number is added to each one.
So if you had the 'Darkness' scale selected and had the root dropdown set to 0, the notes in this scale would be C, C#, D# which is 0, 1, 3 if you count the keys on a keyboard. If you changed the root to 2, then it would become D, D#, F (2, 3, 5).
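In code terms it's roughly this (illustrative names, not the app's actual data structures):

```js
// Scales are stored as semitone intervals; the root just transposes them.
const scales = { darkness: [0, 1, 3] };   // C, C#, D# when root = 0

function notesForScale(name, root) {
  return scales[name].map((interval) => root + interval);
}

notesForScale('darkness', 0);   // [0, 1, 3] -> C, C#, D#
notesForScale('darkness', 2);   // [2, 3, 5] -> D, D#, F
```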
Currently, all you can do is save the URL, which contains all of the initial randomisation settings when a pattern generates. It doesn't update when moving sliders or anything, it's just the initial settings.
I created this scheduler library which can be used to play a sequence of notes, create a metronome, a drum machine, etc.
https://github.com/errozero/beatstepper
The quality is ok but their update policy is terrible.
I have a jelly one but it never got the promised upgrade. They were just like 'hey it's very difficult due to the limited storage so we're only going to upgrade the pro version' :(
And later models have been getting 1 upgrade at best too.
Typing this on an Asus Zenfone 8. The best phone I have had in years; I've had it for almost 2 years now. No bloat, small, records voice calls. There's nothing more I want from a phone.
Battery isn't great but afaik it improved in the newer models.
Sony Xperia 5v and 10v are two current phones that are reasonably sized and still have a headphone jack and memory card slot. Not the flagship Xperia 1v though, I think that one's a bit bigger.
I was very against losing the headphone socket but just accepted the iPhone SE 2020, and actually using the Lightning to headphone jack adapter is not that bad. But the ultimate realization is when you finally try AirPods Pro and you are converted.
I made a library based on this article which I use as a starting point for all my music app projects. It's useful as the main timing code for things like drum machines and sequencers.
https://github.com/errozero/beatstepper
The way I understand it, the Web Audio API only lets you schedule audio source nodes, via their start and stop methods. As I needed to schedule something that was not audio related, I ended up creating silent oscillators of almost zero length and relying on the 'ended' event.
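Roughly like this (a sketch of the workaround, not my actual code):

```js
// Fire a main-thread callback at (roughly) a scheduled AudioContext time by
// abusing a silent, near-zero-length oscillator's 'ended' event.
// Note: 'ended' is delivered on the main thread, so there's some jitter.
function scheduleCallback(ctx, when, callback) {
  const osc = ctx.createOscillator();
  const mute = ctx.createGain();
  mute.gain.value = 0;                     // keep it inaudible
  osc.connect(mute).connect(ctx.destination);
  osc.onended = () => { osc.disconnect(); callback(); };
  osc.start(when);
  osc.stop(when + 0.001);                  // almost zero length
}
```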
But running a scheduled callback is better! It's not immediately clear to me how you do it. Can you maybe explain your code a little?
Hey, sure I'll try... if you mean usage of the library:
It runs your callback every 16th note, slightly before it is actually due to play. This can vary by a few milliseconds each time, but that doesn't matter because the actual audio context start time for the note is passed in to the callback, and you use that to schedule the events for that 16th note, e.g. osc.start(time).
You can schedule 32nd notes etc. too by using the stepLength property that is also passed in; time + (stepLength / 2) would be a 32nd note.
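So usage is along these lines (illustrative names rather than the library's exact API):

```js
// The callback gets the AudioContext time for the upcoming 16th note plus the
// step length, and you schedule the real audio events against that time.
stepper.start((time, stepLength) => {
  const osc = ctx.createOscillator();
  osc.connect(ctx.destination);
  osc.start(time);                       // sample-accurate: uses the passed-in time
  osc.stop(time + stepLength);

  // A 32nd note halfway through the step:
  const hat = ctx.createOscillator();
  hat.connect(ctx.destination);
  hat.start(time + stepLength / 2);
  hat.stop(time + stepLength);
});
```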
Hope that makes sense? I do need to write a better description on the github page of what it actually is.
The inner workings of the library itself are mostly just as described in the article with a few tweaks.