Hacker News

Well sure, you can push any real-time sound system to its limits if you want to - some of my additive synthesis experiments do that very quickly even in pure C++.

But as far as practical use goes, I run 16 algorithmic sequencers, implemented entirely in s7, in real time inside Ableton Live at an output latency of 8 ms, and for stretches long enough for full compositions. This is while Live handles plenty of software synthesis and FX DSP too, and all of the sequencers can be altered on the fly without audio dropouts! This is on the cheapest M1 you can get, so it's absolutely practical for real-time work. It's also without any particular attention to real-time GC tuning in s7: I haven't dug into that yet, and Bill (s7's author) has told me that while he did a lot of work to make the GC fast, it doesn't use an implementation specifically targeted at the lowest possible pause times.
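The setup described above works because each sequencer does only a small, bounded, allocation-light amount of work per tick. As a rough illustration of that shape - written in Python rather than s7, with an invented `tick` function that is not the actual sequencer code - one algorithmic step might look like:

```python
# Hypothetical sketch of one sequencer "tick": given the current step index,
# deterministically compute a note event (or a rest). Per-tick work is a few
# integer operations and no heap churn beyond a small tuple, which is the kind
# of workload that stays comfortably inside an audio callback's time budget.

SCALE = [0, 2, 4, 5, 7, 9, 11]  # major scale, semitones above the root

def tick(step, root=60, pattern_len=16):
    """Return a (pitch, velocity) event for this step, or None for a rest."""
    if step % 4 == 3:                      # rest every fourth step
        return None
    degree = (step * 3) % len(SCALE)       # simple algorithmic pitch walk
    pitch = root + SCALE[degree] + 12 * ((step // pattern_len) % 2)
    velocity = 96 if step % 4 == 0 else 64 # accent on downbeats
    return (pitch, velocity)

# One bar of 16 steps:
events = [tick(s) for s in range(16)]
```

Because the pattern is a pure function of the step index, its parameters can be swapped out between ticks - the on-the-fly alteration mentioned above - without pausing or resetting the audio stream.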

Whether the tradeoffs of Scheme versus other options are worth it for a particular composer, producer, or performer varies, of course, but it really is time to put to rest the notion that we can't run a Scheme interpreter for real-time music generation.

