There are three places where latency accumulates: the input from the instrument to the computer, the processing of filters, and the output to the monitor (headphones/speakers) and the recording. Sometimes you can get away with 2 or 3 ms of latency, but anything over 5 ms is super frustrating. Remember, you're fighting the delay between plucking a string or hitting a key and when the computer acknowledges the data and sends it back out to the monitor you're using as your guide. Best case, you go and "massage" the new track to line up with the existing tracks; worst case, it sounds like an out-of-sync high school marching band.
EDIT: The concerns here are primarily with input latency. Between plucking a string and hearing it in your monitor, the signal has to go through your input hardware, the USB interface, the OS, the browser (which doesn't have explicit low-latency capabilities), and JS. Most audio hardware supports ASIO, a low-level driver standard for reading audio data from devices; it's about as close as you can get to reading the ADCs yourself. Without a low-latency driver working with the OS, there's so much latency overhead that it's audible.
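For what it's worth, the Web Audio API does let you ask for low latency and inspect what the browser thinks it delivers, though those numbers only cover the output half of the story. A minimal sketch in TypeScript, assuming a browser that implements both (optional) latency properties:

    // Ask for the lowest latency the browser is willing to give.
    const ctx = new AudioContext({ latencyHint: "interactive" });

    // baseLatency: processing delay of the audio graph itself.
    // outputLatency: estimated delay from the graph to the speakers.
    // Neither covers the input side (ADC, USB, driver), which is the
    // part that hurts here.
    console.log(`base:   ${(ctx.baseLatency * 1000).toFixed(1)} ms`);
    console.log(`output: ${(ctx.outputLatency * 1000).toFixed(1)} ms`);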
Not sure why you picked up downvotes on this comment, because I think you're right. 5 ms is not a lot of latency at all; I think even very exacting professional musicians would be hard-pressed to detect it. At a 128-sample buffer, using the Push 3 as an audio interface on my M1, round-trip latency is 13 ms, and even that is not a frustrating amount of latency.
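For reference, buffer size and sample rate set a hard floor under those numbers; everything above the floor is converter, USB, and driver overhead. Back-of-the-envelope, in TypeScript:

    // Milliseconds of delay contributed by one audio buffer.
    function bufferMs(frames: number, sampleRate: number): number {
      return (frames / sampleRate) * 1000;
    }

    console.log(bufferMs(128, 44100).toFixed(1)); // "2.9"
    console.log(bufferMs(128, 48000).toFixed(1)); // "2.7"

    // A round trip crosses at least one input and one output buffer,
    // often double-buffered, so a measured 13 ms at a 128-sample
    // setting is plausible once converter and USB overhead are added.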
ASIO was just one of the first audio APIs to provide high-quality, low-latency audio. Many people haven't realized that native audio APIs have moved on since then.
macOS has proper audio APIs out of the box, and arguably since the introduction of WASAPI exclusive mode and WaveRT in Windows Vista, Windows has all the needed tools as well. But most of the more "professional" DAW products (in particular those by Steinberg, the author of ASIO) seem to ignore their existence. REAPER is one of the exceptions. Even WASAPI shared-mode latency is really usable (below 30 ms), just not low enough for tightly synchronized real-time recording.
Linux audio can be set up to provide low-latency audio as well, but I cannot comment on the details there as I'm not using it for that purpose.
You're right, it's not native on Linux, and you wouldn't use it on Linux today since the kernel supports lower-latency I/O and has better scheduling. JACK has gotten so much better. We didn't have that at the time, and I was desperate to use the only interface card I had.
That said, there are plenty of open-source ASIO driver implementations now that aren't tied to specific hardware.
> you wouldn't use it on Linux today since the kernel supports lower latency IO
Actually you absolutely would use it, in the same way you did back then.
WineASIO is a layer that allows a Wine application to use the ASIO API. Since ASIO is not part of Windows itself, anything that wants to use ASIO can't do so on "bare Wine", and Wine doesn't allow for the installation of a Windows kernel driver layer like ASIO. Hence WineASIO: an implementation of ASIO for use by Windows applications running inside Wine.
Also, Ubuntu 14 dates to 2014; JACK dates back to 2002. Very little, if anything, has changed about JACK since 2014. AFAIR, WineASIO could or did use JACK itself at some point in its development history, since it was a pretty natural fit.
I don't know of any open-source ASIO implementations. The only third-party one I know of, ASIO4ALL, is not open source. Then again, I don't track the Windows environment much at all.
Because when you want to record your instrument along with whatever else is in your project, timing is critical and everything needs to line up.
You cannot perform to audio that you're hearing with a delay, especially if the monitoring of the live audio is also being routed through software.
Past a certain point, the latency noticeably affects how you perform. In some circumstances it makes performing actually impossible.
There are ways around this: if the software knows exactly what the input and output latencies are, the playback and recording can be compensated. For live monitoring, though, you really need that done in the audio hardware itself, in hard real time.
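The record-side compensation is just arithmetic once the driver reports honest numbers. A hypothetical sketch (the function and the latency figures are made up for illustration):

    // Hypothetical: place a freshly recorded take on the timeline.
    // The performer heard playback outputLatencyMs late, and the
    // recorded signal arrived inputLatencyMs late, so the take must
    // be shifted earlier by the sum of the two.
    function placeTake(
      captureStartMs: number, // when the DAW began receiving samples
      inputLatencyMs: number, // reported by the audio driver
      outputLatencyMs: number // reported by the audio driver
    ): number {
      return captureStartMs - (inputLatencyMs + outputLatencyMs);
    }

    console.log(placeTake(10000, 6, 7)); // 9987, i.e. 13 ms earlier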
Music is very latency-sensitive. If you are recording any source, you generally want overall latency under 5 ms. Input and monitoring latency is usually handled either by fancy DSP systems or by a "hack" where the input audio bypasses any internal processing and gets routed directly back out for monitoring.
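In a browser, the closest analogue to that hack is routing the raw input straight to the output with every processing stage disabled. A sketch (this still pays driver and buffer latency, unlike a true hardware monitor path):

    // Monitor path: input -> output, no effects graph in between.
    async function directMonitor(): Promise<void> {
      const ctx = new AudioContext({ latencyHint: "interactive" });

      // Disable the browser's own DSP stages, which all add delay.
      const stream = await navigator.mediaDevices.getUserMedia({
        audio: {
          echoCancellation: false,
          noiseSuppression: false,
          autoGainControl: false,
        },
      });

      // A recording/effects path would tap the same source separately.
      ctx.createMediaStreamSource(stream).connect(ctx.destination);
    }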
I think the other people got a little too technical.
The reasons are things like wanting to play in time with a previously recorded track, or using digital effects and needing to hear their effect on your instrument as you play it.
Both of these are less of an issue today than they were 15 years ago, when a USB 2.0 audio interface added significant delay and made it harder to get what you wanted out of the system.
It's pretty hard for me as a musician to record guitar tracks in sync if I'm not getting the lowest latency through my computer (since I try to use software amp sims, pedals, etc.). Past ~8 ms I feel it when I play; past 15 ms I can hear the less accurate playing in the recorded tracks.
Use direct monitoring on your interface, or a mixer in front of your interface. Then you can get away with 0 ms (an analog mixer) or 2-4 ms (interface direct monitoring). Your DAW will latency-compensate the "record head"; it's just a matter of signal routing. It took me 20 years to get to this simple solution. :-D
But that doesn't allow me to have my (software) effects chain when I record? For a guitar solo where I'm going to play with, say, delays and whammies, that's a no-go.