Of course its main use is fingerprinting. Do you think WebRTC is instantiated for genuine reasons the majority of the time? That's the real absurdity.
WebRTC is instantiated most often by ad networks and anti-fraud services.
Same thing with Chrome's fundamentally insecure AudioContext tracking scheme (yes, it's a tracking scheme), which is used by trackers 99% of the time. It provides audio latency information which is highly unique (why?).
Given Chrome's stated mission of secure APIs and their actions of implementing leaky APIs with zeal, I have reason enough to question their motives.
After all, AudioContext is abused heavily on Google's ad networks. Google knows this.
> It provides audio latency information which is highly unique (why?).
As someone who has worked with WebAudio extensively, and has opened and read many issues in the bug tracker and read many of the proposals... this is just not as nefarious as you are making it seem. I don't disagree that this _can_ be abused by ad tracking networks, but I do disagree with the premise that it was somehow an oversight of the spec or implementation which led to this (or, even worse, intentional).

Providing consistent audio behavior across a wide variety of platforms (Linux, OSX, Windows, Android), along with multiple versions of all those platforms and the myriad hardware in the actual devices, is actually just pretty hard. The boring answer here is that to provide low-latency audio for things like games, a lot of decisions have to be made about what buffer sizes are appropriate for the underlying hardware, and this is what ultimately exposes some information about audio latency on the system. Some of those decisions are limited by the audio APIs of the OS. Some are limited by the capabilities of the hardware. Some are workarounds for obscure bugs in either layer. The point is that, as with most software, compromises are made to support an API that people actually need or want to use to make stuff.

I also don't think audio latency information is really "highly unique". There are only a handful of buffer sizes which are reasonable for a given sample rate, and they are mostly dictated by the OS, meaning at best you can probably identify a person's OS via the AudioContext. Furthermore, I have seen API "improvements" and requests rejected outright due to possibly exposing fingerprinting information. Things that would be really useful to applications building audio-centric software won't be implemented because the team takes this issue seriously.
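To make the scope of the leak concrete, here is a minimal sketch of the kind of read a script can perform. `sampleRate`, `baseLatency`, and `outputLatency` are real `AudioContext` properties whose values reflect the buffer-size and driver decisions described above; the guard and the fingerprint shape are illustrative assumptions, and in non-browser runtimes (where `AudioContext` doesn't exist) the function simply returns null.

```javascript
// Sketch: reading latency-related values from AudioContext.
// No audio is ever played or recorded -- constructing the context is enough.
function audioLatencyFingerprint() {
  // AudioContext only exists in browsers; bail out elsewhere.
  if (typeof AudioContext === "undefined") return null;

  const ctx = new AudioContext();
  const fp = {
    // Hardware/OS sample rate, e.g. 44100 or 48000.
    sampleRate: ctx.sampleRate,
    // Processing latency, derived from internal buffer sizes.
    baseLatency: ctx.baseLatency,
    // Estimated output-device latency (OS API + driver + hardware).
    outputLatency: ctx.outputLatency,
  };
  ctx.close();
  return fp;
}
```

Note that the values cluster around a handful of OS-typical buffer sizes, which is why they reveal roughly the platform rather than a unique device identity.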
AudioContext latency information can be retrieved without the user's consent or knowledge, on websites that never use audio at all. It's a security disaster. I know for a fact that AudioContext is routinely abused on ad networks and by anti-fraud solution providers. Given its widespread use for purposes it wasn't designed for (in fact, this information is used primarily for tracking and spying), it's safe to say it's a tracking scheme.
The fact that Google directly and knowingly benefits financially is a smoking gun. They don't give a damn that it's not secure -- in fact, they profit from the fact that it's a leaky sieve.
You said AudioContext is sometimes used for purposes which benefit the user. Well, isn't that swell: the user is maliciously tracked via this security exploit 99% of the time and gets to reap the "benefits" 1% of the time.