> Wonder why they take this approach though, as it is clearly over-engineering (if I correctly understand that the goal is just to make vocals volume adjustable).
Depends on what the other non-functional requirements were, e.g. if the NFRs were as follows:
* Cannot increase bandwidth / mobile data usage.
* Cannot impact music quality / bitrate.
* Has to work offline.
* Cannot increase on-device storage.
* Has to be responsive.
Then two audio streams might not work.
Another advantage of doing it on-device is that it doesn't change any of the backend architecture either. Streaming separate tracks might mean a lot of change across a lot of systems for a feature that only adds a small amount of functionality - i.e. re-architecting your entire backend and streaming pipeline around separated audio tracks might not be the right focus.
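To make the on-device part concrete: once a separation model has estimated a vocal stem from the already-decoded mix, the actual "adjustable vocals" step is just a cheap re-mix with a gain. A minimal sketch (the `vocals` stem and the gain-clipping choice are assumptions, not how any particular app does it):

```python
import numpy as np

def remix_with_vocal_gain(mix: np.ndarray, vocals: np.ndarray, vocal_gain: float) -> np.ndarray:
    """Re-mix a track with the estimated vocal stem scaled by vocal_gain.

    `mix` is the normal decoded audio (samples in [-1, 1]); `vocals` is a
    stem of the same shape, assumed to come from an on-device separation
    model. vocal_gain = 1.0 reproduces the original mix, 0.0 mutes vocals.
    """
    instrumental = mix - vocals                 # residual after removing estimated vocals
    out = instrumental + vocal_gain * vocals    # put vocals back at the chosen level
    return np.clip(out, -1.0, 1.0)              # avoid clipping when boosting vocals
```

The point being that the only new moving part is the separation model shipped to the client; the stream, the cache and the storage format stay exactly as they are.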