The OpenGL compute shader feature is not needed for doing GPU compute. It's just another type of shader that is not connected to other GL rendering that may be happening at the same time. People have been doing GPGPU with the traditional shader types for a long time. And WebGL 2.0 is a huge upgrade over 1.0 from a GPGPU point of view.
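To make that concrete, here is a minimal sketch of the classic fragment-shader GPGPU pattern in WebGL 2: the output array is a float texture attached to a framebuffer, each fragment acts as one "thread", and the fragment shader is the kernel. compileProgram and drawFullScreenQuad are assumed helpers wrapping ordinary WebGL boilerplate, and the input is assumed to be an RGBA32F texture; this is an illustration, not a hardened implementation.

    declare function compileProgram(gl: WebGL2RenderingContext, vs: string, fs: string): WebGLProgram;
    declare function drawFullScreenQuad(gl: WebGL2RenderingContext): void;

    const fullScreenVS = `#version 300 es
    void main() {
      // Full-screen triangle from gl_VertexID; no vertex buffer needed.
      vec2 p = vec2((gl_VertexID & 1) * 4 - 1, (gl_VertexID & 2) * 2 - 1);
      gl_Position = vec4(p, 0.0, 1.0);
    }`;

    const kernelFS = `#version 300 es
    precision highp float;
    uniform highp sampler2D u_input;  // input data packed into a texture
    out vec4 outColor;                // one output element per fragment

    void main() {
      ivec2 idx = ivec2(gl_FragCoord.xy);  // which element this "thread" owns
      vec4 x = texelFetch(u_input, idx, 0);
      outColor = x * x + 1.0;              // the "kernel": an element-wise computation
    }`;

    function runKernel(gl: WebGL2RenderingContext, input: WebGLTexture,
                       w: number, h: number): WebGLTexture {
      gl.getExtension('EXT_color_buffer_float');  // needed to render into float textures
      const out = gl.createTexture()!;
      gl.bindTexture(gl.TEXTURE_2D, out);
      gl.texStorage2D(gl.TEXTURE_2D, 1, gl.RGBA32F, w, h);

      const fbo = gl.createFramebuffer()!;
      gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
      gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, out, 0);
      gl.viewport(0, 0, w, h);

      const prog = compileProgram(gl, fullScreenVS, kernelFS);
      gl.useProgram(prog);
      gl.activeTexture(gl.TEXTURE0);
      gl.bindTexture(gl.TEXTURE_2D, input);
      gl.uniform1i(gl.getUniformLocation(prog, 'u_input'), 0);
      drawFullScreenQuad(gl);
      return out;  // read back with gl.readPixels if needed
    }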
It is possible to perform some computations using OpenGL ES 3.0 / WebGL 2.0, but many types of operations (e.g. anything that involves random-access writes) are impossible, and many others (anything that normally requires shared memory) are very inefficient. Programming GPUs through WebGL 2.0 is akin to programming desktop GPUs pre-CUDA: it is too intricate for the approach to take off.
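As an example of why the shared-memory case hurts: summing N values, which a CUDA kernel would do with one workgroup and shared memory, becomes a sequence of render passes in WebGL 2, each pass gathering from the previous output texture and shrinking it by 2x per dimension. A sketch of one such pass follows; the host-side ping-pong between two framebuffers is omitted, and the names are illustrative.

    const reduceFS = `#version 300 es
    precision highp float;
    uniform highp sampler2D u_prev;  // result of the previous reduction pass
    out vec4 outColor;

    void main() {
      // Each fragment gathers four texels from the previous level and sums them.
      ivec2 idx = ivec2(gl_FragCoord.xy) * 2;
      outColor = texelFetch(u_prev, idx,               0)
               + texelFetch(u_prev, idx + ivec2(1, 0), 0)
               + texelFetch(u_prev, idx + ivec2(0, 1), 0)
               + texelFetch(u_prev, idx + ivec2(1, 1), 0);
      // Not possible here: writing this partial sum to an arbitrary runtime-chosen
      // location (scatter), or synchronizing with neighbouring fragments.
    }`;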
A compute shader extension for WebGL 2.0 would be cool, but it would require porting a large part of OpenGL ES 3.1: OpenGL ES 3.0 / WebGL 2.0 doesn't even include random-access buffers (SSBOs).
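For comparison, this is roughly the shape of what's missing, written as OpenGL ES 3.1 GLSL (not runnable in WebGL 2): a compute shader with an SSBO, workgroup shared memory, and a barrier. None of these constructs exist in OpenGL ES 3.0 / WebGL 2.0, which is why a "compute extension" would not be a small addition.

    const es31ComputeShader = `#version 310 es
    layout(local_size_x = 64) in;

    layout(std430, binding = 0) buffer Data {
      float values[];          // random-access read/write buffer (SSBO)
    };

    shared float tile[64];     // workgroup shared memory

    void main() {
      uint i = gl_GlobalInvocationID.x;
      tile[gl_LocalInvocationID.x] = values[i];
      barrier();               // synchronize the workgroup
      // Exchange with a neighbouring invocation via shared memory, then write back.
      // SSBOs also permit writes at arbitrary computed indices (scatter).
      values[i] = tile[gl_LocalInvocationID.x ^ 1u];
    }`;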
I agree that "too intricate" is the other main problem in WebGL uptake. But we're not even seeing WebGL versions of textbook GPU applications that are straightforward to implement with the tools WebGL 2 gives.
For example, here are WebGL compatibility stats from a site that has counters on technical and graphics programming web sites: http://webglstats.com/ - As you can see, WebGL 2 compatibility is only at 40%, despite having been enabled in stable Firefox and Chrome for over a year. WebGL 1, a 7-year-old standard, is now at 97%. (And even for the nominally WebGL-enabled browsers, users often report browser or OS crashes, so the percentages are upper bounds.)
From an armchair quarterback position, if I wanted to drive GPU compute uptake, I'd work on compiler tech and tools targeting WebGL GLSL.
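As a toy illustration of that idea (not any existing tool), here is a tiny "compiler" that turns an element-wise expression into a WebGL 2 fragment shader, hiding the texture/framebuffer plumbing from the user. Real projects in this space would do far more (memory layout, kernel fusion, reductions); this only shows the shape of the approach.

    function emitMapKernel(expr: string): string {
      // `expr` is a GLSL expression over the variable `x`, e.g. "x * x + 1.0".
      return `#version 300 es
    precision highp float;
    uniform highp sampler2D u_input;
    out vec4 outColor;
    void main() {
      vec4 x = texelFetch(u_input, ivec2(gl_FragCoord.xy), 0);
      outColor = ${expr};
    }`;
    }

    // Usage: generates the same kernel as the hand-written fragment shader above.
    const generated = emitMapKernel('x * x + 1.0');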