In the short term, you wouldn't want to run databases in WASM. You could, but it's not really worth the effort: as long as the WASM runtime allows TCP connections, you can just connect to any hosted DB as usual.
For performance, we ship three compiler backends: LLVM (fast execution, but slowest compilation), Cranelift (a middle ground), and Singlepass (our own compiler: very fast compilation, and safe for compiling untrusted code, but slowest execution). There is a slowdown, but the goal is to keep it to a minimum (proper performance tracking is on the bucket list). We are pre-compiling python.wasm (which was already optimized when it was compiled to WASM) with LLVM, so you should get assembly that is very close to native execution, minus the unavoidable VM overhead. The goal is to make the interoperability gains worth the performance hit.
Sure, maybe it's not as fast to execute as a native implementation, but having ANY implementation is far more useful than none at all.
Here's an example where performance wouldn't matter as much:
Let's say I'm the author of a programming language and I want to add a db layer to my standard lib.
Via WASM I can use a working backend while I'm prototyping the API for my db layer. If the performance isn't good enough, I can start implementing my own "native" version with feature parity.
I can then reuse some of the tests from the original library (via WASM) to make sure my implementation actually has feature parity with the original.
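A minimal sketch of that workflow (every name here is hypothetical; the "WASM-backed" class just stands in for calls into the wrapped library, using sqlite3 so the example is self-contained):

```python
import sqlite3

# Hypothetical minimal API for the new stdlib db layer.
class DbLayer:
    def put(self, key, value): raise NotImplementedError
    def get(self, key): raise NotImplementedError

# Stand-in for the prototype backed by an existing library running in WASM.
# (In reality this would call into the wrapped .wasm module.)
class WasmBackedDb(DbLayer):
    def __init__(self):
        self._conn = sqlite3.connect(":memory:")
        self._conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
    def put(self, key, value):
        self._conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (key, value))
    def get(self, key):
        row = self._conn.execute("SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
        return row[0] if row else None

# Later: the hand-written "native" implementation exposing the same API.
class NativeDb(DbLayer):
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

# The same conformance tests run against both backends, so the native
# version can be checked for feature parity with the original.
def run_parity_tests(db):
    db.put("lang", "wasm")
    assert db.get("lang") == "wasm"
    db.put("lang", "native")          # overwrite semantics must match
    assert db.get("lang") == "native"
    assert db.get("missing") is None  # missing-key semantics must match

for backend in (WasmBackedDb(), NativeDb()):
    run_parity_tests(backend)
```

The point is that the test suite, not the backend, defines the contract: once both implementations pass it, swapping the WASM prototype for the native version is low-risk.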