If you are running 1-2 copies of applications that use a very large amount of memory at runtime while being pretty small themselves, not much (and that's admittedly a pretty common case).
However, if you, for example, need to run 20 copies of a 50MB application that uses 100MB of memory at runtime in a somewhat isolated environment, you would only need to provision a machine with 20x100MB + 50MB = 2050MB, plus whatever the OS needs (~100MB?), to keep everything in memory. If you made a VM for each of them, you would need 20x100MB + 20x50MB + 20x OS overhead = 5000MB'ish, roughly 2.3x the memory in this case.
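Here's a minimal sketch of that arithmetic in Python (the 50MB/100MB/~100MB figures are the made-up numbers from the example above, not measurements):

```python
# Back-of-envelope comparison, using the made-up numbers from above.
app_binary_mb = 50     # size of the application binary
app_runtime_mb = 100   # memory the app uses while running
os_overhead_mb = 100   # rough guess for a minimal OS
copies = 20

# Containers share one kernel and one copy of the binary (via the
# shared page cache), so the binary and OS are counted only once.
containers_mb = copies * app_runtime_mb + app_binary_mb + os_overhead_mb

# Each VM carries its own copy of the binary plus its own guest OS.
vms_mb = copies * (app_runtime_mb + app_binary_mb + os_overhead_mb)

print(f"containers: {containers_mb} MB")                # 2150 MB
print(f"VMs:        {vms_mb} MB")                       # 5000 MB
print(f"overhead:   {vms_mb / containers_mb - 1:.0%}")  # ~133% more
```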
Also, starting up a container is a lot faster than starting a VM. If the image is in the local cache, your application likely starts loading within 500ms, while VM boot times are usually measured in tens of seconds to minutes.
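If you want to check the container side on your own machine, something like this works (assuming Docker is installed and the alpine image is already pulled; the first run would include pull time and be much slower):

```python
import subprocess
import time

# Time a no-op container from start to exit. Assumes Docker is
# installed and the alpine image is already in the local cache.
start = time.perf_counter()
subprocess.run(["docker", "run", "--rm", "alpine", "true"], check=True)
print(f"container start-to-exit: {time.perf_counter() - start:.3f}s")
```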