As an alternative, if you containerize everything in the stack, you can simply spin up a separate, isolated stack of containers for each customer. It then also becomes trivial to offer it 'on premise' or 'in the cloud'. No need to add complexity at the schema level and make a monolith support multiple tenants.
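For the sake of illustration, here's a minimal sketch of what per-tenant provisioning could look like with docker compose, assuming a compose file parameterized by environment variables (the project naming scheme, file layout, and variable names are hypothetical, not anyone's actual setup):

```python
import os
import subprocess

def provision_tenant(tenant: str, db_password: str) -> None:
    """Spin up an isolated container stack for one tenant.

    A unique compose project name (-p) gives each tenant its own
    networks, volumes, and container names; tenant-specific settings
    are passed through environment variables that the (hypothetical)
    docker-compose.yml is expected to reference.
    """
    subprocess.run(
        [
            "docker", "compose",
            "-p", f"tenant-{tenant}",      # isolates this tenant's stack
            "-f", "docker-compose.yml",
            "up", "-d",
        ],
        env={
            **os.environ,                  # keep PATH etc. intact
            "TENANT_NAME": tenant,
            "POSTGRES_PASSWORD": db_password,
        },
        check=True,
    )

if __name__ == "__main__":
    provision_tenant("acme-corp", "change-me")
```

The same script works whether the target host is your own cloud box or a customer's on-prem machine, which is part of the appeal of this approach.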
Also, this list is quite literally supabase (https://supabase.com/) - I cannot recommend it enough, especially if OP is solo, which it sounds like is the case.
I like this idea a lot, personally. Using separately deployed customer instances for "multi-tenancy" also eliminates the need for sharding databases at scale, since most of your scaling will just be new deployments. Overall, this suggestion has an appealing set of trade-offs if you have the DevOps chops to pull it off.
Just watch out for really big customer instances (but then they should be paying more than enough to justify spending time on their particular scaling issues).
I realized after commenting that if it's specifically a product where you actually need or want these customers' data to "talk" to each other, then your architecture might be a bit simpler, since you can expose what you need between all the tenants. I can't speak to this approach much more, though, since I haven't done a project like that yet.
While supabase could be considered a "technology", it's really more or less a set of fancy wrappers around a Postgres database. And even then, the wrappers it uses are rather well known:
- GoTrue for auth (3.2K GitHub stars)
- PostgREST to expose Postgres as a REST API (19.8K GitHub stars) - quick example below
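To make the PostgREST point concrete, here's a small sketch of what querying one of those auto-generated endpoints can look like. The base URL and the 'customers' table are made up for illustration; the query-string filter syntax (column=operator.value, order, limit) is standard PostgREST:

```python
import requests

# Hypothetical PostgREST endpoint; adjust to your own deployment.
POSTGREST_URL = "http://localhost:3000"

def list_active_customers():
    """Fetch rows from a hypothetical 'customers' table where active = true.

    PostgREST exposes each table/view as a route, with filters, ordering,
    and limits expressed in the query string.
    """
    resp = requests.get(
        f"{POSTGREST_URL}/customers",
        params={"active": "eq.true", "order": "created_at.desc", "limit": 10},
        headers={"Accept": "application/json"},
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for row in list_active_customers():
        print(row)
```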