Nutshell does this! We have 5,000+ MySQL databases for customers and trials. Each customer is fully isolated in their own database, as well as their own Solr "core."
We've done this from day one, so I can't really speak to the downsides of not doing it. The peace of mind that comes from some very hard walls preventing customer data from leaking is worth a few headaches.
A few takeaways:
- Older MySQL versions struggled to quickly create 100+ tables when a new trial was provisioned (on the order of a minute to create the DB + tables). We wanted this to happen in seconds, so we took to preprovisioning empty databases. This hasn't been necessary in newer versions of MySQL.
- Thousands of DBs x 100s of tables x `innodb_file_per_table` does cause a bit of FS overhead and takes some tuning, especially around `table_open_cache` (a rough tuning sketch follows this list). It's not insurmountable, but it does require attention.
- We use discrete MySQL credentials per customer to reduce the blast radius of a potential SQL injection (see the provisioning sketch after this list). Others in this thread mentioned problems with connection pooling, but we've never experienced trouble here. We do 10-20k requests / minute.
- This setup doesn't seem to play well with AWS RDS. We did some real-world testing on Aurora and saw lousy performance once we got into the hundreds / thousands of DBs. We'd observe slow memory leaks and eventual restarts. We run our own MySQL servers on EC2.
- We don't split ALBs / ASGs / application servers per customer. Only the MySQL / Solr layer is isolated per tenant; Memcache and worker queues are shared.
- We do a DB migration every few weeks. As a single-tenant app would, we run each migration under application code that can handle either version of the schema. Each database has a table like ActiveRecord's migrations to track all applied deltas (sketched below), and we have tooling to roll out a delta across all customer instances and monitor the results.
- A fun bug to periodically track down is when one customer has an odd distribution of data that changes cardinality in such a way that different indexes are chosen for a difficult query. In that case, we compare `EXPLAIN` output from a known-good database against the poorly-performing one (example below).
- This is all managed by a pretty lightweight homegrown coordination application ("Drops"), which tracks customers / usernames and maps them to resources like databases and Solr cores.
- All of this makes it really easy to back up, archive, or snapshot a single customer's data for local development.
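For illustration, here's a minimal sketch of what per-customer provisioning can look like. The database name, username, host mask, and grants are hypothetical, not our actual tooling:

```sql
-- Hypothetical provisioning for a new trial: one database plus one
-- narrowly-scoped MySQL user per customer.
CREATE DATABASE customer_1234;
CREATE USER 'cust_1234'@'10.0.%' IDENTIFIED BY 'a-long-random-secret';

-- The app connects as this user, so even a successful SQL injection is
-- confined to this one customer's database.
GRANT SELECT, INSERT, UPDATE, DELETE ON customer_1234.* TO 'cust_1234'@'10.0.%';
```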
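The cache tuning is along these lines; the numbers here are made up, and the right values depend entirely on table count, RAM, and `open_files_limit`:

```sql
-- Illustrative values only: with thousands of DBs x hundreds of tables,
-- the defaults for these caches are far too small.
SET GLOBAL table_open_cache       = 100000;
SET GLOBAL table_definition_cache = 50000;
```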
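The per-database delta ledger looks roughly like this (a sketch modeled on ActiveRecord's schema_migrations table; the names are hypothetical):

```sql
-- Every customer DB carries one of these, so rollout tooling can see
-- exactly which deltas each database has applied.
CREATE TABLE applied_deltas (
  version    VARCHAR(64) NOT NULL PRIMARY KEY,  -- e.g. '20140212_add_leads_index'
  applied_at DATETIME    NOT NULL
);

-- Finding a straggler: a DB that's missing the newest delta returns 0 here.
SELECT COUNT(*) FROM applied_deltas WHERE version = '20140212_add_leads_index';
```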
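And the index-choice debugging amounts to diffing query plans across two tenants (the query and table names here are hypothetical):

```sql
-- The same query against a known-good customer DB and a slow one; differences
-- in the `key` and `rows` columns of the output usually point at a
-- cardinality-driven index choice.
EXPLAIN SELECT * FROM customer_good.leads
 WHERE status = 'open' ORDER BY updated_at DESC LIMIT 50;
EXPLAIN SELECT * FROM customer_slow.leads
 WHERE status = 'open' ORDER BY updated_at DESC LIMIT 50;
```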
Webhooks would address this challenge for a certain class of applications (those with their own server component), but if you imagine building a solid desktop email app (where APNS/GCM aren't available), then you have a problem.
Yep, you can set up a webhook with `*` filter criteria, but I think OP was talking about a more specific notification system for the delta sync endpoint.
We'll definitely be trying to do this in a few months, once we have an idea of the general bump we saw in trials / signups. But a large part of this endeavor was about more than direct sales.
Personally, I met an NYT reporter who's prepping a story on small-business CRM, and I met the person in charge of CRM for Sony Music. Several of our larger customers stopped by and saw us as the mature, growing company that we are.
We had a great conversation with the CTO of one of our competitors at a meetup of The Small Business Web.
We're prepping to launch a rebooted Zendesk integration, and we bumped into their sales team at a bar one night.
It'll be several months before we can put a harder figure on those interactions, but we'd never be able to buy them on AdWords.
I'd love to hear from our peers on the value they're able to gain from this kind of thing!
We became profitable at http://nutshell.com/ about 2.5 years after we launched. Building something as expansive as a CRM takes a lot of time. We made the decision to begin at the small end of the market (with pricing to match), and started with a lot of small customers as we scaled the product to bigger companies.
We also have higher expenses in terms of our support team (3 full-time) — there are higher expectations for sharp, fast support in an industry like CRM.
If you don't mind sharing, I'd love to know some more. According to what you've written plus your website: founded in 2010 (FOWD), so you broke even sometime in 2013, and you're currently supporting 14-ish people while profitable. Kudos. That is really cool.
- How many founders do you all have?
- How did you get your initial set of (small) customers?
- How are you all expanding to new customers (bigger customers): via advertising? word of mouth? initial customers increasing seats? other? salesmen?
- Why did you all pick CRM initially? (isn't it pretty crowded already: SugarCRM, ZOHO, Oracle, et al.). I'm genuinely curious as I don't know much about the market.
There are four of us on the founding team; the other three have some outside obligations. I spend 90% of my day coding or working on product with our designer.
> - How did you get your initial set of (small) customers?
Primarily through Google AdWords. Initially we had an inside sales rep who worked hard to talk to every trial, even if it meant spending a few hours walking a 4-user shop through signing up.
We were also one of the early CRMs to be in the Google Apps Enterprise Marketplace (i.e. integration w/ Google Apps). We got some free front-and-center placement there, and this brought in a lot of customers. This was in summer 2011, and for a while it doubled our monthly trials.
> - How are you all expanding to new customers (bigger customers): via advertising? word of mouth? initial customers increasing seats? other? salesmen?
We're definitely shifting toward more organic growth. We're also doubling down on marketing efforts like a booth and sponsorships at SXSW. The product has also matured to the point where it's a genuine competitor at the lower end of the Salesforce market.
And we're investing heavily in integrations with other services, which (in my mind) are the "partner / channel program" of the SaaS world.
But this is an area of interest to me, and one we've been talking about a lot. I'm interested to hear what others are doing to drive this kind of heavy growth.
> - Why did you all pick CRM initially? (isn't it pretty crowded already: SugarCRM, ZOHO, Oracle, et al.). I'm genuinely curious as I don't know much about the market.
It was born out of the sheer ineptitude of the market. Salesforce is ugly, expensive, and nobody likes it. Sugar is an OSS clone of Salesforce. Zoho is trying to be Microsoft Exchange as well as a CRM. </hyperbole>
We focused on bringing beautiful design and ease of use to CRM: something that a lot of smart people are doing for various components of business software (Freshbooks, MailChimp, etc.).
Great article, Compass. As a CRM vendor (I'm a co-founder @ nutshell.com), we're pretty excited about the growth in our sector. This also meshes with our strong belief in building APIs and integrating with a ton of third parties.
While the goliaths do build platforms, they subsequently tend to slowly tighten their hold on those platforms (cf. LinkedIn's API shutdowns, Salesforce's extra charges for API usage). It's great working with other small upstarts that are interested in making software play nicely with each other (something that Compass obviously does quite nicely).
Check us out at http://nutshell.com — we've got a one-click import tool from Highrise. We're an established, profitable company focused on CRM with gorgeous UI.