With CouchDB, you front-load all of your disappointment. In exchange, everything that CouchDB can do has compelling big-O performance. For example, all queries finish in logarithmic time, including one-to-many, one-to-one, and merge-joins. Map-reduce is not a job you run; it is a living data set that always exists and always reflects the latest changes to your data. (Updating a map-reduce result takes linear time in the number of updates, if I recall correctly.)
Plus, the BigCouch builds allow you to specify your redundancy needs. The preceding paragraph still holds true. Nothing has changed. You just get to throw hardware at the problem to guard against machine failures.
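To make the "living data set" point concrete, here is a minimal sketch against CouchDB's HTTP API using Python's `requests`. The node address, admin credentials, database and view names, and the n/q creation parameters (a BigCouch-style clustering knob) are all assumptions for illustration, not anything from this thread.

```python
import requests

COUCH = "http://admin:secret@127.0.0.1:5984"  # assumed local node and credentials
DB = COUCH + "/articles"                      # hypothetical database name

# BigCouch-style redundancy at creation time: n replica copies over q shards.
# (A plain single-node CouchDB may ignore or reject these parameters.)
requests.put(DB, params={"n": 3, "q": 8}).raise_for_status()

# The map/reduce view lives in a design document. CouchDB keeps its index
# up to date incrementally, so reading it later is a lookup, not a batch job.
design = {
    "views": {
        "count_by_type": {
            "map": "function (doc) { emit(doc.type, 1); }",
            "reduce": "_count",  # built-in reduce
        }
    }
}
requests.put(DB + "/_design/stats", json=design).raise_for_status()

# Write a few documents.
for t in ["post", "post", "comment"]:
    requests.post(DB, json={"type": t}).raise_for_status()

# Read the always-current reduction: the index is caught up only for the
# documents written since the last read (linear in the number of updates),
# then the query itself is a logarithmic B-tree lookup.
rows = requests.get(
    DB + "/_design/stats/_view/count_by_type", params={"group": "true"}
).json()["rows"]
print(rows)  # e.g. [{'key': 'comment', 'value': 1}, {'key': 'post', 'value': 2}]
```

The point of the sketch is the last call: however much data has accumulated, the reduce result is already sitting in the view's B-tree, waiting to be read.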
CouchDB is slow. Its VM is pokey. Its disk format is bulky. Its protocol is bloated.
CouchDB is fast. Everything that you can do, you can do in logarithmic time.
CouchDB is neither slow nor fast, but predictable. Fun fact: the entire CouchDB Erlang code base is almost the same size as the NodeJS standard library (20k apples vs. 15k oranges).
To answer your question, Snappy compression and the view optimizations will be a welcome boost for the other speed question: speed of development, time to market. If you think a compile step is a time sink, rebuilding an index over all of your data is just untenable. So the optimizations will improve the day-to-day experience, but they will not change CouchDB's fundamental value proposition.
Exactly. The restrictive query model and data read/write patterns meant all tech decisions had to be bounded by Couch's requirements (normally a no-no), but over 2 years, 40 TB, and a seamless move from 1 to 14 nodes, there was no lost data and no crippling cluster events.
I prefer to think of it as a kernel, especially in the context of all the systems built on top of it. Not just Couchbase and Cloudant, but also the various users who have custom wrappers (Meebo, Bing back in the day, newer stuff like Refuge and the mobile versions...)
This is a good way to look at it, I think. People often ask if I'd use CouchDB again if I was building another similar system - I would, but I'd restrict it to a subset of the problem that was best-suited to CouchDB's strengths/quirks.
Thanks, I'll add a FAQ about location, latency, etc. after some sleep.
It's EC2 us-east at this time although we'll probably be evaluating alternatives soon. But moving off EC2 is a big step because so many people are there.
I wasn't suggesting moving away from EC2, but you might consider also deploying to places like Rackspace, Joyent, or local/regional providers so you can offer their customers low-latency connections.
I'm in a similar situation. My favorite podcast is The Skeptic's Guide to the Universe, which follows in the footsteps of Houdini and Sagan, applying critical thinking to life.
Still, even though No Agenda is basically the complete opposite, I just can't stop listening. There's something about it.
Also, Dvorak is a columnist, people, not a journalist. There is a difference.
We would need a place that serves drinks and can seat several people comfortably in a discussion format.
My wife and I are wracking our brains but I think the upshot is there is no venue in Bangkok which is easy to find, serves drinks, and is quiet. Also, wireless Internet access would be nice. I'm thinking about a cafe.
Starbucks on Lungsuan road is nice. If you'd prefer to support the Thai Man instead of the American Man then the True coffee shop in Siam Paragon is also nice. The nice thing about Paragon is, for YCers who have already made your millions, the Ferrari dealership is conveniently located right there in the mall.
There is the semi-weekly Beercamp meetup, which vaguely overlaps. It is largely composed of the English-speaking Barcamp community. Not sure if that counts.