Hacker News

I don't value the underlying database as much anymore since I have been working with Rails apps. The database is just an API. I want to write my code in such a way that I could unplug Postgres and plug in SQL Server tomorrow without skipping a beat. Using Arel, you should largely be able to do that.
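For instance, the query-building layer can stay vendor-agnostic by constructing queries as Arel relational-algebra objects rather than as SQL strings. A minimal sketch (the `users` table and its columns are hypothetical, and in a real app Arel needs an ActiveRecord connection to render SQL):

```ruby
require 'arel'

# Build a query against a hypothetical users table without writing any
# vendor-specific SQL by hand. Arel represents the query as an AST and
# renders SQL for whichever adapter the connection uses.
users = Arel::Table.new(:users)

query = users
  .project(users[:id], users[:email])      # SELECT id, email
  .where(users[:active].eq(true))          # WHERE active = true
  .order(users[:created_at].desc)          # ORDER BY created_at DESC
  .take(10)                                # LIMIT 10

# With a configured connection, this emits SQL in the adapter's dialect:
puts query.to_sql
```

The point being made upthread is that this portability holds for query *syntax*; it says nothing about whether the resulting query plans perform comparably.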


1. No app of significant size will actually allow this. If you've ever tried to switch over a large data set with a high query volume you'll know that it never "just works".

2. Just because there's an API behind it doesn't mean that it doesn't matter which one you use. As a trivial example, Linux and OS X are both POSIX, but it's hard to argue that it doesn't matter which one you use.


I bet that after you switched your data over, you rewrote the conflicting SQL in a much more generic way.


No, the issue is not conflicting SQL. An ORM solves that problem extremely easily.

It's the fact that query planners and performance optimizations implemented by each database are different, and thus directly porting your current schema and queries doesn't work.


> No app of significant size will actually allow this.

Never heard of ORMs? In the Java world it is very, very common to have database independence.

And most enterprise deployments (i.e. significant size) are Java.


Of course I've heard of ORM.

Theoretical "database independence" is easy to achieve: just follow the SQL standard and all your queries will execute. There's nothing Java-specific about this at all; ORMs are extremely common across most languages.

Practical "database independence" is very hard to achieve, because the query planners and performance optimizations differ between databases.


I agree with you. I'd like to expand a little:

The main people who can get a benefit from database independence are those who ship software to a client site to be executed. Imagine if you wrote some kind of accounting software package that you handed off to a corporation's IT group to manage: supporting more databases is probably more important than supporting more operating systems (mostly because the number of OSes requiring support has collapsed for most new applications). There will be installations running on different databases all the time.

The second most beneficial effect is if you are planning to change databases frequently (why?). Personally, I don't think this is usually worth it for technical reasons; you may as well accept that it'll suck when it comes time to do this, and take whatever shortcuts make life easier and more correct before then. However, it can be useful as a hedge against your database management system vendor trying to play the role of an extortionate gangster or, alternatively, lying down and dying.

One thing I like about Postgres is that it is a project, and not a product. There is no overarching database vendor to extort you or die off suddenly: project death is conditional on a lack of interest -- both financial and personal -- in the project's maintenance and improvement. Truly, it will have to be completely obsolete in the marketplace to die. PGCon getting hit by a meteor would be a major setback, though.

The downside is that a project can't often make strategic or speculative investments in whizbang features: there has to be some gestalt agreement that a new feature is both worthwhile and implemented very well (both in terms of correctness and maintainability: the odds of you being able to ask another programmer in real time what is going on in a module are very low), and that means quite a bit of humming and hawing before committing to a feature or approach.

It's not for everyone: some whizbang feature or other may enable your business to grow fast enough, and survive long enough, to deal with the potential pitfalls of having a potentially extortionate or dead database vendor (the old generation: the usual proprietary-RDBMS suspects; probably the new generation: database-implementation-by-wire only). But it's something to put on the scales, at least.


There are still some advantages to the database you choose. For example, ActiveRecord will support the JSON datatype that comes with Postgres 9.2. While they may attempt to make this work in some form for other databases, the performance differences between the two will be quite vast.


The JSON data type is nothing more than a self-validating varchar field, i.e. fairly pointless.

Better off just validating the JSON yourself in the application layer and maintaining cross-database compatibility. To be honest, it is rare that you would even need to validate explicitly: when you deserialize into objects, it will just fail at that point.
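Application-layer validation is nearly free in Ruby, since a failed parse already raises. A minimal sketch using only the standard library (the helper name is made up):

```ruby
require 'json'

# Returns the parsed value if the string is valid JSON, otherwise nil.
# JSON.parse raises JSON::ParserError on malformed input, which is the
# "it will just fail at that point" behavior described above.
def parse_json_or_nil(raw)
  JSON.parse(raw)
rescue JSON::ParserError
  nil
end

parse_json_or_nil('{"user_id": 42}')  # => {"user_id"=>42}
parse_json_or_nil('not json')         # => nil
```

Whether this is enough depends on whether you also want the database to reject bad rows written by other clients, which is the case the JSON column type covers.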


Sure, but combined with plv8 you can trivially have indexes on the JSON.
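One way this is typically done is an expression index over a plv8 function that extracts a key. A sketch as a Rails migration (the table, column, and function names are all hypothetical, and this assumes the plv8 extension is installed):

```ruby
# Hypothetical migration: index the 'user_id' key inside a JSON column
# named payload on an events table, via an immutable plv8 function.
class AddJsonIndexToEvents < ActiveRecord::Migration
  def up
    execute <<-SQL
      CREATE OR REPLACE FUNCTION json_string(data json, key text)
      RETURNS text AS $$
        -- Depending on the plv8 version, a json argument may arrive as a
        -- string or as an already-parsed object; handle both.
        var obj = (typeof data === 'string') ? JSON.parse(data) : data;
        var value = obj[key];
        return value === undefined ? null : String(value);
      $$ LANGUAGE plv8 IMMUTABLE STRICT;

      CREATE INDEX index_events_on_payload_user_id
        ON events (json_string(payload, 'user_id'));
    SQL
  end

  def down
    execute "DROP INDEX index_events_on_payload_user_id;"
    execute "DROP FUNCTION json_string(json, text);"
  end
end
```

The function must be declared IMMUTABLE for Postgres to accept it in an index expression, and queries only use the index when they filter on the same `json_string(payload, 'user_id')` expression.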


"The database is just an API."

APIs matter a lot. In my opinion, software engineering could almost be defined as the practice of designing good APIs.

I happen to think Postgres, and its brand of SQL, are great APIs. It has a lot to offer beyond a lowest-common-denominator API that works over any database.


> The database is just an API.

Tell me how that leaky abstraction is working for you when it sinks.


Oh, a downvote for what would be a good business decision in most companies? (Unless super-high performance is necessary?)


Didn't downvote you, but read the guidelines:

Resist complaining about being downmodded. It never does any good, and it makes boring reading.

http://ycombinator.com/newsguidelines.html


Good link, I'll not complain next time :)



