Give PonyORM a chance (jakeaustwick.me)
51 points by kozlovsky on May 26, 2014 | hide | past | favorite | 63 comments



I do not really understand the desire for ORMs to try and recreate the experience of writing SQL. Why not just use SQL, then?

I understand that the ORM is trying to smooth over differences in implementation, providing the possibility of change from one DB access layer to another.

I have yet to see anyone do that on a real project, which makes me really wonder at the point of using an ORM at all.

I cannot quantify the amount of time I've lost figuring out what the hell I must do to generate a relatively simple SQL statement.


Well, ORMs are useless if you're of the opinion that string interpolation and concatenation are a good way of composing logical units. Many of us prefer a proper API for doing that, and good ORMs like SQLAlchemy provide us with one.


PDO in PHP has parameter binding and a way of loading a row from a resultset into a class. Safe, simple, and flexible. That's good enough for me. YMMV.
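Python's standard library supports the same pattern: `sqlite3` does parameter binding, and a `row_factory` can load each row into a class, much like PDO's FETCH_CLASS mode. A minimal sketch (the `Article` class and schema are invented for illustration):

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class Article:
    # Hypothetical model class, analogous to the class PDO would hydrate
    id: int
    title: str

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO articles (title) VALUES (?)", ("Give PonyORM a chance",))

# Load each result row into an Article instance, PDO-style
conn.row_factory = lambda cursor, row: Article(*row)
article = conn.execute(
    "SELECT id, title FROM articles WHERE id = ?", (1,)
).fetchone()
```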


"foo > 123 or foo IS NULL and bar < 12"

Now write this in SQLAlchemy without strings, without concatenation, and let's see which provides you with more clarity.


(FooBar.foo > 123) | (FooBar.foo == None) & (FooBar.bar < 12)

No strings, and I get an actual expression object instance that I can, for example, store in a variable and reuse in any number of queries.
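The trick is that comparisons on column attributes are overloaded to return expression objects instead of booleans. A toy sketch of the mechanism (not SQLAlchemy's actual internals, just the idea):

```python
class Column:
    """Toy column whose comparisons build SQL text instead of evaluating."""
    def __init__(self, name):
        self.name = name
    def __gt__(self, other):
        return Expr(f"{self.name} > {other!r}")
    def __lt__(self, other):
        return Expr(f"{self.name} < {other!r}")
    def __eq__(self, other):
        # SQLAlchemy renders `column == None` as IS NULL
        return Expr(f"{self.name} IS NULL" if other is None
                    else f"{self.name} = {other!r}")

class Expr:
    """A reusable expression node; | and & combine nodes into bigger ones."""
    def __init__(self, sql):
        self.sql = sql
    def __or__(self, other):
        return Expr(f"({self.sql} OR {other.sql})")
    def __and__(self, other):
        return Expr(f"({self.sql} AND {other.sql})")

foo, bar = Column("foo"), Column("bar")
# Stored in a variable, reusable across any number of queries
cond = (foo > 123) | ((foo == None) & (bar < 12))
print(cond.sql)  # (foo > 123 OR (foo IS NULL AND bar < 12))
```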


Yes! Plus if you have a syntax error Python will catch it for you. Peewee works the same way.


LINQ. Composable queries rock.


LINQ is a black-box that is often disastrous for application performance. The more LINQ in a project, the more likely the project is headed to failure.


First of all, are you aware that LINQ is not only about database access?

Second, how is it a black box? It is the particular query provider implementation that is a black box.

Third, there's absolutely no correlation between the amount of LINQ and success of a project.


are you aware that LINQ is not only about database access

Yes, I am aware. How is this relevant at all to the context of this thread?

The rest of your comment: you are simply wrong. Heavy use of LINQ is one of the surest signs that one needs to run from a project. It is almost always used and embraced by people who have no concept of the consequences, imagining that the conciseness of LINQ = programming goodness, when the opposite is generally true.


Absolutely. In my latest project, I just have a very thin facade between the application and the DB client library, just enough to simplify the process of getting a connection from the pool, fishing the required data out of a response, handling errors etc. The application doesn't build any queries dynamically; they're all string literals with parameters that get filled in by the DB client library.

I also wrote a little utility to handle migration. Each migration is a file with SQL in it; forward migrations only, no going back. The tool makes it easy to apply a given migration to a given database (dev, staging, production or whatever) and does the book-keeping to ensure they're only applied once. Couldn't be simpler.
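A runner like that really can be tiny. A sketch of the same idea using only the stdlib (the `schema_migrations` table name and the inline migration list are invented; a real tool would read numbered .sql files from disk):

```python
import sqlite3

def apply_migrations(conn, migrations):
    """Apply forward-only SQL migrations, each at most once.

    `migrations` is an ordered list of (name, sql) pairs.
    """
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)"
    )
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_migrations")}
    for name, sql in migrations:
        if name in applied:
            continue  # book-keeping: already applied to this database
        conn.executescript(sql)
        conn.execute("INSERT INTO schema_migrations (name) VALUES (?)", (name,))
    conn.commit()

migrations = [
    ("001_create_users", "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)"),
    ("002_add_name", "ALTER TABLE users ADD COLUMN name TEXT"),
]
conn = sqlite3.connect(":memory:")
apply_migrations(conn, migrations)
apply_migrations(conn, migrations)  # safe: second run is a no-op
```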

I'm really happy with this set-up. It gives access to all the features of the database instead of just the lowest common denominator, and makes it much easier to read the code and know exactly what's going on.

The only times I've seen an ORM work well in a non-CRUD application is when it was designed and written from scratch to work with one database to supply exactly the operations and semantics that the application needs.


An ORM is mostly supposed to save you from repeating the same load-from-db-and-put-data-into-objects code over and over. It is also supposed to do caching and other similar optimizations for you.

Come to that, SQL itself was supposed to smooth over differences in db implementations. Ehm.

If all you need is one query, then figuring out how to get an ORM running may be a waste of time. If you need a gazillion of them and you need results cached reasonably, and so on and so forth, an ORM starts being very useful.


A SQL statement will return the results as plain data.

Data never goes out of fashion, it's easy to use, it's usually text, and it's usually returned in convenient lists.

If you want to select a gazillion rows, you're probably doing something wrong if you're doing it with an ORM.

The overhead of storing all that metadata and the object structure overhead for each row would make it a poor choice for massive volumes.

Caching results from a database seems a bit pointless to me, since databases do that already anyway.

Rather figure out how to optimize your SQL so that you don't have to build a cache. The database is almost never the bottleneck with well-written SQL.


If text and lists were that easy to use, nobody would have invented structs, objects and the thousands of other abstractions available in modern languages.

An ORM will give you a description of the database you can use by reflection, and an abstraction layer that saves you from hard-to-use lists and tables. The first is far more useful than the latter, but both are good things.

Also, application servers are normally much more numerous than database servers, and much easier to scale. Thus, anything that offloads work from the database to the application will simplify your environment when you grow.


Of course, there are things like SQLAlchemy Core, which gives you much of the syntax of an ORM for building dynamic, complex queries (no more gluing strings together when the shape of your query is dynamic), while returning proper cursors that yield arrays and dictionaries.


He is not talking about fetching a gazillion rows but about writing a gazillion similar-but-not-the-same queries!


An indie game (a realtime MMORPG) whose development I'm familiar with used to use an ORM, or something similar, but switched to hand-written SQL after the ORM was generating far too many redundant statements.


Surprised by the number of negative comments about ORMs. Twice I have had to migrate projects between different SQL db engines. Once the SQL was embedded in the application; the second time Perl's DBIx ORM was used from the beginning. Guess which project was easier. Why would you lock yourself into a specific db engine unless you really had to? And how hard is it to embed SQL for a few specific queries even when using an ORM? Lately I've been heavily using Django's ORM and think it is great. And if your data structures are so complex that an ORM doesn't do the job, maybe something is wrong with your schema. I like my data layer simple if possible, and the complexity in the application.


> And how hard is it to embed SQL for a few specific queries even when using ORM

This is the thing that many people seem to forget. Every ORM I've used has allowed you to write your own SQL for those times where the DBAL is limited or not performing well. In my experiences, those times are rare. Some ORMs (like Doctrine) even have their own DSL so that you can write complex queries and still have the benefit of a DBAL.

I'm currently working with a team on a large project where all the SQL is written by hand, and results are hydrated into plain ol' arrays. The maintenance overhead for schema changes is huge due to the number of queries which need to be updated, and there are constant bugs caused by columns being left out of SELECT statements. On top of that, the testability of the entire app suffers because most methods talk directly to the DB. It's truly a nightmare. I've never experienced these problems with an ORM.


Probably most schemas out there in the wild are "wrong" or too complex. And we have to live with that sometimes, because cosmetic changes to a database are more trouble than they are worth. ORMs do not help much in this regard; in fact they probably make the problem worse.

Sometimes a schema can even be correct and the ORM will still have trouble with it. Hibernate has (or had, I don't follow it nowadays) a problem with n-n relationship tables that carry additional information, which is a perfectly fine normalization technique.

In fact, I would go so far as to say that if my tool dictates my data modeling, then there's something wrong with my framework.


No chance. Sorry Pony, you are probably a great ORM, but I have been burned too often by ORMs and their "magic", which will "just work" (hahahahahahaha). Hibernate destroyed my trust in ORMs forever, and even SQLAlchemy (which is great) wasn't able to fully restore it. I will write my queries myself, so that I know why they misbehave and can change them, instead of chasing through obscure layers of magic, reading various logs (if logging is available) and hoping that my changes in the ORM layer will translate to the SQL I want.


I have the same opinion.

I don't want magic on my database layer, because when it backfires, and it will, it might ruin the most precious thing that my application relies on.

I absolutely do not want a lack of understanding on my part to cause permanent damage to my data (already got burned by that). And since ORMs are usually complex beasts, I certainly don't want the burden of learning every single use case before even deciding how to model the thing.

The level of complexity does not balance the value added. It does not hide SQL, because you WILL have to learn both the framework and SQL to debug, and debug you will, often. It does not, in my opinion, increase productivity since I will lose many an hour figuring things out when things do not work as expected, and they won't. Simple bugs and performance issues become a great burden.

Simply put, I don't want to forfeit control of the most sensitive part of the system: the data. Everything that goes on with data persistence should be really clear and simple. No magic.


If you can think of one of your most representative examples of a problem you had I'd be interested to hear.


(I'm not the OP, but I will butt in with my two cents.)

Anything that isn't CRUD or a simple aggregation. Real world applications tend to have: oddball joins, subqueries, window functions, case statements and all other sorts of crazy once you get into the value-add parts of the system. What I really need is a clever way to project the value or set into a structure that I can easily process or convert into UI elements.

On the subject of CRUD, I would be really impressed with an ORM that would detect my constraints and enforce them in the application layer. I am a big fan of DRY, but I am a bigger fan of bulletproof relational models that prevent bad data from ever getting to the system. So rather than declaring these sorts of things in the app code, where they won't be enforced in things like stored procs or ad hoc SQL, I'd much rather have the database be the one source of the data constraints. For really complicated things (e.g. validation triggers), I'd want a way to communicate a violation back up to the ORM layer.

Not sure if this is what you wanted, but for some reason I felt compelled to brain dump what I have been thinking on the subject as of late.


Seconded. The worst thing that happened to me was messing up lazy and eager loading and accidentally pulling the whole database into memory which went unnoticed for some time because there was never much data in the database during development. This happened using NHibernate and on my first project in my first job with my first exposure to an O/R mapper; no really bad experience ever since.


So even trivial `select id, name from bar where baz = @foo`-style queries are hand-rolled?


Yeah, why not? What problem is an ORM actually solving in that case?

Yes, an ORM saves you the time of writing out the code to map the fields on the row to the fields on the object. But it doesn't do that for free. Let's say the results of your query are, shortly thereafter, looped over and one property is checked. Well, in NHibernate, that property is lazy-loaded by default, so now that you're touching it, it generates an individual select query for every element in the list. That's simple enough to fix: change that property to not be lazily loaded. Now suddenly a completely different part of the system is getting an OutOfMemoryException because you're querying over an unfiltered dataset and now it's loading a huge table into memory.

Your trivial query in SQL is not a trivial query in an ORM. Let's keep trivial problems trivial.

There are tools that do this better, by essentially providing a domain-specific language that generates SQL queries (LINQ-to-SQL, parts of sqlalchemy, and, it looks like, PonyORM). These certainly result in more aesthetically pleasing code, but you're still essentially writing SQL, except that you don't have access to the entire SQL implementation. You want to remove constraints, apply an update, and then add the constraints back? Good luck doing that with anything other than SQL. This probably won't hurt you for trivial queries, but having the ORM in your code means that it's only a matter of time before someone on your team starts doing something non-trivial with it, and then the beast will raise its ugly head. And besides, while you can be careful with these DSL-style ORMs and they won't hurt you, they won't help you that much either. Typing a little more isn't the bottleneck.


You can always have a function "roll" the query string for you, but the point is that you pick the function and you handle its output, instead of issuing a meta-query to some engine that calls your SQL server itself with something, but you don't know what, unless it produces a log for you to inspect.

In the latter case it's also unclear when your queries run in a transaction, or at what isolation level; heck, with many ORMs you barely know when the server gets called at all (it's usually not when you expect), nor do you have any guarantees about the order in which your updates are fed to the server.

You also have no way to supply direct SQL when you need to, since you have no direct access to the connection, or the ORM relies on caching where direct access would cause inconsistent data.

The difference between an ORM, and a simple query generator under your control seems like a subtle detail, but as the wise ones said, the devil's always in the details.


Author of the article here (although not the submitter). Pony actually has a way to log every SQL query being run against your DB; you can activate it with sql_debug(True). I monitored it for my first few Pony projects, although by now I've come to trust it to do the right thing.

You also can supply direct SQL to the DB via Pony, the documentation covers this: http://doc.ponyorm.com/database.html#using-parameters-in-sql...


That does not fit any ORM I've looked at (and, sorry, Java people, I don't care to look at your tools anymore).

ORMs normally default to either "in transaction" or "not in transaction", and require explicit code to change that. They guarantee the order of commands that change data, and let you get into the database any time you want.

The only characteristic that fits my experience is that you don't know when SELECTs are executed. And, if this is a problem for you, you are doing something very strange.


Why you shouldn't: it's inferior to SQLAlchemy and has a dual-licensing model where one option is the useless AGPL and the other is an expensive commercial license.

Also I find its implementation to be very questionable.


...expensive?! Maybe when you compare it to free. But for such a crucial piece of infrastructure this looks very cheap, IF its friendliness is backed by SQLAlchemy-class code quality.

The fact that good free software exists out there doesn't mean one should adjust his price expectations based on it. Take Linux: it's free, but that doesn't mean its development wasn't fueled by up to $100M worth of contributions. So Linux being free doesn't mean a team developing a new OS by themselves shouldn't charge a price big enough to quickly amass a tens-to-hundreds-of-millions-of-dollars profit to recoup their investment.

You give something away fully for free IF and AFTER you've recovered the development cost, or after you go bankrupt or pivot to another product and no longer have any use for it. If you do it before, you at least make damn sure nobody else can profit from your not-yet-paid-for work without giving you at least some of it, because that's only fair; this is what the (A)GPL is for!


As much as the AGPL annoys me, the Pony developers seem to understand it and use it in a reasonable way.

What really annoys me is

1. when people stamp an AGPL license on something (that was previously licensed under the GPL or even less restrictively) and claim that it is still just GPL and everyone can continue using their system as before.

2. when people make something brilliant and only offer it under AGPL without providing a commercial licensing option.


3. when the code is AGPL, receives contributions, then the original author offers a commercial license including said contributions without any contributor agreement.


I'm the author of a lightweight Python ORM (peewee) and have a healthy respect for the work that goes into building this type of tool. PonyORM is quite a feat of software engineering: its developers have done a great job.

I would never use it, though, because decompiling Python generator expressions just feels so unpythonic.


> I would never use it, though, because decompiling Python generator expressions just feels so unpythonic.

Agreed but it really left me thinking that it'd be extraordinarily useful if Python gained some supported path for doing this – a standard way to get the AST for a generator expression would be really useful and a relatively general tool. A few years back I would have loved a cleaner way for a JSON decoder to know I only needed a couple of fields to avoid creating objects for everything else and it seems like there could be some very interesting optimizations for something like numpy/scipy as well.
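For the curious, the raw material Pony starts from is easy to inspect today: every generator object carries its compiled bytecode on `gi_code`, which the stdlib `dis` module will happily disassemble:

```python
import dis

gen = (c for c in range(10) if c > 5)

# gen.gi_code is the code object Pony decompiles back into an AST
ops = [ins.opname for ins in dis.get_instructions(gen.gi_code)]
print(gen.gi_code.co_name, ops[:5])
```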

1. I briefly liked LINQ a while back, before Microsoft's usual baroque design and poor QA convinced me that ASP.NET wasn't actually a time-savings. The LINQ-REST support seemed nice, albeit complicated, until you hit bugs where a change in one view would break unrelated views and running in the debugger "fixed" it.


The process going on under the hood (the whole disassemble/AST thing) is pretty unpythonic; however, I would argue this gives the developer the advantage of a more Pythonic query interface to the database at a higher level.

Provided the things happening underneath don't break (and they haven't on me yet), and it generates efficient SQL (which it does), I'll take the hidden complexity for the advantage of the higher-level generator syntax.


Totally, and I know you are not alone. I think it's just a matter of boundaries, and when it comes to the data access layer of my applications I definitely want to feel very comfortable with the library I'm using. That's why SQLAlchemy is so popular, IMO, the implementation is rock-freekin-solid. The API may be more verbose, but you can understand it, and in doing so trust it to do the right thing.


I still feel that RedBeanPHP (http://redbeanphp.com) has found the right balance between direct object manipulation and SQL. I find that abstracting away SQL too much gives more headaches than it's worth — SQL is a valuable skill anyway, and often diving into SQL is much faster and easier than diving into abstraction magic.


I love RedBean, it saved my butt once, but it gets a lot of its power from doing things "the PHP way": it makes linting and editor/IDE autocompletion impossible, and there is no one place you can look to find the part of the database schema relevant to your application. You can't just go read a models.php file and get a bird's-eye view of the database structure that matters.

It's basically the opposite of "pythonic". I can't imagine something like redbean written in a language like Python, and maybe it's for the best :)


Another +1 for redbean. Haven't tried v4 with the namespace support yet, and I have a feeling I will not like it as much as earlier releases, but will probably end up needing to use it on newer projects.

The dev mode approach of 'just do stuff and I'll modify the tables' has scared off many people I've showed it to, but I love the approach - sort of a moderate 'nosql' approach without giving up structured tables for more complex queries later.


> It essentially decompiles the generator into its bytecode using the dis module, and then converts the bytecode into an AST.

Um, does anyone know what the performance cost of this is?


Here's a comment on the stack overflow answer I linked to in the post:

Very performant: (1) Bytecode decompiling is very fast. (2) Since each query has corresponding code object, this code object can be used as a cache key. Because of this, Pony ORM translates each query only once, whereas Django and SQLAlchemy have to translate the same query again and again. (3) As Pony ORM uses IdentityMap pattern, it caches query results within the same transaction. There is a post (in russian) where author states that Pony ORM turned out to be 1.5-3 times faster than Django and SQLAlchemy even without query result caching: http://www.google.com/translate?hl=en&ie=UTF8&sl=auto&tl=en&...


The performance cost is going to be mostly in the kinds of queries it produces and how well they are interpreted by the query planner. My understanding is that Pony is very heavy on subqueries and correlated subqueries, and the user is given extremely little leverage to control the structure of the queries rendered. Subqueries, and especially correlated subqueries, have the worst performance of all, especially on less mature planners like MySQL's.


Actually Pony can transform subqueries into JOINs in most cases. But when it translates the 'in' operator of a generator expression, it produces a subquery with 'IN', because otherwise the programmer could be confused by the resulting SQL looking too different from the Python code. Pony lets you use the 'JOIN' hint to make it use JOINs instead of subqueries. In the example below, Pony produces a subquery when it translates the `in` section of the generator:

    >>> from pony.orm.examples.estore import *
    >>> select(c for c in Customer if 'iPad' in c.orders.items.product.name)[:]

    SELECT "c"."id", "c"."email", "c"."password", "c"."name", "c"."country", "c"."address"
    FROM "Customer" "c"
    WHERE 'iPad' IN (
        SELECT "product-1"."name"
        FROM "Order" "order-1", "OrderItem" "orderitem-1", "Product" "product-1"
        WHERE "c"."id" = "order-1"."customer"
          AND "order-1"."id" = "orderitem-1"."order"
          AND "orderitem-1"."product" = "product-1"."id"
        )
But you can tell Pony to use JOIN instead of a subquery by wrapping the 'in' section into a 'JOIN' hint:

    >>> select(c for c in Customer if JOIN('iPad' in c.orders.items.product.name))[:]
	
    SELECT DISTINCT "c"."id", "c"."email", "c"."password", "c"."name", "c"."country", "c"."address"
    FROM "Customer" "c", "Order" "order-1", "OrderItem" "orderitem-1", "Product" "product-1"
    WHERE "product-1"."name" = 'iPad'
      AND "c"."id" = "order-1"."customer"
      AND "order-1"."id" = "orderitem-1"."order"
      AND "orderitem-1"."product" = "product-1"."id"


I do a lot of heavy Django work, and at first glance I really like the syntax. Much cleaner than what I'm used to. I will def give this a try at some point and comment further.

Why the name PonyORM btw? I know it's superficial but I much prefer the name SQLAlchemy - has more meaning.


Hi, I am one of Pony ORM authors.

The idea of Pony ORM is to provide a Pythonic way to work with the database. We think that the generator syntax is very concise and convenient.

It is named Pony because a pony is a small, smart and powerful creature - these are the features which our mapper has. Our goal is to provide non-leaky abstraction and good user experience.


I believe it's a reference to DjangoPony (http://www.djangopony.com/).


Actually we named our project approximately two years before the Django Pony mascot image appeared.


As someone who has had to maintain other people's code: please use an ORM. Usually the biggest thing to fix is that someone didn't select related rows and ends up doing hundreds of queries inside a for loop.

Now, the hand-written-SQL people leave in SQL injection possibilities. They build up complex queries with crazy string concatenation. They either have no data mapping layer or a shitty one (I mean, I really enjoy having to look at the database to figure out what fields select * from articles returns).

Obviously there are going to be queries outside of what any normal ORM can do, but every ORM I have used gives you an escape hatch to just write raw SQL when needed.
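The concatenation-vs-binding difference is worth spelling out: with any driver's parameter binding (the same mechanism those raw-SQL escape hatches use), hostile input stays data. A small sqlite3 illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO articles (title) VALUES (?)", ("hello",))

user_input = "x' OR '1'='1"  # hostile input

# Dangerous: concatenation lets the input rewrite the query itself
unsafe_sql = f"SELECT COUNT(*) FROM articles WHERE title = '{user_input}'"
unsafe_count = conn.execute(unsafe_sql).fetchone()[0]  # matches every row

# Safe: the driver binds the value; it can never become SQL syntax
safe_count = conn.execute(
    "SELECT COUNT(*) FROM articles WHERE title = ?", (user_input,)
).fetchone()[0]
print(unsafe_count, safe_count)
```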


I'm just waiting for zzzeek to publish a blog post on how to implement this on top of sqlalchemy core, like this post [1]. :) Seriously though, sqlalchemy ORM layer already does this. Why not leverage all the existing features and support it has? Also, it's an easier sell for a $100 piece of software: "If you're using sqlalchemy, you can migrate to pony immediately."

[1] http://techspot.zzzeek.org/2011/05/17/magic-a-new-orm/


the AST-on-top-of-SQLAlchemy idea has been discussed for many years prior to Pony's existence. Robert Brewer's Geniusql http://www.aminus.net/geniusql/chrome/common/doc/trunk/manag... does the same thing (for Geniusql you pass it lambdas; interpreting generators directly was more of a "future" thing, but by "future" we're talking, like, four years ago :) ). He presented it at PyCon, maybe in 2009 or 2010, and right after I had the usual flurry of questions of "Can SQLAlchemy do this?" and I said "sure! just write an adapter, easy enough". I think Robert was even interested in that.

at the end of the day the AST idea looks very nifty but IMO is too rigid to translate to SQL in a flexible enough way, and it also works against the main reason you use code structures to produce SQL, which is composability. When I last saw the Pony creators give a talk, the approach seemed to be that each time you have a given SELECT and you'd like to add some extra criterion to it, it will pretty much keep producing subqueries of the original, because each time you can only wrap the AST construct you already have. It similarly had no ability to produce a JOIN; at that time, at least, the only way to join things together was by rendering inefficient correlated subqueries. This was asked about explicitly.

If they've found a way to resolve these issues while keeping true to the "AST all the way" approach and not dropping into a SQLAlchemy-style approach, that would be great. There's no reason SQLA ORM or Core couldn't be behind a similar approach as well except that nobody's really had the interest in producing it.


Hi Mike,

Pony has had the ability to produce JOINs from the very beginning, but during that presentation we found that Pony produced subqueries for MySQL, which was not very performant, correct. Since then we've improved Pony, and it now has a query optimizer that replaces subqueries with efficient JOINs where possible. Here is the query from that presentation:

    >>> select(c for c in Customer if sum(c.orders.total_price) > 1000)[:]
The straightforward way is to use a subquery here, but Pony's optimizer produces a LEFT JOIN because such a query usually has better performance:

    SELECT `c`.`id`
    FROM `customer` `c`
      LEFT JOIN `order` `order-1`
        ON `c`.`id` = `order-1`.`customer`
    GROUP BY `c`.`id`
    HAVING coalesce(SUM(`order-1`.`total_price`), 0) > 1000
In our opinion this is the main advantage of Pony ORM: the ability to perform semantic transformations of a query in order to produce performant SQL while keeping the text of the Python query as high-level as possible.


OK, here's something I should understand. Say we start with:

    myquery = select(c for c in Customer if sum(c.orders.total_price) > 1000)
I'm inside of a query builder. Based on conditional logic, I also want to alter the above statement to include customer.name > 'G'. Intuitively, I'd do this:

    mynewquery = select(c for c in myquery if c.name > 'G')
which will take the original SELECT and wrap it in a whole new SELECT. Right?

Given "myquery", how do I add, after the fact, a simple "WHERE customer.name > 'G"" to the SELECT? Just continuously wrapping in subqueries is obviously not feasible.


There is no need to wrap it with a query. You can just add a filter:

    mynewquery = myquery.filter(lambda c: c.name > 'G')
The new query will produce the following SQL:

    SELECT `c`.`id`
    FROM `customer` `c`
      LEFT JOIN `order` `order-1`
        ON `c`.`id` = `order-1`.`customer`
    WHERE `c`.`name` > 'G'
    GROUP BY `c`.`id`
    HAVING coalesce(SUM(`order-1`.`total_price`), 0) > 1000


Well right, this is exactly a SQLAlchemy-style syntax, except a tad more verbose :). This is the "dropping into a SQLAlchemy-style approach" I referred to.

As far as the "AST allows caching" advantage, over at https://bitbucket.org/zzzeek/sqlalchemy/issue/3054/new-idea-... we're working out a way to give people access to the "lambda: <X>" -> cached SQL in a similar way, if they want it.


The lack of data migration tooling would be a critical problem for me.

The traditional problem that ORMs are supposed to solve is change. Changing your database schema means you have to change all your queries. So instead you have a system whereby the program that executes your queries also understands your schema and can make that change for you.

Data migrations are similar to this, only much harder and more time consuming.


I think once a developer reaches a certain level of experience and maturity, they realize they should stop giving any sort of ORM a chance.


Care to explain as to why?


It's the difference between working on the problem, or working on the bugs / implementation details of the ORM. I know which I'd rather be working on.


I've been using Django for 10 years, and I never experienced an ORM bug, and when it couldn't do some complicated join, I just did a raw query. For everything else, it saved me tons of time.

Is that long enough experience?


Funny. In my experience it's been the difference between working on the problem, and working on 12 different files each full of several page-long functions (either containing raw SQL strings, or string fragments to be assembled based on arguments passed in).

Then, out of the 6 months you save by not doing that, you spend about 6 days dealing with bad performance, ORM bugs and limitations, and the like. (And the bad-performance areas are the ideal place to drop in your choice of raw SQL.)

But your project may have different needs. Depends on how many tables you're managing and how many different ways you need them and how easy you need the "get me a test object for my integration test" to be.


ORMs basically come from "lazy" programmers not wanting to learn SQL and proper database schema design, wanting something "that just works" instantly with little thought or effort... Optimization and handling of large data sets are some of the problems with ORMs. However, in our case we decided to use an ORM anyway, because each UPDATE/DELETE/INSERT had to be signalled to some low-level code communicating with hardware. If we hadn't used an ORM (with overloaded save methods), the user would have had to keep track of all changes himself and signal the hardware manually, so in our case this was fine (we also didn't manage large data volumes)...



