
Short release cycles don't solve this problem. If anything, they exacerbate it -- you can just fix any bugs in the next release, right?

On top of which, they incur heavy cognitive costs for users when tools change every few weeks. Short release cycles seem to be more about making developers happy by relieving them of a weighty responsibility, rather than making users happy by shipping something that stands on its own as a solid, well-made product that doesn't need a constant stream of updates.

I want software to behave more like reliable products that have survived the decades in the real world: they work as advertised, and they keep working. This is something PostgreSQL has achieved for many years.



Users want magical flying unicorn ponies, but short release cycles still work out better than long ones, because you can delay questionable features just a few weeks instead of a year.


That might make sense for the next omgchatpopapp, and it might make the marketing department happy to have a constant stream of feature noise, but that's not a universal truth, and I'd rather optimize my practices for practicing my craft with responsibility and care for the future.

Churn doesn't make sense for PostgreSQL; people who rely on software as a tool to accomplish their work generally do not benefit from incomplete, poorly considered "minimum viable" solutions and a constant stream of instability. Strategies like continuous delivery -- or focusing on an MVP -- optimize for quickly proving or disproving user traction at the minimum possible expenditure, so as to reach the stage where further growth can be funded by additional investment as quickly as possible.

In the process, these strategies incur a high percentage of false negatives -- including ideas that could work if given more consideration and care. These strategies are ultimately about exit events, not making well-crafted products that stand the test of time. Ironically, the latter is what often leads to much more substantial remuneration.


Your hypothesis looks compelling written down, but it's entirely wrong in practice. This is the process where Linux comes from too, and Postgres is on that order of system software produced by an open source process.

Your error is assuming that centralised control is even feasible. That assumption has been proven wrong repeatedly. You're assuming the bazaar development method is riskier, but the only certainty offered by the cathedral method is that of project overruns and likely failure -- unless you fund it sufficiently for NASA-like certainty, which you're not doing.

(Note that Postgres has been developed much more that way all along, and is famous for quality; the crappy alternative, MySQL, which in practice is a product showing the sort of qualities you describe, was a one-company project for most of its history.)

Go reread The Cathedral and the Bazaar http://www.catb.org/~esr/writings/cathedral-bazaar/ and note that it's been proven pretty much entirely correct in practice. (By the way, "agile" at its best is literally an attempt to port successful open-source development to commercial practice.)


I don't understand your classification of PostgreSQL as "the bazaar", and MySQL as "the cathedral"; if anything, MySQL exemplified the bazaar model of incorporating almost any half-baked idea in the rush to regularly ship features, while PostgreSQL has always taken the approach of shipping when it's done.

PostgreSQL's careful approach has always required "cathedral" centralization of technical management to ensure that things are done correctly, or not at all.

I also have to question the assertion that Linux provides an objective example of well-written software, when the code quality and issues present in Linux are most reminiscent of MySQL.

Relative to a so-called "cathedral" model such as FreeBSD's (or PostgreSQL's), Linux's software development model produces:

1) Code of considerably lower overall quality than that of FreeBSD in terms of bug count, maintainability, and simple consistency.

2) Poor (and often immediately replaced) architectural designs that must be supported indefinitely.

3) Additional cost levied against downstream consumers of the product; simply shipping a reliable kernel requires considerable effort on the part of downstream distributions.



