Hacker News

I didn't see a place to add comments on his article page.

In general, this reminds me a lot of architectures designed for embedded systems today (which is how software was designed for PCs in the early days).

The huge upside is performance; the huge downside is that it completely ignores the significance of what a relational database offers to the company as a whole.

We need to be looking at ways to make SQL databases faster, not at ways to avoid their use.



Embedded systems are designed to squeeze the absolute maximum out of the hardware upon which they run. All this processor affinity, cache stride calculation, and avoidance of cross-core conflicts is just how business is done.
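The cross-core conflict point is concrete on the JVM too: one classic trick (famously used by the LMAX Disruptor) is padding a hot field so that two fields written by different cores can never land on the same 64-byte cache line, which would otherwise cause each write to invalidate the other core's cached copy ("false sharing"). A minimal, hypothetical sketch — the class and field names are illustrative, not anyone's production code:

```java
// Hypothetical sketch of cache-line padding to avoid false sharing.
// Assumes a common 64-byte cache line; the seven long pads plus the
// value field fill one line, so a neighbouring object's hot field
// cannot share it.
class PaddedCounter {
    volatile long value;                  // hot field, written by one core
    long p1, p2, p3, p4, p5, p6, p7;      // padding, never read or written
}
```

(Modern JVMs offer `@jdk.internal.vm.annotation.Contended` for the same effect, but manual padding is the portable, old-school version of the trick.)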

The upside is performance, because that's what is considered important in this case. The downside, which you've neglected to mention, is difficulty of maintenance due to decreased comprehensibility of the system as a whole.

A relational database may not have that much value to the company as a whole.

By all means look at ways to make SQL DBMSes quicker. Lots of clever people have spent decades doing just that. I'm sure they're not finished yet. [Incidentally, Mohan et al. were using sequential writes for their [undo - or redo? I don't recall which] logs back when CPU speeds were measured in double-digit MHz - DB2 and all that. While it's been a while since I had cause to look at the code inside any DBMS, I suspect the same is true today.]

But talking about SQL here is just a hammer in search of a thumb. Pick the tool for the job.

[in summary, I'm more than happy to keep building high-performance low-latency embedded systems in ways that might make an applications programmer weep, but I'm quite glad that the folks who take care of my company's payroll are running industrial-strength transactional systems]


I agree with your comments about picking the right tools for the job, but it is my contention that our code, at LMAX, is cleaner as a result of our architecture, not more obscure.

The code that matters to our business, the business logic processors in our services, is a clean, well modelled [ mostly ;-) ] implementation of our domain problem with NO technology constraints - no DB code, no magic annotations, no complex threading code, just single-threaded stateful POJOs that provide the solutions to our business problems. For me that is one of the most important benefits of our approach, not just the significant performance benefits that we get as a result, but the simplicity of the programming model.

We have entirely isolated the complex bits to infrastructure; most of our day-to-day work is in writing business logic code. High-performance code is surely focussed on doing the minimum amount of work for the maximum function. How better to achieve that than to have a software simulation of the business problem, a domain model in the DDD sense? Yes, you need to pick your collections wisely to represent the relationships within your domain model, but other than that, modelling the problem well is a key attribute of both high-performance systems and good code - at least to my way of thinking.
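The style described above - single-threaded, stateful, technology-free POJOs - can be sketched in a few lines. This is a hypothetical illustration of the pattern, not LMAX's actual code; `Account` and the event-handler names are invented for the example. Because a single thread applies events in order, the state needs no locks, no DB round-trips, and no framework annotations:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical single-threaded business logic processor: all state
// lives in plain Java objects, and one thread applies events in
// sequence, so there is no concurrency to reason about in the domain
// code itself.
class Account {
    long balance;
    Account(long balance) { this.balance = balance; }
}

class BusinessLogicProcessor {
    private final Map<String, Account> accounts = new HashMap<>();

    // Event handler: invoked by the (single) event-processing thread.
    void onDeposit(String accountId, long amount) {
        accounts.computeIfAbsent(accountId, id -> new Account(0))
                .balance += amount;
    }

    long balanceOf(String accountId) {
        Account a = accounts.get(accountId);
        return a == null ? 0 : a.balance;
    }
}
```

All the threading, I/O, and durability complexity lives in the surrounding infrastructure (event journal, replication, ring buffer), leaving the domain code as plain as the comment claims.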

   Dave Farley




