
This is so absolutely true. What nobody recognizes is that most of these really fundamental systems we run on - unix system calls, our window managers, our shells, etc. - were designed and implemented in an era when the supercomputers of the day couldn't dent your Nexus 5 on most computing performance metrics.

They didn't have the luxury of loading an API a few thousand kilobytes in size into memory just to act as a single layer of indirection to the underlying implementation. They worked within the confines of kilobytes of memory, not gigabytes.

We can afford to be generic, to build extensible, runtime-programmable interfaces and runtime-evaluation dynamism, because we have the performance to spare. But our core APIs are still written like it's 1980.
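
As a concrete illustration of the kind of interface being described (not anything specific to the parent's point), a minimal sketch of the classic POSIX file calls that programs still go through today, with raw integer descriptors and raw byte counts:

    /* A minimal sketch of the "1980-era" core interface in question:
     * the POSIX file API, with raw integer descriptors and byte counts.
     * Error handling kept minimal; illustrative only. */
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        const char msg[] = "hello\n";
        int fd = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return 1;
        /* write() has kept essentially this shape since the original Unix. */
        ssize_t n = write(fd, msg, strlen(msg));
        close(fd);
        return n == (ssize_t)strlen(msg) ? 0 : 1;
    }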



However, this line of reasoning also leads to slow web applications that make a high-powered workstation feel like a slow 386 from 15 years ago.


My only concern is that over the years I've seen everything go in cycles. So maybe 8 MB is not a lot of RAM for a computer nowadays. A few years ago it was a lot for a router. I like my OpenWRT router that runs Linux. Before that the same could be said for mobile computers with GSM modules (cellphones).

I'd like to think that Linux will continue to run on machines with at most 4 MB of RAM and not much processing capacity, because if history keeps repeating itself we're going to keep on inventing new devices with those constraints.


> But our core APIs are still written like it's 1980.

Exactly what level of abstraction is good for the "core APIs"? Something actually has to send a series of bytes to be written to the disk, even if a bunch of serialization and encoding abstractions are written on top of that. If the latter is the "core", what's the less abstract stuff that sits inside it?
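
To sketch the layering being described (the names record_t and save_record are made up for illustration): whatever encoding a higher-level serialization layer adds, it still bottoms out in handing a flat run of bytes to write().

    /* Hypothetical serialization helper: all the abstraction above it
     * ultimately reduces to writing a run of bytes to a descriptor. */
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>

    typedef struct {
        uint32_t id;
        char     name[32];
    } record_t;

    static int save_record(int fd, const record_t *r) {
        unsigned char buf[4 + 32];
        /* Trivial fixed-width encoding: 4-byte little-endian id, then name. */
        buf[0] = (unsigned char)(r->id & 0xff);
        buf[1] = (unsigned char)((r->id >> 8) & 0xff);
        buf[2] = (unsigned char)((r->id >> 16) & 0xff);
        buf[3] = (unsigned char)((r->id >> 24) & 0xff);
        memcpy(buf + 4, r->name, sizeof r->name);
        /* The "core" operation is still just: write these bytes to the fd. */
        return write(fd, buf, sizeof buf) == (ssize_t)sizeof buf ? 0 : -1;
    }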



