
>They create big, complex, fragile architectures because they started with simple, off-the-shelf architectures that completely fell over at scale.

No, they fell over at "shit, we're hitting the limits of our hardware, let's re-architect everything instead of buying bigger hardware". Rather than buy 1000 shitty $2000 servers, buy 2 good $1,000,000 servers. I know it is not fad-compliant, but it does in fact work.




And then you grow by another 50%. If you go the commodity hardware route, you buy another 500 shitty $2000 servers. If you go the big-iron route, you buy another 2 $5,000,000 servers, because server price does not increase linearly with performance. If you're a big site, server vendors know they can charge you through the nose for it, because there are comparatively few hardware vendors that know what they're doing once you get up to that level of performance.
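To put toy numbers on that (everything below, including the superlinear pricing exponent, is a made-up illustration, not a real quote):

```python
# Back-of-the-envelope cost comparison. Every number here, including the
# superlinear pricing exponent, is an assumption for illustration only.

COMMODITY_PRICE = 2_000          # per commodity server
BIG_IRON_BASE = 2 * 1_000_000    # two big boxes at today's capacity
PRICE_EXPONENT = 1.6             # assumed: big-iron price grows faster than capacity

def commodity_cost(servers):
    """Commodity pricing scales roughly linearly with server count."""
    return servers * COMMODITY_PRICE

def big_iron_cost(capacity_multiple):
    """Big-iron pricing assumed to grow superlinearly with capacity."""
    return BIG_IRON_BASE * capacity_multiple ** PRICE_EXPONENT

# Start at 1000 commodity boxes vs. 2 big boxes, then grow the load by 50%.
print(commodity_cost(1000), commodity_cost(1500))   # 2,000,000 -> 3,000,000
print(big_iron_cost(1.0), big_iron_cost(1.5))       # 2,000,000 -> ~3,800,000
```

Under that assumption the commodity bill tracks the load, while the big-iron bill grows faster than the load; how close that is to reality depends entirely on the quotes you can actually get.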

Look, if you're going to make the case that one should buy bigger servers instead of more servers, then this becomes an economic argument. The reason large web-scale companies don't do this is that it outsources one of their core competencies. When they scale horizontally across thousands of commodity machines, knowledge of their problem domain becomes encoded in the scaling decisions they make and stays internal to the company. When they scale vertically by buying bigger hardware, they are trading profits in exchange for having someone else worry about the difficulties of building really big, fast supercomputers.

That makes life a lot easier for the engineers, but it destroys the company's bargaining position in the marketplace. Instead of having a proprietary competitive advantage, they are now the commodity application provider on top of somebody else's proprietary competitive advantage. If someone wants to compete with them, they can buy the same big iron and write a Twitter clone, and if their server vendor wants to raise prices, it has them by the balls, since the whole business is built on that vendor's architecture.

(I have a strong suspicion that Twitter would not be economically viable on big iron, anyway. They would end up in a situation similar to Pandora, where their existence is predicated on paying large rents to the people whose IP they use to build their business, and yet the advertising revenue coming in is not adequate to satisfy either the business or their suppliers.)


No, you buy another 2 servers at the same price, because performance continues to increase incredibly quickly, and what you got for $200,000 two years ago is now half the speed of what $200,000 gets you today.

>When they scale horizontally across thousands of commodity machines, then knowledge of their problem domain becomes encoded in the scaling decisions they make and stays internal to the company.

Or to put it another way: "they create a massive maintenance nightmare for themselves like the one described in the article".

>When they scale vertically by buying bigger hardware, then they are trading profits in exchange for having someone else worry about the difficulties of building really big, fast supercomputers.

You are overestimating the cost of high end servers, or underestimating the cost of low end ones. Again, their existing Redis cluster has less RAM, CPU power, and IO throughput than a single, relatively cheap server right now.

>Instead of having a proprietary competitive advantage, they are now the commodity application provider

Twitter is a commodity application provider. People don't use Twitter because of how Twitter made a mess of its back end. People don't care about the back end at all; from the user's perspective, it doesn't matter how things are architected.

>while if their server vendor wants to raise prices, it has them by the balls since the whole business is built on their architecture.

What do you think servers are? They aren't some magical dungeon that traps people who buy them. If Oracle wants to fuck you, go talk to IBM. If IBM wants to fuck you, go talk to Fujitsu, etc, etc.


When two of your 2,000 servers die, your load balancers etc kick in and route around the problem.

When two of your two servers die, you ... um, well, you lose money and reputation. Quickly.
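A crude way to see the difference, assuming independent failures and a made-up per-server downtime probability:

```python
# Toy fault-tolerance arithmetic. The per-server downtime probability is an
# assumption for illustration, and failures are assumed independent.

p = 0.01   # assumed chance that any given server is down at a given moment

# Two big servers: the service disappears when both are down at once.
p_total_outage = p * p          # 1e-4 of the time, i.e. roughly an hour a year

# 2000 commodity servers: you expect a steady trickle of failures instead.
n = 2000
expected_down = n * p           # ~20 machines down at any moment
capacity_lost = expected_down / n   # ~1% of capacity, which you route around

print(p_total_outage, expected_down, capacity_lost)
```

The big boxes claw a lot of this back with internal redundancy, as the replies below point out, but the raw count of failure domains is what this argument is about.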


If you buy $1 million servers, a whole lot of things need to go bad, in ways that would likely take down large numbers of those 2,000 servers too. I'm not so sure I agree with the notion of going for those big servers myself, but having had mid range servers from a couple of the big-iron vendors in house, here are a few of the things you can expect once you tack a couple of extra digits onto the server bill:

- Servers that phone home; sometimes the first you know of a potential problem is engineers at your door, come to service your server.

- Hot swappable RAID'ed RAM.

- Hot swappable CPUs, with spares, and OS support for moving threads off CPUs that are showing risk factors for failure.

- Hot swappable storage where not just the disks are hot swappable, but whole disk bays, and even trays of hot swappable RAID controllers etc.

- Redundant Fibre Channel connections from the rest of the system to those RAID controllers.

- Redundant network interfaces and power supplies (of course, even relatively entry level servers offer those these days).

In reality, once you go truly high end, you're talking about multiple racks full of kit that provides a lot of the redundancy we tend to engineer into software solutions, either at the hardware level or abstracted away in software layers your application won't normally see (e.g. a typical high end IBM system will set aside a substantial percentage of CPUs as spares and/or for various offload and management purposes; IBM's "classic" "Shark" storage system used two highly redundant AIX servers as "just" storage controllers hidden behind SCSI or Fibre Channel interfaces, for example).

You don't get some server where a single component failure somewhere takes it down. Some of these vendors have decades of designing out single points of failure in their high end equipment.

Some of these systems have enough redundancy that you could probably fire a shotgun into a rack and still have decent odds that the server survives with "just" reduced capacity until the manufacturer's engineers show up and ask you awkward questions about what you were up to.

In general you're better off looking at many of those systems as highly integrated clusters rather than individual servers, though some fairly high end systems actually offer "single system image" clustering (that is, your monster of a machine will still look like a single server from the application point of view even in the cases where the hardware looks more like a cluster, though it may have some unusual characteristics such as different access speeds to different parts of memory).


I'm pretty sure the insides of your 2 $1,000,000 servers are architecturally essentially identical to a large cluster of off-the-shelf computers, with somewhat different bandwidth characteristics, but not enough to make a difference. They'll have some advanced features for making sure the $1,000,000 machine doesn't fall over. But it's not like they're magically equipped with disks that are a hundred times faster or anything, you're still essentially dealing with lots of computing units hooked together by things that have finite bandwidth. You can't just buy your way to 25GHz CPUs with .025ns RAM access time, etc. etc.


I'm not sure what you are trying to say. My entire point was that you can buy a single server that is more powerful than their entire cluster. You appear to agree, but think that is a problem?


Massively parallel single-machine supercomputers are still, essentially, distributed systems on the inside. You still have to use many of the same techniques to avoid communicating all-to-all. If you treat such a system as a flat memory hierarchy, your application will fall down.


True that they're still essentially distributed systems on the inside, but the typical bandwidth can often be orders of magnitude higher and the latencies drastically lower when you need to cross those boundaries, and for quite a few types of apps that makes all the difference in the world to how you'd architect your apps.
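A quick sketch of why crossing those boundaries matters, with ballpark latencies that are assumptions rather than measurements of any specific system:

```python
# Rough effective-latency comparison for a non-local data access. All
# latencies are ballpark assumptions, not measurements of any specific system.

LOCAL_DRAM_NS = 100        # local memory access
NUMA_REMOTE_NS = 300       # remote socket/board inside one big machine
NETWORK_RPC_NS = 500_000   # call to another server over the network

def effective_latency_ns(local_fraction, remote_ns):
    """Average access time when a given fraction of accesses stay local."""
    return local_fraction * LOCAL_DRAM_NS + (1 - local_fraction) * remote_ns

# With 90% locality, crossing a NUMA boundary costs relatively little,
# while crossing the network dominates everything else.
print(effective_latency_ns(0.9, NUMA_REMOTE_NS))   # 120 ns
print(effective_latency_ns(0.9, NETWORK_RPC_NS))   # ~50,090 ns
```

That roughly three-orders-of-magnitude gap between a NUMA hop and a network hop is the difference being argued about here.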


"you can buy a single server that is more powerful than their entire cluster." is pure genius.

We now know how to solve the C10k problem, by orders of magnitude: use a single server!


$1,000,000 machines are not that much faster; you won't get 1000x the performance, or anywhere near it. The memory and IO performance is going to be within a factor of 2 or 4 of that high end machine. It might have 50x the cores, but most likely that's not the limiting factor anyway.


The whole point is you can get equal performance from a single server instead of a ton of little ones. The ton of little ones forced them to totally re-architect to work around the massive latency between servers. A single server would have allowed them to stick with a sane architecture, and saved them millions in development time and maintenance nightmares.


It would be nice if it were true, but you simply can't; there's no magic that makes an expensive server that much faster - it's just a bit faster for a lot more money. It can make sense if you only want 10x the performance and the server is cheaper than the rewrite.

For example, if a $30k car can go 150mph, it doesn't mean a $300k car can go 1,500mph; it just doesn't happen. A Bugatti Veyron goes, what, 254mph? That's not even double (and it costs a lot more than $300k).


We're not talking about cars. We're talking about computers. 8TB of RAM is 8TB of RAM, it doesn't get better by spreading it across a thousand servers. 4096 CPU cores are 4096 CPU cores, they don't get better by spreading them across 1000 servers. Those things get worse spreading them across servers, because you massively increase the latency to access them, and for them to access shared data.


Please give an example of this "monster server" you keep talking about with 8 TB of RAM, 4096 cores, N network interface cards, and 100 TB of SSD, with a cost estimate. Otherwise we can't have a real discussion. People have a pretty good idea of how to build / what it costs to build something with 128 GB of RAM, 32 cores, a couple of NICs, and 2 TB of SSD, but what you're talking about is 50-100x beyond that.
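For a paper-only sense of scale between those two spec points (using the figures quoted above and ignoring interconnect, redundancy, and price entirely):

```python
# Paper-only comparison of the two spec points quoted above, ignoring
# interconnect, redundancy, and price entirely.

from math import ceil

monster = {"ram_gb": 8192, "cores": 4096, "ssd_tb": 100}
commodity = {"ram_gb": 128, "cores": 32, "ssd_tb": 2}

nodes_needed = {k: ceil(monster[k] / commodity[k]) for k in monster}
print(nodes_needed)                 # {'ram_gb': 64, 'cores': 128, 'ssd_tb': 50}
print(max(nodes_needed.values()))   # ~128 commodity boxes to cover every axis
```

That's roughly the 50-100x gap mentioned; the rest of the thread is about what such a machine costs and how it behaves.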


Not posting this to support his argument, but for the record, here is some of the high end Unix hardware available (for a price; no idea what these cost):

32TB RAM 1024 Cores (64 x 16 core), 928 x PCI Express I/O slots: http://www.oracle.com/us/products/servers-storage/servers/sp...

16TB RAM 256 cores (probably multiple threads per core), 640 x PCIe I/O adapters: http://www-03.ibm.com/systems/power/hardware/795/specs.html

4TB RAM 256 cores (512 threads), 288 x PCIe adapters: http://www.fujitsu.com/global/services/computing/server/spar...


And exactly to his point - you still can't treat these as one uniform huge memory / computational space for your application (these machines seem designed for virtualization rather than one huge application). You run into the same distributed computing issues you would with your own hardware, just with a 5-10x larger initial investment and without much pricing control or flexibility in terms of adding capacity or dealing with failures as they arise.


Actually you can treat these as one uniform huge memory / computational space for your application. They're not meant only for virtualisation. In particular, the Oracle Database is a perfect fit for a system with thousands of cores and terabytes of memory.

It's true that for some use cases, you'd be better off carving it up using some form of virtualisation, but it isn't a requirement to reap the benefits of a massive system.

Both the Solaris scheduler and virtual memory system are designed for the kind of scalability needed when working with thousands of cores and terabytes of memory.

You also don't run into the same distributed system issues when you use the system that way.

You also do actually have a fair amount of flexibility in dealing with failures as they arise. Solaris has extensive support for DR (Dynamic Reconfiguration). In short, CPUs can be hot-swapped if needed, and memory can also be removed or added dynamically.


Just for a reference point, an appropriately spec'd big iron machine from a major supplier with 8TB of RAM will run you $10 to $20 million. There's no mainstream commercial configuration that is going to get you to 4096 cores, though.

Fujitsu's SPARC Enterprise M9000 mentioned in another reply is $5 to $10 million depending on configuration (assuming you want a high-end config).

If you go big iron with any supplier worth buying from, they will absolutely murder you on scaling from their base model up the chain to 4TB+ of memory. The price increases exponentially, as others have noted.

The parent arguing in favor of big iron is completely wrong about the economics (by a factor of 5 to 10). The only way to ever do big iron as described would be to build the machines yourself....



SGI won't build you that computer for commercial use for $2.8 million. It would cost you several fold more.


5.6 million. You need a hot standby. Or 8.4 if you're feeling cautious :)


Why should he give an example of a "monster server" with specs like that?

He made the argument that spreading the CPUs and memory out does not make them better.

So part of the point is that if the starting point is 8TB of RAM and 4096 cores distributed over a bunch of machines, then a "large machine" approach will require substantially less.

I've not done the maths on whether or not a "Twitter scale" app would fit on a single current generation mainframe or similar large scale "machine" (what constitutes a "machine" becomes nebulous, since many of the high end setups are cabinets of drawers of cards, and one person's "machine" is another person's highly integrated cluster), but any such analysis would need to include a discussion of how much a more tightly integrated hardware system would reduce their actual resource requirements.


I'm sure you could get vastly better performance. But this is not about performance.

What happens when you need multiple datacenters? What if you need to plan maintenance in a single datacenter? Then you need at least two servers in each datacenter, and so on.

Let's say you decide to use smaller machines to serve the frontend but your backend machines are big iron. Are you going to perform all your computation and then push the data out to edge servers?

There's much more to this than loading up on memory and cores.


Think about fault tolerance with 2 servers.



