Hmm, seems like he made the classic mistake of thinking that an x86-64 server is somehow special and more worthwhile than just buying a bare-bones PC.
I've managed Dell servers before (real ones), and aside from a bunch of features that make managing them en masse easier (like DRAC), there isn't much value in trying to retrofit one for home use.
VT-x or 24 GB of RAM support isn't exactly a hard requirement in 2014. Plus the Xeon E5410 is a 2007 chip (7 years old!) and gets its butt kicked by even a low-end i3 from 2012 [0].
Plus, modern systems consume less electrical power, which makes them cheaper to run, and a PC is certainly quieter than any rack-mounted server (which are designed for maximum cooling, noise be damned).
Additionally, if he had purchased off-the-shelf parts he would likely have a warranty for 12 months (or more). With the eBay-purchased Dell machine he not only gets no warranty, but Dell themselves don't even support it(!), so no firmware updates.
So while this was a fun read, it just reminds me of my more naive days when I thought doing stuff like this was a good idea. Go over to any sysadmin forum (e.g. the Spiceworks community) and they'll tell you the same thing.
[0] http://cpuboss.com/cpus/Intel-Xeon-E5410-vs-Intel-Core-i3-32...
edit: Originally linked to a 2012 i5 comparison ($180 CPU). Changed it to a 2012 i3 comparison ($100 CPU), which still outperforms the 7-year-old Xeon. The $100 price tag seems a lot more comparable than a nearly $200 CPU. Plus the i3 supports 32 GB of RAM and VT-x (the OP's requirements).
Sysadmin here. Agreed. A 1U or 2U server for home use is fairly ridiculous. In fact, it's kind of ridiculous for some business use now. We're probably only a few years away from a mainstream and popular form factor that's a fraction of the size, with tiny SSDs instead of giant spinning drives and power-sipping x86 or ARM chips. Imagine something the size of a stack of 4 or 5 iPads replacing the typical 19" racked server.
For small and perhaps even medium businesses, this could be a good fit. The 19" rack really is something from a past era. Soon people will look at it the same way I look at old mainframes and minicomputers.
As far as the article goes, $300 is impressive, but the cost of wasted electricity will eventually eat up the savings compared to just dropping some cash on a new Synology NAS, and with the NAS you get a warranty.
Agree with you and the OP. As for me, I always work with these servers (or am supposed to, for my cloud research), and wanted to toy around with one of my own.
The noise of the CS24-SC actually isn't so bad (as others report).
The power supply is 400 W, and less is pulled when the whole system is idle. So not horrible.
We don't run it 24/7; it isn't being used as a traditional server.
We turn it on to manipulate large VM images (my case), or to run an algorithm that needs a lot of RAM and cores (her case).
Sysadmin as well, proudly running a 4U Supermicro box as my home fileserver. If I could find a tower that fits that many hot-swappable drives, I'd look twice, but there are some other small advantages. Redundant power is nice in some situations, as is ECC if you really care about that sort of thing. Finally, given the sheer amount of processor overhead a dual socket gives you, it was easy for me to also use the machine as a VM server. As other posts have pointed out, out-of-the-box IPMI is additionally hard to turn down; these are all capabilities that I simply have not found together in commodity products.
My point, I guess, is that I think there are still situations that merit rolling your own "server box", even at home. (Not to disagree with anything you said, but to provide my own outlook.)
"The 19" rack really is something from a past era. Soon people will look at it the same way I look at old mainframes and minicomputers."
Hah, you do realize that the 19-inch relay rack indeed originated with the original main frames (of phone systems)? And the main frame of an early computer was naturally built on the same hardware infrastructure.
And then the first "minis" (e.g. PDP-7, PDP-8, PDP-11, et al) were built on the 19 inch form factor too.
I just had to point that part out; I completely agree with your main point: the remaining colo rack I use is a mix of 4U, 2U and 1U "servers" plus a couple of tower machines. This is for personal use; for work we don't bother to even own our own hardware infrastructure any more.
There is a Hamburg-based startup called Protonet which is targeting smaller agencies with basically the form factor you describe.
They offer a drop-in solution that acts as your WiFi AP and provides everything from "local cloud" data storage to a complete groupware solution on top of it. That's why they are so expensive, but their potential customer base is huge and they've got the product design in a good position.
I don't think a Synology would have the horsepower that the story author was after. He specified that he was going to be chewing through large datasets, so the dual Xeons @2.3GHz were important to him. Synologies are great for what they do, but underpowered for CPU/RAM-intensive tasks...at least in the sub $500 range, which is what he was looking to spend.
I'm a happy Synology owner, but I don't use it for my dev server. It sits behind my VPN and serves media files. With the variable-speed drives (WD Green) and ARM chip, it draws very little power. In short, I love it for the specific purpose for which it was designed, but I wouldn't use it for much else. My dev server is in AWS. It's just too easy to spin up and spin down whenever I need it, and it keeps me from having to devote space in my apartment (SF ain't cheap!). Also, not having a hefty power bill is nice.
If I were advising the story author, I'd tell him that unless part of the enjoyment is building and running the thing, he should opt for a cloud provider instead. He could get a DO droplet spec'd beyond the physical server he bought for $0.12/hr and spin it up whenever he wanted to use it. Unless he's planning on using it almost continuously, you can fit a lot of 12-cent hours into a $310 budget, and that's before you consider the increased energy usage from running a server at home.
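For perspective on those 12-cent hours, a quick back-of-the-envelope calculation (the $0.12/hr rate comes from the comment above; the rest is just arithmetic):

    # How many hours of a $0.12/hr cloud VM does the $310 server budget buy?
    droplet_rate = 0.12          # USD per hour (rate quoted above)
    server_budget = 310.00       # USD, the one-time cost of the used Dell box

    hours = server_budget / droplet_rate
    print(f"${server_budget:.0f} buys about {hours:,.0f} droplet-hours "
          f"(~{hours / 24:.0f} days of continuous use)")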
The WD docs for Intellipower indicate that RPM is one of the things they vary to conserve power. I can't find much in the way of specifics, but since they list the RPM of the drives as "Intellipower", the assumption would be that they don't operate at a fixed RPM.
The RPM varies between different models. Each drive has a fixed RPM [1][2].
From SPCR:
> Western Digital doesn't want to say that they're selling 5,400 RPM drives — those became second class in the desktop market years ago. Instead, they rate the drive's speed as "IntelliPower" and take pains to emphasize that there are other factors that affect performance.
> Western Digital has caught a lot of flak for withholding the rotation speed of the Green Power, especially when the product was first launched and the marketing material listed the rotation speed as 5,400-7,200 RPM. This led some to speculate that the rotation speed changed dynamically during use — which would have been an impressive engineering feat had it been true. The reality is revealed by a sentence that Western Digital added to the description of IntelliPower: "For each GreenPower™ drive model, WD may use a different, invariable RPM." In other words, Western Digital reserves the right to release both 5,400 RPM and 7,200 RPM drives under the Green Power name — without telling you which are which.
> We were able to confirm that our 750 GB Green Power had a spindle speed of 5,400 RPM by analyzing its sound spectrum. Why sound? Sound is vibration; the pitch of the sound corresponds to the frequency of the vibration. Hard drives vibrate at the speed of their motor, so they produce a noise at the same frequency as their rotation speed. Our sample had a sharp spike at exactly 90 Hz (cycles per second). Multiplying that number by 60 (to get cycles per minute) yielded a measured rotation speed of 5,400 RPM.
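For the curious, the SPCR measurement is easy to reproduce with a microphone and a few lines of Python. A minimal sketch (drive.wav is a hypothetical recording made near the drive, and the 70-170 Hz search band is an assumption covering common spindle speeds):

    # Estimate a hard drive's spindle speed from a sound recording, as SPCR describes:
    # the dominant low-frequency tone corresponds to the platter rotation rate.
    import numpy as np
    from scipy.io import wavfile

    rate, samples = wavfile.read("drive.wav")   # hypothetical recording of the drive
    if samples.ndim > 1:                        # mix stereo down to mono
        samples = samples.mean(axis=1)

    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)

    # Strongest peak in the plausible spindle range (4,200-10,000 RPM -> 70-170 Hz).
    band = (freqs >= 70) & (freqs <= 170)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    print(f"Dominant tone: {peak_hz:.1f} Hz -> roughly {peak_hz * 60:.0f} RPM")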
A lightly used i5/i7 with a bunch of RAM and an SSD would be a much, much better host.
I rescued a Dell workstation with an i7-950 from disposal, and added 2 disks and some RAM (9GB). Still not as quick as a 2500k in a lot of stuff but great for anything I can throw at it (decent size MySQL DB, lots of Rails apps, and quite a bit of web traffic). I could bump it up to 24GB but that's still $250 worth of RAM, new.
IMHO a bare-bones PC is a bit over the top for a home setup. A broken laptop is the way to go if you want to get the power requirements and noise down. Really it comes down to whether the broken laptop will do what you need it to, and the price is very competitive compared to even the most basic (and slow) VPS. You can also pop in an SSD and the broken-laptop server sings along.
I should define broken - this is where a keyboard or screen has died yet the actual gubbins work as Intel intended. What you don't want is a machine where the CPU has fried the thermal paste and run hot for an extended time. That won't run well even if 'fixed'.
A broken-laptop server also comes with built-in battery backup. The power supply is even hot-swappable if you happen to have a second power adapter for it.
Really, there should be a USB-stick build of a common Linux distro that is completely optimised for power management, with the intended use being a redundant laptop turned into a home server. It could support wake-on-LAN magic packets too, effectively making it a device that only uses power when you actually use it. This would be a complete contrast to the normal over-engineered server arrangement.
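For what it's worth, the wake-on-LAN part needs no special distro support at all; a magic packet is just six 0xFF bytes followed by the target's MAC address repeated 16 times, sent as a UDP broadcast. A minimal sketch (the MAC address below is a placeholder, and the sleeping machine's NIC/BIOS has to have WOL enabled):

    # Send a Wake-on-LAN "magic packet": 6 x 0xFF, then the MAC address repeated 16 times.
    import socket

    def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        if len(mac_bytes) != 6:
            raise ValueError("MAC address must be 6 bytes")
        payload = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            sock.sendto(payload, (broadcast, port))

    # Hypothetical MAC address of the sleeping laptop-server:
    wake("00:11:22:33:44:55")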
Broken laptops are certainly overlooked as an option, but you also have to be aware of your use-case. RAID is generally not available, and if you want to crunch lots of data as in the article, CPU and RAM aren't usually much to write home about. Horses for courses.
Not really a "mistake"; they are just different. A machine destined for a data center will have an IPMI subsystem in it; that is the thing that lets you run the machine remotely, reflash the BIOS and reboot if you want. A data center machine will probably have ECC RAM, which a bare-bones PC won't unless you are picky and find the one SKU that ASUS and Gigabyte make that has both an ECC-capable chipset and support in the BIOS. A data center machine will also be more comfortable running headless than a bare-bones PC, so there won't be anything popping up on the VGA screen saying "press any key to continue." Lastly, a machine expecting to be in a rack in a data center probably has some insane fan noise when the fans spin up :-).
> a data center machine will also be more comfortable running headless than a bare-bones PC, so there won't be anything popping up on the VGA screen saying "press any key to continue."
Foreign configuration(s) found on adapter.
Press any key to continue or 'C' to load the configuration utility,
or 'F' to import foreign configuration(s) and continue.
All of the disks from your previous configuration are gone. If this is
an unexpected message, then please power off your system and check your cables
to ensure all disks are present.
Press any key to continue, or 'C' to load the configuration utility.
Entering the configuration utility in this state will result in drive
configuration changes. Press 'Y' to continue loading the configuration utility
or please power off your system and check your cables to ensure all disks are
present and reboot.
But generally speaking, that's more or less true (except for the temperamental LSI-based controllers).
Holy crap you're not kidding. I (briefly) had a 1U server in an upstairs home office for a project, it got banished to the basement within an hour because of the jet-turbine noise coming off of it. I could still hear it in the basement.
All that is fine and dandy, but the RAM! These days RAM is so much more expensive: 4 GB DDR3 modules run around $8-$10/GB, so his 24 GB of memory alone would cost around $200 at least.
> _Oh yeah, did I mention I also wanted a beefier machine at home so I could manipulate large virtual machine images?_
I think he's looking to create a setup where he can spin up and destroy networked VMs, which would indeed make the project count as a cloud. From your Wikipedia link:
> The main enabling technology for cloud computing is virtualization. Virtualization software allows a physical computing device to be electronically separated into one or more "virtual" devices, each of which can be easily used and managed to perform computing tasks. With operating system–level virtualization essentially creating a scalable system of multiple independent computing devices, idle computing resources can be allocated and used more efficiently. Virtualization provides the agility required to speed up IT operations, and reduces cost by increasing infrastructure utilization. Autonomic computing automates the process through which the user can provision resources on-demand. By minimizing user involvement, automation speeds up the process, reduces labor costs and reduces the possibility of human errors.[27]
Admittedly, it depends on what he wants to do with this that would determine whether it's "really" a cloud or not.
He's not saying he has a cloud at home, but rather that this server was likely part of one in a prior life:
"""
There is a lot of conjecture on where the Dell 1U rackmount model CS24-SC came from. Some people say Facebook data centers. Others just say that it was mass-produced for “clouds.”
"""
I remember when the company I was working at bought a bunch of new Sun hardware, and ended up throwing out all their old Sun 420Rs. Despite being slightly outdated, they were still in perfect condition, so I took the 4 of them to the house (carefully, because they were heavy) and set up a pretend data center.
The first shock, I can confirm, was the noise they made. I ended up relegating them to the basement, which was otherwise unused.
The second shock was when my utility bill came in the next month. It's very easy to know the power consumption of enterprise-class servers for planning purposes. There's a giant abstraction between the power consumption on paper, and the power consumption on your home utility bill that I think everybody working around servers should experience at least once.
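A rough worked example of that gap, using made-up but plausible numbers (the 350 W average draw and $0.12/kWh rate are assumptions, not measurements of the 420Rs):

    # Back-of-the-envelope: what "a few hundred watts, 24/7" does to a home utility bill.
    # 350 W average draw and $0.12/kWh are illustrative assumptions, not measurements.
    avg_watts = 350
    kwh_price = 0.12          # USD per kWh
    hours_per_month = 24 * 30

    kwh_per_month = avg_watts / 1000 * hours_per_month
    print(f"{avg_watts} W around the clock ~ {kwh_per_month:.0f} kWh/month "
          f"~ ${kwh_per_month * kwh_price:.0f}/month on the bill")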
The noisiest system I've ever owned was a Sun T1000 1U server. Here are some examples. I tried to mute it a bit by putting it in a huge Tupperware-like under-bed storage box, but even that didn't help one bit.
I worked on a stack of T1000s. We had a room next to our sysadmin hovel that we used for setting up systems before putting them in the datacenter. I worked remotely a lot, and would periodically reboot these servers. When they booted, part of their POST was running their fans at 99% of their RPM capability. The noise was astounding, and my coworkers grew pretty tired of me...
I had the opportunity to get a 7U Compaq server for cheap one time (quad Xeon CPUs, tons of RAM, 2 different RAID arrays). And I'm looking at it thinking "Wow, this would be so cool to have at the house!".
But then I look at the label by the dual power supplies: 20A circuit required -- each! I would have to pay an electrician to install a new breaker and run a new line to it. And that would have doubled my purchase costs. I passed.
Heard the same often repeated about enterprise grade storage arrays. These things keep the disks spinning pretty much all the time. A home grade NAS is going to aggressively power save.
Absolutely: don't confuse home/SME with enterprise grade. The needs are very different.
The 1U servers are very loud. But a 4U or tower server will be fairly quiet, and has the bonus ability to hold more disks. That's a must for VMs; disk contention on spinning disks gets pretty high after your 2nd VM.
Depends on the workload of course, but most VMs read a lot more than they write, so an SSD can handle quite a few before you notice any IO-related latency (3-4 small web apps, 1-2 medium, 1 database VM).
The OP also put his two drives into RAID 1, so he won't get any performance benefit from having two drives.
I use mdadm's RAID 10 on 2 drives for redundancy and speed.
"In mdadm, the RAID10 level creates a single complex software RAID that combines features of both RAID 0 (striping) and RAID 1 (mirroring). Multiple copies of all data blocks are arranged on multiple drives following a striping discipline."
http://doc.opensuse.org/products/draft/SLES/SLES-storage_sd_...
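For reference, a minimal sketch of creating that kind of two-drive RAID 10 with mdadm (the device names are placeholders and would be wiped; the "f2" far layout is what gives striped-read speed on just two mirrored disks):

    # Minimal sketch: build a two-drive mdadm RAID 10 array with the "far 2" layout,
    # which keeps full mirroring while spreading reads across both disks.
    # WARNING: destructive to the listed devices -- the names below are placeholders.
    import subprocess

    devices = ["/dev/sdX1", "/dev/sdY1"]   # hypothetical partitions; double-check first
    subprocess.run(
        ["mdadm", "--create", "/dev/md0",
         "--level=10", "--raid-devices=2", "--layout=f2", *devices],
        check=True,
    )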
RAID 1 mostly gives you more random read IOPS rather than faster individual reads. You have two drives from which to read data, since they're mirrored.
You do have more chances for the spindles to be near your data (especially with >2 drives in the array), which can slightly reduce average latency, but that isn't as large an effect.
Maybe I'm missing something, but why not just use a cloud VM? Turn it on and off as you like, and the billing stops when it's off. Surely that would be less than $300, wouldn't be in your house blowing air, and would be (more) easily accessible from anywhere.
- no remote access to the console if ssh stops working
- no possibility to remotely power it down/up
- if the hardware dies, no possibility to restart on a different hardware remotely
There's nothing wrong with a home server, I have one or three myself, but these limitations are why I move "essential" services (e.g.: email) to the cloud.
Home servers are better suited for exactly this: a need for high CPU, or a need for a lot of storage, for a backup server for example. I used to run everything on home servers and back them up to the cloud, but I have now started to reverse this: run the services in the cloud (cheap for a small service, remotely accessible, etc.), and back them up to the home server (cheaper large storage, and it's fine if it goes offline for a few days while I'm on holiday, for example).
> - no remote access to the console if ssh stops working
That depends on what you install, obviously. But more importantly, this is an Intel CPU, so there is remote management, and:
> - no possibility to remotely power it down/up
Again, Intel CPU, see below...
> - if the hardware dies, no possibility to restart on a different hardware remotely
That also depends: backup and automation are orthogonal to it being "cloud" or not -- not to mention that with VT-x support, one might set up the actual services as VMs, even if just running on a single physical machine -- you would have to do off-site backup -- but then again you should be doing that anyway...
But, back to the Intel remote management stuff: I recently scared the willies out of myself by setting up remote management on my laptop. Full console and serial access, optionally with dial-out over WLAN support. Just from running an Intel chip/BIOS on my laptop. That's this stuff here:
I once had a 2U rack server. Proper dual Pentium IIIs inside, ECC RAM, serial console, alarms on the case, everything. Doubled as a coffee table in my living room. Alas, I was young.
Nowadays I have an Array R2 PC case with six HDDs, one SSD for the OS, a mini-ITX board and an Intel CPU with virtualization support (I really don't remember the model) and a few gigs of RAM. All (except the HDDs) were bought new at about the same cost. The SSD is a 32 GB Chinese model from eBay. The machine is powerful, quiet when the HDDs are spun down, relatively small and elegant, and acts as NAS, server, testbed, etc. Also, at idle it consumes about 20 W.
If you have a local makerspace / hackerspace, check with them. We get quite a few decommissioned servers donated to ours but we generally don't have much use for them. Noise and heat we can sorta manage, but the power usage is a bit steep. We'd be happy to host them for people if they pay for the power and aren't doing anything too crazy with the network.
My Kill-A-Watt meter helped me determine that the best home server is almost always going to be a fast laptop (around 2 years old) stashed in your bedroom closet. You'll get the most computer per dollar of purchase price/power price available, plus a built-in UPS and quiet operation as a bonus.
Unless you need, like me, 24-32+ GB of RAM, dual HDDs, and a whole lot of cores to run 3-4 VMs at a time.
Then the laptop is so resource-poor that it is rendered useless for the workloads I described in the article.
This article wasn't about how to make a "cheap home server"---it was about replicating real data-center-class hardware in the home and how to do that cheaply.
Unless you need ECC memory for some reason, next time, do your ears and your wallet a favor and just buy regular x86 consumer hardware. It's half as loud (those 10k RPM fans must be killing your ears) and half the price and will consume half the energy.
Well, now you're assuming virtualization is the same as cloud; it is not. A cloud researcher should know what a "cloud" actually is, as opposed to the marketing usage of the term.
For one, redundancy is always implicit in any cloud solution.
I don't believe redundancy, or what I think you actually mean---fault tolerance---is implicit in the definition of cloud computing.
The most general form of the term 'cloud' in my mind is elastic access to computational resources.
There is no guarantee of high availability (in fact, we see failed AWS instances and horror stories all the time), nor automated failover/redundancy/fault tolerance.
Chaos Monkey by Netflix is a great example of the shortcomings of the modern cloud in terms of fault tolerance and redundancy.
The cloud doesn't give it to you, you have to build that yourself.
I'm a very open-minded person, perhaps because of my academic background, so I would be happy to say that I agree that a subtype of the fundamental cloud could be a redundant/fault-tolerant one.
But the fundamental definition of cloud does not include fault tolerance.
Don't trust me.
Check out the more formal definition from US NIST (years of work, 15 drafts), there is no mention of redundancy or fault-tolerance:
Warning: this page loads 20 megabytes of huge images that might crash your mobile browser. It does that for no good reason; the images are only shown at thumbnail size.
Dell (T20) and Lenovo (TS140) sell quiet and low-power servers which are targeted at homes and small business. Prices start at $300. Model with VT-d and vPro Xeon E3 is $500. Dell model officially supports RedHat Linux, so there's no Windows tax.
I'm pretty sure the E5410s (or any Harpertown/Woodcrest processor) in a standard configuration use fully buffered (FB-DIMM) RAM, as opposed to unbuffered. Super cool that Dell did some trickery in the CS24-SC (apparently CS means "custom server") to bypass that obnoxious requirement.
EDIT: Apparently this family of servers uses a bastardized chipset that doesn't require FB-DIMMs; nothing special on Dell's part.
I've got something like this sitting in my living room, but instead of using a 1U rackmount server, I opted for a Mini-ITX system with a LIAN LI PC-Q25B case.
It's quiet, holds tons of drives and actually looks like it belongs next to the TV (I run XBMC on it as well). It was pricier than the OP's configuration, but it won't keep you up at night (or bleed your wallet in power bills).
If you want previous-generation rackmount gear, try Weird Stuff Warehouse, in Sunnyvale, CA. Weird Stuff is where Silicon Valley data centers go to die. Typical: "HP Proliant DL160 G5 2x Quad-Core Xeon E5472 @ 3.0GHz 4GB RAM 1U Server - $189.95" They have lots of older Cisco gear and all the accessories you need to build a retro data center. Not that you'd want to.
I think he meant it's a repurposed cloud server, i.e. in its original home it was used in a cloud computing scenario.
I'm with everyone else though, in thinking that a good quality workstation would be a better fit for his purposes. For the same $300 you can get a decent workstation with a quad core CPU and hardware virtualization on eBay that can handle 32GB of memory, and room in the case for at least 2 HDDs and 4 SSDs if you use a stacking mount.
If you do have hardware virtualization extensions in your CPU, you're probably missing out on performance with Xen by using paravirt.
System calls and memory access are slower in PV than in HVM (see: the problems introduced for paravirt when x86_64 removed two of the protection rings from the CPU architecture), and PV-on-HVM drivers for block/network devices get you PV-level performance.
If you want old hardware, Ebay is one route. If you don't want to deal with bidding or need a warranty, there are places like http://www.servermonkey.com that specialize in refurbishing and reselling old servers and workstations.
On-the-fly survey for what I'm currently working on: http://utter.io/. Vote up if you have one or more servers at home which you are already virtualizing. I have 5 machines running OpenStack at the house.
I hope he has a basement or something to put it in, given all the noise it's going to make. I wonder if he would be better off financially buying a second-hand desktop PC due to power usage.
Perhaps this is a cloudlet then? Cloud computing is not merely VMs on demand. An essential nature of cloud is multiple distributed machines with the ability to transfer applications (or VMs) between machines for various reasons including load balancing, fault tolerance, and high availability. Furthermore, the application user is abstracted from which physical machine the application is running on. In your case, you have a single machine which you are likely addressing directly. This is not cloud computing.
Perhaps you mean "cloud storage", where your personal devices are able to store and retrieve things "in the cloud". The equivalent of a local FTP server (or NAS) does not a cloud make.
That was my first thought. For storage, a Synology would wipe the floor with his setup. He's using it as a compute node, though, and even the newest DS415+ wouldn't hold a candle to this server project.
Sounds like a hair dryer on low, as others have reported.
It's only super loud if you load it (I calculated Pi to 5 billion digits). It would spin up fans, be loud for 30 seconds, and spin down for a few minutes and repeat as it computed.