My $300 Home Cloud Server (acm.org)
155 points by theonewolf on Sept 26, 2014 | 121 comments


Hmm, seems like he made the classic mistake of thinking that an x86-64 server is somehow special and more worthwhile than just buying a bare-bones PC.

I've managed Dell servers before (real ones) and aside from a bunch of stuff to make managing them en masse easier (like DRAC), there isn't very much value in trying to retrofit one for home use.

VT-x or 24 GB of RAM support isn't exactly a hard requirement in 2014. Plus the Xeon E5410 is a 2007 chip (7 years old!) and gets its butt kicked by even a low-end i3 from 2012 [0].

Plus modern systems consume less electrical power, which makes running them cheaper, and a PC is certainly quieter than any rack-mounted server (which are designed for maximum cooling, noise be damned).

Additionally, if he had purchased off-the-shelf parts he would likely have a warranty for 12 months (or more). With the eBay-purchased Dell machine he not only gets no warranty, but Dell themselves don't even support it anymore(!), so no firmware updates.

So while this was a fun read, it just reminds me of my more naive days when I thought doing stuff like this was a good idea. Go over to any sysadmin forum (e.g. the Spiceworks community) and they'll tell you the same thing.

[0]http://cpuboss.com/cpus/Intel-Xeon-E5410-vs-Intel-Core-i3-32...

edit: Originally linked to a 2012 i5 comparison ($180 CPU). Altered it to a 2012 i3 comparison ($100 CPU), which still outperforms the 7-year-old Xeon. The $100 price tag seems a lot more comparable than a near-$200 CPU. Plus the i3 supports 32 GB of RAM and VT-x (the OP's requirements).


Sysadmin here. Agreed. A 1U or 2U server for home use is fairly ridiculous. In fact, it's kinda ridiculous for some business use now. We're probably only a few years away from a mainstream and popular form factor that's a fraction of the size, with tiny SSDs instead of giant spinning drives and power-sipping x86 or ARM chips. Imagine something the size of a stack of 4 or 5 iPads replacing the typical 19" racked server.

For small and perhaps even medium business, this could be a good fit. The 19" rack really is something from a past era. Soon people will look at it the same way I look at old mainframes and minicomputers.

As far as the article goes, $300 is impressive, but the costs of wasted electricity will eventually eat up the savings compared to just dropping some cash on a new Synology NAS, and with the NAS you get a warranty.


Agree with you and the OP. As for me, I work with these servers all the time (or am supposed to, for my cloud research), and wanted to toy around with one on my own.

The noise of the CS24-SC actually isn't so bad (as others report).

The power draw is 400W, less when the whole system is idle. So not horrible.

We do not run it 24/7; it isn't being used as a traditional server.

We turn it on to manipulate large VM images (my case), or to run an algorithm that needs a lot of RAM and cores (her case).

So it runs at most 8-10 hours, 2-4 times a week.


Sysadmin as well, proudly running a 4U Supermicro box as my home fileserver. If I could find a tower that fits that many hot-swappable drives, I'd look twice, but there are some other small advantages. Redundant power is nice in some situations, as is ECC if you really care about that sort of thing. Finally, given the sheer amount of processor overhead a dual socket gives you, it was easy for me to also utilize the machine as a VM server. As other posts have pointed out, an out-of-the-box IPMI system is additionally hard to turn down; these are all capabilities that I simply have not found together in commodity products.

My point, I guess, is that I think there are still situations that merit rolling your own "server box", even at home. (Not to disagree with anything you said, but to provide my own outlook.)


"The 19" rack really is something from a past era. Soon people will look at it the same way I look at old mainframes and minicomputers."

Hah, you do realize that the origin of the 19 inch relay rack is indeed the original main frames (of phone systems). And the main frame of an early computer was naturally built on the same hardware infrastructure.

And then the first "minis" (e.g. PDP-7, PDP-8, PDP-11, et al) were built on the 19 inch form factor too.

I just had to point that part out; I completely agree with your main point: the remaining colo rack I use is a mix of 4U, 2U and 1U "servers" plus a couple of tower machines. This is for personal use; for work we don't bother to even own our own hardware infrastructure any more.


There is a Hamburg-based startup called Protonet which is targeting smaller agencies with basically the form factor you describe.

They offer a drop-in solution that acts as your wifi AP and provides everything from "local cloud" data storage to a complete groupware solution on top of it. That's why they are so expensive, but their potential customer base is huge and they've got the product design in a good position.

(Not affiliated with Protonet in any way, just like the stuff they do.) https://protonet.info/en/product/comparison/


I don't think a Synology would have the horsepower that the story author was after. He specified that he was going to be chewing through large datasets, so the dual Xeons @2.3GHz were important to him. Synologies are great for what they do, but underpowered for CPU/RAM-intensive tasks...at least in the sub $500 range, which is what he was looking to spend.

I'm a happy Synology owner, but I don't use it for my dev server. It sits behind my VPN and serves media files. With the variable speed drives (WD Green) and ARM chip, it draws very little power. In short, I love it for the specific purpose for which it was designed, but I wouldn't use it for much else. My dev server is in AWS. It's just too easy to spin up and spin down whenever I need it and keeps me from having to devote space in my apartment (SF aint cheap!). Also, not having a hefty power bill is nice.

If I were advising the story author, I'd tell him that unless part of the enjoyment is building and running the thing, he should opt for a cloud provider instead. He could get a DO droplet spec'd beyond the physical server he bought for $0.12/hr and spin it up whenever he wanted to use it. Unless he's planning on using it almost continuously, you can fit a lot of 12-cent hours into a $310 budget, and that's before you consider the increased energy usage from running a server at home.
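Back-of-the-envelope, just to put a number on that (assuming the $0.12/hr figure above holds):

    budget = 310.00       # roughly what the author spent on the physical box
    droplet_rate = 0.12   # $/hr for the comparably specced droplet mentioned above
    print(round(budget / droplet_rate))   # ~2583 hours, i.e. over 100 days of continuous runtime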


> With the variable speed drives (WD Green)

None of Western Digital's HDDs change their RPM. It is fixed.


The WD docs for Intellipower indicate that RPM is one of the things they vary to conserve power. I can't find much in the way of specifics, but since they list the RPM of the drives as "Intellipower", the assumption would be that they don't operate at a fixed RPM.


The RPM varies between different models. Each drive has a fixed RPM [1][2].

From SPCR:

> Western Digital doesn't want to say that they're selling 5,400 RPM drives — those became second class in the desktop market years ago. Instead, they rate the drive's speed as "IntelliPower" and take pains to emphasize that there are other factors that affect performance.

> Western Digital has caught a lot of flak for withholding the rotation speed of the Green Power, especially when the product was first launched and the marketing material listed the rotation speed as 5,400-7,200 RPM. This led some to speculate that the rotation speed changed dynamically during use — which would have been an impressive engineering feat had it been true. The reality is revealed by a sentence that Western Digital added to the description of IntelliPower: "For each GreenPower™ drive model, WD may use a different, invariable RPM." In other words, Western Digital reserves the right to release both 5,400 RPM and 7,200 RPM drives under the Green Power name — without telling you which are which.

> We were able to confirm that our 750 GB Green Power had a spindle speed of 5,400 RPM by analyzing its sound spectrum. Why sound? Sound is vibration; the pitch of the sound corresponds to the frequency of the vibration. Hard drives vibrate at the speed of their motor, so they produce a noise at the same frequency as their rotation speed. Our sample had a sharp spike at exactly 90 Hz (cycles per second). Multiplying that number by 60 (to get cycles per minute) yielded a measured rotation speed of 5,400 RPM.

[1] http://www.silentpcreview.com/article786-page2.html

[2] http://arstechnica.com/civis/viewtopic.php?p=23526214#p23526...
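The arithmetic in that last quoted paragraph, spelled out:

    spike_hz = 90         # frequency of the vibration spike SPCR measured
    print(spike_hz * 60)  # 5400 -> a 5,400 RPM spindle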


A lightly used i5/i7 with a bunch of RAM and an SSD would be a much, much better host.

I rescued a Dell workstation with an i7-950 from disposal, and added 2 disks and some RAM (9GB). Still not as quick as a 2500k in a lot of stuff but great for anything I can throw at it (decent size MySQL DB, lots of Rails apps, and quite a bit of web traffic). I could bump it up to 24GB but that's still $250 worth of RAM, new.


ECC is nice, especially coupled with ZFS.


Yeah, the RAM alone is a costly thing actually.

I was lucky to find it so cheap :-)


IMHO a bare-bones PC is a bit over the top for a home setup. A broken laptop is the way to go if you want to get the power requirements and noise down. Really it comes down to whether the broken laptop will do what you need it to do, and the price is very competitive compared to even the most basic (and slow) VPS. You can also pop in an SSD and the broken-laptop server sings along.

I should define broken: this is where a keyboard or screen has died yet the actual gubbins work as Intel intended. What you don't want is a laptop where the CPU has fried the thermal paste and run hot for an extended time. That won't run well even if 'fixed'.

A broken-laptop-server will also contain battery backup. The power supply is definitely hot-swappable if you happen to have another power adapter for it.

Really there should be a USB-stick version of a common Linux distribution that is completely optimised for power management, with the intended use being a redundant laptop turned into a home server. It could use wake-on-LAN magic packets too, effectively making it a device that only draws power when you actually use it. This would be a complete contrast to the normal over-engineered server arrangement.
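Sending a wake-on-LAN magic packet is simple enough that a rough sketch fits in a comment; here's a minimal Python version (the MAC address is a placeholder, and the target's NIC/BIOS has to have WOL enabled):

    import socket

    def wake(mac, broadcast="255.255.255.255", port=9):
        # Magic packet: 6 bytes of 0xFF followed by the target MAC repeated 16 times.
        payload = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        packet = b"\xff" * 6 + payload * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(packet, (broadcast, port))

    wake("00:11:22:33:44:55")  # placeholder MAC of the sleeping laptop-server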


Broken laptops are certainly overlooked as an option, but you also have to be aware of your use-case. RAID is generally not available, and if you want to crunch lots of data as in the article, CPU and RAM aren't usually much to write home about. Horses for courses.


We are using this server sporadically, essentially in wake-on-LAN fashion.

Especially with the BMC console, we can remotely power it on, off, reset, etc.


Not really a "mistake"; they are different. A machine destined for a data center will have an IPMI subsystem in it; that is the thing that lets you run the machine remotely, reflash the BIOS and reboot if you want. A data center machine will probably have ECC RAM, which a barebones PC won't unless you are picky and find the one SKU that ASUS and Gigabyte make that has both an ECC-capable chipset and support in the BIOS. Also, a datacenter machine will be more comfortable running headless than a barebones PC, so there won't be anything popping up on the VGA screen and saying "press any key to continue." Lastly, a machine expecting to be in a rack in a data center probably has some insane fan noise when the fans spin up :-).
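For anyone who hasn't used IPMI, remote power control over the LAN typically looks something like this with ipmitool; the BMC address and credentials here are placeholders, and the Python wrapper is just for illustration:

    import subprocess

    BMC = ["-I", "lanplus", "-H", "192.168.1.50", "-U", "root", "-P", "changeme"]

    def ipmi(*args):
        # Talk to the server's BMC over the network (IPMI-over-LAN).
        result = subprocess.run(["ipmitool", *BMC, *args],
                                check=True, capture_output=True, text=True)
        return result.stdout.strip()

    print(ipmi("chassis", "power", "status"))   # e.g. "Chassis Power is off"
    ipmi("chassis", "power", "on")              # boot it headless, from anywhere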


>a datacenter machine will be more comfortable running headless than a barebones PC, so there won't be anything popping up on the VGA screen and saying "press any key to continue."

        Foreign configuration(s) found on adapter.
        Press any key to continue or 'C' to load the configuration utility,
        or 'F' to import foreign configuration(s) and continue.
        
        All of the disks from your previous configuration are gone. If this is
        an unexpected message, then please power off your system and check your cables
        to ensure all disks are present.
        Press any key to continue, or 'C' to load the configuration utility.
        
        Entering the configuration utility in this state will result in drive
        configuration changes. Press 'Y' to continue loading the configuration utility
        or please power off your system and check your cables to ensure all disks are
        present and reboot.
But generally speaking, that's more or less true (except for the temperamental LSI based controllers)


noise be damned

Holy crap you're not kidding. I (briefly) had a 1U server in an upstairs home office for a project, it got banished to the basement within an hour because of the jet-turbine noise coming off of it. I could still hear it in the basement.


I don't think it's special at all.

I work with these things in my cloud research and thought it'd be cute to play around with one at home.

Partly because of the challenge at getting an old clunker back into a working state.

And partly because I like dogfooding and really knowing what it takes to run an entire stack---including the hardware underneath me.

Those are my real reasons. This was for fun :-)


All that is fine and dandy, but then there's the RAM. These days RAM is so much more expensive! 4GB DDR3 sticks run around $8-$10/GB. So his 24GB of memory alone would cost around $200 at least.


Precisely, you get it.

This RAM was _insanely_ cheap :-)


Sure, it's cheaper than buying new RAM, but it's also

* 5+ years old

* has a peak transfer rate that's only 40% of contemporary PC3-12800
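That 40% figure presumably compares PC2-5300 DDR2 (my guess at what this box takes) against PC3-12800:

    pc2_5300 = 5300     # MB/s peak for DDR2-667
    pc3_12800 = 12800   # MB/s peak for DDR3-1600
    print(f"{pc2_5300 / pc3_12800:.0%}")   # ~41%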


And the whole point of the article is cheapness.

As in, some people buy old clunker cars and fix them up as a hobby and also sometimes for cheapness.

Sure, they pollute more, get less MPG, etc. etc.

But they can be dirt cheap, and fun to restore if you like that kind of thing.


"Plus the Xeon E5410 is a 2007 chip (7 years old!) and gets it butt kicked by even a low end i3 from 2012"

Yes, but in an age of ZFS, can a non-Xeon support ECC RAM?

I think a rackmount home server is a good choice if you already have other rackmount equipment in place, such as switches and routers.

If that server is the only rackmount device you're going to get, then I think it's probably unnecessary.


The i3s support ECC. The i5s and i7s don't, because that's where Intel branches their consumer models from their server models.


Actually, it seems some (not all) laptop CPUs from the i3 up to the i7 do support ECC, while the desktop versions don't.

Search for ECC in the pages below.

https://en.wikipedia.org/wiki/List_of_Intel_Core_i3_micropro...

https://en.wikipedia.org/wiki/List_of_Intel_Core_i7_micropro...


Depends: if you want compute-heavy HPC/big data, dual Xeons with 128GB memory and 30-40 TB of DASD are the way to go.

And a lot of the consumer boards can't handle that many Tesla cards; they run out of PCIe lanes.


Why use the term cloud? It's just a server, no cloud-ness.

https://en.wikipedia.org/wiki/Cloud_computing


> _Oh yeah, did I mention I also wanted a beefier machine at home so I could manipulate large virtual machine images?_

I think he's looking to create a setup where he can spin up and destroy networked VMs, which would indeed make the project count as a cloud. From your Wikipedia link:

> The main enabling technology for cloud computing is virtualization. Virtualization software allows a physical computing device to be electronically separated into one or more "virtual" devices, each of which can be easily used and managed to perform computing tasks. With operating system–level virtualization essentially creating a scalable system of multiple independent computing devices, idle computing resources can be allocated and used more efficiently. Virtualization provides the agility required to speed up IT operations, and reduces cost by increasing infrastructure utilization. Autonomic computing automates the process through which the user can provision resources on-demand. By minimizing user involvement, automation speeds up the process, reduces labor costs and reduces the possibility of human errors.[27]

Admittedly, it depends on what he wants to do with this that would determine whether it's "really" a cloud or not.


Cloud research.

Working with a modified hypervisor (QEMU) and modified cloud software (OpenStack, in my research).


He's not saying he has a cloud at home, but rather that this server was likely part of one in a prior life:

""" There is a lot of conjecture on where the Dell 1U rackmount model CS24-SC came from. Some people say Facebook data centers. Others just say that it was mass-produced for “clouds.” """


Precisely, thanks for reading.

And technically I use it as a mini-cloud for cloud research.

I modify hypervisors and do things no cloud in the world can :-)


The fact that the primary purpose is for varying computing tasks on demand makes the "cloud" term appropriate in my opinion.



Don't bring a server into your home if you value your sanity. Those little fans spin at 20,000RPM and can be heard throughout your house.


I remember when the company I was working at bought a bunch of new Sun hardware, and ended up throwing out all their old Sun 420Rs. Despite being slightly outdated, they were still in perfect condition, so I took the 4 of them to the house (carefully, because they were heavy) and set up a pretend data center.

The first shock, I can confirm, was the noise they made. I ended up relegating them to the basement, which was otherwise unused.

The second shock was when my utility bill came in the next month. It's very easy to know the power consumption of enterprise-class servers for planning purposes. There's a giant abstraction between the power consumption on paper, and the power consumption on your home utility bill that I think everybody working around servers should experience at least once.

They were retired, shortly after.


The noisiest system I've ever owned was a Sun T1000 1U server. Here are some examples. I tried to mute it a bit by putting it in a huge tupperware-like under-bed storage box, but even that didn't help one bit.

http://www.mrbill.net/t1k/noise/


I worked on a stack of T1000s. We had a room next to our sysadmin hovel that we used for setting up systems before putting them in the datacenter. I worked remotely a lot, and would periodically reboot these servers. When they booted, part of their POST was running their fans at 99% of their RPM capability. The noise was astounding, and my coworkers grew pretty tired of me...


I had the opportunity to get a 7U Compaq server for cheap one time (quad xeon CPUs, tons of RAM, 2 different RAID arrays). And I'm looking at it thinking "Wow, this would be so cool to have at the house!".

But then I look at the label by the dual power supplies: 20A circuit required -- each! I would have to pay an electrician to install a new breaker and run a new line to it. And that would have doubled my purchase costs. I passed.


I've heard the same thing often repeated about enterprise-grade storage arrays. These things keep the disks spinning pretty much all the time. A home-grade NAS is going to aggressively power save.

Absolutely: don't confuse home/SME with enterprise grade. The needs are very different.


As someone who was forced to have a 1U server with a tiny 12k RPM fan on his desk for a week while doing some forensics on it: I agree.


The 1U servers are very loud. But a 4U or tower server will be fairly quiet, and have the bonus ability to hold more disks. A must for VMs; disk contention on spinning disks gets pretty high after your 2nd VM.


2nd disk or a single SSD.

Depends on workload of course, but most VMs read a lot more than they write, so an SSD can handle quite a few before you notice any IO-related latency (3-4 small web apps, 1-2 medium, 1 database VM).

The OP also put his two drives into RAID 1, so he won't get any benefit from having two drives.


RAID1 does get you faster random IOPS, so it's not like there's no benefit...


Nice, I thought RAID 1 just gave redundancy.

I use mdadm's RAID 10 on 2 drives for redundancy and speed.

"In mdadm, the RAID10 level creates a single complex software RAID that combines features of both RAID 0 (striping) and RAID 1 (mirroring). Multiple copies of all data blocks are arranged on multiple drives following a striping discipline." http://doc.opensuse.org/products/draft/SLES/SLES-storage_sd_...


RAID 1 mostly gives more random read IOPS, not faster. You have two drives from which to read data, since they're mirrored.

You do have more chances for the spindles to be near your data (especially with >2 drives in the array), which can slightly reduce average latency, but that isn't as large an effect.


I'd use the term "can" rather than "does", just because the performance gain is tiny[0] and some RAID controllers don't do a very good job of utilising it.

[0]http://kmwoley.com/blog/?p=429


I'm assuming now that most of my VMs will be completely cached in my 24 GB RAM.


Disk contention?

I'm expecting to cache my VM disks in RAM completely while I do my experiments.


Yeah I don't see how Xeon makes much sense for CV in the home.

I think you'd do much better to pick up a milk crate miner with a core i* processor.


It's not just CV, but I get your point.


Mine are more like 8k RPM. And if you have a room to isolate them in, it's not so bad (watch temps!).


Maybe I'm missing something: why not just use a cloud VM? Turn it on and off as you like, and the billing stops when it's off. Surely that would be less than $300, wouldn't be in your house blowing air, and would be (more) easily accessible anywhere.


I do cloud research.

I modify the hypervisor.

I do things that no cloud in the world can do.


You had me at hypervisor. ;)

(Thanks! That's all I wanted to know.)


This is particularly funny when you use the Cloud to Butt chrome extension.


How do you know when someone uses a text replacement extension?

Don't worry, they'll tell you.


This is a home server, not a cloud server:

- no remote access to the console if ssh stops working

- no possibility to remotely power it down/up

- if the hardware dies, no possibility to restart on a different hardware remotely

There's nothing wrong with a home server, I have one or three myself, but these limitations are why I move "essential" services (e.g. email) to the cloud.

Home servers are better suited for exactly this: a need for high CPU, or a need for a lot of storage, for a backup server for example. I used to run everything on home servers and back them up to the cloud, but I have now started to reverse this: run the services in the cloud (cheap for a small service, remotely accessible, etc.), and back them up to the home server (cheaper large storage, and it's fine if it goes offline for a few days while I'm on holiday, for example).


> - no remote access to the console if ssh stops working

That depends on what you install, obviously. But more importantly, this is an Intel CPU, so there is remote management, and:

> - no possibility to remotely power it down/up

Again, Intel CPU, see below...

> - if the hardware dies, no possibility to restart on a different hardware remotely

That also depends: backup and automation are orthogonal to it being "cloud" or not -- not to mention that with VT-x support, one might set up the actual services as VMs, even if just running on a single physical machine -- you would have to do off-site backup -- but then again you should be doing that anyway...

But, back to the Intel remote management stuff: I recently scared the willies out of myself by setting up remote management on my laptop. Full console and serial access, optionally with dial-out over WLAN support. Just from running an Intel chip/BIOS on my laptop. That's this stuff here:

https://www.fsf.org/blogs/community/active-management-techno...

http://blog.yarda.eu/2012/09/intel-amt-quickstart-guide.html

http://www.howtogeek.com/56538/how-to-remotely-control-your-...

http://www.thinkwiki.org/wiki/Intel_Active_Management_Techno...

(etc)


https://news.ycombinator.com/item?id=8374003

Not sure if I should repeat myself a lot or not.


The latter would be preferable.


I once had a 2U rack server. Proper double Pentium III's inside, ECC RAM, serial console, alarms on the case, everything. Doubled as a coffee table in my living room. Alas, I was young.

Nowadays I have an Array R2 PC case with six HDDs, one SSD for the OS, a mini-ITX board, an Intel CPU with virtualization support (I really don't remember the model) and a few gigs of RAM. All (except the HDDs) were bought new at about the same cost. The SSD is a 32GB Chinese model from eBay. The machine is powerful, quiet when the HDDs are spun down, relatively small and elegant, and acts as NAS, server, testbed, etc. Also, at idle it consumes about 20W.


$300 for the server, $50 a month to power that heater.


With my 400W draw and around 10 cents per kilowatt-hour in my area, I'm estimating a cost of $30.00 for 750 hours.

Now, I do not have this running 24/7.

Only when she or I want to run a research job.

So we're talking 8-10 hours, 2-4 times a week.

Pretty cheap actually overall :-)
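Rough numbers using those figures:

    watts = 400              # worst-case draw from above
    rate = 0.10              # $/kWh in my area
    hours_per_week = 9 * 3   # midpoint of "8-10 hours, 2-4 times a week"
    print(f"${watts / 1000 * hours_per_week * rate:.2f}/week")   # ~$1.08 per week of actual use
    print(f"${watts / 1000 * 750 * rate:.2f} for 750 hours")     # the $30.00 figure above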


If you have a local makerspace / hackerspace, check with them. We get quite a few decommissioned servers donated to ours but we generally don't have much use for them. Noise and heat we can sorta manage, but the power usage is a bit steep. We'd be happy to host them for people if they pay for the power and aren't doing anything too crazy with the network.


Thanks for the tip!


Is this super-loud? I have an old 1U server which seemed cool until I booted it up. It's like vacuum-cleaner-loud.


It's not horrible, sounds like a blow drier on low setting when idling.


My Kill-A-Watt meter helped me determine that the best home server is almost always going to be a fast laptop (around 2 years old) stashed in your bedroom closet. You'll get the most computer per dollar of purchase price/power price available, plus a built-in UPS and quiet operation as a bonus.


That makes sense.

Unless you need, like me, 24-32 GB+ of RAM, dual HDDs, and a whole lot of cores to run 3-4 VMs at a time.

Then the laptop is so resource-poor that it is rendered useless for the workloads I described in the article.

This article wasn't about how to make a "cheap home server"---it was about replicating real data-center-class hardware in the home and how to do that cheaply.


YMMV I guess. 20 GB of RAM (two 8s plus 4 onboard), a 1.5 TB 7200 RPM drive and an i7 were enough for me to run my handful of VMs.

Granted, no ECC, no mirroring, no future expansion. But then no $18/month power bill either. To each their own.


How much did your 20 GB of RAM cost you?

We don't run 24/7, only when we need to do our experiments.

So the power bill is much lower than what a lot of people are worried about.

This setup is definitely used on-demand :-)


Unless you need ECC memory for some reason, next time, do your ears and your wallet a favor and just buy regular x86 consumer hardware. It's half as loud (those 10k RPM fans must be killing your ears) and half the price and will consume half the energy.


So you believe I can buy 24 GB of non-ECC RAM for less than $30?


how is this a cloud server? that's like calling every picture a "selfie"...


First, what I meant was that in its past life it was probably in a cloud.

Secondly, I do cloud research.

I run a modified KVM/QEMU hypervisor.

I run a cloud with features you can not find anywhere else in the world today.

So, it is, indeed, a mini-home cloud :-)


Well, now you're assuming virtualization is the same as cloud; it is not. A cloud researcher should know what a "cloud" actually is, instead of the marketing usage of it. For one, redundancy is always implicit in any cloud solution.


I don't believe redundancy (or what I think you really mean: fault tolerance) is implicit in the definition of cloud computing.

The most general form of the term 'cloud' in my mind is elastic access to computational resources.

There is no guarantee of high availability (in fact, we see failed AWS instances and horror stories all the time), nor automated failover/redundancy/fault tolerance.

Chaos Monkey by Netflix is a great example of the shortcomings of the modern cloud in terms of fault tolerance and redundancy.

The cloud doesn't give it to you, you have to build that yourself.

I'm a very open-minded person, perhaps because of my academic background, so I would be happy to say that I agree that a subtype of the fundamental cloud could be a redundant/fault-tolerant one.

But the fundamental definition of cloud does not include fault tolerance.

Don't trust me.

Check out the more formal definition from the US NIST (years of work, 15 drafts); there is no mention of redundancy or fault tolerance:

http://faculty.winthrop.edu/domanm/csci411/Handouts/NIST.pdf


Also, my setup is a private cloud (in the NIST definition).

There are two tenants: myself, and my fiancee.

We use VM technology to share the same set of physical resources.

Our VMs are managed via Vagrant.

VM technology does not imply cloud, you are right; however, it is often used to implement clouds as an abstraction layer over hardware.


Warning: this page loads 20 megabytes of huge images that might crash your mobile browser. It does that for no good reason; the images are only shown at thumbnail size.


Fixed. Sorry, I was hosting them myself and keeping them hi-res for the magazine.

ACM is fixing something at their end to host them, and at the smaller size.


thanks!


Dell (T20) and Lenovo (TS140) sell quiet and low-power servers which are targeted at homes and small businesses. Prices start at $300. A model with VT-d and a vPro Xeon E3 is $500. The Dell model officially supports Red Hat Linux, so there's no Windows tax.

http://www.rackrealm.com/dell-t20-server-and-openmediavault-...


How much are they with 8 real cores and 24 - 32 GB RAM? And two hard drives?


I'm pretty sure the E5410s (or any Harpertown/Woodcrest processor) in a standard configuration use fully buffered (FB-DIMM) RAM, as opposed to unbuffered. Super cool that Dell did some trickery in the CS24-SC (apparently CS means "custom server") to bypass that obnoxious requirement.

EDIT: Apparently this family of servers uses a bastardized chipset that doesn't require FB-DIMMs; nothing special on Dell's part.


I've got something like this sitting in my living room, but instead of using a 1U rackmount server, I opted for a Mini-ITX system with a LIAN LI PC-Q25B case.

It's quiet, holds tons of drives and actually looks like it belongs next to the TV (I run XBMC on it as well). It was pricier than the OP's configuration, but it won't keep you up at night (or bleed your wallet in power bills).


Mine runs at 400W, less at idle.

We don't run it 24/7, only when we have some research jobs to run :-)


If you want previous-generation rackmount gear, try Weird Stuff Warehouse, in Sunnyvale, CA. Weird Stuff is where Silicon Valley data centers go to die. Typical: "HP Proliant DL160 G5 2x Quad-Core Xeon E5472 @ 3.0GHz 4GB RAM 1U Server - $189.95" They have lots of older Cisco gear and all the accessories you need to build a retro data center. Not that you'd want to.


That's already more expensive than what I paid for the 1U server :-)

And I packed in 24 GB RAM for less than $190 (I think $188).

Only the HDDs, and a couple of other accessories, bring the price up to $300.


How is that cloud? Isn't that just... a server?


I think he meant it's a repurposed cloud server, i.e. in its original home it was used in a cloud computing scenario.

I'm with everyone else though, in thinking that a good quality workstation would be a better fit for his purposes. For the same $300 you can get a decent workstation with a quad core CPU and hardware virtualization on eBay that can handle 32GB of memory, and room in the case for at least 2 HDDs and 4 SSDs if you use a stacking mount.


"Home cloud" is 100% buzzword-compliant. Servers are those things with tape drives and blinkenlights from the 80s!


> Figure 4. The HDDs connect fine without caddies.

Uhhh....


So the SATA/SAS ports are low enough that the hard drives fit fine in there and connect nicely. They're at a very slight angle.

Are you worried about vibration? Or something else? I am a little bit.


Yeah, vibration, or accidental removal. I have cats, dogs, and kids.


"We’d need CPUs with VT-x or AMD-V so we could run VMs with accelerated hardware support. VMs run slow as a snail without acceleration."

This is false. This is why I use Xen on a Debian host with paravirtualization.

I can then have many Linux guests, all served by cheap older hardware that doesn't have the hardware hooks.


If you do have hardware extensions for virtualization optimization in your CPU, you're probably missing out on performance with Xen by using paravirt.

System calls and memory access are slower in PV than HVM (see: problems introduced with paravirt when x86_64 removed two of the protection rings from the CPU architecture), and PV-on-HVM drivers for block/network devices get you PV-level performance.


HP-built Itaniums go for $100 on eBay. I bought one the other day (for kicks). The performance is pretty nice.


It's pretty fun :-)

Also, if you're the real RWMJ, hello :-)

I work on introspecting virtual disks with QEMU/KVM (someday hoping to patch QEMU to support my research).

We've interacted a few times before.


Interesting .. got a link?



If you want old hardware, Ebay is one route. If you don't want to deal with bidding or need a warranty, there are places like http://www.servermonkey.com that specialize in refurbishing and reselling old servers and workstations.


Totally agree, as a hobby eBay is fine.

I avoided these guys (and others like them) because of the premium cost on top.


On-the-fly survey for what I'm currently working on: http://utter.io/. Vote up if you have a server(s) at home which you are already virtualizing. I have 5 machines running OpenStack at the house.


I hope he has a basement or somewhere else to put it, with all the noise it's going to make. I wonder if he would be better off financially buying a second-hand desktop PC, due to power usage.


This guy uses 400W, less at idle.

If I could find a cheap desktop with 24-32 GB RAM (again, also cheap, like $60 cheap), then that would be an option.


Personally I want to make a huge Raspberry PI and Netduino cluster.


Sounds awesome :-)


Or PogoPlugs.


Home and cloud are mutually exclusive I think. Cool build though.


Au contraire.

See the cloudlets concept: http://elijah.cs.cmu.edu/


Perhaps this is a cloudlet then? Cloud computing is not merely VMs on demand. An essential characteristic of a cloud is multiple distributed machines with the ability to transfer applications (or VMs) between machines for various reasons, including load balancing, fault tolerance, and high availability. Furthermore, the application user is abstracted from which physical machine the application is running on. In your case, you have a single machine which you are likely addressing directly. This is not cloud computing.

Perhaps you mean "cloud storage", where your personal devices are able to store and retrieve things "in the cloud". The equivalent of a local FTP server (or NAS) does not a cloud make.

I appreciated the writeup nonetheless.


Just buy a Synology NAS and be done with it.


That was my first thought. For storage, a Synology would wipe the floor with his setup. He's using it as a compute node, though, and even the newest DS415+ wouldn't hold a candle to this server project.


I run a customized hypervisor for cloud research.

I do things no cloud in the world can currently do.

My use case is not a NAS :-)


As a geek hobbyist, I have always liked the 1U form factor, but the noise out of those things can be crazy.


The CS24-SC isn't so bad.

Sounds like a hair drier on low, as others have reported.

It's only super loud if you load it (I calculated Pi to 5 billion digits). It would spin up fans, be loud for 30 seconds, and spin down for a few minutes and repeat as it computed.


Don't let the sound bother you. Put it down in the basement or somewhere where you can't hear it and use the DRAC to power it on and off.


Precisely. We leave it in a more unused space, and the BMC is useful in remotely powering up and checking on it :-)


How long did it take to complete the Memtest? Just curious.



