How enthusiasts designed a powerful desktop PC with an ARM processor (pcworld.com)
88 points by rbanffy on May 2, 2017 | 48 comments



Article is all about vaporware. We were probably all hoping for more when we clicked. Maybe the title should be something like "Enthusiasts Dream of a Powerful ARM Desktop PC".

ARM isn't a super awesome ISA though. (I've programmed directly in 6811, 68000, PPC, MIPS, ARM.) I'd be a little tempted to hope for skipping ARM and going straight to desktop-class RISC-V hardware. The latest RISC-V chips are more like mid-range embedded ARM, but the dream of a clean new ISA that's open source and could become the Linux of CPUs is enticing.


Agreed, I can't believe that article existed with that title. I was rather annoyed when I got to the end.

Title should have been, "Summary of an audio recording about the type of ARM computer a bunch of nerds wish existed."


Have you worked with AArch64? It's a nice improvement over 32-bit ARM.

I'd be more than happy for RISC-V to take over, but it does no good to pretend it isn't a risky endeavor. And I'd rather have an AArch64 world than an x86-64 one.


What makes you prefer RISC-V to MIPS hardware, out of interest?


When I saw the picture at the top, I thought someone besides me was using ThunderX as a PC. The single-thread performance is nothing to write home about, but if you can find something to engage all 48 cores (like pbzip2), then it's bananas.
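To give a feel for it, here's roughly what engaging all 48 cores looks like with pbzip2 (a sketch; the tarball name is just an example, and -p sets the thread count):

    tar -cf sources.tar src/       # something big to compress
    pbzip2 -p48 -9 sources.tar     # 48 compression threads, max block size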

I don't know why they say Cavium is "not interested" in making a PC. It's not their main market, but they'll sell you chips and eval boards. You need a few daughter cards for SATA and PCIe so you can plug in a graphics card. What's really needed is someone to integrate that design, putting the normal I/O on board.

It runs 98% of Ubuntu packages, thanks in part to Linaro's own hard work. So they should know it's a perfectly viable project.


Absolutely. I'll go further and say they should buy Cavium and an S-ASIC vendor (eASIC or Triad) to have a pile of SoC tech that can be applied in most applications, plus an option for rapid development of non-performance-critical chips. They already have fabs for sensitive stuff, too, but I don't know how they perform. Maybe buy a graphics vendor next. Then they're covered in desktops, servers, laptops, and many purpose-built appliances. The stuff might also be scaled down or sold at low margin for embedded applications where microcontrollers might have been used.

Russia has the money. There's a number of vendors with the tech. They should just buy their way into the high end.


Can you point to a sales channel where one might be picked up? The last time I reached out to Cavium directly, they made it seem like they weren't interested in a sale that wouldn't lead to volume later.


I'd talk to Gigabyte. They build systems to spec. If you want to go to Cavium directly then you're right that you have to convince them you have a solid business plan to move some thousands of units a month, in which case they will bend over backwards to help you do so.


Actually, they did not design anything, or at least nothing that's described in the article.

Besides, there's no mention of the complete lack of graphics drivers for ARM under Linux, which is one of the most problematic issues for anyone who wants to build a desktop/laptop with it. No ARM SoC manufacturer has any open source driver effort, and reverse-engineering efforts have stalled for the most part.


Reverse engineering efforts have only stalled for ARM SoCs using PowerVR GPUs from Imagination Technologies. However for Qualcomm Adreno and Vivante GPUs, the freedreno and etnaviv projects are steadily and quietly moving forward; both have been integrated into mainline Mesa at this time.

For instance, I learned from HN last week that you can run Android on i.MX6 with open source GPU drivers. This is a big deal. See: https://www.collabora.com/news-and-blog/blog/2017/04/27/andr...


What about Nvidia's powerful K1/X1/P1 Tegra chips? Any hope there?


The Google Pixel C runs using the Nouveau open-source drivers.


> No ARM SoC manufacturer has any open source effort for drivers

Eric Anholt at Broadcom has been working full time on open source VideoCore IV (Raspberry Pi) drivers for quite some time now. https://anholt.github.io/twivc4


Yeah, the article doesn't suggest that they nailed down a particular design.

I don't think the server-class chips have display controllers, so you would just use an external PCIe card with them.


NVIDIA have Linux 32-bit ARM drivers available on their download page, right next to the x86 versions.


Those are drivers for the regular GPUs that you plug into a PCI slot on an ARM server. I don't think they'd work with the onboard GPU on the Tegra SoC.


I thought Nvidia had some open-source driver effort (somewhat ironically) for their ARM chips?


Yes, used in production for the Pixel C.


Is it developed by nVidia themselves though? Usually Nouveau is driven by the community.


NVIDIA has actually contributed some code to Nouveau to make sure that it will work with the Tegra platform (the graphics engine there, unlike on every x86 platform, isn't behind a PCI bus). Also, the display part of the Tegra SoCs is entirely different from what the desktop chips have and is controlled by an open source driver written by NVIDIA (tegradc).


Long time embedded Linux developer here.

This article exaggerates the undesirability of cross-compiling from x86 to ARM. It requires a bit of a conceptual shift, but once your toolchain is set up, the workflow is practically the same. Debugging is a bit trickier, but most issues can be sorted out on x86 before cross-compiling. It's definitely not like developing for Windows on a Mac.
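To make that concrete, here's a minimal sketch using Debian's packaged cross toolchain and QEMU user-mode emulation (package names from Debian; hello.c is just a stand-in):

    sudo apt-get install gcc-arm-linux-gnueabihf qemu-user
    arm-linux-gnueabihf-gcc -static -o hello hello.c   # build an ARM binary on x86
    qemu-arm ./hello                                   # and run it on the x86 host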

There may be a compelling case for a PC that consumes less energy by using an ARM CPU, though peripherals like DDR RAM, SSD and video card will use the same energy regardless of CPU architecture.


Embedded Linux developer here as well.

As much as I grind my teeth with Yocto, it does make the process of generating a usable toolchain much easier than it used to be. Which makes the rest of it easier.

Up until a couple of years ago it was pretty much "grab what you can from CodeSourcery and cross your fingers".


I've found that:

* if all you need is a toolchain, go with crosstool-ng (see the sketch below).

* if you want a toolchain and a bootable kernel with busybox and dropbear for a popular chipset, go with buildroot.

* if you want buildroot plus flexibility in every direction (at the cost of configuration pain), go with yocto.

* if you've outgrown even yocto, you're back to crosstool-ng and rolling your own build/bundle system.
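For the plain-toolchain case, the crosstool-ng flow looks roughly like this (a sketch; the sample name is one of ct-ng's preconfigured targets):

    ct-ng list-samples              # list the preconfigured target samples
    ct-ng arm-unknown-linux-gnueabi # start from one of them
    ct-ng menuconfig                # tweak libc, kernel headers, etc.
    ct-ng build                     # toolchain lands in ~/x-tools/ by default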


For my current project I'm pulling the cross compiler from Debian and the libraries from Debian:armel. This doesn't work for every project, but it sure is easy when it does. I sure don't miss the days of building my own compiler and carefully crafting my root filesystem, but I know I can always fall back on that if I have to.
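For anyone who hasn't tried it, the multiarch incantation looks something like this (a sketch; libusb is a stand-in for whatever :armel libraries your project needs):

    sudo dpkg --add-architecture armel
    sudo apt-get update
    sudo apt-get install crossbuild-essential-armel libusb-1.0-0-dev:armel
    arm-linux-gnueabi-gcc app.c -lusb-1.0 -o app   # links against the armel libraries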


It seems a shame that Nvidia don't make a NUC-like product from the Jetson TX2. See: http://www.nvidia.com/object/embedded-systems-dev-kits-modul....

They'd need to sell it for about $400 though, with case and PSU.


Just the module itself is selling for $530 at Arrow, so I don't think your price target will be hit anytime soon.

However, if you want a fairly powerful 64-bit ARM system with a non-expandable 8GB of RAM, I think that's the way to go right now.


What size of case will fit that board?


Looks like the devkit should fit mITX cases.


There's a MIPS + PowerVR PC that can run Debian Linux, with support for hardware virtualization of the GPU. Does not seem to be available outside of Russia.

http://www.pcworld.com/article/3040528/computers/this-russia...

https://www.imgtec.com/blog/t-platforms-tavolga-terminal-des...


Another great ARM-based, enthusiast-built system: the Open Pandora, and its follow-up, the Pyra:

https://pyra-handheld.com/boards/pages/pyra/

Maybe not high-performance, but decent. And such a great, fully integrated, community-led project.


Yes, I'm one of the pre-orderers and it's moving closer to delivery. Nobody knows yet when it'll happen though.

Current status: https://pyra-handheld.com/boards/threads/i-wish-you-could-fe...

Here is the cost break-out: https://pyra-handheld.com/boards/threads/money-makes-the-wor...

The current setback is that a first batch of cases was either damaged on delivery or not sent: https://pyra-handheld.com/boards/threads/the-following-is-ba...


What about repurposing ARM-based Chromebooks?[1] Same result, less work.

[1] https://www.chromium.org/chromium-os/developer-information-f...


With Chromebooks that support Android, you can get a console Linux environment via Termux.
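As a tiny example of what that gets you (Termux's pkg tool wraps apt; hello.c is just a placeholder):

    pkg install clang make
    clang hello.c -o hello && ./hello

That's enough for a surprising amount of everyday development.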


This is how our school teaches AP CS 1 and CS 2. They used to use Crouton, but it required developer mode. Now that Google has added containers to ChromeOS, you can use Termux inside a container without developer mode.

ChromeOS is the only commercial OS that gives you, on a laptop, the exact same environment used in the cloud.

In some ways ChromeOS is now an OS of OSes. My second-oldest son, studying CS at university, uses ChromeOS, Android, and desktop Linux all on the same machine at the same time. With containers this is done with basically zero additional overhead.

Google has something really special here, and I'd expect it to become a strong developer platform over the next couple of years.

It is ideal in that you just pull down whatever containers you need and they run unchanged on ChromeOS, since it has the same Linux kernel as the cloud.


All of the more recent ARM SBCs are capable of running as desktops for the most common tasks (browsing the web, watching movies, editing documents, and so on) on less than 5W of electricity.

ARM's selling point is efficiency and cost, not raw power irrespective of cost.

edit: The real issue right now is not whether ARM can run as a desktop, because it can. It lies in keeping the large number of devices and processors supported upstream in the Linux kernel. Monitoring kernelci.org shows that this is slowly becoming less of a problem as more enthusiasts become fans of ARM processors and the available chips and SBCs.


I'm pretty sure this title is going to apply to Apple, in the not too distant future.

Unless they've changed their direction, again.

It's also one more way to explain the foot-dragging on the Mac Pro. It's hard to launch a new high-end platform when you're focused on switching to an architecture squarely aimed at mobile (phones and laptops).

Now that they say a new Pro is coming, I wonder a bit more about whether they've switched directions and are going to stay with Intel in the near future.

P.S. Or have they continued development and are they going to be one of the first to introduce true high-end ARM power through up-design and an effective multi-processor/multi-package integration? (Aside from ARM server clusters, which can be powerful but are a different kind of thing.)


While it'd be neat to see, I think the hurdles that exist in moving from Intel to ARM are pretty high, and far higher than they were when they moved from the PowerPC range to Intel.

Just a few off the top of my head:

- The Mac platform, today, is much more widely used than it was in the PowerPC days. This is a double-edged sword. Apple can use their substantial weight to force ISVs to recompile[0]. These ISVs are unlikely to provide the next version for free. Some ISVs won't exist any longer, forcing one to run the application in whatever Rosetta-like compatibility layer is produced. I can't speak to how well a translation application would work going from x86 to ARM, but the only emulators I've seen that perform well are ones where the target platform is dramatically more powerful than the emulated platform (Nintendo emulators, etc). The impact on users of upgrading is very high, and there are many more of them now, which will get noisy. Their competition has also gotten better at producing more desirable alternatives (Surface Book, Windows 10[1]).

- ARM's aims are performance-per-watt, not performance at all costs. I don't care if my desktop drinks electricity. I care if it does things quickly. I don't believe Apple will make this move until they can be assured that a processor can be developed for about the cost of an Intel equivalent and will perform as well[2].

- I'm foggy on the details, but my understanding is that App Store submissions are required to be done in a way that provides LLVM or other IL/VM language code instead of machine code. This could land them in a spot where re-compiling to a new platform can be done by Apple (assuming ToS have granted them that right), which is great ... for apps in the App Store. The mac platform app store isn't the only source of software.

- Apple would almost certainly have to design the processor, as they have with their phones. "They've done it before, they can do it here" is a somewhat fair and unfair argument. Desktops have different design goals than phones/iPads. Notebook designs would probably fall somewhere in between the two. To do it right, they probably have to support two new processor designs: one for a "Pro Desktop" and one for a notebook that has more performance than their highest-end mobile device, but sips power like their highest-end mobile device. The cost, all around, will be high: low-power/high-performance/cheap, pick two. Intel gives them high-performance/cheap and moderate power.

There are other, less important reasons, but in weighing pros and cons, I can't come up with a lot of benefits to doing this. The funny thing is that even as I was writing this, I could come up with plenty of counter arguments and the reality is that "Apple could actually pull something like this off". I am having a difficult time figuring out, though, what the upside is for them. Apple owning the processor doesn't buy them a whole lot. And while having a single set of CPU instructions[3] sounds like it might benefit ISVs, it really only helps mac-only ISVs. Most of those ISVs are going to have to target Intel platforms if they're supporting Windows. There would have to be a few other huge reasons to do this to offset the costs.

Personally speaking, I'd love to see something like this ... I'd probably find myself buying an Apple product[4].

Part of me (the more cynical side) wonders if these rumors don't originate out of Apple as a way to keep Intel on its toes. Apple is a big customer, and hints that Apple may jump ship to ARM keep Intel focused on improving the performance-per-watt ratio and likely help them on price.

[0] And we all know it's not just a matter of changing the target platform for any moderately complex application

[1] Yeah, maybe not the best examples, but privacy elements aside, Windows 10 works, is pleasant to use and rarely crashes despite my running insider builds in the fast channel.

[2] I haven't looked too deeply into the server processors, but my sense is that they're desirable mainly because of the core counts, almost as though the ARM processors are making up for single-threaded performance by throwing more cores at the problem. And that's probably a good deal for many/most server use cases. Many server tasks are simple as far as an individual thread is concerned, but multiplied by concurrent use.

[3] Well, no, not exactly.

[4] For all of the same reasons these "insiders have [not actually] designed an ARM desktop", but also because my inner geek would like to play with an ARM desktop.


> I'm foggy on the details, but my understanding is that App Store submissions are required to be done in a way that provides LLVM or other IL/VM language code instead of machine code. This could land them in a spot where re-compiling to a new platform can be done by Apple (assuming ToS have granted them that right), which is great ... for apps in the App Store. The mac platform app store isn't the only source of software.

Apple calls this "Bitcode". It's not useful for portability between architectures; it's only good for adapting to smaller changes in instruction sets. (For example, maybe the current chip doesn't do integer division, but a future one does.)

Here's a discussion: https://news.ycombinator.com/item?id=9727599


The article begins by looking back many years. In those days there were technical limitations and clever workarounds. One MB was enormous.

Today, what are the technical limitations?

PCWorld's summary of the comments in the talk lacks much insight. The article itself is misleading because it seems to suggest the goal is a "consumer PC". As I understand it, the goal is a build machine: a computer whose purpose is to run a compiler. This is not a computer primarily for consumption. Graphics are optional.

Here is how I understood the comments:

1. Need expandable memory. Not everyone will require the same amount of memory. For example, the kernels I compile only need about 200MB of RAM, max, but this might not be true for other users. Back in the PC days, users could add their own RAM; this concept has been lost on mobile phone manufacturers.

2. Need better secondary storage. I would guess the reason is not to supplement RAM (i.e., swap space) but because source trees have become so large. This is a pet peeve of mine. I keep writing half-baked tools to reduce the size of source trees to only what I will need. If the trees were smaller, perhaps more customized, then I could fit them in RAM (mfs or tmpfs). On my RPi I never use SD cards as writeable storage; they are just where I store a read-only copy of the kernel and userland, which I can load into memory. I pull the card out after booting. I/O via secondary storage is just too slow.
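(If you haven't tried a RAM-backed build tree, it's quick to set up; a sketch, with the kernel version and sizes as examples only:

    sudo mount -t tmpfs -o size=4G tmpfs /mnt/src
    tar -xf linux-4.11.tar.xz -C /mnt/src        # unpack the source tree into RAM
    cd /mnt/src/linux-4.11
    make defconfig && make -j"$(nproc)"          # build entirely out of tmpfs

The difference versus a slow SD card is dramatic.)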

Looking back at times past, there was some incredible creativity in finding ways to make do with limited memory. Today we have enormous amounts of RAM (and computing power), but I see little effort or creativity in getting systems to fit into these "constraints" (sheesh).

The systems of the past fit easily in today's "minimum" amounts of RAM. The systems of today are still used by a majority of users for many of the same boring tasks as in times past, yet cannot fit into gigabytes of RAM?

Software is like a gas. It expands to fill space.

Forcing one to keep purchasing new hardware. I guess that is the point.


I think this will make a fine build machine for aarch64:

http://b2b.gigabyte.com/Density-Optimized/H270-T70-rev-100#o...

With 372 cores and 32 DIMM slots, you can put your whole Ubuntu source tree in tmpfs, so you will not have to wait too long to recompile your distro.


Does anyone know any ARM SoCs which can run without binary blobs? I'm looking for something with a fast serial video interface, H.264 decoder and a SATA or PCIe controller. So far I'm looking at the Allwinner A20 and the i.MX6 - however the latter doesn't have enough bandwidth on the video interfaces to drive a high-resolution display.

The RK3399 mentioned in the article looks great, however it seems like at the very least, the Mali GPU requires a binary blob (not sure about some of the other peripherals).


They are not going to use Linux as the OS, are they?

This is the OS in which opening browser tabs causes music to go choppy. After more than 20 years of Linux, this problem still exists. WTF?

Did that ever happen on the Archimedes?


What's weird is that Russia simultaneously has teams building CPUs so threatening that Intel will buy one in defense, while the one making desktops is worse than what some academics built on RISC-V and OpenSPARC. Russia needs to pay its top people for a competitive design that can be cheaply licensed thanks to the subsidy. Then sponsor some SoCs. Then a desktop. Then get the process rolling.


Which Russian chip company did Intel buy?


Pretty sure he is thinking of Elbrus, but AFAIK nothing actually happened from that being announced back in 2004... Elbrus still exists and is making chips. They were basically the Russian Transmeta.


I saw the article about the acquisition, but not whether they concluded the deal. It was a huge amount of money. I also remember the tech description read like a better version of Itanium. I thought they were blocking an Itanium rival before it hit market.

Didn't follow it from there, as I was doing a quick survey of Russian chips and fabs.


Intel hired people from Elbrus to work in Intel's Moscow office.

https://www.extremetech.com/extreme/56406-intel-hires-elbrus...


That's exactly the one I read! Thanks Jenya! So not only was I right that they could make Intel-grade chips: they've been doing it at Intel since 2004. Haha. There should be more in Russia, though, who could support a larger team of hardware people. They could also try to lure the best ones back, or get some from IBM's POWER team.



