
Nice read. I wonder how this was detected, though. Did it trigger any alarms on the infected machine? Was a firewall or specialized traffic inspection involved?


I couldn't finish reading the article.

The first iPhone made connectors and electronic components easier to get to? Sure, it indirectly helped, as did any other new electronic device of the time. But please don't try to rewrite history.

He couldn't make things work over the phone, even though he invested large sums of money and dedicated hardware? Come on, I rigged an old analog (rotary) phone to do that with a relay when I was 14 or 15 years old. After all, in that case all he needed to do was switch on the heating.

The whole thing just reads like a marketing fluff piece.


This is nice and makes Red Hat look like an amazing OSS contributor, which I think they are. But truth be told, things were not that easy when it came to RHEL and derivatives on aarch64. As contractors working on aarch64 enablement, we struggled quite a bit to get CentOS 7 to the level Ubuntu was at on aarch64 at the time. And real progress (i.e. a fully working distro for our purposes at the time) was only seen after money was thrown and a contract was signed.

That bottleneck paid off imo, though - Red Hat insisted on doing things slow and steady, so once things started landing in the distro, they just worked. Unlike other distros, which touted aarch64 support early but struggled to fix issues along the way.

I remember working with OpenJDK and OpenDaylight on aarch64. The crashes I saw were terrifying. The JVM crashed so hard, our expert Java developer laughed for 30 minutes straight looking at one trace. To this day I have no idea what was so funny, as he couldn't explain it in a simple manner and I lost interest.

Anyway, thank you Red Hat developers for all the work (minus the systemd "architects", but that's a different story).


A couple of years ago we reached out to the python community about wheels and arm64 - how it should be handled and whether they planned on embedding non-x86 blobs. We received the standard "we'll think about it and let you know". Now that Apple has switched to arm64, all communities are suddenly interested in porting things to arm64.

And of course, Apple is not investing in these ports, at least as far as I know. They just rely on what other arm64 players did in the ecosystem before Apple rolled out the M1, and let developers figure out the remaining porting.

As much as I hate to say this, IBM does handle porting things to ppc64 right - you can find IBM-contributed code and optimizations anywhere you look. For many packages, porting to arm64 was a matter of "does it have ppc64 support? if so, it can be reused for arm64" ...

Disclaimer - used to be a contractor porting stuff to arm64 for a couple of years.


> And of course, Apple is not investing in these ports

What counts as "investing in these ports"? Does submitting patches to CPython (and many other open source projects, including NumPy) for macOS 11 and Apple Silicon count? Here's a list of Apple-submitted PRs on python/cpython: https://github.com/python/cpython/pulls?q=is%3Apr+author%3Al... There are also some co-authored patches excluded by that search. See https://bugs.python.org/issue41100 for more related PRs. Your subsequent comments about IBM seem to imply that these do count.

I'm also curious about who the "we" is in "we reached out to the python community about wheels and arm64".

---

Edit: Forgot to say, arm wheels have been supported for many years now (not sure about the specific timeline of aarch64 support, but if 32-bit arm was supported I don't see why aarch64 wouldn't be). Maybe most famously there's https://www.piwheels.org/ for RPis. Are you talking about aarch64 support on PyPI/warehouse?


By investing in these ports I was thinking of any contribution that enables/improves arm64 support. So yes, PRs for python itself definitely qualify. However, the python ecosystem is not only python itself - there are lots of python packages that still lack arm64 optimizations or native arm64 support. Most of them work, being python, but every once in a while you'd run into a package that won't work on arm64 without some tweaking.

In my case, "we" referred to the working group of ARM contractors working on a very specific higher-level project (at the time we were doing some cloud/openstack/k8s stuff, which indirectly required some python packages). Note that the way contractors were assigned to projects at the time meant you'd sometimes run into overlaps, e.g. some packages we needed had already been ported by a different team of contractors for a different project a while back. Sometimes multiple teams would hit the same missing port at the same time; most of the time we could sync and team up by escalating through the right channels, but sometimes we'd just end up duplicating work items and producing two different sets of patches/ports.

About wheels: someone else on my team handled that, so I'm not that familiar with the issue - it might have been a particular package that was lacking arm64 wheels, or it might have been impractical to use a separate repository just for arm64 (in general we tried to avoid using separate resources based on architecture and definitely preferred arch-agnostic approaches, e.g. debian repository URLs are arch-independent, dockerhub manifests allowed using the same container image names, etc.). We were interested in PyPI at the time (a couple of years ago); maybe the situation has improved since, but we just accepted compilation at install time as a good enough solution.


For what it's worth, I think Apple sent the Homebrew devs ARM Mac Minis for building and when the Homebrew devs asked for more, they sent more. So they're doing something, even if they're not doing as much as one would hope.


To be honest, I don't blame Apple for this. I blame the ARM ecosystem, which is very fragmented - each company working with ARM contributes to the stuff they are interested in, and that's it.

Lots of contractors and always shuffling/changing projects they work on.


Isn't Apple doing the exact same thing with their proprietary ARM ISA extensions?


Their ISA extension is an ML-specific one, and macOS runs fine with it disabled. Their public compilers do not support it either.

You are supposed to use it through Accelerate.framework, which exposes a more traditional interface to that capability.
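
For what it's worth, here's a minimal sketch (in C, assuming a Mac with the Accelerate framework; this is an illustration, not Apple-blessed sample code) of what that more traditional interface looks like - you call a standard CBLAS routine and the framework decides internally how (or whether) the hidden matrix hardware gets used:

    /* matmul.c - build with: clang matmul.c -framework Accelerate */
    #include <Accelerate/Accelerate.h>
    #include <stdio.h>

    int main(void) {
        float a[4] = {1, 2, 3, 4};   /* 2x2 matrices, row-major */
        float b[4] = {5, 6, 7, 8};
        float c[4] = {0};

        /* C = 1.0 * A * B + 0.0 * C via the standard CBLAS interface;
           the dispatch to any special hardware is Apple's business */
        cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    2, 2, 2, 1.0f, a, 2, b, 2, 0.0f, c, 2);

        printf("%.0f %.0f\n%.0f %.0f\n", c[0], c[1], c[2], c[3]);
        return 0;
    }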


> Their ISA extension is an ML-specific one

Apple Silicon has more than one ISA extension.

There is also the x86 memory ordering extension used by Rosetta 2.

Maybe there are yet others too.


> There is also the x86 memory ordering extension used by Rosetta 2.

That one isn't really a requirement either, and is handled fully in kernel mode.

WKdm and friends are handled fully in kernel mode.

APRR? Not a strict requirement for user mode; JIT regions are just left as RWX without it.


Apple enforces APRR on macOS; you need to use their APIs to work within the confines of W^X.
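
For anyone curious, a minimal sketch of that dance in C - mmap with MAP_JIT plus the pthread_jit_write_protect_np/sys_icache_invalidate calls are the actual macOS APIs, though a shipping app would also need the allow-jit entitlement under the Hardened Runtime (error handling omitted):

    #include <sys/mman.h>
    #include <pthread.h>
    #include <libkern/OSCacheControl.h>
    #include <stdint.h>
    #include <string.h>

    typedef int (*fn_t)(void);

    int main(void) {
        /* MAP_JIT regions can be toggled between writable and executable */
        uint32_t *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                             MAP_PRIVATE | MAP_ANON | MAP_JIT, -1, 0);

        /* arm64 for: mov w0, #42; ret */
        uint32_t code[] = { 0x52800540, 0xd65f03c0 };

        pthread_jit_write_protect_np(0);           /* writable, not executable */
        memcpy(mem, code, sizeof(code));
        pthread_jit_write_protect_np(1);           /* executable, not writable */
        sys_icache_invalidate(mem, sizeof(code));  /* flush the i-cache */

        return ((fn_t)mem)();                      /* exits with status 42 */
    }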


There are more but they are not exposed to applications. Apple wants you to ship standard arm64 code.


They also submitted some patches to MacPorts, and I think Homebrew, though I couldn't find a reference offhand. So I suppose it's always possible to argue they could do more, but code and hardware are a long way from nothing.


Why would Apple invest in third-party toolchains? Aren't they incentivized to only support Xcode, Swift, and other tools that are as tightly bound to their ecosystem as possible?


Because they sell hardware and commoditize its complement. Having a good dev experience is part of that, be that a good story for Mac-only software (swift/xcode -- ymmv on whether this is better than other technologies, but that's the intent) or having your posix software mostly just work.


Apple uses community tools internally for some things.


They can't. They've put themselves on the path where they can't abide by the GPL. GPL3 says you must preserve the right to run the software, and apple takes that away from their customers and then sells it back.

So apple nopes out of it. They rely on the initial boost they got from GPL software. They do some MIT/BSD-licensed stuff (and publish some of it, but not everything). Every year less and less happens, unless it's their own language/etc.

They do have nice gradients and round corners though. :)


> They can't. They've put themselves on the path where they can't abide by the GPL.

PyPy is MIT licensed.


I'm talking about apple investing in non-apple/community software.


You're talking about GPL3 software, which is irrelevant here. They clearly do contribute to MIT-licensed "non-apple/community software", as I demonstrated up thread, so what's the point of this "they can't" BS.


just remember it all started with stuff like gcc afaik.


>They rely on the initial boost they got from GPL software

You mean BSD? That's how Darwin is licensed.


I believe the GP is talking about the software macOS ships with, not macOS itself.

macOS ships with some GPL2 software, along with its BSD/MIT/Apache-licensed Darwin layer. (The GP is presumably implying that having this GPL2ed software "built-in", helped macOS entrench itself as a useful POSIX for developers — at least before tools like MacPorts/Homebrew came along to make acquiring this type of software "after-market" simple.)

For any GPL2 software macOS makes use of that then transitions to GPL3 licensing, Apple can't actually adopt the GPL3ed version (exactly because of the "runs anywhere" clause in the GPL3) and so instead, macOS keeps the old, GPL2ed version of the software around, left to rot. Eventually, if it's something critical, Apple removes their dependency on said software, replacing it with something that's not GPLed.

This is the story of Bash on macOS: it was regularly updated for as long as it was GPL2-licensed; then it got "stuck" on the last GPL2 version when GNU transitioned Bash to GPL3 licensing. Bash on macOS remained on that old version for the longest time, before the default shell was finally swapped out for Zsh in Catalina.


Also, they were technically in violation of the GPL with bash. They made some modifications and did not release them all (try to compile bash without rootless.h).


Presumably, they are on https://opensource.apple.com/source/bash/bash-118.40.2/ although sources for the latest macOS are usually not released in a timely manner (no Big Sur at the moment, for instance).


rootless.h has never been made available to the public as far as I know.


I just found that the new dump of sources for 11.0.1 has a version of bash that doesn't #include <rootless.h>


That's odd... what does Apple have to gain from a GPL violation here?


Nobody has forced them to release it, so until then they get to keep some magic to themselves, presumably.


In the beginning they packaged a bunch of stuff, both mit/bsd and gpl. At some point they stopped shipping gpl software. The darwin stuff is published, but not everything. The amount of code shared has decreased over time, even as apple has grown enormously.


They didn't stop shipping _GPLv2_ software. They just never shipped GPLv3 software.


Emacs is gone IIRC


Parts of Darwin are APSL.


Yes. That. I am done with macOS for good. My first Mac was a Classic II, back in '92. My last will be a MB Pro 2018 (actually 2020, but already repaired once and broken beyond working condition again). It was my third MB Pro in 3 years, and every single one had to be repaired once before breaking down for good shortly thereafter. The company I work for has to replace around 1% - 2% of their MB Pro fleet per month; 6 - 10 devices are always in repair (an additional 1 - 2%).

I have notified my superior that I will not be able to work properly in January when I am back from a 2-month LOA, during which I privately switched everything over to Win10 with WSL2 running Ubuntu.

I am not missing a thing currently. OK - I know at some point I will probably miss Keynote.


>In an interesting turn of events, the investigation of the whole SolarWinds compromise led to the discovery of an additional malware that also affects the SolarWinds Orion product but has been determined to be likely unrelated to this compromise and used by a different threat actor.

Either that one was used to compromise the supply chain (in which case it makes little to no sense to keep it around and risk detection), or at least 2 different groups had the chance to target sensitive US infrastructure.

Funny how media coverage of this issue never misses a chance to mention Russia and nobody else, not even other possible suspects.

I wonder what happens if the attackers notice each other on the compromised system. Do they get along in exfiltrating data or do they fight quietly?


> Funny how media coverage of this issue never misses a chance to mention Russia and nobody else, not even other possible suspects.

There are parts of the intelligence community that know with confidence who the true attacker is. Even if they had no idea they were being exploited, there are many ways to perform post-mortem analysis when you're, e.g., the NSA. So, someone has 100% confidence, or close to it.

In terms of what the media says: typically, they report on off-the-record remarks from officials and leaks. That's just how the game is played. It's an unfortunate byproduct of everyone wanting to tell, but nobody wanting to be caught telling. The value of Reuters and AP is that they typically do enough due diligence on their own sources to make sure that they're not just spouting nonsense. "Top of the food chain" sources like them are very regularly correct, but fallible.


The secretary of state has said as much, and pointed at Russia. Sure, he could be lying, but given the president's reflexive defense of Russia, that would be a weird lie to go with. If anything, it's an admission against interest, which strongly suggests to me that this is the assessment of the relevant security agencies.


Trump said it was China


Trump said a thing and China was one of the words in that thing.


Don't forget the "intelligence" community is paid to find Russian spooks hiding everywhere. The 2014 JP Morgan hack was blamed on Russian state backed hackers [1]. We now know that was pure speculation and not NSA inside knowledge, since some time later a small criminal gang was successfully prosecuted for it. Apparently they were running a pump-n-dump scheme.

[1] https://eu.usatoday.com/story/tech/2014/08/28/russia-jpmorga...


> Russian state backed hackers

> small criminal gang

They're the same picture.


> In terms of what the media says: typically, they report on off-the-record remarks from officials and leaks. That's just how the game is played.

This isn't how the game is supposed to be played and is a symptom of the erosion of the media's journalistic integrity. Anonymous sources can tell you where the bodies are buried, but you still need to dig up the bodies. One would think if you're going through all the trouble to track down three different sources who are both competent and trustworthy to comment on who the government suspects, that you'd take the opportunity to ask a follow up question like "why do you think it was them?" Yeah, everyone wants to be the first to break a story, and real investigation is a lot harder than tabloid journalism, but that's the job, or at least that's what it used to be.


And herein lies the problem: anyone who actually knows who it is, is not going to tell you how they know. The intelligence that was used to discover who the attacker is, is much more valuable than the information of who the attacker is. The best you'd probably get is 'classified sources/methods/intelligence'.


And anyone who doesn't know can give you just as much information. If you don't substantiate the rumor, it remains an unsubstantiated rumor.


But the media can just add ", person X says" at the end of a sentence, and then the burden of proof is no longer with them. They can report that "Obama is born in Kenya, President Trump claims" and, hey, they're reporting the true fact that Trump claimed something...


ThunderX was a huge disappointment. ThunderX2 (which one may think is the successor of ThunderX, but is actually a completely different system that Cavium obtained by acquiring a different company that was also working on ARMv8 hardware) was a (not so huge) disappointment. Cavium tried to copy-paste lots of coprocessors and offload things from the CPU, but the overall system was not that great.

Early AMD Softiron and Applied Micro boards (which had 8 cores unlike the ThunderX which had 48 or 96) were actually faster, which I always found interesting.

But Ampere's previous generation (before N1) is fast, much faster than the ThunderX2. Afaik, they built it on top of previous Applied Micro IP. So I'd expect N1 to be in a different league and not worth comparing to ThunderX2.


> But Ampere's previous generation (before N1) is fast, much faster than the ThunderX2.

I think you're getting ThunderX2 and ThunderX mixed up here.


In my experience, for tasks like everyday operation, kernel building, etc.: ThunderX < ThunderX2 < eMAG (Ampere's platform before N1).

I wouldn't say the gap is the same order of magnitude for the two comparisons, but it's definitely noticeable in both cases.


No. Because (in no particular order):

- it shouldn't be the community's (or a crowd-funded dev's) responsibility to provide software support for hardware produced by one of the largest companies out there (bonus: with zero hardware specs);

- Apple could make all this futile with a push of a button (SecureBoot can be disabled for now, but what guarantees are there this won't change?);

- other arm64 machines will be available soon enough, most if not all of them with publicly available specs;

- I do not own an M1 machine, nor do I plan on buying one.

From a technical perspective, it's doable. Looks like it has UEFI and can run Windows. But we know nothing about possible silicon errata and required driver changes (or at least I don't).

Anyway, I'm sure others would like to see this happening and would actually pay - hopefully the Twitter poll will reveal whether this is actually worth it.

Disclaimer: I ported things to arm64 for a couple of years as a contractor.


For the record, I disagree with some of your points, but the one point I agree with is really important.

There's no clear guarantee that there will be other performance-competitive arm64 CPUs in laptops anytime soon. I don't think anyone has as much incentive as Apple does. Who else is as incentivized to make a laptop/desktop-class arm64 chip? Maybe ARM themselves ... but without a mainstream OS to run it on (with mainstream software available for it), I don't see it happening in the next 5 years. It's a chicken-and-egg problem that Apple is uniquely suited to address with their vertical control over the Mac ecosystem (hardware/dev tools/OS/competitive software).

Server chips, maybe - but we can already see with Azure that competitive x86 chips from AMD have killed Microsoft's plans to deploy arm64 on their cloud service.

But this:

> Apple could make all this futile with a push of a button (SecureBoot can be disabled for now, but what guarantees are there this won't change?)

This is huge. We could all contribute to getting Linux ported to M1, and then Apple could shut us down with little or no effort. And ... maybe they won't? They probably won't? But who knows? Why build an ecosystem around a hostile hardware vendor?


> Server chips, maybe - but we can already see with Azure that competitive x86 chips from AMD have killed Microsoft's plans to deploy arm64 on their cloud service.

I have heard a theory that ARM servers have a difficult time because there aren't really many developer machines that run ARM. With Apple changing that, there is a chance that the next round of ARM server chips will have better success.


I think this is a contributing factor, but not the whole story. Another part is that in order to switch to arm64, your entire software stack needs to support that architecture. If you are using Linux and open source software, you'll be fine for most, maybe all, of that stack, especially if you are willing to compile things yourself. But it takes just one component to block the transition.


I see it as being a bit like the move from Python 2 to Python 3, but easier.

Most software that can run on x86_64 can run on ARM after recompilation. Some software does require changes (anything using vector intrinsics, for example). But in general, the biggest barrier is the dependencies.
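
To illustrate (a hypothetical sketch in C): with intrinsics, even something as trivial as a 4-wide float add has to be written once per ISA - exactly the kind of change a recompile alone won't get you.

    #if defined(__x86_64__)
    #include <xmmintrin.h>
    /* x86: SSE intrinsics, 4 x f32 add */
    static inline __m128 add4(__m128 a, __m128 b) {
        return _mm_add_ps(a, b);
    }
    #elif defined(__aarch64__)
    #include <arm_neon.h>
    /* arm64: the NEON equivalent a port has to supply */
    static inline float32x4_t add4(float32x4_t a, float32x4_t b) {
        return vaddq_f32(a, b);
    }
    #endif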


I agree. Cross-compiling is just awful in general, unless someone works really hard to put together a really high-quality cross-compilation toolchain which includes compiling, deployment, and remote debugging. That's what you get with the iOS and Android toolchains. But it just isn't there with Linux (or Windows) in general. There's a whole lot of work that's gone into the iOS and Android toolchains. It's easy to overlook that.


That seems silly. Developer machines rarely have AVX-512 or multiple TB of RAM either...


Various people have suggested to Intel that they made a mistake by not selling desktop/workstation chips with AVX-512.


They DO sell desktop chips with AVX-512. I'm writing this from one: i9-9900X.


They do now. But for a long time they didn't (I think). Or maybe they always did, but those chips were unreasonably expensive.


Amazon has had their own ARM server chips out for some time. They are competitive: https://www.anandtech.com/show/15578/cloud-clash-amazon-grav...


I know - which is why this is all so interesting.

But my point stands - 3 years ago Microsoft was talking about deploying arm64 to Azure, and those plans were cancelled with Zen 2. Given that GCP will be gone by 2023, that leaves half of the cloud titans with arm64, and the other half without. Will AMZN sell Annapurna chips to other services? My guess is they won't.

Someone has to make arm64 chips for other data centers and cloud providers, and Cavium/Marvell and Ampere have failed trying. It's a huge investment for an uncertain payoff. Annapurna had buy-in from the rest of Amazon.


You do know there is a version of Windows for ARM64 already, right? It was largely hampered by the speed of the available CPUs, but it does exist.

What I’m really curious about is how the ISA for the M1 is different from the CPUs Windows for ARM already supports. And as others have mentioned — I think GPU support will be more difficult, but I don’t have any data to support that.

But, it’s not like alternative (and mainstream) OSs don’t exist for ARM64. Maybe with some faster ARM64 CPUs, this could be the push to really make ARM a viable architecture for more than just Macs.


> You do know there is a version of Windows for ARM64 already, right?

You do know that there's no software available for it, right? Without software, arm64 Windows is not a viable operating system for an arm64 laptop. ARM Windows is as relevant as MIPS Windows or Itanium Windows without the huge ecosystem of software you get on x86.

And who goes bankrupt first to build this platform? Is it the laptop makers who lose millions investing in laptops that almost no one will buy because there's no software for them (actually, they already did that)? Or is it the software devs who lose millions porting their software to laptops that no one owns? When Apple says the future is ARM, everyone knows the hardware will be coming and they'd better fire up their IDEs. There's no such confidence in ARM Windows.


> When Apple says the future is ARM, everyone knows the hardware will be coming and they'd better fire up their IDEs. There's no such confidence in ARM Windows.

Pretty critically, Apple was able to modify their chips so that they can efficiently emulate x86 code. Microsoft will not be able to do the same.


There is plenty of Linux software that runs on arm64 though. Even if Windows remains the dominant desktop OS, there is certainly a niche for developers and others who would want to run Linux on a high-performance arm64 laptop. And then there is Chrome OS.


I don't think the "developers who want a high performance ARM laptop" niche is large enough for any serious hardware manufacturer to address.


You can't design a desktop-class CPU for a market niche. Even board-level design doesn't scale that way - every good Linux laptop on the market is actually a Windows laptop that's had Linux installed on it, and benefits from the economies of scale of the Windows market (as much as it saddens me to say this, typing this on a ThinkPad running Fedora 33).


Fujitsu has made some ARM chips which could be interesting for servers if they ever get out of the HPC segment, which I think they are in exclusively for now. Specifically the A64FX.


Eventually they will come. And I think they will be Linux laptops, not Windows. The legacy of closed-source software support that Windows has is working against them atm. Qualcomm, for example, already has hardware that is one or two iterations away from being competitive with the M1 (with different strengths and weaknesses), and they can mainline their drivers if they want.


I agree that competition will come, but I am skeptical it will be from Linux. Linux has less than 2% of the desktop market share (that includes laptops). Where's the payoff for the huge investment required to make a dent? The numbers just don't add up. Apple has made this happen using their massive iOS product revenue to fuel their custom CPU development teams.

https://gs.statcounter.com/os-market-share/desktop/worldwide

As you say, Qualcomm is probably the closest to being competitive, but there is a lot of work to do to catch up. Nvidia is trying to acquire ARM, so they seem to be interested in moving into the space. Samsung and AMD are now working together on ARM processors, so they are another player. Intel used to have an ARM presence via their DEC acquisition, but that was sold to Marvell about 10 years ago. Marvell might move into the space. There are also some ARM startups, some of which were founded by Apple engineers. Lots of activity. Intel is probably the biggest loser in all of this. Sad to see that. They will respond, but it will take time.

At any rate, in terms of desktop market size, Windows is the biggest (76%), and getting a chunk of that processor revenue is a big enough payoff to warrant the required investment. Doubling or even tripling Linux desktop share is comparatively small potatoes. Microsoft is mostly agnostic, so they will encourage cannibalizing x86 Windows in favor of ARM Windows rather than lose market share. I think we are going to see a big uptick in ARM Windows investment and product announcements.


> Eventually they will come.

Perhaps - if it makes sense, cost/performance/power-wise to put a core like that in an Android mobile phone. I guess it would? But in the next 5 years?

Remember that anything Qualcomm makes will be optimized for mobile phones and only mobile phones. They won't waste any die-area at all on anything that isn't required by the Android phone market. The Android phone market is the only market for these chips that sells enough units to pay for their design, and it's fiercely competitive.

Especially with ARM's own designs improving so much and Samsung abandoning their own under-performing designs in favour of ARM's, Qualcomm is going to get a lot more competition in the next few years.

Anything they put on those chips that makes them more expensive or use more power than competitive chips is going to cost them design wins. No way will they sacrifice 10% of their mobile market for some pie-in-the-sky, maybe-maybe-not ARM laptop market that doesn't exist and will depend on lots of theoretical buy-in from Microsoft/Redhat/Canonical/Adobe/Lenovo/Dell/etc.

> The legacy of closed-source software support that Windows has is working against them atm.

That's pretty outlandish. Try saying that to someone who uses Excel or After Effects or Photoshop for their work. A performant Linux-based arm64 laptop has everything working against it that a Windows arm64 laptop does, and arguably even more.


> it shouldn't be the community's (or a crowd-funded dev's) responsibility to provide software support for hardware produced by one of the largest companies out there

You must be thinking of a different Open Source community than I am, because the Open Source community I know thrives off of providing community support for software on major vendors' hardware.


I submitted arm64 patches to OSS ranging from the kernel to the most obscure userspace applications.

All my contributions required some support, or at least confirmation from the hardware vendor that my assumptions were correct - e.g. I submitted a patch for a GICv3 errata on a specific chipset; I had to confirm with the vendor that my findings were correct. Sure, the patch "worked", but was it doing the right thing or just hiding the real issue? (e.g. why did writing zeros to some magic register fix the problem we observed? was it a hardware issue or a software issue in the kernel? without feedback from the vendor, such things are a lot harder to isolate and fix properly).

I agree about OSS thriving on major vendor hardware; it's just that Apple is special in this case and intentionally makes it hard for the OSS community to provide support for their hardware.


It isn't UEFI-based, it uses iBoot. Proprietary up and down.


My bad, I didn't do proper research when writing that comment. Thank you for the correction.

And that's too bad, UEFI would have been easier to deal with imo.


Afaik, UEFI (at least on the ARM hardware I worked with) embeds the DeviceTree for non-ACPI boot use cases, so the user is no longer responsible for providing the proper DTB for the board, although the user can still override it if needed. Last time I worked with ARMv8, UEFI provided both ACPI tables and the embedded DTB, since ACPI support was undergoing a major rewrite in the kernel at the time.


Not sure about the socket used (it might be soldered down), but aarch64-based workstations are already available for the general public, e.g. [1].

[1] https://www.anandtech.com/show/15737/arm-development-for-the...


I've been using postgres, cassandra, redis and mysql/mariadb on aarch64. The only issues I ran into were with MySQL, which we root-caused to some weird atomic locks not working as expected on the first generation of ARMv8(.0) a couple of years back.
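
For anyone wondering what that class of bug looks like, here's a hypothetical C11 sketch (not the actual MySQL code): a lock built on relaxed atomics happens to work under x86's strong memory ordering, but is allowed to break on ARM's weaker model.

    #include <stdatomic.h>

    atomic_int lock = 0;
    int shared_data = 0;

    void broken_lock(void) {
        int expected = 0;
        /* Relaxed CAS takes the flag, but on ARM accesses to shared_data
           may be reordered ahead of it; x86's TSO hides the bug. */
        while (!atomic_compare_exchange_weak_explicit(
                   &lock, &expected, 1,
                   memory_order_relaxed, memory_order_relaxed))
            expected = 0;
    }

    void broken_unlock(void) {
        /* Relaxed store: earlier writes to shared_data may become
           visible only after the lock already looks free. */
        atomic_store_explicit(&lock, 0, memory_order_relaxed);
    }

    /* The fix: memory_order_acquire on the successful CAS and
       memory_order_release on the unlocking store. */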

