Hacker News
Disabling the Intel Management Engine (gentoo.org)
531 points by metadat on Oct 26, 2022 | 290 comments



The IME gets a lot of hate around here, but let's not get distracted by it: higher-privilege co-processors running code outside the main OS' control is becoming (or already is) the norm everywhere. Intel-based PCs are just one instance of it (and perhaps not even the most egregious one).

Most hardware has evolved to effectively run the main OS under a sandbox where it "thinks" it is in control, but isn't.

A nice talk on this: https://www.youtube.com/watch?v=36myc8wQhLo


> higher-privilege co-processors running code outside the main OS' control is becoming (or already is) the norm everywhere.

I don't think this fact is what you should focus on. The fact that the blobs are binary, closed, proprietary, signed but not easily verifiable by the user, and not easy to disable is the problem.

The promise is they're going to "improve security for PCs." Yet, they're using techniques that we know to be invalid. There's no reason to tolerate this.


When you consider both at the same time it is cause to pause and speculate on how malware might take advantage of this built-in tool.


They can have a physical switch or tool to disable it, or sell separate chips with/without IME.

Unfortunately there isn’t really incentive for Intel to do this, unless larger companies / governments refuse to run IME-enabled chips due to security concerns.


Governments and large companies are the ones who are explicitly requesting this functionality. End users don't give a rat's ass; managed computing is where the money is here.

That it can be used to back door the machine is the primary use case for the audience, as that is what lets them do a remote reinstall of Bob’s broken workstation somewhere, or any number of other legit use cases.


Ironically government probably mostly doesn't care. The assumption of networked systems is that they're compromised if they have any internet connectivity. If it's really important then you air-gap it, and if it needs networking then you still isolate that network.

Governments do use public internet VPNs sometimes... via accredited boxes which handle the tunneling for them and are the one point of ingress (and have a commensurate price tag).


Government != NSA


Irrelevant. For anything classified aka serious, you start with these assumptions. If it's not classified, then it's basically about as confidential as a business process is.

Which is to say, all they want from their suppliers is "yeah, ME is safe. Also buy these tools to manage your fleet."


> higher-privilege co-processors running code outside the main OS' control is becoming (or already is) the norm everywhere

There may be good arguments for allowing these types of "features" but this is not one of them. I'm so tired of seeing "it's fine because everyone else is doing it too"


The GP is not saying anything is fine.


Well, he kinda makes it sound like the fight is over and it is time to move on.


Quite the opposite. While IME is discussed to death, the same loss of control is happening everywhere and becoming more and more entrenched.

As mentioned elsewhere in this thread, the problem isn't the presence of these types of components, but how opaque they are to the user (read: highly technical user). Also, they exist because there is demand for their features.

The talk I linked makes the case that OS development is failing by pretending these co-processors are outside its scope, and hardware vendors just go and do their own thing on the side. I add that this incentivizes proprietary firmware instead of open one. I mean, if there were pressure (from paying customers) for Intel to support open-source IME firmware, they'd do it. After all, they just want to sell more chips.


We need more exploits of these co-processors running in the wild. This stuff is done in the name of security but is incredibly insecure by nature. We know e.g. the NSA requests builds with this stuff turned off, but if more govts are affected then fewer will put up with this, the markets can follow.


Yes, and it's a move in the wrong direction. I do not trust the vendors to run code on co-processors that I have no control over. I somewhat expect it to be spyware and ads/data collection soon.


And support DRM to protect media companies' IP.

Because $$$ talks, and there's a lot of money in media.


Tech is actually much bigger than media. The tail keeps wagging the dog for some reason.


Well, luckily we have TPM chip just for that...


Nope. kernel module mei_hdcp exists on modern systems.


nope, that's blacklisted on all my machines
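
(For anyone wanting to do the same: "blacklisted" here typically means a modprobe configuration entry like the one below; the filename is conventional, not required.)

```
# /etc/modprobe.d/blacklist-mei-hdcp.conf
# Prevent the ME-backed HDCP module from being auto-loaded at boot
blacklist mei_hdcp
```

Note that `blacklist` only stops automatic loading via module aliases; adding a line like `install mei_hdcp /bin/false` would also block explicit load attempts.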


I have taken the time to watch the talk.

What I have learnt (to my dismay): complex hardware has its own software, like IME, doing important things you can't turn off. The danger in this: hardware-based exploits. In other words, the security of an OS is irrelevant if there is some shoddy closed-source software running hardware components on your system. Linux has already lost control of the hardware. It's like virtualization, but on the hardware level.

People talk about switching off the IME, but that's barking up the wrong tree.

What's proposed: redesign hardware and write an OS encompassing all hardware functions. Don't accept opaque SOCs where your OS is just an API consumer.

This is controversial because of course DRM components want to be a black box, for example.


Follow-up: now I am afraid the victory of Asahi running Linux on Apple Silicon systems is only superficial. Apple still has a tight grip on how the hardware is run at a level lower than Linux.

If I am wrong, please point it out. I would be happy.


People have been worrying about Apple locking stuff out for decades - and they haven't. And why would they? What do they gain by restricting access to the hardware? It just doesn't make sense. People love to ascribe all sorts of motives to them that are, quite frankly, utterly ridiculous. They aren't perfect - no one company is - but they certainly are one of the better choices given where things are going in general.


No need to be negative ("utterly ridiculous").

And you seem to have misunderstood me. It's not about Apple locking something; it's about Apple software on the hardware of your Asahi Linux system.


Apple M* CPUs do not have anything like that.

Their coprocessors are not higher-privileged. On the contrary, they are all isolated from AP, each other and main memory (by IOMMU).


Hmm. If Apple were easier to develop on I'd consider buying their M chip laptops for that reason alone.

My issue last I worked with one was that userspace was ridiculously restricted - they have made it antagonistically difficult to do anything on a Mac. Normal PC, despite the nonsense in the hardware, remains fairly easy to work with, as long as you aren't doing security critical stuff.


You might be interested in following https://asahilinux.org/ then


Sure, but does the separate co-processor need access to the network stack? For a typical end user? Definitely not.


It does if you want remote management, which almost every IT department does.


No IT department wants their remote management at BlackHat.

https://www.runzero.com/blog/ilo-vulnerabilities/

I'm not sure that iDRAC is much better; haven't checked lately.


At least with IPMI interfaces on servers they have a dedicated NIC port you can put on a restricted network.


...And which almost every other computer decidedly does not, and more problematically, every other computer user has no visibility into the configuration, implementation details, or actual specs of said highly privileged component.

It's one thing to have it, but if it sits out of my reach, sorry hoss, I just don't trust you that much, and the fact you and your buddies all do it and are the only shops in town doesn't make me feel any better.


Can we not have separate enterprise and individual classes of processor?


We do, sort of.

In order to have network access, the Intel Management Engine is not enough; it does not have full network access at all. You need Intel AMT (also marketed as "vPro"), and that one costs extra, with CPUs featuring such support being separate SKUs, so you would definitely know -- and you can check in ark. You also need to pair it with Intel ethernet or wifi; any other network interface is not good enough.

So here you have it, your separate class of processor.


Consumer PCs already don't have vPro/AMT, although Intel can't afford to make separate hardware so there's a concern that the out-of-band hardware path could be activated later by malware.


Heck, ECC is already market-segregated


This is also segmented. The remote management stuff is marketed as vpro which is not available in all SKUs. However, all Intel processors need the ME.


We do. Every time this topic comes up, everyone gets angry about something that doesn’t affect them, at all.


This is plain false.


Is any remote management system available to the public using the ME stuff on consumer systems? I haven't seen it.

And when you look at server hardware, they have completely different backdoor facilities.

It really looks like pure pretext, especially since there isn't just a simple bios option to comprehensively and completely disable it.



Yep, the practical difference between a hidden higher privilege level and another random coprocessor on the system bus which can send memory writes to your core's internal MMIO region (common on ARM-based SoCs, anyway) is quite literally zero. If you can write arbitrary physical memory, the entire system is cooked (well, mostly, but RIP SGX). IME is no worse than random DSP, ISP, ML, etc. cores on your average SoC in terms of its privilege in the system. Don't miss the forest for the trees.


They banned Huawei equipment for less than that.

How come Intel gets away with it?

I went ahead and disabled it.


It's about who your threat is. The US government probably likes having an American company (Intel) that distributes an attack vector. But they probably don't like being distributed one.


Except the irony is it's baked into their systems as well, so you distribute your own national security threat...

It's like phone lines, Intel agencies loved being able to arbitrarily tap lines, at least until they started having their own lines tapped as well.

If I were China or Russia I'd have stockpiled a couple bugs in these backdoors and I'd be waiting to cause major economic disruption to the US govt with their own systems... so then it's just a game of who can knock out the other's comms first (which is as I understand it doctrine generally speaking in conflict, but)


> Except the irony is it's baked into their systems as well, so you distribute your own national security threat...

From the article posted by OP

>sets the 'High Assurance Program' bit, an ME 'kill switch' that the US government reportedly[11] had incorporated for PCs used in sensitive applications[12][13];

[11] links to a dead website, but it makes you understand the US called dibs on disabling it
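
(For the curious: the usual tool for setting that HAP bit on your own machine is the me_cleaner script discussed in the linked article. The sketch below is from memory of me_cleaner's usage, so treat the exact flags as an assumption and check the upstream docs before touching your flash.)

```
# On a dump of the SPI flash (obtained e.g. via flashrom or an external programmer):
python me_cleaner.py -s -O modified.bin dump.bin   # -s: just set the HAP/AltMeDisable bit
python me_cleaner.py -S -O modified.bin dump.bin   # -S: set the bit and also strip most ME code partitions
# Then write modified.bin back and verify the ME reports a disabled/recovery state.
```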


Architecturally, that is fine... but if it's not open and well-specified it will continually face (well-deserved) distrust.


‘Everyone else is doing it’ is a bad excuse. Arbitrarily focusing on intel has made it so others know if they perform shady actions then it’s possible they’ll also become an arbitrary target.

The disproportionate hate is a good thing, if you ask me.


Thank you thank you thank you. I've been trying to find this talk forever after watching it once. I immediately knew this was it when I saw it under this particular thread. Super illuminating stuff.


I think this model is completely wrong and should be inverted. The main CPU should be on top and run completely transparently. If you want secure coprocessors for DRM and cryptography, they should be subordinate black boxes.


Like a tpm module?


I for one would be fine with a TPM that is on its own silicon and carrier rather than on the processor die or otherwise inside the CPU package. Then I could disable it when I don't want it working, without possibly killing my processor by doing so.


Yeah, or like a HSM for example.


I sorta disagree with the premise of that talk, although the problem is real.

It's just that even that talk vastly underestimated just how many microcontrollers exist on a modern machine.

In the past those controllers were isolated to a few areas (disk controllers, higher-end network cards), but the drive over the past decade+ for more efficient devices and "universal" packetized buses (e.g. PCIe, USB) has sprinkled them in places simply to monitor utilization and adjust bus clocks, as well as packet scheduling and error/retry logic, etc, etc, etc. I was reading about some of the latest M.2 NVMe controllers a while back and IIRC there were something like a half dozen independent Arm cores just inside the controller. The last fully open disk stack on a PC was probably an MFM/RLL controller in the mid-1980s.

So, while I would love it if the manufacturer of every little USB device or whatever published the full register documentation, firmware listings, whatever, that ship has long sailed. The worst part isn't looking for the piles of scattered SPI flash EEPROMs on random boards; it's the integrated "secure" sides of these devices, which happen to be all but invisible. None of that is going to be documented anytime in the near future. Every single one of these companies hides their "secret sauce" in the firmware of these devices, be that how to minimize latency on an NVMe device, how to get maximum throughput on a wifi chip, or how to increase a DRAM controller's power efficiency. In some of these cases, the firmware probably isn't even that special; they are doing basically the same thing as every one of their competitors, but you will never get them to admit it.

So, imagining that an "OS" can control this mess like a 1960's mainframe is nonsense. Modern mainframes don't even control stuff at that level anymore.

So, like software abstractions, we have hardware abstractions which provide higher-level constructs for low-level software to talk to. Be that something like XHCI, where the system talks to generic endpoint queues and a processor does all the low-level packet building/scheduling, or something like the tiny integrated cores deciding which parts of a CPU's clock and power domains need to be dynamically enabled/disabled for a given perf/power profile, with the OS talking to generic firmware interfaces to set policies. Or even LBA disk layouts, which abstract away all the details of flash channels, COW, wear leveling, NAND error correction, bit pattern sensing, page/block erase sizes, etc.

In the end, if someone wanted to actually work on this problem, the first step towards open hardware isn't really building a RISC-V system; it's building competitive NICs, keyboards, USB controllers, etc, etc, etc with open hardware designs. What we have today is like Linux: everyone wants to work on the kernel, no one wants to maintain old crufty code in Make. So, in the end, swapping an x86 for a RISC-V doesn't give you more open hardware if it's still got its own management processors tied to the same closed hardware IP for literally everything else in the machine.


I agree that having an integrated OS control every random co-processor in a machine would be undesirable, and that hardware abstractions are a good thing.

But co-processors that can break the OS's assumptions (regarding security, for example) sound like they should be under OS control. Not that this means under control of a single kernel but, at least, under control of some set of components that are developed together.


"the first step towards open hardware isn't really building a RISC-V system, its building competitive NIC's, keyboards, USB controllers, etc, etc, etc with open hardware designs"

Actually, it's building the software tools to emulate these first. USB is a dense standard; you will have more luck emulating a working USB device on the first pass before building out the supporting infrastructure.

Once you can emulate/design your open chip computer, then you can start doing test runs/production runs. The market for such a thing will be limited to engineers and tech enthusiasts, at least until some hardware tech startup starts outcompeting the other players on the market.


If anyone is interested, it's possible to buy a laptop with ME already disabled:

https://puri.sm/products/librem-14/

EDIT: there's more at Wikipedia:

https://en.wikipedia.org/wiki/Intel_Management_Engine#Commer...


I love the concept of a Librem; TBH every time I look at buying one I balk at the price compared to a more traditional build. It's like buying a Mac, and I can't even justify it by saying it is for iPhone dev. I don't have a market reason for buying one. I wish I did, but what would I use it for? No IME, great; now what? If someone flips a switch I guess I have a safe computer, but I still have to interact with the world and that's still unsafe (banking, identity, employment).

I wish it were a bit more Raspberry Pi; I'd have an easier time justifying that purchase just if I had integrated GPIO. I have (multiple) RPis but they sit in boxes because they are a combination of unsupported and complex to assemble/use; outside of browsing some very simple websites I find them difficult to use, even for embedded development.


They're doing novel R&D which no other manufacturer is doing, and putting together what looks like a quality workstation, which brings you much closer to consensual computing than almost anything else on the market.

If I was a buyer of new hardware, and especially getting paid for professional software development, I can't imagine thinking very long before buying one.


It's incumbent upon those caring about liberty [looks in mirror] to support/advertise the alternatives.


StarLabs also sells laptops with ME disabled.


Usually when I'm reminded about IME (and whatever the equivalent is in AMD chips), it's in the context of some strong claims about it being "game over" for security and privacy against mass surveillance, engineered / funded by nation-state intelligence agencies, and rendering all other technical efforts moot. They make it sound plausible, and I think "why isn't this talked about or investigated more?" The section of the Wikipedia page that discusses the "backdoor" claim is frustratingly thin. I just don't know what to make of it. Hyperbole about a crappy thing, like the bloatware pre-installed on most new laptops and phones by the vendor? An open secret, with discussion about it suppressed?


I think we frankly don't know how much of a problem it is, yet. Since there's no widely applicable remote exploit for it, as far as the mainstream is concerned, all we're left to do is speculate on the risk. If someone operates a server, it's best practice not to have any extra services running on top of what's needed to run the original service. This is because every extra open port, software or complexity increases the attack surface. Same with Intel ME, people don't understand why it needs to be there, if nobody seems to even use it.

Preinstalls are not hyperbole though; there was some nasty stuff over the years. Lenovo, for one, bundled Superfish, which man-in-the-middled all HTTPS browser communication[0]. Similar effort from Dell[1].

I think ME's situation is similar to Stallman's attitude toward proprietary software. Proprietary is not evil by itself, but it's very easy to corrupt it to be so, and then the end user is powerless. And because the end user can't decide when this change happens, they are powerless to begin with. Therefore the thing shouldn't exist in the first place.

[0] https://en.wikipedia.org/wiki/Superfish#Lenovo_security_inci...

[1] https://en.wikipedia.org/wiki/Dell#Self-signed_root_certific...


Before Snowden, I think absence of evidence could often be construed as evidence of absence.

But I think that ship has well and truly sailed.

We now know that, behind closed doors in classified places, every bad thing we imagined might be happening, _was_ happening, and then some, beyond the scale of the wildest imaginations of the most paranoid activists. And then some, and then some.

The fact that we don't have proof of _this_ particular bad thing, which is entirely possible and downright trivial and could actually be the entire purpose for which the functionality was designed, should in no way suggest that the capability isn't being used.

Ten years ago, I could see that being a reasonable argument. Now it just rings as blindingly naive.


What about Snowden changed your mind?

I found the material he released to be pretty much as expected.


Before Snowden you had _conjecture_, after you had _evidence_, that’s what changed.


It doesn't necessarily need to be a backdoor. Look up remote attestation, which is getting easier every year. With that, you can run whatever software you want on your device, but other servers do not need to talk to your device if they detect that you are doing so.

It's coming up in Android more with SafetyNet. If your device is rooted, you fail SafetyNet. If you fail SafetyNet, almost all banking app servers will refuse to talk to you, rendering their apps useless. SafetyNet could be spoofed historically, but SafetyNet is moving into hardware instead of software since ~2020, so the spoofing has gotten way, way harder and may cross into downright impossible.

It's also coming to Windows with the Windows 11 TPM 2.0 requirement. See the video game Valorant, for example. If you are on Windows 11, it will mandate that you have a TPM 2.0 enabled and Secure Boot enabled. It has exceptions for VMs and Windows 10 and earlier right now - but they can literally close that door, at any time, and immediately remotely lock all machines to that requirement. No amount of game patching will bypass it - the multiplayer servers won't talk to you unless your hardware cryptographically reports that you've passed Secure Boot checks.


> It's also coming to Windows with the Windows 11 TPM 2.0 requirement.

My Lenovo L430 is apparently incapable of running Win11 for that reason. Win10 will soon be out of support, so I'm preparing to blow away my last-ever Windows system, and become all-Linux. I'm looking forward to it.


Isn't 'soon' 3 years from now? And it'll definitely impact PCs more than 7-10 years old at that point, but that's kind of a hard number to get worked up about. If it's that big a deal, when the deadline gets closer buy a new-to-you 7 year old machine for a couple hundred dollars.


You're right; I thought it was coming up in November. I wonder why I thought that? It might be a message that Microsoft presented to me after the forced update I received yesterday morning, while I was trying to use the damned machine.


This is all true, and all frankly awful. I refuse to take part in apps that do this and implore you all to do the same.


> If you fail SafetyNet, almost all banking app servers will refuse to talk to you

This is probably unique to me, but I see that as a bonus security feature. All I want to use the phone for is voice, text, Mumble, IRC and ssh/sftp, only things hosted by me. I'm still trying to find a non-Google ROM that is well supported for my model of Android. If I could get a vendor-unlocked CAT I would turn the droid into a dedicated MP3 player.


Before looking at IME, let's review other topics. Printer machine identification codes were secretly inserted into printers some time between the 1980s and 2004. Our communications are being monitored in a host of ways. One last refuge was our CPU, but now that is under foreign control as well.

Then there's older US government operations like Minaret, Shamrock, Cointelpro etc. to surveil US domestic political activities, from black civil rights, to Vietnam doves, to a very extensive surveillance of feminist groups. Cointelpro also involved US intelligence disrupting political movements, writing poison pen letters (a database admin and 60s peacenik I knew had one sent to his boss, a lawsuit later revealed the FBI sent it).

Nowadays this is PRISM, Xkeyscore etc. interacting with the telco monopolies and FAANG, to spy on Angela Merkel's phone calls (along with BND turned by the CIA), disrupt Airbus contracts in favor of US aerospace etc.


> Hyperbole about a crappy thing, like the bloatware pre-installed on most new laptops and phones by the vendor? An open secret, with discussion about it suppressed?

Personally, I worry about things like IME based on an entirely hypothetical theory: I think many of the big tech companies are riddled with spies from a variety of nations.

My rationale for this is simply that if I were in charge of a spy agency's offensive cybersecurity group, my top priority would be placing agents in Microsoft, Apple, Google, Cloudflare, Juniper, Cisco and so on. They'd have orders to be careless in undetectably subtle ways; nobody's imprisoning a guy just because he added log4j to the codebase in 2010. To me this seems well within the capabilities of a spy agency with a multi-billion-dollar budget and tens of thousands of employees.

Even with code reviews, I doubt anyone could deliver a project like IME with no security bugs, if five of their peers were compromised by different nations' spy agencies.

If you think that's completely believable and what else would spy agencies be doing in the modern age, you'd be very suspicious of IME. But if you think that's an undisprovable conspiracy theory with no solid evidence whatsoever, you might think IME sounds just fine.


> my top priority would be placing agents in Microsoft, Apple, Google, Cloudflare, Juniper, Cisco

Interesting thought. Or more likely, I'd guess, spy agencies might recruit existing Big Tech company employees who have access to sensitive and desirable things. That's usually how it happens, reportedly, when American security clearance holders get caught doing bad things: they aren't deep cover agents who spent years working their way into position, they approached or got approached by foreign agents because of their position.


I absolutely agree with your view. Maybe people forgot about this incident which happened post-Snowden.

https://www.theregister.com/AMP/2016/10/14/congress_yahoo_ma...


Very much so.

They found all of this long ago, and it is inconceivable that they did not use it.

Solarwinds is a prime example, but people get careless.

What we have seen is the smallest fraction of what is, I think.


The existence of the High Assurance Platform (HAP) bit makes it pretty clear that 1) three-letter agencies don't trust the IME, and strongly implies that 2) they asked for it to be there in the first place.

"High Assurance Platform" https://trademarks.corporationwiki.com/marks/high-assurance-...


Yeah, that's the kind of thing I've seen before, lots of circumstantial evidence that makes the claims sound plausible, but then the trail just seems to stop cold.


Absence of evidence isn't evidence of absence.

That it runs cold lodges it firmly in the "we are pointedly not going to talk about it" space, which for me is where the worry even starts. If my little gray hat wearing mind can come up with plausible ways to exploit something like that...

A) I am not that smart

And

B) Someone in a position to pull something like that off has probably already implemented it.


'Who benefits?' Seems to be a relevant question.

Intel has to have spent quite a bit of money to add any feature that you see; so why would they do that without a strong market case...?


Yeah, nobody in this topic has copped to actually using the ME so far. I've never heard of anyone using it.


Most do not use it "directly", but instead use features implemented by it.

E.g. I've used Intel Platform Trust Technology (PTT) to implement system security features, and AFAIK that runs on ME.


I mean, it is the NSA. They're probably pretty good at what they do. I should hope so, tax dollars pay for it.


>trail just seems to stop cold.

There's your evidence.


No, that's the absence of evidence.


Only when your priors are that absence of evidence (in the sense of the trail going cold) is normal. Your parent comment's point is that this is a conspicuous absence of evidence.


This is offered very much in a “take it for what you will but for obvious reasons I am not going to give many more details” spirit. I worked for a major player in cybersecurity back when they were really trying to get everyone onboard with SGX. Our CISO was a technical guy, and worked closely with a peer who had a hybrid academic and professional background in cryptography. They both had strong credentials in mathematics and one was a practicing mathematician at one point.

After a thorough review, all of the stakeholders who reviewed it told the executive leadership not to touch it, because their opinion was that it couldn't offer anything meaningful beyond what we already had in place using the Windows API and its interface with the TPM, and they had concerns about what they felt were insufficiencies in the SGX design.

That experience was a bit more in-depth than I’ve detailed here, but the takeaway for me was that Blue was desperately trying to justify a technology that wasn’t what it was hyped up to be.

I’ve often thought IME is the same thing, “different day”.

Edit: typo


The AMD version is https://en.wikipedia.org/wiki/AMD_Platform_Security_Processo...

They seem to update it a lot less frequently than Intel


It's just too old for people to be outraged about still.


By this logic, should we not be outraged by 19th and 20th century genocide?


I'm not telling you what emotions to have, just observing the world around me.


Yes, as someone born in the 21st century all of that is just stuff in some history book that I was forced to learn to pass some test.


> By this logic, should we not be outraged by 19th and 20th century genocide?

Well, no. I don't think you will actually find a real person living today that matches a real definition of "outrage" for genocides in the 19th and 20th centuries.

Discarding performative theatrics, you will find people who all agree it was bad... but they won't be literally outraged. The passing of time, and generations, has that effect.


Pretty sure Holocaust survivors and their immediate families, not to mention the scarcer immediate family members of Holocaust non-survivors, are still outraged about the Holocaust. I don't think that's performative theatrics.


Not the impression I've got. People come to terms with it - I'm not saying they would be wrong to still be outraged, but the human mind isn't built to keep that up for decades.


Performative theatrics is attempting in any way to contrast Intel vPro with the Holocaust.


Intel vPro and similar systems centralize power over communication and record-keeping in a way that has historically been both necessary and sufficient to cause atrocities like the Holocaust, the Great Leap Forward, GULAG, and so on.

But, because of newly pervasive computer mediation of day-to-day interactions, these spyware systems potentially provide a degree of centralized social control that Stalin or Mao could never have dreamed of. Recent infringements on human rights in XUAR provide a preview of the resulting future. Essentialist explanations that attribute them to some unique depravity of the Chinese race are utterly implausible; they are due to the lack of effective checks and balances on state power.

Consequently we can expect the atrocities resulting from systems like vPro to be far worse than the Holocaust or any other historical events.


I cannot tell if you are arguing in good faith or if this is some very clever wit.

Comparing vPro to Stalin, Mao, the Holocaust and more is really not serving to forward your argument... particularly while you have an iPhone or Android device in your pocket, watch curated TV content on your Smart TV, and drive your modern car into the office where you use your Windows or OSX computer and ISP provided DNS.

This would definitely count in the "performative theatrics" category of any normal book. Why is this age so sensationalized? Words are becoming meaningless due to overuse, abuse and re-definition to fit convenient arguments...


More strawman.

> particularly while you have an iPhone or Android device in your pocket

I don't.

> watch curated TV content on your Smart TV

I watch media from physical discs on a TV with no network interface.

> and drive your modern car into the office

I drive a car made before 2005.

> where you use your Windows or OSX computer

None of my personal machines run any software developed by Microsoft or Apple.

> and ISP provided DNS.

I do not.

Also, I have privacy expectations from my personal devices that I do not have of my workplace devices - privacy expectations that are threatened by ME/PSP.


You are either the Gray Man or you live in a cabin in the woods... or you're not quite as clever at disconnecting as you might think. If you use technology in 2022, it's reporting on you. It is that simple. And everyone, despite their best efforts, uses some technology.

These were contrived examples to highlight all the different mundane items in our daily lives that track and report on our behavior, habits, data, etc. Most of these we do not even consider hostile devices or services... yet they are. In your example you buy DVDs... where did you get them? How did you pay for them? You were tracked and reported on despite trying to be clever.

It is truly hard, next to impossible to operate in our society with total privacy, unfortunately.

This sidebar was brought on by someone bizarrely trying to connect IME first to the Holocaust, and then to Stalin and Mao, which I will never understand. IME isn't the only privacy hill to die on... and frankly, that hill already has too many bodies on it.


>If you use technology in 2022, it's reporting on you.

Categorically impossible statement to apply universally. I have several machines that do not have the physical hardware necessary for any kind of networking. I also have multiple machines that do not have ME fully functional. My most upstream local router, running open source firmware, has whitelist rules for outbound traffic and blocks by default. I also have detailed traffic analysis running 24/7 on my other routers, running different open source firmware. I regularly review for any traffic that I cannot definitively associate to my own activity, and I regularly mix and match the network route my devices take outbound to look for anomalies.

>In your example you buy DVD's... where did you get them? How did you pay for them?

As opposed to copying a friend's discs, or receiving them as gifts, both of which apply to a nonzero number of my movies and shows? What if I did buy them and paid in cash? Not cash received from an ATM or bank teller, of course, but cash received as payment from a customer at a farmer's market?

>IME isn't the only privacy hill to die on... and frankly, that hill already has too many bodies on it.

Privacy is somewhat like security in that you're never truly "done" implementing it. That's not an excuse not to strive for it. While it remains unproven that ME/PSP actually is a functional backdoor, there's no good reason to trust these subsystems. I have personally observed Ryzen-based systems attempting to send outbound traffic while the system was hibernating (before you ask, I will not reveal any metadata about this traffic publicly for obvious reasons.) I know I personally would gladly pay 3x MSRP for Ryzen chips without the PSP. I know many other people who would pay well above MSRP for modern Intel/AMD chips that do not have these subsystems. Market demand is there. The fact that neither major chip producer even offers the option to purchase chips without these subsystems should absolutely continue to arouse suspicion.

You are correct that there are many other issues like writing style analysis, timing analysis (including netflow metadata being sold by your ISP to Team Cymru), many entire threads could be filled with software privacy threats, etc, but again - that's not good justification to just throw your hands up and stop caring altogether. Privacy is an uphill battle in a losing war in today's world, but I for one will not stop fighting. I have a natural human right to privacy, not granted by any man, nor a million men calling themselves a government, and I will stop at nothing to exercise that right.

To your point, that insistence does push me far closer to the "cabin in the woods" lifestyle than a vast majority would be comfortable with.


I am not comparing vPro to Mao, and no reasonable person could construe my comment as comparing vPro to Mao.

I am comparing vPro (and similar hardware backdoors) to the totalitarian central government control established by the PRC in the early 01950s, in compliance with widely accepted Communist doctrine, which resulted in inevitable atrocities several years later — in this case, the Great Leap Forward, which was the worst famine in human history. Mao was far from unique among heads of totalitarian states in carrying out mass atrocities. Like hardware backdoors today, totalitarianism was new enough at the time that reasonable people could disagree about its likely effects, but in retrospect the causality is obvious.

I do not have an iPhone or Android device in my pocket (although I do carry one on special occasions), watch "curated" TV content on a "Smart TV", drive a modern car, or use [Microsoft] Windows or OSX. Furthermore, there is no basis for you to suspect that I do these things; you are attempting to drag HN down into the slime of Twitter-style "gotchas" instead of attempting to rise to the level of collaborative exploration of the truth.

Moreover, even if I did suffer these afflictions, it wouldn't make my argument invalid — even if it were not so wide of the mark, your inept attempt at a rebuttal is at best an argumentum ad hominem of the same sort as those who dismiss Noam Chomsky's criticism of US foreign policy on the basis that he pays US income tax.

I am disappointed in your total failure to engage in rational argument. You're arguing at the animal layer of vague emotional associations rather than reasoning about causes and effects. Please, try to do better.

(I do use ISP-provided DNS, which is a problem but not in the same category.)


> I am disappointed in your total failure to engage in rational argument

I really fail to see how one could believe it rational to discuss Mao, Holocaust, and Stalin in the same conversation as computer processors.

This conversation can only be taken as extreme hyperbole.


Dismissing the argument because you're not familiar with it doesn't demonstrate anything but your own ignorance.

https://en.wikipedia.org/wiki/IBM_and_the_Holocaust

Undoubtedly when Mao drove the Kuomintang out of the Mainland, there were people who "really fail[ed] to see how one could believe it rational" to fear that within a decade Mao would starve to death ten times as many innocent people as the Kuomintang had ever murdered, particularly since such a large democide had never happened before in history. Then, it happened.


I'm in no way conflating the impact of the two, I'm pointing out that the implication of the original comment "It's just too old for people to be outraged about still", is that people shouldn't be outraged at evil things solely because those evil things happened a long time ago.

The implication itself is ridiculous. Time does not make evil things less evil.

To suggest that I'm contrasting the impact of ME (not the same as vPro) with the holocaust is either blatantly missing the point or a deliberate, bad faith strawman.


The word "outrage" is problematic. It implies, by its very definition, that the mere mention of these things sends people into a fury of uncontrollable anger.

I would wager people are abusing the word and changing its meaning to sensationally signal displeasure or disappointment with historical events. Those are not the same.

Outrage has an emotional immediacy to it. It's really hard to be actually outraged by events that transpired 40 years ago, 100 years ago, centuries ago or more.

I assert there is no human alive today that is actually, really outraged by the Holocaust or any of the other atrocities mankind has perpetrated over its history. Who would they be outraged with? Hitler - who has been dead for 77 years?

It would be quite emotionally immature to be literally outraged with any of this in a modern context...


This is a fair criticism. That said, I have a hard time believing that anyone was literally brought into an uncontrollable rage over ME even when we first found out about it. Additionally, nobody in the comment section appears to be in such a state.

Accordingly, I assumed that the top level comment was using "outrage" defined closer to the most scathing comments posted, perhaps as "unwilling to forget about, or accept".

We have no duty or obligation to forget about or accept the risks of unauditable, embedded microprocessors with full, undetectable access to onboard GbE, memory, main CPU registers, PCI devices etc. This subsystem poses extreme risk to privacy. The fact that it is impossible to purchase new consumer-grade (not $1,000+ Power9) chips without this subsystem is consistent with what we would expect from an on-chip backdoor should one be proposed (or imposed) by US intelligence agencies, which have a lengthy history of rampant human rights abuses, a mission focused on violating privacy, a history of attempting to impose similar subsystems (Clipper chip, MS Palladium), and who have a clear economic incentive to develop access that doesn't require them to keep playing the continual cat-and-mouse game of software exploit development and management.

I'm extremely skeptical of the intentions of anyone telling me that I should not be angry about the fact that there is an unauditable subsystem that heuristically matches almost everything needed for the MVP of a hypothetical hardware backdoor, that I cannot freely decide not to have bundled with new hardware, solely for the reason "its existence has been known for close to a decade".

This top-level comment reeks of COINTELPRO-esque efforts to convince individuals to risk-accept a subsystem they have zero incentive to keep, but that intelligence agencies have massive incentive to retain, should it actually be a backdoor.


While it's a flawed philosophical/psychological model for reality, the seven stages of grief is quite applicable here. Why do people "get over" grief? Time...

Grief never actually goes away, but it lessens to the point where it no longer is emotionally painful to think about. Grief lessens after every thought has been thought, every word has been said, every emotion has been felt, over and over to the point where there's nothing left. Time heals all wounds, as it has been said.

The reason people are not feverishly debating IME anymore is time. All of the arguments have been made... over and over. At this point, people are tired of the same things being said ad nauseam.

This is the same reason we see systemd-related comments downvoted and flagged into oblivion. People are tired of it...

So, while most of us agree IME is probably not something the average home user wants or needs, and IME is probably something that should be resisted... people are just not going to get worked up about it at the mere mention of IME anymore. That time passed... and therefore the word "outrage" is wildly inappropriate when applied here.


> I assert there is no human alive today that is actually, really outraged by the Holocaust

You could hardly be more wrong.

> It would be quite emotionally immature to be literally outraged with any of this

Your implicit claim to possess superior emotional maturity to those Holocaust survivors who remain outraged is both false and repugnant.


> Your implicit claim to possess superior emotional maturity

I do possess superior emotional maturity over those who wield the Holocaust as a tool in arguments about computer processors for internet points... yes.

> Holocaust survivors who remain outraged

I think your interpretation of "outrage" needs updating.


Those two things have disproportionate direct impact and can’t really be compared on the same level. But apples for apples, school educates students about genocide and not about the privacy considerations of backdoor chips.


I'm in no way conflating the impact of the two, I'm pointing out that the implication of the original comment "It's just too old for people to be outraged about still", is that people shouldn't be outraged at evil things solely because those evil things happened a long time ago. The implication itself is ridiculous. Time does not make evil things less evil.

To suggest that I'm contrasting the impact of ME (not the same as vPro) with the holocaust is either blatantly missing my point (that the implication of the original comment is obviously completely false) or a deliberate, bad faith strawman.


It's not talked about more because it's a crazy conspiracy theory that has no merit. After all these years of scrutiny the worst vulnerability required physical access and disassembly in order to perform a hardware attack.

The people who believe this conspiracy theory, like many others, peddle misinformation to prove their point. No matter how much you try and debunk it you can't change their mind.


Yeah, see that's the other side of the story that doesn't seem to be told much either, and I'm interested in that too. It does seem like some researcher or journalist should have blown the case open by now if this thing were systematically providing telemetry from everyone's "powered off" (but still plugged in) machines to an intelligence agency. Can you point to an article or paper that thoroughly debunks the claims as crazy conspiracy theories?


The claims shift around so that they're nondisprovable. Someone could say that ME is a backdoor that has never been activated and will be undetectable until some future day when it is activated.


It’s a backdoor for sure. I think the extensive online campaign which desperately tries to prove it’s not, proves it is. Who can afford to police EVERY forum, social media platform, and web site only to call people mentally ill for suspecting it is? It’s a pattern which only fits certain players.


> the extensive online campaign which desperately tries to prove its not, proves it is

That's like saying the extensive campaign to prove the earth is a sphere proves it's flat. That isn't how logic works.


Nobody has any real material gain from tricking people into believing the earth is a sphere.

Conversely, if ME/PSP actually is a hardware backdoor:

1. The proprietor of the backdoor would've expended considerable resources designing, implementing, testing, and distributing it, and thus would have an economic incentive not to see it exposed and discarded so as not to incur development expenses on a successor.

2. The proprietor of the backdoor would have an operational incentive to not have collection disrupted by the backdoor being detected, exposed, and discarded.

Your comparison of what looks like a real conspiracy to a known character-assassination conspiracy (flat earth) is consistent with COINTELPRO techniques.

I'm not accusing you of being a bad-faith actor, but you're arguing against someone who is opposed to an alleged US intelligence agency backdoor, using rhetoric consistent with known techniques used by US intelligence agencies to disrupt conversations (and discredit participants) who are exposing US secrets.


> you're arguing against someone who is opposed to an alleged US intelligence agency backdoor, using rhetoric consistent with known techniques used by US intelligence agencies to disrupt conversations

Then they should use better arguments and not "everyone saying one thing somehow proves the opposite".


That’s what they want you to think.

First they make you drink fluoridated dihydrogen monoxide, then when you get a job in enterprise IT, the extra ions in your teeth make you pay extra for vPro.


Very nice reference.

Anybody here got a complementary source to suggest for dealing with more difficult flash chips?

( > If your BIOS flash chip is in a PLCC or WSON package, you will need specialized equipment to connect to the chip, the process for which is not currently covered in this guide. )

I've got a laptop with its BIOS on WSON that's been lying around unused for a while, because I haven't taken the time to dig up a reasonable way to interface with it. (I bought the machine expecting to just clip onto SOIC, as in all my previous encounters. That'll teach me to look up the specs for the exact model rather than just something similar in the product line, I guess.)


There are two ways to do this:

One is to buy an expensive, specialized test socket with pogo pins and a clamshell, from eg https://www.loranger.com/loranger_edc2/html/index.php or similar manufacturers. This is what you'd do if you wanted to do a burn-in test of some exotic amplifier or sensor, or to set up a small-scale assembly line and custom-program hundreds (not 1, not thousands) of these chips, and could write off a $100 standard socket or $10,000 custom socket as a cost of doing business.

The other way is to just use a hot-air gun to desolder the WSON from the motherboard, use some Chip Quik to temporarily solder it (or an identical chip you bought for $0.50 from Digikey) to a breakout board, program that, desolder it, then reattach it to the motherboard.

Of course, the third way is to have the manufacturer or the distributor do this for you.


Thanks! It's great to have some approaches and terms to structure around, and as a bonus you've offered a few glimpses of a world I don't know enough about.

I was hoping there would be an affordable option for programming in place, but the examples I've come across so far seem uncomfortably fragile ( https://flashrom.org/File:DIP_socket_as_SOIC_clip.jpg ), so I guess I'm most likely in for reviewing some heat implements and refining my soldering skills on a more expensive board than I'm used to.

Will have to give more consideration to option three in the future, though in principle I appreciate having the access to keep modifying if needed, so maybe more importantly be more careful with the hardware selection.


Another approach with which I've had success is to use something like PCBite's probes [1] to stab the little bits of solder sticking out the sides of the WSON package. PCBite's probes are excellent; they're sharp enough to bite into the solder and hold themselves in place. (Those stalks aren't stiff; they support themselves by digging in.) PCBite is an all-around great product and definitely worth the somewhat-steep-for-a-hobbyist price tag, in my opinion.

[1] https://sensepeek.com/pcbite-20


Very cool. Seeing as those probes are far more general hardware and offer extra capabilities which could be really nice to have around, that's an attractive option. Strong contender for best approach to date for my case (and just overall valuable information). Thank you for sharing your experience!


Why exactly isn't there a setting or jumper to just disable this?

I don't really see a business reason for Intel to make this hard to do...

They could totally have made the machine reset if the ME couldn't be initialized. But they didn't.


Reminds me of that secured phone sold by a German company to governments around the world.

In practice, the company was a joint venture involving the US government, which used a German proxy to sell compromised hardware to unsuspecting officials. Everything went straight to the NSA.


> They could totally have made the machine reset if the ME couldn't be initialized. But they didn't.

Hm? That's what they did: if you disable too much of the ME the computer will reboot after 30 minutes.


If I recall correctly, there were special Dell machines where Intel allowed owners to disable ME in the BIOS. You had to contact sales to buy them.

The issue here is that ME can be disabled, but Intel doesn't want you, the "normal consumer", to disable it. Why?


It's absolutely insane that _this_ is what it takes to get IME fully disabled.


This does not and cannot "fully disable" the ME subsystem on modern CPUs.

A small remnant is left operational - without it, a PC shuts down after 30 minutes (this is well-known).

The Core 2 Duo/Quad architecture was the last iteration where the ME subsystem could be entirely removed.

I posted two BIOS images at the link below for old HP machines. They can easily be flashed from within the booted BIOS without much hassle. Looking for the link...

Found it on Bing of all places!

https://github.com/corna/me_cleaner/issues/233


Sadly, I just learned that even the remnants cause known harm:

> Neither of the two methods to disable the ME discovered so far turned out to be an effective countermeasure against the SA-00086 vulnerability. This is because the vulnerability is in an early-loaded ME module that is essential to boot the main CPU.

https://en.wikipedia.org/wiki/Intel_Management_Engine#Disabl...


That is exactly why the Core 2 platform remains popular precisely for this purpose.

A 45nm platform that will do what you ask is far preferable to a 10nm platform that won't.


I observe that the end of the passage you quoted bears a "[citation needed]".


Will this do?

"Additional major security flaws in the ME affecting a very large number of computers incorporating ME, Trusted Execution Engine (TXE), and Server Platform Services (SPS) firmware, from Skylake in 2015 to Coffee Lake in 2017, were confirmed by Intel on 20 November 2017 (SA-00086).[39] Unlike SA-00075, this bug is even present if AMT is absent, not provisioned or if the ME was ‘disabled’ by any of the known unofficial methods.[40] In July 2018 another set of vulnerabilities were disclosed (SA-00112).[41] In September 2018, yet another vulnerability was published (SA-00125).[42]"

https://njnewnjnew.medium.com/management-engine-interface-dr...

https://www.theregister.com/2017/12/06/intel_management_engi...

It does make me wonder what else has been missed.

We have such elaborate means to deceive one another. Perhaps, one day, we will be good enough that it is no longer necessary. But that is not today.


> Will this do?

Sorry to be a stickler, but not really.

The citation indexed [40] (that is, the relevant portion) in your quote points to the Register article you also linked, just as the Wikipedia entry does in support of the statement:

"Unlike SA-00075, this bug is even present if AMT is absent, not provisioned or if the ME was "disabled" by any of the known unofficial methods."

That being the case I would expect the Register article to contain something that bolsters the quote above but if it does, it is so subtle as to escape my repeated rereading.


> The Core 2 Duo/Quad architecture was the last iteration where the ME subsystem could be entirely removed.

Yeah, but unfortunately Intel also didn't bother providing microcode patches for Meltdown on those chipsets because they're "too old", by some arbitrary definition of "old".


These are vulnerable to Meltdown, and the page table isolation patches are required to secure kernel memory. These do involve a performance hit, so I'd recommend Core-2 Quad 9550s as an upgrade for a minimally-usable machine.

However, these are not SMT/hyperthreaded, so many of the Spectre vulnerabilities do not apply.

OpenBSD runs well enough on them, and these machines are likely what I trust most with this OS.

Most Linux distributions run on these machines (RedHat 9 doesn't - it requires an i3 or later), but they will pause on the mei_me module, looking for a response from the ME that you have lobotomized; blacklist the related modules if you want to boot faster.
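A minimal sketch of that module blacklist, assuming a distribution that reads /etc/modprobe.d (the file name is my own choice):

```
# /etc/modprobe.d/blacklist-mei.conf
# Stop the kernel from loading the ME interface drivers, so boot
# does not stall probing a lobotomized ME.
blacklist mei
blacklist mei_me
```

After writing the file, rebuild the initramfs if these modules are loaded early, then reboot.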


The well-known spectre-meltdown-checker script says that my Q9650 is not vulnerable to Meltdown or Spectre 1-3.

It is vulnerable to variant 3a, variant 4, Fallout, ZombieLoad, and both RIDLs.

https://github.com/speed47/spectre-meltdown-checker


Depending on the motherboard it can be very hard, pretty easy, or very easy. For my one motherboard that isn't covered by me_cleaner due to its newness, I verifiably turned off the ME the "pretty easy" way: by downloading the latest BIOS from Gigabyte, opening it in Intel's CSME tools (there are download links on some forums geared towards BIOS modding), flipping the unlabeled "reserved bit" which turns on "high assurance platform" (HAP) mode, and then flashing that BIOS .bin, also with Intel's tools.

I believe some motherboards won't let you flash the modded BIOS if it's cryptographically unsigned or something like that, which is good for other reasons... but I haven't run into it myself.

I've disabled ME on a couple of Supermicro boards too, using me_cleaner, since they were supported. (What I consider the "very easy" method.)

edit: Sibling poster is right that it can't be fully disabled. I do assume it's effectively disabled when it no longer appears in device manager and Intel's ME inspection tools show it as disabled.
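Mechanically, "flipping the reserved bit" is just an in-place bit set in the dumped image. A hedged sketch of the operation only - the offset below is a dummy for illustration, since the real HAP bit lives in the flash descriptor's PCH straps and its location varies by platform, which is exactly why me_cleaner or Intel's own tools should compute it for you:

```python
def set_bit(image: bytes, byte_offset: int, bit: int) -> bytes:
    """Return a copy of a firmware image with a single bit set.

    Illustrative only: byte_offset/bit must come from a tool that
    parses the flash descriptor, never from a hard-coded guess.
    """
    buf = bytearray(image)
    buf[byte_offset] |= 1 << bit
    return bytes(buf)

# Demo on a dummy 16-byte "image": set bit 0 of byte 3.
dummy = bytes(16)
patched = set_bit(dummy, 3, 0)
assert patched[3] == 0x01 and dummy[3] == 0x00  # original left untouched
```

The returned copy leaves the dump on disk untouched, so a failed flash can always fall back to the original image.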


Well, it's a very detailed guide on how to dump the contents of the flash device, update it, and put it back.

If the guide said "dump the flash" and "write back the flash" instead of giving detailed instructions, and only described the firmware-manipulation steps in detail, it would be much shorter.


But according to Intel it exists to provide functionality that is desired by hardware owners.

Big "Look what you made me do" energy.


> "functionality that is desired by hardware owners"

We hear this all the time, don't we? Claims that something is:

"Because people want it".

"Markets demand it".

But we see absolutely no evidence of them whatsoever: this mythical mass of people clamouring for features that are strangely aligned with what big-tech suppliers and manufacturers wish to push and get to simply assert that "people want".

We like to think of ourselves as an "evidence-based, rational society". We'll happily hold governments and scientific and health research to a high standard of evidence. Even Wikipedia articles demand "citation needed".

Show us those people! Back up your claims, Intel.


The tell is that you cannot even pay more to buy ME-disabled hardware when it is obvious that there is plenty of money in it, at little additional cost to Intel. The workaround in me_cleaner was originally intended for government buyers that demanded it. And they probably had good reason to demand it.


This seems like the hardware owners are demanding the opposite of what Intel is delivering.


Rather, it's both.

The government folk want it gone from theirs, but they want the rest of us to have it. Thus the claim "Our users want it" is true, in a tongue in cheek way.


I feel similarly about 5G. I don't know anyone who was actually demanding 5G speeds from their phone, or excited about it. Technically it's very cool, but I'm unsure it actually enables end users to do something they could not before.

From my experience, I actually must disable 5G. The 4G network in my area works well enough in all circumstances. The 5G network is all-or-nothing: I either wind up with incredible speeds or a completely unusable connection.


Some aspects of 5G are sensible in that they take advantage of improving hardware to use spectrum more efficiently: denser encoding, full-duplex radios, etc.

Some of it, like beam steering that tracks moving devices (which is going to be challenging to make work in real-world cases) and using spectrum that struggles to penetrate inside cars and buildings, is a reach nobody asked for.

Some seems greed driven, like "If we can convince AWS customers they need to put computing at the network edge we (telcos) will capture some of the value AWS accumulates now."

As for your 4G network, that's what we call 5Ge now.


I knew a guy who worked on cell phone beam forming from the tower 20 years ago. He said it worked flawlessly in Florida, where the company was based. He also said every single deployment failed because nowhere in the US has such flat terrain without reflections.

Is 5Ge some sort of joke? Or is that a real designation.


At least in the US, AT&T made the incredibly cynical move to rebrand their 4G service in areas where 5G was expected one day to be available as '5Ge' aka '5G Evolution'. That was so they could 'imply' in ads you were getting a 5G connection before, you know, there was a 5G network to connect to. Even changed the little icon on your phone from 4G to 5Ge.

Sprint sued for the blatant false advertising and AT&T unsurprisingly settled.


OK, thanks for clarifying that. I don't have AT&T so I hadn't heard of the term.


> Is 5Ge some sort of joke? Or is that a real designation.

It’s real. My iPhone 13 says it right now. https://arstechnica.com/tech-policy/2020/05/att-still-refuse...


Is the end user actually the market this is aimed at? All we really know is that 5G and the Intel ME are endeavors that are expected to make a profit. But who wants this enough to pay for it? Someone does. If not the mass market consumer, then who?


In the case of 5G, telcos love it. It's vastly less expensive to run than any lower G, both in cities and the countryside. That interest even aligns with end users' interest.


Except they still charge the same anyways, or more.

I'm with Telus up here in Canada. You pay the same old rates as per the usual for 5G speeds. If however you go with their subsidiary (Koodo) using the older infrastructure, you can pay a little less for similar packages.

Check it out yourself. Mind you, I use prepaid, cause I don't want to be on a contract, so I buy my own phone and use it. Koodo even charges more for bringing your own phone, since they aren't collecting on having leased one to you.

https://www.telus.com/en/mobility/prepaid/plans?linktype=sub... https://www.koodomobile.com/en/rate-plans?INTCMP=KMNew_NavMe...

Simply put, if I want to save money while still having enough data for what I actually need data for; I can either spend about 35-40$ with Koodo for 2-4GB of data at 3 & 4G speeds; or 40-50$ for 2.5-4.5GB at 4 & 5G speeds. I round things this way by the way, because of taxes. Also, auto-top up also tends to give some extra data too. 500MB more. So generous of them (/s).

And also, this is new packages. They just updated them with the new promo on Telus with that whole 1GB extra data and 10$ one time credit. I'm gonna have to call them and get that I guess. Unless they auto gave it to me? Who knows with them. Ultimately, I only need 500MB though, since I use Spotify in offline mode, and only download music via my wifi at home; and the only other thing I tend to use is Google Maps which can also be downloaded ahead of time to save on data.

Edit: I should also note that they do actually state 4G on the Telus website, but my phone says I am getting 5G speeds. Hence why I state 5G. I couldn't care less what they claim on their website. End-user experience is truth.


If it aligned with end users' interests, why not just create a robust narrowband specification that attempts to guarantee some minimum bandwidth between the device and tower in many different signal conditions? Or would we just call that 4G?


If you have Verizon, that’s a bad idea as they’ve bungled the rollout and LTE performs poorly in many areas.


5G offers significant improvements in network congestion.

I want my phone to work in busy cell tower areas, so that is absolutely something I was demanding.


How is Intel ME any different in functionality from the Baseboard Management Controller usually found on servers (e.g. ASPEED)? And what of those who extend these feature sets with boards like the Raspberry Pi?


Here's the real kick in the nuts with IME compared to a BMC or other 'management ports':

(1) It is not something that you can (easily) disable

(2) It uses the same Network port that your LAN NIC uses instead of a separate "I won't plug that in if I don't want it" NIC.

(3) Security/patches? This is outside the control of the BIOS manufacturer, so how do you make sure it's patched and up to date? and

(4) It wasn't an option.


Note that the BMC does not always restrict itself to the BMC port. I've worked with machines that have a dedicated BMC port, but also have a BIOS-configurable option (on by default) to let it use whatever port is connected.


Ouch, at least it's a BIOS option.


That's a really low bar because (1) BMCs are a security nightmare because their firmware is garbage and (2) many PC owners do not need or want BMCs.

I think the ME hating is kinda strident but it has a bunch of undocumented firmware and your PC still works after you remove it so... what was that firmware doing?


If someone wants and demands it, it's the nice people at the CIA and NSA.


As hardware owner I disagree.

Both personally and as part of the team managing 150,000 computers at work, we don't use this stuff there either.


I can tell you that I have used HPE Integrated Lights Out (iLO) on Gen8/9/10 servers.

It is a great help for server lock-ups - it is able to force a full power-down of the main board and cold-boot.

The software behind iLO was also the subject of a Black Hat presentation, so it's important to keep it patched (and I don't know anybody else that does).

https://www.blackhat.com/us-21/briefings/schedule/index.html...


Yep we use that too but it has nothing to do with IME.

We also have Dells with iDRAC cards. But it's a nice thing with iLO that it's built-in, and it can be managed on a completely dedicated out-of-band network. Unlike the IME thing.

I understand there's a point to this in stuff like servers, but for workstations?


I use it to segment network access.

The devices are on an untrusted network and VPN into a LAN based on the device assignment. Things like printers are on a separate network, and there’s no cleartext on the network.

In the case of laptops, if they fall out of certain compliance baselines, they get remote wiped or bricked.


We do this too, but based on 802.1x certificates. Devices without this don't get access to the internet and are relegated to a closed VLAN.

But IME is not needed for this. The certs are issued through windows / mac management systems e.g. SCCM/Intune. They are also dependent on the current security state of the machine (e.g. no EDR installed -> only access to remediation network). Is IME really used in your case?


Depends on the requirements. One setup I did years ago uses AMT - IME itself only provides a few functions, although I don't recall exactly which from memory.

The key difference is that you can provide a level of assurance and multi-tenant access without an OS. For example you can run a hypervisor on the PC and have a few OS instances running.


I've used that and Dell's DRAC. They have their uses. We ran those on a separate network, and it was somewhat routine to use them to get into a host that was locked up or had disconnected from the network somehow.

It's definitely a security risk, but at a big company with a poorly managed IT department it wasn't the worst offender.


Knowing Intel, if this functionality was actually desired by hardware owners it would only be available on high end chipsets and i7+ processors.


Parts of it you want. The management engine does a lot of stuff and I don't think you can say all of it is good or bad. It would be nice if they would break it down area-by-area and give owners some controls to disable the unnecessary parts.


What is a thing it does that a user may want?


CPU health monitoring and fan speed control. It’s in the Intel QST which is part of the management engine.


Thanks :) I just wanted to get down to some concrete examples.


It makes a body wonder just who Intel thinks the hardware owners are.


Both absolutely insane and completely understandable.

...hopefully RISC-V will save us from this nightmare.


Ha - no. Absolutely not. I don't know where this total myth came from that RISC-V is open source therefore implementations will be better.

RISC-V is just an ISA (Instruction Set) that anyone can use, but what people use it in, and how they use it, is not specified and does not have to be open source. Apple could take RISC-V, plop it in their iPhone, and release it tomorrow in a processor that only boots Apple-signed code and requires proprietary firmware without any issue whatsoever. Intel could literally release a Core i5 with a RISC-V instruction set and an Intel ME built-in, no problem.

Where the hope mainly comes from is small chip developers like SiFive, who make many of their drivers and such open-source. But that's only if you buy from vendors like them - if you implement your own RISC-V core, there's no requirement that the drivers or firmware be open-source for it, in any way. You might say that's a missed opportunity. I say RISC-V wouldn't have caught on otherwise.


> I don't know where this total myth came from that RISC-V is open source therefore implementations will be better.

The hope is that (unlike x86/ARM) you will be able to purchase core designs from people who aren't sockpuppets. RISC-V will at least let people choose between which backdoor they want installed, which is an upgrade from a status quo of "All Your TCP Traffic Belongs To U.S.".

It's not exactly Superman, descending from the skies to deliver us from dystopia. But it's certainly a better path than letting ARM dominate any more of our chip landscape.


> The hope is that (unlike x86/ARM) you will be able to purchase core designs from people who aren't sockpuppets.

It also lowers the barrier to entry for new/rebranded sockpuppets, but having choices is a step in the right direction.


> Where the hope mainly comes from is small chip developers like SiFive, who make many of their drivers and such open-source. But that's only if you buy from vendors like them ...

So, you're saying it is possible (or will be down the track...) as long as things are bought from SiFive or a similar OSS-friendly place.

That's still a large improvement over the current situation, even if other vendors take different, locked down approach.


Of course, as you mentioned, RISC-V is simply an open-source ISA; however, it is arguably the groundwork for chips independent of Intel/AMD.


> Where the hope mainly comes from is small chip developers like SiFive, who make many of their drivers and such open-source.

But there are still roadblocks: they likely bought the memory controller from a third party as an IP block they drop into their chip. This means the bring-up procedure for the memory controller is proprietary and delivered in blob form to be loaded into the black-box IP. The same likely goes for other third-party IP blocks, as developing this stuff from scratch is very difficult and time consuming, especially for critical hardware like memory controllers. This makes opening the platform's firmware just as tricky as on any other chip from $bigvendor, and full top-to-bottom security audits difficult or impossible.


It's still an improvement over x86, where anyone who manufactured an alternative would be sued into oblivion by intel for patent infringement.


Next year all x86_64 patents will expire. From then on everybody can make an IME/PSP/Pluton-free x86_64 chip. This makes RISC-V completely obsolete, since the x86 ecosystem is obviously much more mature.


> This makes RISC V completely obsolete since the x86 ecosystem is obviously much more mature.

While I'd really love to agree with you, the IPC of a RISC-V chip can annihilate an x86 machine on an equivalently advanced manufacturing node. Its performance-per-watt can reach up to 10x the efficiency of x86 in the right situations, and pretty much all of the cool stuff we like in x86 can be added as an ISA extension.

If we're headed to a RISC/low-power computing future, RISC-V will be the future people's champion. x86 will be a legacy compatibility mode that we use for games and "retrocomputing", likely.


X86 may be mature, but I think the M1 has shown that there is plenty of potential for improvement. I know the M1 is ARM instead of RISC-V, but there may yet be ways to get better chips.

That said, the hardware we have is really good, it's just the software side that is a complete garbage heap.


Apple Silicon was an interesting move when you look at it from a numbers perspective. The M1 is a really impressive chip, but AMD had competitive x86 hardware that was out on the 7nm node. It benchmarked ~10% slower (the 4800u did, at least), consumed more power (25w max vs 15w max) and ran equally as hot as M1, but it did make me wonder - could AMD have made an M1-class chip if TSMC sold them the 5nm silicon they needed? It's hard to say, and arguably the Zen process wasn't (and still isn't) competitive with Apple's process enhancement.

Still though, AMD seems convinced that x86 can compete against modern RISC ISAs. They aren't far away from proving themselves right, honestly.


M2 and Ryzen 7000U will both be on TSMC N5 with similar RAM, etc. It will be very interesting to see the comparisons.


So... you're saying someone could (but not necessarily will) save us using RISC-V. Seems like a necessary precondition to it.


In the future, buying Chinese-designed and -made RISC-V will be the way to assure yourself that there's no extra NSA garbage in there.


ME gets a lot of well-deserved hate. And a lot of work goes into disabling it. But I am surprised that none of the people working on such projects ever looked at the very peculiar ME payloads that intel chromebooks carry for hints on how to do it better...


“removes the vast majority of the ME's software modules (including network stack, RTOS and Java VM)”

There’s a Java VM on these things?!


Not surprised; the Java VM is literally everywhere. From your credit card to your SIM, if it is an IC card then there is a Java VM on it. It is almost the universal language for mini embedded systems, for some reason I don't understand.


IIUC, it's because it's easier to rigorously prove the VM prevents classes of bugs (i.e. memory safety issues) and then reuse that VM in many places than it is to rigorously prove that many separate embedded systems not relying on the VM have independently avoided those bugs.


Is there an example of a JVM that has been proven correct in this sense?

I haven't heard of one.


I don't know specific details about the correctness of any particular VM, sorry. That was just the explanation I got from another engineer that apparently had experience in developing embedded java things.


> It is almost universal language for mini embedded system for some reason I don't understand.

Marketing-fueled hype.


With regard to the guide itself, please be aware that the guide, of which this is but one section, is no longer actively maintained (since 2020).

It is a great and useful guide. I have used it to modify my own Gentoo installation. But, be aware of what you are doing. :)


This is the big benefit of companies like System76 that disable this for you.


How does something like this access my network? Like if I'm connected to WiFi, what's the stack look like for this chip getting access to that without the OS cooperating?


It requires an Intel NIC which connects to both the main CPU and the ME at the same time. The ME has drivers for Intel NICs and a full TCP/IP stack. From the docs: https://software.intel.com/sites/manageability/AMT_Implement...

"The Intel 82566 Gigabit Network Connection identifies out-of-band (OOB) network traffic (traffic targeted to Intel AMT) and routes it to the Intel ME instead of to the CPU. Intel AMT traffic is identified by dedicated IANA-registered port numbers. The [southbridge] holds the filter definitions that are applied to incoming and outgoing in-band network traffic (the message traffic to and from the CPU). These include both internally-defined filters and the application filters..."
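The port-based split described in those docs can be sketched as a toy model. This is an illustration, not Intel's actual filter logic, and the port list used here (16992-16995, plus 623/664) is the commonly documented AMT set, assumed rather than taken from the quote above:

```python
# Toy sketch of how a vPro-capable NIC classifies incoming traffic:
# packets destined for the IANA-registered AMT ports are steered
# out-of-band to the ME; everything else goes in-band to the CPU.
# The port list is an assumption (the commonly documented AMT set).

AMT_PORTS = {16992, 16993, 16994, 16995, 623, 664}

def route(dst_port: int) -> str:
    """Return which processor a packet with this destination port is steered to."""
    return "ME" if dst_port in AMT_PORTS else "CPU"

if __name__ == "__main__":
    for port in (443, 16992, 22, 16995):
        print(port, "->", route(port))
```

The point of the sketch is that the OS never sees the ME-bound packets at all: the split happens in the NIC, below the driver.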


Does this mean if your motherboard lacks an Intel NIC (or if you use an add on card instead) that it cannot communicate?


Yes, that is my interpretation.


How common are these Intel NICs?


100% of business PCs have Intel NICs because it's required for vPro. In the consumer market Intel NICs are generally considered (marginally) higher quality than Realtek. Intel Wi-Fi is also very common.


It has an enhanced 486 running Minix and unrestricted access to everything on the system bus.


Because the Intel ME *is* a standalone system, so it can do anything on its own. Of course it won't connect to your WiFi, because it doesn't know the password. But LAN connections don't need a password, so it can connect and listen in that case.


There is a standard for LAN authentication, though I think only high-end network hardware enforces it.

https://en.wikipedia.org/wiki/IEEE_802.1X


Depends on your definition of "high-end". While I personally stick with MikroTik and Juniper gear, a TP-Link TL-SG2008 is only $70, gives you 8x1GbE ports, and supports 802.1X just fine. For wireless you'd use WPA-Enterprise, which is pretty common on most consumer grade routers (for some reason), readily available on anything you can install OpenWrt on, and then on prosumer stuff like Ubiquiti APs.
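For reference, a minimal wpa_supplicant network block for certificate-based (EAP-TLS) wired 802.1X looks roughly like this. The file paths and identity are placeholders, not from any particular deployment:

```conf
# Sketch of a wired 802.1X (EAP-TLS) wpa_supplicant config;
# run with e.g.: wpa_supplicant -i eth0 -D wired -c this-file.conf
# Paths and identity below are placeholders.
ctrl_interface=/var/run/wpa_supplicant
ap_scan=0
network={
    key_mgmt=IEEE8021X
    eap=TLS
    identity="user@example.com"
    ca_cert="/etc/ssl/certs/corp-ca.pem"
    client_cert="/etc/ssl/certs/laptop.pem"
    private_key="/etc/ssl/private/laptop.key"
}
```

The WPA-Enterprise wireless case is the same EAP exchange with `key_mgmt=WPA-EAP` and the usual SSID settings added.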


Most laptops don't even have an RJ-45 anymore


WPA Enterprise is basically 802.1x over Wi-Fi and yes, the ME has drivers for Intel Wi-Fi cards.


A slightly off-topic question: many modern motherboards have a function to flash the BIOS even without a CPU present (e.g. Gigabyte markets it as Q-Flash Plus). Any idea how that technically works? Do they put a separate CPU on the motherboard?


I just upgraded to a Ryzen 9 7950X with a Gigabyte X670E motherboard, and while using Q-Flash Plus I also got curious, but nothing online explains how it works and the manual is too simplistic to include the details. If I had to guess, it's probably the chipset?


smolder has a comment in this thread on how some boards manage this.

https://news.ycombinator.com/item?id=33347065


If you are worried about IME, you may want to check whether your laptop or desktop has Intel vPro with KVM enabled - a much bigger real-world threat. It can also have some very cool uses for remote management, though. Sometimes vPro needs a separate BIOS update - make sure you apply it, and if you do keep it enabled, make SURE you set a good password.
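One quick (and crude) way to check whether AMT is actually listening on a machine is to probe the well-known AMT ports from another host on the LAN. A sketch, where the port list is the commonly documented set and the target address is a placeholder:

```python
import socket

# Crude probe for Intel AMT exposure: try connecting to the commonly
# documented AMT ports on a target host. An open port here is a strong
# hint that AMT is provisioned and reachable.
# (Port list is an assumption; the target host is a placeholder.)
AMT_PORTS = (16992, 16993, 16994, 16995)

def open_amt_ports(host: str, timeout: float = 0.5) -> list[int]:
    """Return the subset of AMT_PORTS that accept a TCP connection on host."""
    found = []
    for port in AMT_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return found

if __name__ == "__main__":
    # 192.0.2.10 is a documentation address; replace with the machine to check.
    print(open_amt_ports("192.0.2.10"))
```

An empty result doesn't prove AMT is off (it may be unprovisioned or firewalled), but a hit on 16992/16993 is worth investigating.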


Hey Intel, I'd pay you a premium to buy a CPU with this crap already disabled.


Some of the Dell professional laptops, at least many of the Dell Precision mobile workstations, have a customization option that allows the buyer to choose "Intel ME disabled".

I hope that they really do disable it in the laptops sold with this option.


Hey AMD, me too.


Is there an equivalent to IME for AMD and/or Apple M-class processors (that would similarly benefit from disabling for home user)?


The equivalent for AMD CPUs is called the Platform Security Processor (PSP). I am not aware of a way to disable it.

I don't know about Apple CPUs but they definitely have co-processors running besides the main CPU.

In fact, many people talk about the IME but the practice of having proprietary systems with their own privileged hardware is the norm nowadays. Another example is the "baseband" processor in phones, it is a complete proprietary system with its own processor, OS, etc... and it controls the modem, among other things.


I’d like to know more about AMD specifically too. I’m well aware that the PSP is their equivalent, but there seems to be so little information out there about it. Is it really an equivalent? Is it as bad as the ME? Can it be disabled? Does it have the same level of access as the ME? Have there been any exploits of it yet?

The wikipedia page is rather bare. There’s a couple of papers linked to but frankly they go over my head. Is there any respectable analysis out there?


> I’d like to know more about AMD specifically too. I’m well aware that PSP is their equivalent

This is why I'm curious and get a little into conspiracy-land with this stuff.

Neither are exactly "well documented", they both arrived on the scene around the same time. It makes one wonder if there was a specification they were told to follow.

That's why I'm very curious if Apple's new chips follow a similar "specification".

Sorry, I mean "feature set".


AMD relies on ARM's Trustzone to do this.

"The PSP itself represents an ARM core with the TrustZone extension which is inserted into the main CPU die as a coprocessor."

https://en.wikipedia.org/wiki/AMD_Platform_Security_Processo...



Apple has Secure Enclave. Who knows what that's doing.


For at least a decade this engine has been a total backdoor. There's even a VNC server embedded. [1]

  1: https://blog.michael.kuron-germany.de/2011/10/using-intel-amts-vnc-server/


Note that most PCs do not have AMT.


The management engine is a privacy nightmare.

It's incredibly useful for companies and organizations, especially when lending computers to their employees, but why the hell would this tech be put inside consumer devices? It just sits there as an exposed attack surface without the user even having the tools to maybe make something out of it.


> It's incredibly useful for companies and organizations

Is it? We don't use it for any of the 40k+ desktop or mobile devices we manage.


Same here with 150k+. Not using it, certainly never asked for it.

Same with all the vPro stuff (which is kinda related but not completely).

We do use Windows autopilot though but that doesn't depend on IME.


Back when I worked on the sysadmin side of things we used vPro for out of band management of servers in our datacenters, but we never used it for our 10k+ laptops and desktops.


Yeah exactly. We used Dell iDRAC remote management cards and HP ILO for that mostly. We still use the latter on the few servers we have left (which is very very few). But on laptops/desktops never.

That still doesn't really justify having it in workstation chips; in Xeons, perhaps...


It gives you OOB management on every endpoint. These days I think it is less useful (I like autopilot/intune) but for some field devices it is nice to solve boot loop scenarios or similar bare metal problems over the internet instead of making a dude drive for 6 hours to BFE to find out why your doodad has ghosted.


If you're using Intel architecture, it needs at least some SMM: it is used on startup (initial hardware configuration) and often during power management events (CPU clock scaling, hibernation, etc). The article mentions that they disable most but not all of SMM, for those reasons.


> why the hell would this tech be put inside consumer devices

Because it is cheaper to make one single CPU chip variant, that is then sold to both the corporate and consumer channels, than it is to make two, one with ME for the corporate channel, and another without ME for the consumer channel.

Plus, once the ME was required to actually boot the CPU (note, why it became a requirement is a different argument), it became much more expensive to omit for consumer-grade CPUs, because a "non-ME consumer-grade" CPU would need to be a completely different chip with some alternate way to "initially boot up".


Q:

>but why the hell would this tech be put inside consumer devices?

A:

>It just sits there as an exposed attack surface


Rather than disable the ME, I would want to pwn it.

You could dump it, substantially re-engineer it, and write it back, to add utility and provide service to the end user.

Or could it be like the One Ring?


"the Panasonic CF-AX3 is refreshingly straightforward in this regard — after removing the main battery and removing 19 small screws on the bottom-side, the rear panel of the laptop lifts off easily. "

Sarcasm? I cursed our hardware designer for making me remove half as many screws. I mean, my motorcycles' gas tank is secured by a single bolt and you'd think that would be more important to keep in place ...


Are there any caveats to disabling the IME?

This side of computing can become daunting in the context of the direction that the world is heading. So many acronyms and backronyms lie beneath the chassis of our devices running commands and loops; looping and commanding and checksumming and checking sumthin’ out.

Checking what out and sending it where?

- “We need to verify that the code is signed for your safety.”

- “But it came from your App Store, emperor of the mononymic enterprise.”


Certain DRM will no longer work so you may not be able to play Netflix or whatever.


Unfortunately it lost me at the risk of bricking my computer. Intel needs to be brought to court to stop it from enabling IME, not defeated with hacks. If I have to use IME, the system I use will be considered insufficient for secure purposes and I'll just use another system for secure matters.


The risk of bricking isn't so bad as long as you keep a copy of the original firmware. If the patched firmware doesn't boot, you can always revert back.
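On the safety point: the usual practice in SPI-flashing workflows like this one is to dump the flash more than once and confirm the dumps match before writing anything, since a flaky clip connection means an unusable backup. A sketch of that integrity check (file names are placeholders):

```python
import hashlib

# Before flashing modified firmware, dump the SPI flash twice
# (e.g. with an external programmer) and confirm both reads hash
# identically -- a mismatched pair means a flaky connection and
# an unsafe backup. File names here are placeholders.

def sha256_of(path: str) -> str:
    """Hash a firmware dump in chunks so large images don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def backups_match(dump_a: str, dump_b: str) -> bool:
    """True if two independent dumps of the same chip are byte-identical."""
    return sha256_of(dump_a) == sha256_of(dump_b)
```

Only once two independent reads agree is the backup trustworthy enough to revert to if the patched image doesn't boot.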


Cool technique, but it would be cooler if you could cut the RPi out of it. Do they make IC clips that can connect to the USB port of a laptop? If so, that would be easier and less error-prone than mapping the IC clip wires to the GPIO pins of the RPi and having to go through the trouble of connecting a KVM to what is, effectively, a $35 throw-away computer. (Or, I suppose, the best case would be an IC clip that plugs into the USB-C port of your phone, so you can run the software in an app...)


That's why I never use the onboard Ethernet chipset, ever.

Even if it's BIOS-disable.

Just buy a decent Intel (or even RTL) Ethernet NIC PCI card, or two.


fwiw another commenter posted about how the firmware for the ME includes drivers for other intel network cards


Yes, but those ME-capable Intel Ethernet NICs require OOB installation of ME soft-drivers by the host CPU, which BSD and Linux thankfully do not carry: which is why I never use onboard Ethernet.

On the other hand, it is a real technical possibility for an Intel WiFi NIC PCIe card to be initialized by its on-board ME processor, which can then set up this back-channel over PCI even if the card is BIOS-disabled (or the CPU is dead or halted), on IME/PSP Intel/AMD motherboards. Not using the onboard Ethernet, nor loading its driver, removes that NIC as an access path, but not the possibility of the ME using any "found" Intel WiFi NIC adapter: do not install NICs with any built-in management engine (ME), including wireless ones like Bluetooth, NFC, WiFi, and Zigbee.

Use plain ol' ME-free Ethernet NIC adapters, avoid the onboard Ethernet port (bonus: disable the onboard Ethernet driver), and you will be fine as far as the onboard ME processor is concerned.

Of course, there is always the possibility of unadvertised ME capability (common in WiFi, but also occasionally in wired) in Ethernet NIC adapters made by shady manufacturers, but we can mostly find those out and avoid them. Dependable Ethernet NIC manufacturers like to tout their ME capabilities (because the larger market comprises enterprise users).


Good thing there's nothing in like that in the Apple chips... or is there? :think:


There is - it's called the "Secure Enclave." However, it is just another block on the processor and isn't this always-running ghost system underneath you. It cannot be shut down once started without a reboot - but it is completely up to you whether to start it in the first place. So, if you don't start the Secure Enclave and load its Apple-signed firmware, it will just sit there dark and unused.


Having a huge government schlong up the backend shouldn't be a concern to US/EU citizens. It's only the slack left for the multitude of schlongs that foreigners could use that should be of concern.


Disclaimer: A security researcher working on IME. Following is my own opinion.

I used to have really strong uninformed opinions on many things, as is often found in young techy-nerd persons. It is terrifying how prone intelligent people are to jumping on conspiracy wagons.

The current thread and others like it are a truly humbling experience for me as a person. I wonder how my comments look to in-the-know personnel. If they ever had the misfortune to see them, did they at least have a good laugh?

I know I didn't give any arguments. Sadly I can't get into details, so I will be pretty handicapped anyway. I do believe good-faith dialog is possible, but I feel it is impossible on the wide internet. It would be great to have a serious conversation on the subject in person with the more cool-headed commenters here.

I must thank all of the HN commenters that usually try to be thoughtful, insightful, thought provoking and respectful at the same time.


Anyone know if me_cleaner etc. work on the new 12th generation chips? It's not clear from the link.


Intel's spyware is one big reason I look forward to switching to Apple Silicon soon.


I am not an expert in Apple hardware / firmware, but I admire your trust that the US government could not exert the same influence on Apple as they did on Intel.

Intel probably had to disclose the existence of IME due to collaboration with mainboard vendors. Apple does not face this constraint, so it is a lot easier for them to keep such subsystems under wraps.

Of course I'm just speculating here, but a product typically mirrors its environment.


The IME was never a secret. Anyone can decap an Intel chip and point to it.

I find it implausible that the A/M series chips have an independent subsystem that is so obfuscated that the expert attention which each Apple die receives has turned up no trace of it.

The company has its own approach to secure compute with the T2 modules, but no, I don't believe Apple would be able to hide something like IME on their CPUs without it being detected as such.


In recent decades it has become much harder in most countries to get access to the red fuming nitric acid necessary to decap epoxy-encapsulated chips; it's considered a "drug precursor" and/or "explosives precursor". I hear that a few years ago someone figured out that boiling the chip in colophony for a few hours also works? At the boiling point of the colophony, that is, not water. I haven't tried it myself.


Which is to say, it's hiding in plain sight. The secure enclave and T2 modules can do things to the processor. Who's to say "things" doesn't include ME-like capabilities?


It might be useful to go over Wikipedia's entry for both platforms, here's the IME:

https://en.wikipedia.org/wiki/Intel_Management_Engine

And this for the T2:

https://en.wikipedia.org/wiki/Apple_T2

Neither of these are obscure products, they are of great interest to reversers and other security researchers. The list of shady things IME does which the T2 isn't known to is extensive.


The people who reverse-engineered the secure enclave firmware can say that.


“Anyone can decap a chip” made me laugh. I am curious how many people can do that and then understand what is going on.


The point is that the answer is "everyone who needs to be able to".

The number of expert and curious people, with the means, is higher than the number of new chip types Apple or Intel produces. There's always a detailed die photo available within the first few weeks of a product launching.


I see your point and agree. Anyway, I am already using “anyone can decap a chip” for the listeners' amusement in my daily conversations.


Couldn't the T2 chip (or other Apple security chips) do similar things?


Isn't the T2 there because Apple didn't trust the Intel ME at all?

No one trusts this sh*t except Intel themselves.

The only difference is that Apple has the power to ask Intel to get rid of it, but we don't.


> There is no one trust about this sh*t except intel themselves.

AMD has this also; the two appeared around the same time, and if this was outside of their control (i.e. mandated), there is no reason to believe Apple isn't also playing ball here.


Trusting Apple...doesn't make a lot of sense. They're almost entirely security-by-obscurity. You have nothing to go on but their promises.


Apple doesn't tell us everything, but they do say a lot so I don't think I'd call it security by obscurity.

https://support.apple.com/guide/security/secure-enclave-sec5...

https://help.apple.com/pdf/security/en_US/apple-platform-sec...

They give us the architecture diagrams and tell us how the locks on their doors work, but they don't give us the keys.

Remember: You don't actually own any iOS device because you can't run unsigned code that you wrote on it.


If the builds aren't verifiable and you can't put what you want on there then it's just promises, which are worth nothing.

> Remember: You don't actually own any iOS device because you can't run unsigned code that you wrote on it.

We agree about that!


I'm with you on the general idea that we shouldn't blindly believe everything a for-profit corporation says but at the same time we shouldn't allow fact-free speculation, rumor, or just plain cynicism to masquerade as facts either.


I don't think it's controversial that trust in Apple's extremely locked-down ecosystem basically comes down to "we promise". If it's closed source you can't verify. Even if it's open, if it's not a reproducible build (or your own build) that you install yourself then who knows what's on there and what it does?


Oh, the irony... Remember the hardwired 'Find My' geolocation function built into the permanently-on T2 chip?


You believe there are not non-architectural cores in Macs?


Do you have evidence otherwise?


Yeah, I do. Every system has tons of non-architectural cores for security, power management, and other purposes. Apple advertises some of theirs, for example the "secure enclave" and, on older Macs, the T1 and T2 security processors, which run the proprietary closed-source bridgeOS and have unfettered access to everything on the system.


Which one of these cores perform the same functions and present the same attack surface as the IME?


Closed source, so we can speculate (or try to reverse engineer/break it).


So at best we have cynicism / paranoia regarding Apple's T2.


By a 'zero trust' security philosophy, anything short of completely open source is inherently untrustable.

You may not be practicing that philosophy, but that doesn't make those who do "paranoid" any more than corporations implementing PCI-DSS controls.

Security does not work retroactively, only proactively.


That's all anyone has against IME, also. And BridgeOS isn't any more secure. There are tons of known flaws in it.


Part of it runs bridgeOS. The Secure Enclave runs something else altogether called sepOS.

https://support.apple.com/guide/security/secure-enclave-sec5...


The decision to create such an engine is so unwise, it's evil.


It may not have been a decision, at least on the part of Intel.


Many Dell PCs have a “SERVICE MODE” jumper that also disables IME


In short, IME is hardware spyware? That's it?


Out-of-band networked/remote hardware management.


EFI, anyone? MBR works perfectly ok for me.


@dang, maybe we should merge this? seems to be a dupe https://news.ycombinator.com/item?id=33344458


There was one relevant comment in that thread. I've moved it hither. Thanks!


That post has a broken link, and this one is the resubmission.



