It won't stop anyone who has the skill to author cheats themselves (i.e. someone with basic reversing and programming experience), but it does a good job of detecting and banning players who use widely distributed "commercial" cheats.
I look at EAC as analogous to a virus scanner: very easy to bypass if you know what you're doing, but good at catching the common and soon-to-be-common threats. It also puts up enough roadblocks to make some novice attempts at cheating inconvenient, at least.
Source: I deal with all the anti-cheat stuff (including EAC) on a semi-popular multiplayer game. I can verify that it makes a very tangible difference to our player base in the number of cheaters they are exposed to on a per-match or per-session basis.
EAC is actually pretty terrible against commercial cheats; its only real use is against free cheats. If you're willing to pay, you'll find many cheats for EAC games.
Many major titles are using EAC currently: Fortnite, Apex Legends, PUBG, etc. There's a pretty large market for cheats in any of those titles. Just dropping "<game> aimbot" or "<game> cheats" into Google is likely to turn up a multitude of commercial cheat developers, many of which are legitimate businesses whose products bypass the current anti-cheat.
Generally the legitimate commercial cheat developers offer status pages detailing any of their cheats that are currently detected and offer additional tools to do things like bypass hardware ID detection if you did get banned in the past.
Maybe I'm clueless, but as I understand it some of these shooters (e.g. PUBG) are games you pay for. If one of these commercial cheats gets detected by EAC, isn't the result a permaban of your paid-for account? Then you have to buy the game again and make a new account, if that's possible at all. Seems like that would be a serious deterrent, although obviously it won't stop everyone. (Encountering a cheater once out of every ten matches is probably acceptable. Encountering one in every other match probably isn't.)
If you are into cheating, new licenses are just a cost of doing business. Even a new, full-price AAA game is about the cost of a round of golf. Furthermore, with free-to-play and microtransactions/ongoing ways to get revenue from players, there's a growing incentive to either give the game away (Fortnite et al.) or offer the game at a fairly low price (e.g. Rainbow Six Siege, which can be had for as little as $5).
If you are willing to pay for a cheat then rebuying the game every few months when there is a ban wave is not that much effort. Especially if you buy it from some shady CD-Key reseller.
Do these anti-cheat systems typically detect the output of popular cheat-development tools? Or is the cheat world way beyond that phase, with everyone coding directly against the native OS APIs, making detection harder?
Is there still cheating among console players? Maybe with jailbroken devices? Do these companies really have to think about cheating on consoles? Do you think the eventual future of these kinds of games is some sort of locked-down console-like system, which is what Apple seems to be slowly moving toward with their hardware?
My Warframe account was banned a while ago for having Cheat Engine running in the background. I was using Cheat Engine with Total War, a single-player series, and never hooked it into Warframe. So some systems, at least, just look for running processes associated with cheat tools.
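A crude version of that kind of check is easy to picture: the anti-cheat compares running process names against a blocklist, with no regard for which game (if any) the tool is actually attached to. A minimal sketch in Python, where the blocklist entries and process names are illustrative stand-ins (a real client would enumerate live processes through OS APIs rather than take a list):

```python
# Sketch of naive blocklist-based process scanning, as some anti-cheats
# appear to do. Names are illustrative; a real client would enumerate
# processes via OS APIs (e.g. CreateToolhelp32Snapshot on Windows).
BLOCKLIST = {"cheatengine-x86_64.exe", "cheatengine-i386.exe"}

def flagged_processes(running):
    """Return blocklisted process names found among `running`.

    Note this flags Cheat Engine even when it is attached to a
    different, single-player game -- exactly the false-positive
    scenario described above.
    """
    return sorted(p for p in running if p.lower() in BLOCKLIST)

# Example: Cheat Engine open for a single-player game alongside Warframe.
procs = ["Warframe.x64.exe", "CheatEngine-x86_64.exe", "explorer.exe"]
```

A check this blunt can't distinguish "cheating in this game" from "Cheat Engine running anywhere on the machine", which would explain bans like the one above.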
In the EU, phone verification should basically prevent cheating, since the phone:person correspondence is almost 1:1, so once you ban (or shadowban) someone there's no easy way to bypass it.
No anti-cheat will ever completely stop cheaters. There are plenty of cheat tools that are basically impossible to prevent. The question is whether EAC is good enough? I don't know, I play very little multiplayer these days.
In the end, games need to be built better by sending the least information needed to clients, i.e. zero-trust. This is a big part of games like CS:GO, Valorant, and StarCraft/League/Dota, since they can implement a 'fog of war' and only network information if the client actually needs it[0].
The only cheats that still plague Dota are scripts that perform tasks automatically, such as disabling an opponent as soon as they're visible (beyond human ability), and those are taken care of by heuristics[1] and a community-voting-based 'overwatch' system in CS:GO and Dota 2[2].
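The server-side visibility filtering described above can be sketched roughly like this. It's a toy model with made-up names: real engines run this every tick and test line-of-sight occlusion against geometry, not just distance:

```python
import math
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    x: float
    y: float

def visible_players(viewer, others, sight_range=30.0):
    """Toy fog-of-war filter: the server only includes players within
    `sight_range` of the viewer in that viewer's update packet.

    Because the hidden players' positions never reach the client, a
    wallhack has nothing extra to reveal. Real engines also test
    occlusion (walls, smoke) rather than distance alone.
    """
    out = []
    for p in others:
        if math.hypot(p.x - viewer.x, p.y - viewer.y) <= sight_range:
            out.append(p.name)
    return out
```

The key property is that the trust boundary moves to the server: the client can be fully compromised and still learn nothing it shouldn't.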
The problem with zero trust and only sending what the player can actually see is lag. For example, sure, you could eliminate wallhacks by only sending the positions of players you have line of sight to, but that essentially eliminates any client-side movement prediction or other lag compensation and means you have to send each frame's data quickly enough. This may work for some games that are less lag-sensitive, and as internet speeds improve, but it's not currently a solution for most players.
Also, most FPS games use the sound of a player to indicate generally where that player is. You'd need the player's xyz position on the client regardless of line of sight if they are in aural proximity.
>games need to be built better by sending the least information needed to clients, ie. zero-trust
That isn't possible in the real world for every type of competitive gameplay; trade-offs need to be made. Even the games you list are not "zero trust" - things like looking direction are left to the client because of latency.
Moreover, it's not enough. What's also needed is a hardware chain of trust for input devices. On-device cryptographic signing of mouse events would be sufficient and would guarantee that input comes from a fair play-compliant device.
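The idea of on-device signing of input events can be illustrated with a toy symmetric-key scheme. A real device would use an asymmetric key provisioned in firmware plus attestation rather than a shared secret, and the packet layout and names here are invented for illustration:

```python
import hmac, hashlib, struct

# Illustrative only: a real mouse would hold a per-device private key
# in firmware, with the verifier checking a certificate chain instead
# of sharing this secret.
DEVICE_KEY = b"secret-key-provisioned-in-mouse-firmware"

def sign_event(counter, dx, dy, buttons):
    """Firmware signs each movement event. The monotonically
    increasing counter prevents replaying captured events."""
    payload = struct.pack(">QhhB", counter, dx, dy, buttons)
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()
    return payload, tag

def verify_event(payload, tag):
    """The anti-cheat (or a trusted enclave) checks the tag, so
    synthetic events injected by an aimbot -- which lacks the device
    key -- fail verification."""
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

Even granting the scheme, it only proves the event came from the device; it says nothing about whether a human produced the motion, so it raises the bar rather than solving the problem.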
SGX is a super-privileged, encrypted, and isolated enclave in the main CPU memory that can be used for anything. Signed mouse input is much more benign. But sure, I see your point - this is DRM, and it can be used to control the access to your own mouse, even outside of online gaming.
Still, it's the most logical step, and it will probably happen within several years, now that Microsoft has started demanding TPM 2.0. Valorant already requires TPM to be enabled on Windows 11 to run.
By the way, A4Tech has had on-device DRM for more than a decade; they use it to stop people pirating their software. Which is, ironically, software designed for cheating.
> SGX is a super-privileged, encrypted, and isolated enclave in the main CPU memory that can be used for anything.
My position is s/"a super-privileged, encrypted, and isolated enclave in the main CPU memory that can be used for anything"/"a failed experiment in computer pseudoscience".
And the nightmare scenario I imagine is a more lubricated path to framing political dissidents by forging attestations that they were searching for child abuse imagery or some similar scenario.
Heuristics are great in theory, but if they could be used reliably then you wouldn't need an overwatch system. Also, the overwatch system depends on people being accurate judges of whether someone is cheating, and we know they are not, both from pro players who went undetected for a long time (KQLY) and from pro players who were accused for years and turned out to be clean (flusha).
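To see why such heuristics are hard to make reliable, consider a toy reaction-time check: it flags responses faster than a plausible human floor, but a script that simply adds a randomized human-scale delay slips under the threshold. All numbers here are illustrative, not from any real anti-cheat:

```python
from statistics import median

# Illustrative threshold: median human visual reaction time is roughly
# 200-250 ms, so a sustained median well below ~150 ms is suspicious.
HUMAN_FLOOR_MS = 150.0

def looks_scripted(reaction_times_ms):
    """Flag a player whose median reaction to an enemy becoming
    visible is below the assumed human floor."""
    return median(reaction_times_ms) < HUMAN_FLOOR_MS

# A blatant script reacting in ~40 ms is caught, but the same script
# with a random 160-250 ms delay added looks human to this check --
# which is why heuristics catch only the blatant cases.
```

Tightening the threshold to catch delayed scripts starts flagging genuinely fast humans, which is exactly the false-positive/false-negative trade-off the overwatch system is meant to paper over.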