I don't know enough about 'real time' netcode for games. However I have read several HN articles over the years so I've got at least a basic understanding.
Why can't the servers distrust the clients? What should a 'client side anti cheat' actually prevent?
The way I think I'd tackle such things is to have multiple copies of each character model moving in different locations and different ways, so that trying to spy on the state of the game from one client's viewpoint yields mostly false data. New 'threads' would fork off of the existing ones and would only be culled when there are too many, or when they're about to cause a side effect that would be visible if they were real. In that way the server would be responsible for feeding misinformation to clients while keeping the true game state a secret to itself.
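Very roughly, the kind of thing I'm imagining, as a toy 1D sketch (every name here is invented; it's not based on any real engine):

    import random

    class TrackedPlayer:
        """Server-side record: one true position plus a set of decoy paths."""
        def __init__(self, true_pos):
            self.true_pos = true_pos
            self.decoys = [true_pos]

        def tick(self, max_decoys=8):
            # Fork new decoy paths off the existing ones so the set stays plausible.
            forks = [d + random.uniform(-1.0, 1.0) for d in self.decoys]
            self.decoys.extend(forks)
            # Cull decoys that are about to cause a visible side effect, or that
            # push us over the cap.
            self.decoys = [d for d in self.decoys if not self.would_be_visible(d)]
            del self.decoys[max_decoys:]

        def would_be_visible(self, pos):
            # Placeholder: the real server would run its visibility test here.
            return False

        def snapshot_for_client(self):
            # The true position is shuffled in with the decoys; only the server
            # knows which one is real.
            candidates = self.decoys + [self.true_pos]
            random.shuffle(candidates)
            return candidates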
> Why can't the servers distrust the clients? What should a 'client side anti cheat' actually prevent?
There are two issues. One is the user seeing things that the server is hiding, such as enemies hidden behind obstacles, by going into "wireframe mode". The other is superhuman performance via computer assistance, or "aimbot hacks".
The first is a performance issue. The server can do some occlusion culling to avoid telling the client about invisible enemies, but that adds to the server workload. The second is becoming impossible to fix, since at this point you can have a program looking at the actual video output and helping to aim.
(You can now get that in real-world guns.[1]) Attempts to crack down on people whose aim is "too good" result in loud screams from players whose aim really is that good.
The only feasible solution is to have high-level players compete in physical tournaments or at verified centers, where the authenticity of the player is replaced with some authority. At a high enough level, there is no way to distinguish a really good player from a cheater.
But it's not really feasible to argue, since you need to be at such a high level in the first place to honestly engage in an 'is this player cheating' conversation. And it's on a case-by-case basis.
I've watched professional games in SC, CS and DOTA for decades and I definitely agree that pros are indistinguishable from a good cheater (not a rage hacker).
One of the issues around this is cheating within pros too. People that are actually good at the game, but use cheats to get even further ahead. These players are already statistical anomalies, and even from an experienced player's perspective you can't tell if they have amazing game sense (many really do) or they're wall hacking, as an example.
Competitive games are unlikely to reach the market share necessary for a competitive gaming tournament if their casual scene is inundated with cheaters. Only a tiny handful of games even have a viable competitive scene.
But are cheaters even an issue in unpopular games that don't give out real money for tournaments?
I have never seen cheaters being an issue (even the few times people set up tournaments with prizes), which makes me think that this might be limited to very few games (in very specific genres)?
> But are cheaters even an issue in unpopular games
Yes. Every game has cheats. The cheat packages are pretty easy to adapt to new games and people pay money for them.
Why do people cheat? Because it’s fun! If you’ve never cheated it’s honestly worth trying. It’s hilarious. It also utterly ruins the game for everyone else in the lobby.
If games had reliable anti-cheat you’d be shocked at the percentage of lobbies that have a cheater. It’s wildly rampant.
I'm not talking about developer tools - cheats that come with the game, available in single player (and multiplayer if the host allows it).
But a lot of games also have replays, accessible to everyone, that show every order given by every player. So catching a cheater who acted on information not available to them (for instance because they had buddies on the other team(s)) isn't particularly hard, especially in tournaments with a lot of eyeballs on those replays.
At scale it’s incredibly hard. Impossibly hard even. So hard no one has successfully solved it! Ever!
But what you’re describing is Valve’s Overwatch system for Counter-Strike. It’s a key component of the anti-cheat ecosystem. But cheating is still rampant in CS and one of the biggest complaints.
"at scale" assumes a popular game - and you end up by giving as an example one of the most popular FPSes ever ! Please give an example of a game with, say, less than a million of copies sold / given away ? (And ideally, not an FPS, we all know these have specific extra challenges involved.)
And "at scale" pretty much means that matches are not competitive, because the sums required for entering a tournament game and given for winning it are going to be too small, won't they ?
P.S.: And for non-competitive games, I would expect that this cheating issue (among others) would be aggravated if you insist on playing with total strangers you will never see again (also part of the scale issue) - maybe just avoid that ?
But popular games are the ones people want to play, and are the ones you're claiming are immune to this. Look at this comments section - it's people talking about the top 3-5 games on PC right now, not the 30th entry in the trending FPS section.
Part of the appeal for cheating is doing it where it has impact - in popular games.
Also, I want to insist on one thing: some of the popular games listed are those that are online-only and/or removed the ability to host your own servers (and/or, even worse, have microtransactions).
I have zero sympathy for the kind of asshole that gave money to companies engaging in the despicable behaviour cited above. You were warned. You made your own bed, now lie in it!
I’m big into competitive Call of Duty. On that game (and any other shooter that uses a controller), the biggest undetectable cheat is auto recoil adjust. People call it a “Cronus” for the same reason people call it Kleenex. You download profiles for the gun you're using and it basically does the recoil pattern in reverse, turning every gun into a laser beam. It’s undetectable because it modifies inputs from a legit controller while appearing completely normal to the console/PC. No computer vision needed, and it’s destroying the integrity of the game.
In the future I kind of hope the handshake from controller<->console becomes a lot more robust, maybe working in a similar way to HDCP.
I don't think it will work. Nothing can stop users from desoldering the sticks and putting a microprocessor with a DAC in their place.
Actually, that kind of mod is frequently performed by gamers, because lots of people want to replace the analogue potentiometer stick with a hall-effect sensor driven by a microprocessor, which is much more durable than the Alps potentiometer stick. (And no one likes to play with a drifting DualSense or Joy-Con.)
Your point about “Cronus” or auto recoil adjust cheats is a perfect example of how cheats evolve to bypass detection. By modifying controller inputs at the hardware level, it’s nearly impossible for traditional anti-cheat software to identify such exploits. It shows that as long as there is an incentive, people will find creative ways to gain an advantage, often blurring the line between legitimate skill and unfair advantage.
I think moving forward, a hybrid approach is essential—one that leverages both server-side logic to prevent information leaks and robust client-side monitoring that can detect anomalous behavior patterns. Perhaps more sophisticated machine learning models that analyze player behavior in real-time could help in distinguishing between legitimate skill and enhanced performance due to cheats. It's a constantly evolving battle, and staying one step ahead is always going to be a challenge.
Would love to hear more thoughts on how to effectively balance these aspects without compromising the player experience!
Cheating isn’t a binary thing, it’s a spectrum. The number of people who are willing to install a random script that they drop into a folder that lets them win every BR game is vastly higher than the number who will install a kernel-level driver, which in turn is more than the number who will _pay for_ and keep updated a kernel-level driver. Currently, “expensive dedicated hardware that replaces the gaming mouse that I like using” is significantly less of a problem than “install rootkit”.
The performance issue you talk about has a little more to it, too. If the server is 30ms away from you and the other player, and the server runs at 30Hz, there’s 90ms between the enemy pressing a key and you seeing it. That’s before you add real-world networking conditions into the mix and have to start adding client-side prediction, which adds a few more ms (or errors) to boot. But in order to do this prediction the client needs a little more state than is visible on screen: players that are around corners and about to appear, that sort of stuff. So the client needs that information in order to actually function, which makes it hard or impossible to tell the difference between good game sense (“I know the reload time of this gun is X, peeking lasts Y frames, and they will appear here”) and cheating (“we’re 2 frames away from showing the player on screen, but he’s going to be right here, so shoot here”).
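For anyone unfamiliar, here is a bare-bones sketch of the prediction/reconciliation loop described above (the names and the 1D movement are invented for illustration):

    def send_to_server(seq, move):
        # Stub: a real client would serialize this and send it over UDP.
        pass

    class PredictingClient:
        def __init__(self):
            self.position = 0.0
            self.pending = []   # inputs sent but not yet acknowledged by the server
            self.seq = 0

        def local_input(self, move):
            # Apply immediately so the player doesn't wait ~90ms to see the result.
            self.seq += 1
            self.position += move
            self.pending.append((self.seq, move))
            send_to_server(self.seq, move)

        def on_server_state(self, ack_seq, server_pos):
            # The server is authoritative: snap to its state, then replay the
            # inputs it hasn't seen yet so the local view stays responsive.
            self.position = server_pos
            self.pending = [(s, m) for (s, m) in self.pending if s > ack_seq]
            for _, move in self.pending:
                self.position += move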
I think someday, almost all aimbots will be undetectable by anti-cheat systems.
Thanks to neural networks, we have made enormous progress in computer vision. As a byproduct, this invalidates the method we use to separate machines from humans (image-based CAPTCHAs).
I guess aimbots will switch to CV-based systems to detect enemies rather than dumping game memory to find the enemy's position. This change will force anti-cheat systems to perform an automated Turing test, which is hard. (Telling a bot and a human apart only by watching the replay is much more challenging than the CAPTCHA problem above. And we are currently losing on the CAPTCHA front, too.)
@Animats, you’re spot on about the two main issues—visibility hacks and aimbots. The concept of hiding enemy positions server-side through occlusion culling does present a performance challenge, but it’s essential to balance between ensuring fair play and maintaining server efficiency. And you're right; the rise of external programs that can interpret video output makes preventing aimbots significantly harder.
Delaying UI interaction until it has been verified by a server that runs at 20fps (60 is uncommon on servers unless there's no AI), with an RTT of 60ms, means your hitmarker will take 110ms instead of the 6ms it would take if rendered locally.
Apply that to every interaction the server has to be authoritative about: movement, reloading.
Your game will be unplayable.
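To spell out the arithmetic from the first paragraph (rough, worst-case wait for the next tick):

    one_way = 30        # ms, client -> server (half the 60 ms RTT)
    tick = 1000 / 20    # ms between simulation steps on a 20 Hz server
    total = one_way + tick + one_way
    print(total)        # ~110 ms before the confirmed hitmarker, vs ~6 ms locally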
And if you want to combat aimbotting: your viewport and hit point would have to be server authoritative too.
Basically: unless it's Stadia or GeForce Now, this won't work.
Not delaying UI interaction; though conflict resolution (there are at least two involved clients, each with its own lagged view of the other, and a server that knows its own truth) might change the outcome of events. THAT is the part of multiplayer net code I know the least about, mostly because I don't think there is a perfect solution, but I'm not a subject expert on what works well as an approximation.
That is what early Path of Exile chose to do, and players hated the rubberbanding. Nowadays everyone uses lockstep instead, because backtrack events feel worse than being blocked right when the issue happens.
My understanding is that it's popular now to use rollbacks in fighting games (in combination with delays so the rollback doesn't get too far). Perhaps something like that would be useful, though of course that would depend on the game (and how much data it needs to send between players).
I think the difference is that fighting games are easier to simulate. Part of rollback is to rewind 6 frames and resimulate those 6 frames again with the new input. This basically requires you to be able to run your game at 6x speed consistently. It also increases memory requirements, because you need to keep the game state from 6 frames ago in memory. These are also the reasons you cannot do too many rollback frames without adding delay. I believe the Nintendo Switch never got the rollback update for BlazBlue: Cross Tag Battle for performance reasons.
Fighting games have two (maybe 4 with assists) characters, generally at 60fps. That's relatively easy to do. A worse case would be an RTS, where in a fight each unit's attack needs to be recalculated repeatedly. Valorant runs at 128 ticks/second: for the same latency compensation as 6 frames in a fighting game, you would need 13 ticks, so you need to be able to simulate the game at 13x speed.
And rollback still has janky visuals when conflicts happen. The games I've played will let you choose between smoother visuals with more delay or rollback artifacts with less delay. Generally the default setting is the former.
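For concreteness, a toy version of the resimulation loop being described (every name is invented; real implementations like GGPO are far more involved):

    MAX_ROLLBACK = 6   # frames we are willing to rewind

    def simulate(state, frame_inputs):
        # Stand-in for the deterministic game step that consumes both players' inputs.
        return state

    class RollbackSession:
        def __init__(self, initial_state):
            self.frame = 0
            self.states = {0: initial_state}   # snapshot per frame, for rewinding
            self.inputs = {}                   # frame -> {player: input}

        def advance(self, local_input, predicted_remote):
            frame_inputs = self.inputs.setdefault(self.frame, {})
            frame_inputs["p1"] = local_input
            frame_inputs.setdefault("p2", predicted_remote)  # guess until the real input arrives
            self.states[self.frame + 1] = simulate(self.states[self.frame], frame_inputs)
            self.frame += 1
            # Drop snapshots older than the rollback window to bound memory use.
            self.states.pop(self.frame - MAX_ROLLBACK - 1, None)

        def on_late_remote_input(self, frame, remote_input):
            # Correct the earlier guess, then resimulate every frame since; this is
            # why the game must be able to run several times faster than real time.
            if frame not in self.states:
                return  # too old to roll back; real games add input delay to avoid this
            self.inputs[frame]["p2"] = remote_input
            state = self.states[frame]
            for f in range(frame, self.frame):
                state = simulate(state, self.inputs[f])
                self.states[f + 1] = state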
Sending copies of fake character data isn't a thing, because eventually there has to be a flag that tells the client not to render that character, and a client-side hack could simply read that flag.
It should be clear that servers already do not trust the client; they do many checks, which is why you don't see teleportation hacks in games like Counter-Strike or Valorant. There used to be cheats in the Counter-Strike games like "nospread", where you could have 100% pixel-perfect aiming, but that was because the client was trusted. Now, in most games with some randomness in bullet spray patterns, the random seed differs between client and server, so something like "nospread" is no longer possible.
You might be stumbling upon "fog of war", i.e. not sending data to a client unless the enemy player is close to visible, which is a thing. It's widely used and I'd say effective in MOBA/MMORPG/RTS games; in FPS games, however, fog of war is many times more computationally expensive, which matters at the scale of games these days. It has been a thing for a long time in Counter-Strike with server plugins like "SMAC anti wall hack" or server-side occlusion culling, though the implementations sometimes haven't been perfect and require significantly stronger servers. https://github.com/87andrewh/CornerCullingSourceEngine
Riot Games also implements fog of war at scale in Valorant and has a blog post covering some of the issues they overcame. One thing you can see in the gif at the end of the blog post: even though fog of war is effective, it only reduces the effectiveness of wall hacks, and wall hacks still provide a significant advantage.
https://technology.riotgames.com/news/demolishing-wallhacks-...
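For a sense of the shape of it, a hedged sketch of what "FPS fog of war" does on the server (all names are invented stand-ins; the linked plugin and Riot's post describe the real, much more conservative versions):

    def has_line_of_sight(level_geometry, eye, point):
        # Stand-in for a real occlusion query (raycast / PVS / conservative culling).
        return level_geometry.raycast(eye, point) is None

    def latency_padded_positions(enemy, rtt_ms):
        # Points the enemy could reach during the viewer's round trip; checking all
        # of them prevents legitimate "pop-in" when the enemy rounds a corner fast.
        reach = enemy.max_speed * rtt_ms / 1000.0
        return enemy.position.sample_sphere(radius=reach)   # stand-in

    def snapshot_for(viewer, enemies, level_geometry):
        visible = []
        for enemy in enemies:
            if any(has_line_of_sight(level_geometry, viewer.eye, p)
                   for p in latency_padded_positions(enemy, viewer.rtt_ms)):
                visible.append(enemy)
        return visible   # only these enemies get serialized into this viewer's update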
There would be no such flag. The clients would cull the characters that are out of position. Yes that's some client load for the culling, but it's probably less overhead than 'anti cheat'.
The important reason I suggested MULTIPLE clones of a character and only forking new paths off of existing characters in the world is that it should eliminate any information oracle about which of those is the real character.
There is a high level of server load for this as well: not only placing these fake characters, but making them move and act like real human players so they are believed, and then culling them only right before they are visible, while accounting for lag (ping), interp, packet loss, etc. (The server would cull them, not the client, to be clear; how would the client know to cull them without a flag?)
I could definitely see some games doing this as a one-off, just to catch specific cheaters they are suspicious of and confirm they are cheating (many third-party anti-cheats in Counter-Strike, and the first-party Valorant anti-cheat, do manual bans based on replay reviews). But since they already do fog of war, someone with a wall hack would only see an enemy player pop in for one frame before disappearing, which would make it ineffective on a wide scale.
Because the popular cheats aren't "the client says the player shot the enemy".
The popular cheats are "the client says the player just clicked at (1030, 534) on the screen", which is a totally valid move, except it's calculated by the cheat instead of the player.
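Concretely (field names made up), the server receives something like this either way:

    legit_input = {"tick": 48231, "aim": (1030, 534), "buttons": ["fire"]}
    cheat_input = {"tick": 48231, "aim": (1030, 534), "buttons": ["fire"]}  # produced by the cheat, not the player
    assert legit_input == cheat_input   # nothing for the server to reject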
The client needs to have more state info than the player in order to render accurately, for example to render an opponent passing through a window without lag. And there are also cheats that don't need to spy on the state, like aim assist tools or HUD improvements.
* Pass through a window without lag - That's why the server is sending multiple copies of potential movements and paths through the level for each character, but terminating the ones that are about to reveal their effects (no longer be culled by walls / objects) when they'd send false information to non-cheating players.
* Aim Assist - what's that supposed to work with for the assist? I guess it might help someone target a player once they're exposed, or once they've locked on. For that I think that extremely top tier players might behave within fuzzing distance of tool assist, at least some of the time. Dodging might have similar issues. I could even see ML assisting inputs just based on frame-grabs off the screen video output. -- So I'm not sure what client side anti-cheat is supposed to do here.
Aim Assist falls into a category of cheats that are more or less undetectable and unavoidable over the internet: skill assists for something a computer does better than a human. How central these are to the game depends on the game. For a game like Chess, the impact (of consulting a computer to suggest moves) is devastating, but the online community survives. I think it's typical in such communities for truly high stakes competition to happen in person, and for the online scene to be seen as more of a social / practice scene. I like this solution: prevent theft by reducing the value of what can be stolen.
Games that turn heavily on aiming have a similar central security flaw in that it is hard to prevent cheating at the game's central skill. (Though I think in the case of aimbots, sometimes webcams are substituted for LANs, with some success.)
On the other hand, some games are practically cheat-proof. A puzzle game in which you submit actual solutions doesn't require any trust of the client at all. CTF games generally run along these rules - almost anything you can do to solve the puzzle (googling, teaming up, writing tools, bringing AI assistants) is considered fair game. What might be considered a cheat in another context is just advancing the state of the art.
HUD improvements depend on the game. But as a simple example, I play a game where leading a moving target is a major skill; a HUD that gave you an aimpoint for a perfect intercept would be a pretty big cheat.
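To give a sense of why that crosses the line: for a constant-velocity target, the "perfect" lead is a few lines of arithmetic (toy 2D sketch, names invented):

    import math

    def lead_point(shooter, target_pos, target_vel, projectile_speed, iterations=3):
        aim = target_pos
        for _ in range(iterations):
            t = math.dist(shooter, aim) / projectile_speed   # time of flight to current aim point
            aim = (target_pos[0] + target_vel[0] * t,
                   target_pos[1] + target_vel[1] * t)
        return aim   # drawing a HUD marker here removes the entire skill of leading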
I think anti-cheat is one of those problem spaces where there is a danger of overemphasizing technical solutions to social problems. Technical solutions are nice, but there are also gaming experiences that are only practical on a private server, with friends, on the honor system. A wise friend once observed that removing griefers and jerks from a community also did a lot to address cheating. I think it is best thought of as a social problem first, though I agree it all depends on the context.
That monitor seems to dynamically do things based on data the game legitimately shows a player.
For some data, like the health bars, a skill / accessibility leveling feature might be to just let the user pick HOW the game displays that data, to customize the UI layout to their needs.
Enemy position highlight based on the minimap vs present location? Yeah, that crosses a clear line, but it's abusing some data the game probably shouldn't have told the player to begin with. What if the minimap reflected the known shape of the world, but only updated with the visible area (standard 'fog of war' mechanic)? Again, it might be within accessibility features to highlight enemies within sight, so I don't see too much issue if the minimap's render state is restricted to the immediate area + what the camera direction could see.
There is a fog-of-war mechanic, called 'vision'. In the game discussed in the monitor article (League of Legends), what is shown on the minimap is restricted to what your team can see at that moment.
The monitor is akin to having an experienced coach watch you play live. Is that also cheating? I think it is.
I also think it's impossible to detect, unless the player suddenly becomes dramatically better at the game. That's the best they can do to catch cheaters at chess. But chess is orders of magnitude easier to monitor, because the game state and inputs are small and simple.
When I first read about the monitor I realized that for many types of games cheating will become unstoppable. Although sad, the bright side is that it drove me away from online gaming even more, to the benefit of my overall health.
>Pass through a window without lag - That's why the server is sending multiple copies of potential movements and paths through the level for each character, but terminating the ones that are about to reveal their effects (no longer be culled by walls / objects) when they'd send false information to non-cheating players.
So the client must render multiple possible scenes to be prepared? They already have issues keeping a steady fps.
> So I'm not sure what client side anti-cheat is supposed to do here.
Anti-cheat will check other running processes to prevent it. Of course, you can have a totally external system for that, but it will be much more expensive. The goal is not to be perfect but to prevent most players from cheating.
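The simplest form of that process check is just matching running process names against known signatures (a toy sketch using psutil; the signature list is made up, and real anti-cheats go much further, into memory scanning and kernel drivers):

    import psutil   # third-party; pip install psutil

    # Hypothetical signature list: real products ship frequently updated databases
    # and do not rely on process names alone.
    KNOWN_CHEAT_PROCESSES = {"examplecheat.exe", "aimhelper.exe"}

    def find_suspicious_processes():
        hits = []
        for proc in psutil.process_iter(["name"]):
            name = (proc.info.get("name") or "").lower()
            if name in KNOWN_CHEAT_PROCESSES:
                hits.append((proc.pid, name))
        return hits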
>HUD improvements - like what?
Highlight items, show life percentage in games that don't, highlight barely visible opponents...
Not trusting the clients and redoing all calculations server-side would require massive processing on the server side.
Your idea then multiplies the load on the server.
We do have massive processing on all sides. And the actual processing done for the game stuff is not really that big. Mostly it goes to graphics, sounds and so on.
How much though? I mean, you're basically doing vector work, and only when it's visible within range, and then when it's visible to the player and the player hitbox is.... oh I see.
Opening the console and seeing rollback netcode pretty much says enough. Also, the game itself feels great to play even with 160ms ping, which is only really possible when using an architecture similar to Overwatch's.
The first rule of any software backed by a server, and especially of multiplayer games, is: never trust the client. You could have a perfectly deterministic game where every action is validated on the server and still be defeated by running the game at half speed.
Well, because realtime online multiplayer games need to be "FAST", I mean really fast.
Sure, if you develop a platform today you can check a user's token against a hashtable in a database, but in games? You can't verify the calculated damage numbers users send, not fast enough.
You absolutely can do that and (almost) all games do that already.
This type of cheat is DECADES in the past.
Today it's all about:
a) enhancing normal behavior with artificial precision, not making any 'illegal' (from the game's perspective) actions, and
b) giving the player information they aren't supposed to have, but that is passed to the client for latency's sake.
Sorry, but I don't think you've worked on a multiplayer game in the past 15 years. Verifying damage numbers is a no-brainer. Programmers won't even discuss "should we verify damage numbers"; it's the norm today.
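The check itself is as unglamorous as it sounds: the server recomputes the number from its own state and ignores whatever the client claims (toy sketch, every name and value invented):

    # The client only reports "I fired weapon W at target T on tick N"; the server
    # derives the damage itself from authoritative state.
    WEAPON_DAMAGE = {"rifle": 30, "pistol": 15}

    def apply_hit(server_state, shooter_id, target_id, weapon, headshot):
        shooter = server_state.players[shooter_id]   # server_state is a stand-in object
        target = server_state.players[target_id]
        # Server-side sanity checks: does the shooter actually hold the weapon,
        # and was the target hittable at the shooter's (lag-compensated) tick?
        if weapon not in shooter.weapons or not server_state.can_hit(shooter, target):
            return  # reject silently, or flag the account for review
        damage = WEAPON_DAMAGE[weapon] * (4 if headshot else 1)
        target.health -= damage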
Then all the cheater would need to do is move close to the area of these ghosts and find out which one is the real player. It's also going to be very taxing for the server to create realistic ghost players that move around dynamically.
The reason multiplayer servers implicitly trust clients is because it's a cheaper and proven (less risk) solution.
The traditional anti-cheat can be just slapped after the game is developed in most games. If the game is very successful then you can just update the game with extra paid protections provided by the anti-cheat tool.
The alternative is a local game engine that works with a partial game state, which is a challenge in itself. Even if you can make it work, you will still have to deal with people "modding" the client to gain an advantage, e.g. enemies painted red instead of camouflage.
As someone working in AAA game development, I come across comments like these often, and they never fail to get under my skin. It’s like watching that infamous "Two idiots, one keyboard" scene from CSI—full of confidence, but completely detached from reality.
I don’t mean to sound harsh, but it’s tough to tackle this kind of misconception because it’s stated with such certainty that others, who also might not know any better, just take it as fact.
Here’s the thing: Multiplayer servers trust clients mainly for performance reasons. In AAA game development, anti-cheat isn’t something we focus on right from the start. It typically becomes a priority post-alpha (and by alpha, I’m talking about an internal milestone that usually spans about a year—not the "alpha" most people think of which is usually closer to an internal "beta", and "public beta" is more like release candidate 1). During that time, the tech team is constantly working on ways to secure the game. (make it work, make it correct*, make it fast).
If we were to bake in anti-cheat measures from the very beginning of a project, it would force us to scale back our ambitions. Some might argue that’s a good thing, but the truth is, we’d also risk missing critical milestones like First-Playable or Vertical Slice. You simply can’t tackle everything at once—focus is a measure primarily of what you are not doing, after all.
Back when I was working on The Division, we had some deep discussions about using player analytics and even early forms of machine learning to detect "too good" players in real-time. This was in 2014, well before the AI boom. The industry's interest in new anti-cheat methods has only grown since then, I promise you this.
At the end of the day, games are all about delivering an experience. That’s the priority, and a solid anti-cheat system is key to ensuring it. Endpoint security is currently the best solution we have because it doesn’t bog down the client with delays or force awkward mechanics like rollbacks or lock-step processing. Plus, it lines up with the (very heavy) optimisations we already do for consoles.
Nobody in this industry wants to install a rootkit on your PC if we can avoid it. It’s just the best trade-off (for all parties, especially gamers) given the circumstances. And let's be clear—these solutions are far from cheap. We pay a lot to implement them, even if some marketing material might suggest otherwise.
Did The Division have an anti-cheat when it was released? I remember it being really bad some time after release, like a few steps above most other games in both the number of hackers and their abilities (not just the usual aimbot/ESP).
Yes, we did, but it wasn't good enough (it was the machine learning system I talked about). We later added EAC as well; the situation improved, but cheating was still rampant.
Makes sense: ineffective AC and few server-side checks. I think the community consensus was that there was no AC at all. I played the Dark Zone quite a bit, kind of the first in the raid looter-shooter genre. Had a lot of fun with the jumping jacks "bug".
It's really hard to tell if someone's cheating based on the things you can check, because it can look like low ping or just a slightly-better-than-average player. In those cases, our genuinely best players might accidentally trigger it (which has happened).
There are egregious examples of cheating, sure, but those people are always banned within the hour.
The real killer was the free weekends: they make it so that there is no “cost” to cheating for a while, since being banned on a fresh account has no meaning.
>It’s just the best trade-off (for all parties, especially gamers)
I fail to see how pimping out my PC to code that no one can verify is a good deal. The takeaway is: have separate hardware to play games on and don't let it touch anything private?
> because it's a cheaper and proven (less risk) solution
I mean... didn't you just essentially say he's right? Things are done the way they are because of performance (aka "cheaper") and to meet project goals (aka "less risk")
Those aren't bad reasons at all, and it makes perfect sense, especially when you consider already locked-down platforms like consoles. But it seems to me, from what I read here, that the reasons are ultimately cost and risk.
Sure, but you'll need to do more than "read several HN articles over the years" (presumably completely unrelated to anti-cheats?) to get even a basic understanding of how anti-cheats work, as he went on to demonstrate.
I also just thought it was unintentionally funny, like a comedic setup for a stereotypically cocky HN user to comment with great confidence on something way outside of their field of expertise.
(not saying that's the case for mjevans)
<sarcasm> oh dang you should be a multiplayer engineer. Sounds like with barely even thinking about the problem space you've solved what thousands of extremely talented and knowledgeable engineers never could! </sarcasm>
Servers very much distrust the client. Obviously. That's literally rule #1. Don't trust the client!
Comments like yours are extremely irritating. Please don't behave this way with your co-workers.
Anyhow, there's all kinds of types of cheats for different kinds of games. There's a variety of mitigations for each kind. I don't think there's a multiplayer shooter on the planet that has fully solved aimbots. For however clever you think you are I promise the cheat makers are much, much more clever. :)
Touch grass buddy. In their first sentence they openly admit this isn’t their area of expertise. Hence them asking a question to people who know more than them. “Don’t behave this way to your co-workers” is much better advice for your comment than for GP’s.
Went outside with my dogs and successfully touched grass. It’s growing back in nicely after a week of rain.
“Why can’t you just” guys are extremely irritating. I implore OP to not be a “why can’t you just” guy at work. What is a WCYJGuy? Someone who has no knowledge of a domain but proposes solutions under the implication that there is a simple solution that they are oh so clever to have instantly discovered. It takes a lot of time and effort to explain “no you can not just” to someone who doesn’t have the pre-requisite knowledge.