It's so much easier to make cheats today than it was, say, 10 years ago.
It's also easier because more and more games share common infrastructure, like game engines. What works in one Unreal game may save you a lot of time when developing a cheat for another Unreal game.
These days, many online games encounter serious cheats within the first couple of days of release - if not the day OF release.
1. Determine minimum human reaction times and limit movement to within those parameters on the client side. (For example, a human can't swing their view around [in an FPS] in a microsecond, so make that impossible on the client.) This will require a lot of user testing to get right; get pro players and push their limits.
2. Build a 'unified field theory' for your game world that is aware of the client-side constraints as well as limits on character movement, reload times, bullet velocities, etc. Run this [much smaller than the real game] simulation on the server.
3. Ban any user who sends input that violates physics.
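A minimal sketch of step 3, assuming the server receives per-tick view samples. The sample shape, field order, and the 40-degrees-per-tick limit are all hypothetical placeholders; the real threshold is exactly what the user testing in step 1 would have to establish.

```python
# Hypothetical per-tick input sample: (tick, yaw_degrees, pitch_degrees)
MAX_TURN_DEG_PER_TICK = 40.0  # assumed human limit; tune via pro-player testing

def angle_delta(a, b):
    """Smallest absolute difference between two angles, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def violates_turn_limit(samples, max_turn=MAX_TURN_DEG_PER_TICK):
    """Return True if any consecutive pair of samples turns the view
    faster than the assumed human limit - a candidate for the auto-ban
    in step 3."""
    for (t0, yaw0, _), (t1, yaw1, _) in zip(samples, samples[1:]):
        ticks = max(t1 - t0, 1)
        if angle_delta(yaw0, yaw1) / ticks > max_turn:
            return True
    return False
```

A real server would run this against the step-2 simulation rather than raw packets, but the shape of the check (per-tick deltas against a physical ceiling) stays the same.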
Now cheating has to at least look like high-level play instead of someone flying around spinbotting everyone from across the map. Players hopefully won't get as frustrated when playing against cheaters, since they'll assume they are just great players. Great players should be competitive against cheaters as well.
Cheating and anti-cheat used to rely a lot on the purely technical parts (like "is something sneaking reads from the memory the game engine uses to clip models?"), which is ultimately not a fight you will win as a game developer (DMA/hardware attacks, or even just frame-grabbing the eDP or LVDS signal and intercepting the USB HID traffic, have been on the market for quite a while).
But implausible actions and results for a player can only be attributed to luck so many times. Do 30 360-no-scope flick headshots in a row on a brand-new account and you can be pretty sure something is wrong.
If we can get plausibility vs. luck sorted out to a degree where the method of cheating no longer matters, that's when the tide turns. Works for pure bots as well. But it's difficult to do, and probably not something every developer is able/willing to develop or invest in.
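To put a number on the "luck only goes so far" argument: the probability of a pure-luck streak shrinks exponentially with its length. The 50% per-shot hit rate below is an assumed, deliberately generous figure, not a measured one.

```python
def luck_probability(p_hit, streak):
    """Probability of landing `streak` consecutive hits by luck alone,
    given an assumed per-shot hit probability `p_hit`."""
    return p_hit ** streak

# Even granting a generous 50% flick-headshot rate, 30 in a row is
# below one in a billion - far past any plausible luck threshold.
```

This is exactly the kind of statistic a plausibility system can score without caring whether the cheat was software, DMA hardware, or a frame grabber.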
What I would try is to hire a red team & blue team and put them in a sandbox environment. The red team cheats on purpose. The blue team is guaranteed to be playing legitimately. Both teams label their session data accurately. I then use this as training & eval set for a model that will be used on actual player inputs.
The only downside is that you will get a certain % of false positives, but the tradeoff is that there is literally nothing the cheaters can do to prevent detection unless they infiltrate your internal operations and obtain access to the data and/or methods.
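A sketch of how the labeled red/blue session data could be turned into a training set. The session layout and the two features (accuracy, average flick angle) are hypothetical; a real pipeline would extract far more signals before feeding any model.

```python
# Red team sessions are known-cheating, blue team known-legit,
# as described above. Feature names here are illustrative only.

def session_features(session):
    """Reduce one labeled play session to a fixed-length feature vector."""
    shots = session["shots"]  # list of (hit: bool, flick_degrees: float)
    hits = [s for s in shots if s[0]]
    accuracy = len(hits) / len(shots) if shots else 0.0
    avg_flick = sum(s[1] for s in hits) / len(hits) if hits else 0.0
    return [accuracy, avg_flick]

def build_dataset(red_sessions, blue_sessions):
    """Label red team sessions 1 (cheating) and blue team sessions 0."""
    X = [session_features(s) for s in red_sessions + blue_sessions]
    y = [1] * len(red_sessions) + [0] * len(blue_sessions)
    return X, y
```

The point of the sandbox is that `y` is ground truth by construction, which is the hard part of any real-world anti-cheat dataset.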
Anything that makes assumptions about players' skills runs into problems too. For any online PvP game, the skill ceiling will rise with time. What was once considered improbable may, given a few years, become consistent play for the top 1% or even 0.1% of the playerbase.
It can also run into problems as rebalancing occurs and new abilities are released.
0: https://www.ign.com/articles/final-fantasy-14s-latest-raid-s...
The only group you'd punish with that is skilled players who lose their accounts (and create new ones), but if you use a moving skill window they can grow back into plausibility pretty quickly, and that's a small cost compared to everything else. You could even mitigate it by requiring a different plausibility score for the first 10 matches than for the matches after that.
And by 'different' I don't mean "no scoring at all" or something like that. But a cheater tends not to cheat "a little bit". You might have togglers, but that sticks out like a sore thumb (people don't suddenly lose or gain skill like that). And even if that fails (lots of "cheating a little bit", for example), you've still managed to boot out the obvious, persistent cheating.
And that's just one example and one scenario. Granted, this is still difficult, and doing it more broadly than one example/scenario is even more difficult, but that's why I ended the previous comment by pointing out the difficulty and associated cost, which goes hand in hand with the balancing difficulty you pointed out. Even tribunal-assisted methods (not sure if Riot Games still does that) have the same problem.
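The moving skill window described above can be sketched roughly like this. The window size, the 1.5x headroom multiplier, and the three-match minimum history are all assumptions to make the idea concrete, not recommended values.

```python
from collections import deque

class PlausibilityWindow:
    """Moving window of a player's recent per-match performance scores.
    A match far above the window's ceiling is flagged as implausible;
    the ceiling grows as the player improves, so a returning skilled
    player 'grows back into' plausibility after a few matches."""

    def __init__(self, size=10, headroom=1.5):
        self.scores = deque(maxlen=size)
        self.headroom = headroom  # assumed tolerance multiplier

    def is_plausible(self, score):
        if len(self.scores) < 3:  # too little history to judge yet
            return True
        return score <= max(self.scores) * self.headroom

    def record(self, score):
        self.scores.append(score)
```

The "different score for the first 10 matches" idea maps onto using a larger `headroom` (not no scoring at all) until the window fills up.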
And - what about experienced players who cheat?
In some scenes, it's actually more often that cheaters are some of the best, most experienced players who have a strong competitive lean and feel they 'deserve' to win, so use cheats to get an edge. It's far more common than you'd think.
That's the problem with any anti-cheat system. It's all the what-ifs. Every single 'clever idea' that has been theorized under the sun has been tried and most have failed.
The world is much more complex now that YOLO-based aimbots exist, and I think the real answer is that anti-cheats are now defeatable, period.
You can craft a private binary that has no hash registered to any major anti-cheat service on the client-side, and on the server-side you’re limited to what is allowed by game rules.
Since there are no mechanisms for preventing superhuman reflexes, and there probably shouldn't be, it's an issue that can no longer be solved.
So you need community judgement, and that too is boring. Good players being accused of cheating in Counter-Strike is a years-old and entertaining problem.
No, those are still just as vehemently hated as “closet cheaters”, for example the whole XIM / Cronus infestation on any game that has controller AA.
It's still possible, on average, to spot whether it's a closet cheater or an actual good player due to things like movement and gamesense, but for the average player it will be much less obvious, leading to a huge amount of rage towards good players because they are suspected by default of being "just another closet cheater."
Now add in that I'm running a physics-heavy game with 120 tickrate, (considering higher after more tests), with fine motor control action combat, aimed to scale to mmorpg size, and it really becomes a challenge!
Take a moment and think about how you would design cheats that would be undetectable. Hot keys, real time adjustments, all the options and parameters you could provide cheater to dial in their choice experience while also keeping them looking legit.
Then realize cheat developers thought of all that decades ago, and it is waaayyyy beyond what you can dream up in a few minutes. Hell, cheats nowadays even stop cheaters from inadvertently doing actions that would out them as cheaters.
Because they're 10 years behind the curve and don't understand that a game's lifespan is contingent on anti-cheat. Once it becomes clear to the casual player that a hacker is going to affect every gaming session, the game dies quickly. Many games have gone so far as to obfuscate the presence of hackers so that players are less likely to notice them (CoD)! Other games are built from the ground up with anti-cheat in mind (Valorant). Others have an ID-verified third-party system for competitive play (CSGO).
Personally, I think there is a middle ground between root-level hardware access and treating cheating as an afterthought. I'd lean more heavily on humans in the process... Use ML models to detect potential cheaters, and build a team of former play testers to investigate these accounts. There is zero reason a cheater should be in the top 100 accounts; an intern could investigate them in a single day! More low-hanging fruit would be investigating new accounts that are over-performing. I'd also change the ToS so legal action could be pursued against repeat offenders. Cheaters do real economic damage to a company, and forcing them to show up in small claims court would heavily disincentivize ban evaders. This probably sounds expensive and like overkill, but in the grand scheme of things it's cheap; it could be done on the headcount budget of 2-3 engineers. It'd also be a huge PR win for the game.
Yeah, most games have builtin aimbot, called "aim assist". I do not like it, in fact, I find it annoying as a player, too (I come from Quake 3).
You cannot and should not rely on that; it depends on what "account" really means. E.g. in ioquake3 games, having a new GUID (you delete a specific file to get a new one) makes you a new player.
Experienced players who cheat will still be subject to plausibility. Say there is a normal amount of variance in human play, but suddenly some player no longer has any variance in their actions. That's not plausible at all. Or a player keeps looking at things they cannot see; that might sometimes be a coincidence, but that level of coincidence suddenly changing by a drastic amount is not plausible.
Again, this sort of thing doesn't catch all subtle cheaters, but those are also not the biggest issue. It's the generic "runs into a room, beats everyone within 10ms", and "cannot see, but hits anyway all the time" type of cheat you'd want to capture automatically.
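The "no variance" signal above can be made concrete by checking the spread of a player's reaction times. The 5% coefficient-of-variation floor and the ten-sample minimum are assumed thresholds for illustration, not established constants.

```python
import statistics

def implausibly_consistent(reaction_times_ms, min_cv=0.05):
    """Flag a sample of reaction times whose spread is far below human
    variability. Humans vary shot to shot; a trigger bot often doesn't."""
    if len(reaction_times_ms) < 10:
        return False  # not enough evidence to judge
    mean = statistics.mean(reaction_times_ms)
    if mean == 0:
        return True  # zero-latency "reactions" are not human
    cv = statistics.pstdev(reaction_times_ms) / mean
    return cv < min_cv
```

As the comment says, this catches the blatant "beats everyone within 10ms" class, not the subtle togglers.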
A what-if in a tournament or the top 1% of players is such a small set of players, you'd be able to do human observation. Even then someone could cheat, but you're so far outside of the realm of general cheating, I wonder if that's worth including in a system that's mostly beneficial inside the mass market gaming players.
Either way, this sort of detection is routinely done in the financial and retail world, with highly acceptable rates and results. It's not perfect, nothing like a 100% success rate, but it's pretty successful. It's just not something studios or publishers seem to want to invest in. It's much simpler to just buy or license something (like Easy Anti-Cheat). Broad internal expertise isn't something the markets are rewarding at this point.
A CS:GO player with good gamesense will habitually keep their crosshairs at head height and aim at corners where an enemy is likely to emerge. They'll have an intuitive sense of how long it takes to run from one point on the map to another. They'll listen through walls for footsteps to try and decode where the enemy are, where they're headed to and what strategy they might be about to attempt.
To the uninitiated, it looks a lot like cheating - you peek through a window and instantly get headshotted before you've had any chance to react. To the guy who hit you, it's just basic gamesense - you did a predictable thing and he punished you for it.
But I guess the documentation and standardization are even more advanced?
the what ?!?
> A smurf is a player who creates another account to play against lower-ranked opponents in online games.
Happens in many games, including League of Legends on which people typically spend a lot of money.
There's also the multi-world randomiser community, where people network a bunch of emulators together, and finding an item in one game can actually unlock something else in another player's game.
The problem isn't cheating itself, the problem is players feeling like they have been cheated (and thus not buying micro transactions in the future).
If you can limit player actions to things that look plausibly human, fewer players will feel cheated and they will be less likely to drop out.
This system would be put in place on top of existing systems, and if implemented as I have described it could be done fairly cheaply from an operational perspective (getting it off the ground will require a good bit of dev time).
If you had Elo-based matchmaking (that dropped matches where the player performed far below their previous level, to prevent sandbagging), a cheater with "perfect play" would end up only playing against other cheaters after a time.
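The convergence mechanism is visible in the standard Elo update itself: a player who never loses keeps climbing until the only opponents at their rating are other never-losers. A minimal sketch using the textbook formulas (the K-factor of 32 is a conventional choice, not this game's):

```python
def expected_score(r_a, r_b):
    """Standard Elo expected score for player A against player B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_a, r_b, a_won, k=32):
    """One Elo rating update for player A after a match against B."""
    s = 1.0 if a_won else 0.0
    return r_a + k * (s - expected_score(r_a, r_b))

# A cheater with "perfect play" who never loses against 1500-rated
# opponents climbs steadily out of the legitimate player pool:
r = 1500.0
for _ in range(50):
    r = update(r, 1500.0, a_won=True)
```

The sandbagging clause in the comment exists precisely because a cheater can otherwise throw matches to reset `r` and farm legitimate players again.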
How does CoD accomplish this, or other games that use similar strategies? I can't wrap my mind around how you could do this effectively while also not identifying hackers for the purpose of banning. Banning = a cheater buying another license to the game; I thought they liked banning people for that reason :/
Or you could spend a huge effort on cheatproofing only to find that no-one plays your game in the first place, e.g. Concord. I imagine getting cheaters in your game often falls into the "nice problem to have" category and it is easy to kick the can down the road.
Ha ha, you mean paying for the game and holding your Steam account as collateral?
You just described most competitive games (even vaguely so), and 100% of esports.
Any game I pay for that pressures me to pay with micro transactions already makes me feel like I've been cheated. "Free" to play games might be motivated that way though.
Although I doubt it would stop cheating, making sure that players can't do impossible things is absolutely a good idea and something that should have been done ages ago.
The best solution to avoid cheating is to play with people you know. Expecting a good time when playing with internet randos from all over the globe is maybe too optimistic.
Demomen on the other hand use an aimbot so they can hit you with those parabolic projectiles in the face, even if you're behind a wall and they can't see you at all.
The only trace of it is that your account profile will show that you have VAC bans on record, but you don't have to show your profile.
I suppose that matters less if we're doing checks on the actual data, but for the player base, you cannot rely on what the game reports about the experience of your opponent, which makes for very confusing matchups (and the accusations that go with it).
Like level up without getting XP by playing? That renders it pretty useless.
Speaking of, I hate games that are "pay to win".
But if anti-cheat is able to advance to the point that a cheater can merely rise up the ranks by 10%, then, if you think about it... in a lot of ways the problem is solved. When I'm playing in a match, and one of the players is in the 80th percentile by their own merits, and another is "naturally" a 60th-percentile player but is cheating their way up to an 80th-percentile player somehow... and if they can't see through walls or insta-headshot across the map or do anything else that blatantly violates the rules, they just play a little better... what's the actual difference?
There is some. It's not zero. If you can't get those cheaters under control in tournament play the situation will normalize to everyone using a cheat just to keep up in a Red Queen's race, and that's still bad for other reasons.
But it isn't the same impact as playing with Sir Snipes-A-Lot who headshots you through three walls the instant your spawn invulnerability wears off, either.
Hilarious, and shitty.
https://help.steampowered.com/en/faqs/view/571A-97DA-70E9-FF...
> Q: Can I use bans in other games to block users from playing in my game?
> A: No. VAC and Game bans should only prevent the user from playing on VAC secured servers in the game they received a ban in. A permanent ban should only be issued for your game if the user was caught cheating in your game.
https://partner.steamgames.com/doc/webapi/ICheatReportingSer...
It's complicated. Valve has conflicting guidance on this. What is Valve's actual position? The 13-year-olds who cheat also buy IAP. In their opinion, if there are a lot of cheaters, sell pay-to-win items.
Otherwise, the consensus is hellbanning, meaning putting all the cheaters together on a server, and VAC queries are used to achieve that.
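A hedged sketch of the "VAC queries" part, using Steam's `ISteamUser/GetPlayerBans` Web API (the endpoint and response fields come from Valve's Web API documentation; the pooling policy itself is this sketch's assumption). Only the response handling is shown, so no network call or API key is needed here.

```python
# Endpoint documented by Valve; requires a Web API key and a
# comma-separated `steamids` parameter in a real request.
API_URL = "https://api.steampowered.com/ISteamUser/GetPlayerBans/v1/"

def cheater_pool(ban_response):
    """Given a decoded GetPlayerBans response, return the SteamIDs to
    match together on the hellban servers (policy is an assumption)."""
    flagged = []
    for player in ban_response["players"]:
        if player["VACBanned"] or player["NumberOfGameBans"] > 0:
            flagged.append(player["SteamId"])
    return flagged
```

A matchmaker would call this per lobby candidate list and route flagged IDs into their own queue.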
One was from letting my friend use my Steam account. I wasn't using it, and when I wanted to use it again my password had been changed and I had a VAC ban in CS 1.6. He said it wasn't him; I'm not convinced.
The other was in Dungeon Defenders. The game had a confusing policy where you were allowed to cheat on the "Open" servers but not on "Ranked". You could copy your stuff from Ranked to Open, so I copied it and used Cheat Engine to test some things. It turned out you were only allowed to cheat using mods from the Steam Workshop or something like that, so I got VAC banned.
Both bans are over 10 years old, so things might have changed, but I have never noticed any negative effects other than, obviously, that I can't play DD or CS 1.6 online.
The cheating server situation is a similar concept to hell banning but poorly executed.
Hell banning is the status quo. If you try to play Overwatch they probably query VAC and might match make you with other people with VAC bans.
It’s hard to know without working for the game studio.
There is no hard technical solution to preventing cheating for many games. It depends on how you define insurmountable DRM or anti-piracy measures, such as operating the only copy of the game's backend server code. If people have no viable alternative to playing on your remote servers, then you have an anti-cheat solution. The net result is that all games, in a Darwinian way, start to look like this. Similarly, on PS5 you cannot practicably pirate games, so there is a vibrant single-player business.
It all goes back to: are the only valid limitations on users insurmountable DRM? If we enforced copyright infringement in this or any country it would be a different story.
Seems strange that they would discriminate based on VAC bans in-game but not for the people selected to judge others. Then again, maybe my bans were too old.