I appreciate that "tools" that are used to build the final version of a module/cli/service are explicitly managed through go.mod.
I really dislike that I'm now going to have two problems: managing tools installed through a Makefile (e.g. lint) and managing tools "installed" through go.mod (e.g. mock generators, stringify, etc.).
I feel like this is, once again, a net negative for the ecosystem. With each release the Go team adds something new to manage and makes it harder to interact with other codebases. In this case, each company will have to decide whether they want to use "go tool" and when to use it. Every time I clone an open-source repo I'm going to have to check how it manages its tools.
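For anyone who has not tried it yet, the go.mod-managed flow looks roughly like this (a minimal sketch assuming Go 1.24's tool directive; stringer is just an example tool and Color a placeholder type):

    # record the tool as a dependency; this adds a `tool` directive to go.mod
    go get -tool golang.org/x/tools/cmd/stringer@latest

    # go.mod now contains a line like:
    #   tool golang.org/x/tools/cmd/stringer

    # run the version-pinned binary, e.g. from a go:generate comment
    go tool stringer -type=Color

Anything still installed from a Makefile (linters and so on) sits outside this mechanism, which is exactly the two-systems split I'm complaining about.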
My personal estimate is that this will become noticeable in big US tech organizations in the first six months of 2025.
I think this is actually already in motion in board meetings; I'm pretty sure executives are discussing something like "if we spend $Z on AI tools, how many engineers can we avoid hiring?"
Yes, some data sources already take that into account; that issue has already been solved.
Most of the uncertainty in temperature does not come from measured temperatures (weather stations) but from temperature estimates based on indirect sources. In other words, the last 50 years of data is pretty much all good.
I was intrigued by the value, so I did some research.
I would guess the $15/hour value was chosen to approximate an average gross salary. The annualized payment would be $31,200 [1], and it seems the average annual salary at the time was around $30,359.
Updated to 2022 values, the annual gross pay would be 10,033€ [3], while the current average annual gross salary is 20,483€ [4].
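For the annualization, the usual full-time assumption (mine; the sources don't spell it out) is 40 hours a week over 52 weeks:

    $15/h × 40 h/week × 52 weeks = $31,200/year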
Apple has top-notch logistics and security processes, which have mitigated the risk of supply-chain attacks in China, a country that is both willing and capable of carrying out such attacks.
Moving some production to the USA might induce some sloppiness here, due to a perceived lower risk.
Also, some security measures Apple requires of manufacturers in other countries are probably illegal in the USA.
The reason multiplayer servers implicitly trust clients is that it's a cheaper and proven (less risk) solution.
In most games, traditional anti-cheat can just be slapped on after the game is developed. If the game is very successful, you can then update it with extra paid protections provided by the anti-cheat tool.
The alternative is a local game engine that works with a partial game state, which is a challenge in itself. Even if you can make that work, you will still have to deal with people "modding" the client to gain an advantage, e.g. enemies painted red instead of camouflaged.
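To illustrate what "partial game state" means, here is a minimal hypothetical sketch (the entity names and the plain distance check are made up; a real engine would add occlusion and interest management): the server only replicates entities a player could plausibly see, so a modified client has nothing hidden to reveal.

    package main

    import (
        "fmt"
        "math"
    )

    // Entity is a hypothetical networked object with a world position.
    type Entity struct {
        ID   int
        X, Y float64
    }

    // visibleTo returns only the entities within the player's view radius.
    // A real engine would also check occlusion (walls, smoke, fog of war),
    // but the principle is the same: the client never receives state it
    // should not be able to see, so a "modded" client has nothing extra to draw.
    func visibleTo(player Entity, world []Entity, viewRadius float64) []Entity {
        var visible []Entity
        for _, e := range world {
            if e.ID == player.ID {
                continue
            }
            if math.Hypot(e.X-player.X, e.Y-player.Y) <= viewRadius {
                visible = append(visible, e)
            }
        }
        return visible
    }

    func main() {
        player := Entity{ID: 1, X: 0, Y: 0}
        world := []Entity{
            {ID: 2, X: 10, Y: 5},   // close by: replicated to this client
            {ID: 3, X: 500, Y: 40}, // far away: never sent, so no wallhack can reveal it
        }
        fmt.Println(visibleTo(player, world, 50)) // prints [{2 10 5}]
    }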
As someone working in AAA game development, I come across comments like these often, and they never fail to get under my skin. It’s like watching that infamous "Two idiots, one keyboard" scene from CSI—full of confidence, but completely detached from reality.
I don’t mean to sound harsh, but it’s tough to tackle this kind of misconception because it’s stated with such certainty that others, who also might not know any better, just take it as fact.
Here’s the thing: multiplayer servers trust clients mainly for performance reasons. In AAA game development, anti-cheat isn’t something we focus on right from the start. It typically becomes a priority post-alpha (and by alpha, I’m talking about an internal milestone that usually spans about a year, not the "alpha" most people think of, which is usually closer to an internal "beta"; "public beta" is more like release candidate 1). During that time, the tech team is constantly working on ways to secure the game (make it work, make it correct*, make it fast).
If we were to bake in anti-cheat measures from the very beginning of a project, it would force us to scale back our ambitions. Some might argue that’s a good thing, but the truth is, we’d also risk missing critical milestones like First-Playable or Vertical Slice. You simply can’t tackle everything at once—focus is a measure primarily of what you are not doing, after all.
Back when I was working on The Division, we had some deep discussions about using player analytics and even early forms of machine learning to detect "too good" players in real-time. This was in 2014, well before the AI boom. The industry's interest in new anti-cheat methods has only grown since then, I promise you this.
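To make “player analytics” a little more concrete, here is a deliberately toy sketch (nothing like our production system; the stat, sample values, and threshold are all invented): flag accounts whose per-match numbers sit far outside the population, then route them to human review rather than an automatic ban.

    package main

    import (
        "fmt"
        "math"
    )

    // flagOutliers returns the indices of players whose per-match stat (for
    // example, headshot ratio) is more than nSigma standard deviations above
    // the population mean. Flagged accounts would go to human review, not an
    // automatic ban, because a genuinely great player can land out here too.
    func flagOutliers(stat []float64, nSigma float64) []int {
        var mean float64
        for _, v := range stat {
            mean += v
        }
        mean /= float64(len(stat))

        var variance float64
        for _, v := range stat {
            variance += (v - mean) * (v - mean)
        }
        std := math.Sqrt(variance / float64(len(stat)))

        var flagged []int
        for i, v := range stat {
            if v > mean+nSigma*std {
                flagged = append(flagged, i)
            }
        }
        return flagged
    }

    func main() {
        headshotRatio := []float64{0.12, 0.15, 0.10, 0.18, 0.14, 0.95} // last value is suspicious
        fmt.Println(flagOutliers(headshotRatio, 2))                    // prints [5]
    }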
At the end of the day, games are all about delivering an experience. That’s the priority, and a solid anti-cheat system is key to ensuring it. Endpoint security is currently the best solution we have because it doesn’t bog down the client with delays or force awkward mechanics like rollbacks or lock-step processing. Plus, it lines up with the (very heavy) optimisations we already do for consoles.
Nobody in this industry wants to install a rootkit on your PC if we can avoid it. It’s just the best trade-off (for all parties, especially gamers) given the circumstances. And let's be clear—these solutions are far from cheap. We pay a lot to implement them, even if some marketing material might suggest otherwise.
Did The Division have an anti-cheat when it was released? I remember it being really bad some time after release, like a few steps above most other games in both the number of hackers and their abilities (not just the usual aimbot/ESP).
Yes, we did, but it wasn’t good enough (it was the machine-learning system I talked about). We later added EAC as well; the situation improved, but cheating was still rampant.
Makes sense: with an ineffective AC and few server-side checks, I think the community consensus was that there was no AC at all. I played the Dark Zone quite a bit, kind of the first in the raid looter-shooter genre. Had a lot of fun with the jumping-jacks "bug".
It's really hard to tell if someone's cheating based on the things you can check, because it can look like low ping or just a slightly better-than-average player. In those cases, our genuinely best players might accidentally trigger it (which has happened).
There are egregious examples of cheating, sure, but those people are always banned within the hour.
The real killer was the free weekends: they make it so that there is no “cost” to cheating for a while, since being banned on a fresh account has no meaning.
> It’s just the best trade-off (for all parties, especially gamers)
I fail to see how pimping out my PC to code that no one can verify is a good deal. The takeaway is: have separate hardware to play games on and don't let it touch anything private?
> because it's a cheaper and proven (less risk) solution
I mean... didn't you just essentially say he's right? Things are done the way they are because of performance (aka "cheaper") and to meet project goals (aka "less risk")
Those aren't bad reasons at all, and it makes perfect sense, especially when you consider already locked-down platforms like consoles. But it seems to me, from what I read here, that the reasons are ultimately cost and risk.
For me, this recently started working! Do you have both Portuguese and English configured as languages in your iOS settings? I believe that's what motivates Siri to try to interpret a message in the other language as well when reading notifications.