I suppose the memory being mapped twice could be detected by anti-cheat, though. You can then add further mitigations to prevent detection of the mapping (e.g. hooking the syscall that enumerates active mappings), but it’s always a cat-and-mouse game.
You are right on the first point, but I don't think you are right on the syscall part: a kernel-level module can just read the page tables (PT) directly without resorting to a syscall, no? You get access to CR3, and besides, kernel-level page tables have a fixed logical (and, if memory serves, physical, though maybe just on Windows) address.
In non-PAE mode, I think one could still practically trigger page faults on attempted reads of the PDEs mapped by a kernel-level aimbot, force TLB flushes when the anti-cheat tries to read the PT, and effectively conceal the cloning (although if the anti-cheat does this often enough, the performance impact might be too much?).
When in PAE mode, I do not know of a practical way to do it, but I haven't been researching such exploits for a few years now.
I think at this point, the most practical way to implement an open-source, undetectable aimbot proof-of-concept would be to statically reverse the game engine to get the network protocol, perform a MITM to listen in and recreate state in a separate process or on a separate machine, and do PnP input injection via a real or fake mouse/keyboard.
Reversing the code (as opposed to the memory structures) is very hard these days, though - not because of anti-cheat software, but because of the high-end anti-piracy runtimes and the layers upon layers of abstraction that are annoying to analyze in assembly. (But sure, not impossible, and I am sure people are doing this, considering the crazy amounts of money people are willing to pay for private aimbots.)
For a systems programming geek it's all very interesting and intellectually stimulating, but boy does it ruin the fun of multiplayer gaming :-( I think the best way to protect against cheaters would be to run streaming-only servers where all the processing happens server-side.
I don’t think speculation about AGI is possible on a rigorous mathematical basis right now. And people who do expect AGI to happen (soon) are often happy to be convinced by much poorer types of argument and evidence than presented in this paper (e.g. handwaving arguments about model size or just the fact that ChatGPT can do some impressive things).
I think the biggest fallacy in this type of thinking is that it projects all AI progress into a single quantity of “intelligence” and then proceeds to extrapolate that singular quantity into some imagined absurd level of “superintelligence”.
In reality, AI progress and capabilities are not so reducible to singular quantities. For example, it’s not clear that we will ever get rid of models’ tendency to sometimes just produce garbage or nonsense. It’s entirely possible that progress now settles into more incremental improvements, and I think the bogeyman of “superintelligence” needs to be defined much more clearly rather than by extrapolating some imagined quantity. Or maybe we reach a somewhat human-like level, but not this imagined “extra” level of superintelligence.
Basically the argument is something to the effect of “big will become bigger and bigger, and then it will become like SUPER big and destroy us all”.
Why is it all about doing more and not about doing less? I sincerely doubt that a technological solution is possible that would not require any reduction/change in our current lifestyles. Where is the evidence that we only need a sufficient amount of science and engineering to prevent disastrous climate change?
Moreover, merely developing the technology is not necessarily enough to get it actually adopted. Lobbyists will fight to protect established interests from industries like oil and gas. There will be narratives about how such new technologies destroy jobs, in an attempt to sway voters.
Ultimately, not enough change is going to happen without political will. That is because fighting climate change is not going to make everyone happy. There will be losers, and there is never going to be some magical piece of technology that will let us have our cake and eat it too. Creating political will requires more than just clever engineering, and people like Greta Thunberg can help motivate and inspire large groups of people to create that political will.
> Moreover, merely developing the technology is not necessarily enough to get it actually adopted.
The city I live in runs on 100% renewable power. It's wind and solar, generated both locally and remotely.
One of the solar farms is not far from where I live. Year by year more people are putting solar on their homes. People like generating their own power.
Look what's happening around the world. Pick some countries. Kenya's power generation, for example, is 80% renewable.
You can have some seriously in-depth discussions in sociology or philosophy as well, discussions that would be hard to follow for someone with just a STEM background. The kind of sociology or philosophy one discusses with friends, without preparation or reading, is not really comparable to the academic disciplines. It is just armchair philosophy/sociology.
I have a feeling many technically minded people would struggle to read and write an essay about Heidegger or Habermas. In fact, I doubt most people are even aware of that type of philosophy, never having delved into it very deeply, and yet as outsiders to such fields they feel confident in claiming that those fields are "easier." (And usually there is also the implication that being "easier" makes those fields less serious or worthwhile.)
If my non-STEM sample were able to engage with armchair science in the same way that the STEM sample can engage with armchair philosophy, I would agree with you.
However, that's not the case.
I also find more STEM people around me reading entry-level materials in philosophy (like Moral Letters to Lucilius) than I find non-STEM people reading entry-level materials in science (like A Brief History of Time).
The ratio is hugely disproportionate, and while I understand this is all anecdotal, I have a large and very diverse social circle to observe, spanning a wide range of ages, social backgrounds, political preferences, sexual orientations, and jobs.
I think times are also just different now. These days a lot of the talent just gets vacuumed up by the Big Tech companies, where they can be a cog in some wheel.
You seem to be ascribing some moral failure to the younger generations whereas there are also factors at play that go beyond individuals. It might be more sensible to say that the internet could not be invented in the current economic/social climate.
One thing that always strikes me about such libraries is that they are often relatively easy to use for common scenarios like that, and the mental model is really not so complicated or abstract as to require category theory. When you use such libraries, I don't think you are really doing deeply category-theoretic thinking; it's more a matter of seeing how the types fit together and maybe thinking of some higher-order functions that you want to combine and apply.
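To make that concrete, here's a toy sketch in plain base Haskell (not tied to any particular library) of the kind of "do the types fit together?" thinking I mean:

```haskell
import Text.Read (readMaybe)

-- readMaybe :: Read a => String -> Maybe a
-- traverse specialises here to (String -> Maybe Int) -> [String] -> Maybe [Int];
-- no category theory required, just checking that the types line up.
parseAll :: [String] -> Maybe [Int]
parseAll = traverse readMaybe

main :: IO ()
main = do
  print (parseAll ["1", "2", "3"]) -- Just [1,2,3]
  print (parseAll ["1", "oops"])   -- Nothing
```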
If you take Haskell and remove all the types (e.g. Scheme or JavaScript), you can have all the same abstractions, with basically the same syntax and operational behavior. But then of course the operational behavior is just applying/composing functions in the end... And because there are no higher-kinded types to deal with, category-theoretic language is less likely to sneak in.
I say this as someone who has used Haskell for years and studied category theory and PL theory as well. And I like all of those things; I just think it's often not really necessary to grasp that mental model in order to use those tools.
I think that some code duplication is also not always the worst thing. Just because strings can be "combined" through concatenation and numbers can also be "combined" through addition does not necessarily mean you need one general notion/function to capture both, especially because if you take that to an extreme then you end up with some function that "sqoogles the byamyams" (i.e. the abstraction has become so general that the only way people really understand/use it is by looking at its concrete instantiations).
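To make the string/number point concrete: that is basically what the Monoid class captures, and here is a minimal sketch using only the standard library:

```haskell
import Data.Monoid (Sum (..))

-- Strings "combine" by concatenation, numbers by addition (via the Sum
-- wrapper); Monoid gives one name, mconcat, for both.
greeting :: String
greeting = mconcat ["foo", "bar", "baz"] -- "foobarbaz"

total :: Int
total = getSum (mconcat [Sum 1, Sum 2, Sum 3]) -- 6

main :: IO ()
main = do
  putStrLn greeting
  print total
```

Whether the shared name is worth it is exactly the trade-off: past a certain level of generality, people mostly understand the abstraction through concrete instances like these anyway.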
> If you take Haskell and remove all the types (e.g. Scheme or JavaScript), you can have all the same abstractions, with basically the same syntax and operational behavior.
This isn't generally true, as such a language would have to be lazy by default, and that property matters more than having all the types, because the ability to express certain abstractions "naturally" in Haskell is a byproduct of non-strict semantics. It's also the property that sometimes helps avoid worst-case evaluation costs - something that eager languages have to live with at all times, no matter the abstractions.
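A small self-contained sketch of what I mean (nothing library-specific):

```haskell
-- A user-defined conditional behaves like the built-in one only because
-- Haskell is non-strict: the branch not taken is never evaluated.
myIf :: Bool -> a -> a -> a
myIf True  t _ = t
myIf False _ e = e

main :: IO ()
main = do
  print (myIf True 1 (error "never forced"))
  -- Infinite structures are the other classic example: only the
  -- demanded prefix is ever computed.
  print (take 5 (filter even [1 ..]))
```

In an eager language the `error` argument would be evaluated before `myIf` ever ran, and the infinite list would never finish being built, so the "same abstractions without the types" claim breaks down there.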
I think that it's also important to realize that maximizing "health" is not some kind of absolute goal. Not every aspect of life needs to be optimized to the highest level.
Of course, obesity is a huge issue (especially in the U.S. compared to many other "developed" countries) that can affect people's lives negatively, cause further medical issues such as diabetes, and ultimately prevent people from leading a life as fulfilling and meaningful as they would have liked.
But we are still dealing with people here, not rats in a laboratory experiment, and the issues that directly follow from being obese are already bad enough. It does not help to pile on more shame by treating those people as "weak-willed" or something of the sort, or to deny them basic human dignity and respect for being outside the sacred norm. Do we really have to add artificial negative consequences for being overweight? Does that help those people lead a more fulfilling and meaningful life?
I don't think people will simply forget the direct negative physical/social consequences of being overweight just because they are not constantly reminded of them in a moralistic tone (and even merely reminding people of such information can be moralizing, depending on the context in which it is provided).
I am personally not a big believer in the idea that art is just a means to an end (even if that end is something we consider very valuable). This article uses the words "purpose" and "goal" a lot, and it is that aspect I disagree with, although I would agree that art is a phenomenon that can only be considered properly inside its social context.
That is, saying that a painting is just a tool for the painter to evoke certain emotions or experiences in the viewer (or social goals for the painter, like wealth and status), just as a hammer is a tool for driving nails into wood, is to reduce the painting to a mere instrument.
Art is meaningful to people, and I think that cannot be reduced to art being just a goal-directed tool we use to manipulate/control the world or people around us.
While artists may hope their art will also fulfill other goals, I wouldn't want to live in a world without art, where those goals are instead fulfilled by expending equivalent effort to turn the crank of some machine. I suspect neither would the author of this article.
I agree. I often see the top 1% who are really successful in art using art as the means rather than the end. Note how mediocre their art is compared to their other great skills, like networking (e.g. current pop music), with very rare exceptions.
It's not the case, though, that what's most "productive" in the economic sense is necessarily also the most socially desirable outcome. The economy might be optimizing for some objective of productivity (though it's probably imperfect at that by any measure), but I don't think we can reduce it to something as simple as "more overall money / goods / services = better" unless we also look in more detail at how those are distributed among different groups of people and what kind of real value they provide to them.
Of course. But that is, again, in the purview of the political arena. Nevertheless, society’s resources are limited and need to be allocated; hence the work that many do in finance.
A better way is always welcome, but I have yet to read about it or see it.