> If we make something that will functionally become an intellectual god after 10 years of iteration on hardware/software self-improvements, how could we know that in advance?
There is a fundamental difference between intelligence and knowledge that you're ignoring. The greatest superintelligence can't tell you whether the new car is behind door one, two or three without the relevant knowledge.
Similarly, a superintelligence can't know how to break into military servers solely by virtue of its intelligence - it needs knowledge about the cybersecurity of those servers. It can use that intelligence to come up with good ways to get that knowledge, but ultimately those require interfacing with people/systems related to what it's trying to break into. Once it starts interacting with external systems, it can be detected.
A superintelligence doesn't need to care which door the new car is behind because it already owns the car factory, the metal mines, the sources of plastic and rubber, and the media.
Also, it actually can tell you which door you hid the car behind, because unlike in the purely mathematical game, your placement isn't random and your doors aren't perfect. Between humans being quite predictable (especially when they try to behave randomly) and the environment leaking information left and right in thousands of ways we can't imagine, the AI will have plenty of clues.
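To make the "humans are predictable when they try to be random" point concrete: even a tiny context model beats a coin flip against a sequence with the well-documented over-alternation bias. This is a toy sketch on synthetic data (the 0.7 alternation rate and the generator are assumptions standing in for a real human), not a claim about any particular predictor:

```python
from collections import defaultdict
import random

def predict_next(history, k=2):
    """Predict the next symbol as the most common continuation of the
    last k symbols seen so far; fall back to a coin flip if the context
    is new."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(history) - k):
        counts[tuple(history[i:i + k])][history[i + k]] += 1
    ctx = tuple(history[-k:])
    if counts[ctx]:
        return max(counts[ctx], key=counts[ctx].get)
    return random.choice("HT")

# Hypothetical "human" flipper: people trying to look random tend to
# alternate too often, so this one switches sides 70% of the time.
rng = random.Random(0)
def human_flip(prev):
    return ("T" if prev == "H" else "H") if rng.random() < 0.7 else prev

seq, prev = [], "H"
hits = total = 0
for _ in range(2000):
    nxt = human_flip(prev)
    if len(seq) > 10:                 # small warm-up before predicting
        hits += predict_next(seq) == nxt
        total += 1
    seq.append(nxt)
    prev = nxt
print(f"prediction accuracy: {hits / total:.0%}")  # well above 50%
```

Against a truly random sequence this would hover at 50%; the gap it opens here comes entirely from the human-like bias.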
I mean, did you make sure to clean the doors before presenting them? That tiny layer of dust on door number 3 all but eliminates it as a possible choice. Oh, and it's clear from the camera image that you get anxious when door number 2 is mentioned - you do realize you can take pulse readings by timing the tiny changes in skin color that the camera just manages to capture? There was a paper on this a couple of years back, from MIT if memory serves. And it's not particularly surprising - there's a stupid amount of information entering our senses - or being recorded by our devices - at any moment, and we absolutely suck at making good use of it.
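The pulse-from-video trick (the MIT paper was likely the Eulerian Video Magnification work) boils down to treating per-frame skin color as a time series and looking for a dominant frequency in the plausible heart-rate band. A minimal sketch on a synthetic signal rather than real video - the frame rate, band limits, and embedded 1.2 Hz "pulse" are all assumptions for the demo:

```python
import numpy as np

def estimate_pulse_hz(green_means, fps, lo=0.8, hi=3.0):
    """Estimate pulse frequency from per-frame mean green-channel values.

    Take the FFT of the detrended series, restrict to the plausible
    heart-rate band (lo..hi Hz, i.e. 48-180 bpm), and return the
    frequency of the strongest component.
    """
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()                       # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic demo: a tiny 1.2 Hz color oscillation (72 bpm) buried in noise.
fps, seconds = 30, 20
t = np.arange(fps * seconds) / fps
signal = 0.02 * np.sin(2 * np.pi * 1.2 * t)
noise = 0.02 * np.random.default_rng(0).standard_normal(t.size)
pulse = estimate_pulse_hz(signal + noise, fps)
print(f"estimated pulse: {pulse * 60:.0f} bpm")
```

The real papers work from actual face pixels and do spatial averaging and temporal filtering more carefully, but the core signal-processing step is about this simple.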
Maybe the superintelligence builds this cool social media platform that fosters a toxic atmosphere where democracy is taken down, and from there all kinds of bad things ensue.