This seems very much like the beginning of the situation predicted by Aschenbrenner in [1], where the AI labs eventually become fully part of the national security apparatus. It will be fascinating to see whether the other major AI labs also add ex-military folks to their boards of directors, or whether this is unique to OpenAI.
Or conceivably his experience is genuinely relevant on its own merits and has nothing to do with US national security going forward, completely unconnected to the governmental apparatus and not a sign of the times.
At least 12 exabytes of mostly encrypted data, waiting for the day that the NSA can decrypt it and unleash all of these tools on it.
Whenever that day comes (or came), it will represent a massive shift in global power, on par with the Manhattan Project in terms of scope and consequences.
Soon, if not already, they can just ask questions about people.
"Has this person ever done anything illegal?"
Then the tools comb through a lifetime of communications intercepts looking for that answer.
It's like the ultimate dirt finder, but without the outsized manual human effort that previously ensured it was mostly only abused against people of prominence.
It's less about the NSA having AI capabilities and more the inverse: the NSA having access to people's ChatGPT queries. Especially if we fast-forward a few years, I suspect people are going to be "confiding" a ton in LLMs, so the NSA is going to have a lot of useful data to harvest. (This holds in general, regardless of them hiring an ex-spook, BTW; I imagine it's going to be just like what they do with email, phone calls and general web traffic, namely slurping all the data permanently into their giant datacenters and running all kinds of analysis on it.)
I think the use case here is LLMs trained on billions of terabytes of bulk surveillance data. Imagine an LLM that has been fed every banking transaction, text message, or geolocation ping within a target country. An intelligence analyst can now get the answer to any question very, very quickly.
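To make that pattern concrete, here is a minimal, purely hypothetical sketch of what "ask a question, comb the records" could look like: naive retrieval over a toy record store, assembled into a prompt for an LLM (the model call itself is omitted). The record schema, data, and function names are illustrative assumptions, not anything from a real system.

```python
from typing import Dict, List

# Toy stand-in for a bulk store of intercepted records (assumed schema).
records: List[Dict[str, str]] = [
    {"type": "transaction", "subject": "alice", "text": "wire transfer of 9,500 USD to offshore account"},
    {"type": "message", "subject": "alice", "text": "meet at the usual place at 9"},
    {"type": "geolocation", "subject": "bob", "text": "pinged cell tower near the border crossing"},
]

def retrieve(subject: str) -> List[str]:
    """Naive retrieval: pull every record about the subject.
    A real system would use semantic search over vastly more data."""
    return [f"[{r['type']}] {r['text']}" for r in records if r["subject"] == subject]

def build_prompt(question: str, context: List[str]) -> str:
    """Assemble the retrieved records plus the analyst's question into a
    single prompt that would then be sent to an LLM (call omitted here)."""
    joined = "\n".join(context)
    return f"Records:\n{joined}\n\nQuestion: {question}\nAnswer based only on the records above."

if __name__ == "__main__":
    prompt = build_prompt("Has this person done anything suspicious?", retrieve("alice"))
    print(prompt)  # In the scenario described above, this prompt would go to the model.
```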
> I suspect people are going to be "confiding" a ton in LLMs
They won't even need to rely on people using ChatGPT for that if things like Microsoft's "Recall" are rolled out and enabled by default. People who aren't privacy-conscious will not disable it or care.
Probably, but so did a lot of people. Computer vision and classifier/discriminator models were pretty common in the 2000s and extremely feasible with consumer hardware in the 2010s.
[1] situational-awareness.ai