
This seems very much the beginning of the situation predicted by Aschenbrenner in [1], where the AI labs eventually become fully part of the national security apparatus. It will be fascinating to see whether the other major AI labs also add ex-military figures to their boards, or whether this is unique to OpenAI.

Or conceivably his experience is genuinely relevant on its own merits, unconnected to the US national security apparatus and not a sign of the times.

[1] situational-awareness.ai



LLMs are exactly what that NSA datacenter in Utah was built for.

It's gonna be wild to see what secret needles come out of that haystack.


At least 12 exabytes of mostly encrypted data, waiting for the day that the NSA can decrypt it and unleash all of these tools on it.

Whenever that day comes (or came), it will represent a massive shift in global power, on par with the Manhattan Project in scope and consequences.


I've thought the same.[0]

Soon, if not already, they'll be able to just ask questions about people.

"Has this person ever done anything illegal?"

Then the tools comb through a lifetime of communications intercepts looking for that answer.

It's like the ultimate dirt finder, but without the outsized manual human effort that used to ensure it was largely only abused against people of prominence.

[0] https://news.ycombinator.com/item?id=35827243



You don't really need a person inside the LLM provider to just use the LLM tech. This is more than that.


They’re already filled with foreign spies, so we may as well have our own in there too…


The NSA was using AI long before LLMs arrived.


It's less about the NSA having AI capabilities and more the inverse: the NSA having access to people's ChatGPT queries. Fast-forward a few years and I suspect people will be "confiding" a ton in LLMs, so the NSA will have a lot of useful data to harvest. (This holds in general, regardless of whether they hire an ex-spook, BTW; I imagine it will be just like what they do with email, phone calls, and general web traffic: slurping all the data permanently into their giant datacenters and running all kinds of analysis on it.)


I think the use case here is LLMs trained on billions of terabytes of bulk surveillance data. Imagine an LLM that has been fed every banking transaction, text message, or geolocation ping within a target country. An intelligence analyst can now get the answer to any question very, very quickly.


> I suspect people are going to be "confiding" a ton in LLMs

They won't even need to rely on people using ChatGPT for that if things like Microsoft's "Recall" are rolled out and enabled by default. People who aren't privacy conscious will not disable it or care.


Why do you assume the NSA has ChatGPT queries?


Why wouldn’t they, after the Snowden revelations?


Because ChatGPT is a sizable domestic business, and most large data collectors are enrolled in the NSA's PRISM program whether they like it or not.


Probably, but so did a lot of people. Computer vision and classifier/discriminator models were pretty common in the 2000s and extremely feasible with consumer hardware in the 2010s.



