Cybersecurity is a matter of national security, yet we currently sacrifice it for the convenience of companies.
The disconnect is that a company is both (1) the only entity in control of its systems and how they are tested, and (2) not liable when a security breach does happen.
I believe we need to enable red teams (security researchers) to test the security of any system, with or without permission, so long as they report responsibly and avoid obviously destructive behavior such as sustained DDoS attacks.
A branch of the government, possibly of the military (the Space Force?), could constantly be trying to hack the most important systems in our nation, including those of private companies and individuals. The bad guys are doing this anyway; hopefully the good guys could find the security holes first and report them responsibly.
Again, currently this doesn't happen because it would be embarrassing and inconvenient for powerful companies. We threaten researchers who do nothing more than press F12 (view the HTML source) with jail time, then have our best surprised-Pikachu faces ready for when half the nation's data is stolen every week or major systems go down. Actually, we don't make faces at all; we don't even take notice when half the nation's data is stolen every week. We just accept it as the way things have to be. Because, after all, we can't expect companies to be liable, but we can trust them to have exclusive control over the testing of their own security. How convenient for them.
CISA offers services, including pen testing, to public and private operators of infrastructure deemed critical, but it doesn't have the resources to offer them to everyone who wants them.
Isn't this what the NSA is for? Also, I think we have plenty of reason to believe they regularly try to penetrate powerful companies; they just don't necessarily tell us when they do.
I don't have citations on hand, but it's commonly held that the NSA fixed the S-boxes in IBM's "Lucifer" cipher design for DES to improve its resistance to differential cryptanalysis, which was not publicly known at the time.
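(To make "resistance to differential cryptanalysis" concrete: you can measure it by building an S-box's difference distribution table, which counts, for every input difference, how often each output difference occurs; the flatter and lower the counts, the less bias an attacker has to exploit. A minimal sketch in Python, using the S1 table published in FIPS 46; the sbox helper and variable names are mine, but the row/column bit layout follows the standard:)

    # Difference distribution table (DDT) for DES S-box S1 (FIPS 46).
    # DES S-boxes map 6 bits to 4 bits: the outer two input bits pick
    # the row, the inner four pick the column.
    S1 = [
        [14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7],
        [0, 15, 7, 4, 14, 2, 13, 1, 10, 6, 12, 11, 9, 5, 3, 8],
        [4, 1, 14, 8, 13, 6, 2, 11, 15, 12, 9, 7, 3, 10, 5, 0],
        [15, 12, 8, 2, 4, 9, 1, 7, 5, 11, 3, 14, 10, 0, 6, 13],
    ]

    def sbox(x):
        row = ((x >> 4) & 2) | (x & 1)   # bits 5 and 0 of the 6-bit input
        col = (x >> 1) & 0xF             # bits 4..1
        return S1[row][col]

    # ddt[a][b] = number of 6-bit inputs x with sbox(x) ^ sbox(x ^ a) == b
    ddt = [[0] * 16 for _ in range(64)]
    for a in range(64):
        for x in range(64):
            ddt[a][sbox(x) ^ sbox(x ^ a)] += 1

    # Skip the trivial a = 0 row, which maps to b = 0 for all 64 inputs.
    worst = max(max(row) for row in ddt[1:])
    print(f"worst differential: {worst}/64 inputs")

(The same loop works for any S-box you drop in, so you can compare the tweaked DES boxes against a random one.)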
Of course they also crippled the key length to 56 bits...
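(Crippling in a very quantifiable way: 2^56 keys is small enough to brute force. EFF demonstrated this in 1998 with their "Deep Crack" machine, which searched on the order of 90 billion keys per second, a figure I'm quoting from memory, and recovered a real DES key in a couple of days. Back-of-the-envelope:)

    # Back-of-the-envelope: time to exhaust a 56-bit keyspace.
    keyspace = 2 ** 56   # 72,057,594,037,927,936 keys
    rate = 90e9          # keys/sec, roughly EFF's 1998 Deep Crack machine
    days = keyspace / rate / 86400
    print(f"worst case: {days:.1f} days; expected: {days / 2:.1f} days")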
They absolutely have bugs up their sleeve, but if they disclose those bugs to the companies so they can be fixed, they can no longer use them for spying (or at least not as effectively).