Reading my comment over, I realize that I wasn't so clear.
There are two almost unrelated issues:
AT&T has poor security - agreed.
Security through obscurity is a universal evil - not so fast. Quick example - you have ciphertext where you don't know the key vs. the same ciphertext where you don't know the key AND you don't know the algorithm. The latter is more secure, because it's harder to brute force.
The reason security through obscurity is usually bad is because it causes people to make poor assumptions - "He'll never guess I encrypted it with rot-15 instead of rot-13," but for a given secure system, adding obscurity will make it harder to break. But it's the poor assumptions that do you in, not an inherent flaw in adding obscurity.
The reason you use widely published encryption algorithms is because they've been vetted for poor assumptions. They need to be open to be vetted, not to be secure, and we've found that's always been a good tradeoff.
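To put rough numbers on the brute-force point above (all figures here are my own illustrative assumptions, nothing from the AT&T story):

    # Back-of-the-envelope sketch: brute-forcing a 56-bit key when the
    # algorithm is known, vs. when the attacker must also try each of,
    # say, 100 candidate ciphers. Every number here is an assumption.
    guesses_per_second = 10**9          # assumed attacker speed
    key_space = 2**56                   # a DES-sized key space
    candidate_algorithms = 100          # assumed number of plausible ciphers

    seconds_per_year = 3600 * 24 * 365
    known_alg_years = key_space / guesses_per_second / seconds_per_year
    unknown_alg_years = known_alg_years * candidate_algorithms

    print(f"algorithm known:   ~{known_alg_years:.1f} years")    # ~2.3
    print(f"algorithm unknown: ~{unknown_alg_years:.1f} years")  # ~228.5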
"The reason you use widely published encryption algorithms is because they've been vetted for poor assumptions. They need to be open to be vetted, not to be secure, and we've found that's always been a good tradeoff."
True. Most people (including Schneier, Ferguson, Rivest, etc.) agree that the NSA can build secure systems without publishing them. This is because they have a veritable army of cryptographers at their disposal. Peer review is the most important part of cryptographic development, and the key point is that there is probably no other entity in the United States that can satisfy that requirement in-house. AT&T certainly does not have an impressive cryptographic department, and they shouldn't pretend that they do.
"The reason security through obscurity is usually bad is because it causes people to make poor assumptions - "He'll never guess I encrypted it with rot-15 instead of rot-13," but for a given secure system, adding obscurity will make it harder to break. But it's the poor assumptions that do you in, not an inherent flaw in adding obscurity."
I don't think anyone would argue that the obscurity in the algorithm is the weakness. However, obscurity can never make a secure algorithm more secure. If your algorithm and key space are sufficient to prevent decipherment before the heat death of the universe, the two months it takes to reverse engineer the protocol are as close to zero as makes no difference.
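To put equally rough numbers on that (again, assumed figures of my own, just to show the scale):

    # Rough illustration of the "heat death of the universe" point:
    # brute-forcing a 128-bit key vs. two months of reverse engineering.
    guesses_per_second = 10**12         # generously fast assumed attacker
    seconds_per_year = 3600 * 24 * 365

    brute_force_years = 2**128 / guesses_per_second / seconds_per_year
    reverse_engineering_years = 2 / 12  # the two months mentioned above

    print(f"brute force:         ~{brute_force_years:.2e} years")        # ~1.08e+19
    print(f"reverse engineering: ~{reverse_engineering_years:.2f} years") # 0.17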
"However, obscurity can never make a secure algorithm more secure."
If you're talking about the security of the algorithm, fine. But what matters is the security of the whole system, and the algorithm is seldom the problem. If it takes two months to find the flaw in the key management, then the two months of obscurity you added just doubled the time to break in.
I still say you should use publicly vetted systems - but the community is in denial about the value (second-rate value, but value nonetheless) of security through obscurity.
Case in point: when Slashdot first released their source code, they didn't escape quotes in passwords, so it was possible to log in as an admin using an appropriately crafted SQL statement. Sure, you could have figured out what the input needed to be via trial and error before the code was released, but I was lazy. Releasing the code meant that I could now break into something I wouldn't have tried to break into before. The obscurity protected them from a certain threat model. It was still much better when they fixed the bug, of course.
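For anyone who hasn't seen the class of bug I mean, here's a minimal sketch (hypothetical table and names - the real Slashdot code was Perl, and I'm not reproducing it):

    # Sketch of an unescaped-quote login bug (hypothetical schema and names).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('admin', 'hunter2')")

    def vulnerable_login(name, password):
        # BAD: interpolates user input straight into the SQL with no escaping,
        # so a quote in the password breaks out of the string literal.
        query = (f"SELECT count(*) FROM users "
                 f"WHERE name = '{name}' AND password = '{password}'")
        return conn.execute(query).fetchone()[0] > 0

    def safer_login(name, password):
        # Parameterized query; the driver handles quoting.
        query = "SELECT count(*) FROM users WHERE name = ? AND password = ?"
        return conn.execute(query, (name, password)).fetchone()[0] > 0

    # The classic payload turns the WHERE clause into a tautology.
    print(vulnerable_login("admin", "' OR '1'='1"))  # True: logged in without the password
    print(safer_login("admin", "' OR '1'='1"))       # False

Seeing the source is what tells you exactly which query to aim that at - that's the kind of work the obscurity was buying them.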
Please take a closer look at the origins of the notion that security through obscurity is broken. Until you understand what that means, you will keep making arguments like "keeping your key" (such as a password) "secret is just security through obscurity".
I recommend starting with Kerckhoffs' Principle.
Basically, you can regard "security through obscurity" as any violation of Kerckhoffs' principle -- which translates to any reliance on keeping secrets beyond the key itself.
You're making an argument by assertion: Kerckhoffs' principle says don't keep secrets other than the key, therefore you must not keep secrets other than the key. Huh?
Kerckhoffs' principle is a great idea - but understand it. It doesn't say that extra secrecy makes you less secure. It just says that when you're designing a system using encryption, the key should be the single point of failure.
Let's say I'm locking a door. So you shouldn't be able to get in without the key - but it's going to be harder for you if you also can't find the keyhole.
When you're designing locks, don't try to hide the keyhole - spend all your effort on getting a good, unpickable lock - but still, don't pretend that hiding the keyhole is pointless.
No, that's not an argument by assertion. It's an argument by pointing out that your "definition" of security through obscurity is apparently at odds with the very origins of the concept.
I'm not saying that hiding the keyhole harms security. I'm saying that pretending that hiding the key is the same thing as hiding the keyhole is an exercise in something so silly I can't even think of the word.