I don't think so. I once took on the challenge of writing as many unit tests as possible for a project at work (the project had no unit tests, but was well covered by other types of tests).
The two key takeaways I got from the effort:
First, I had no idea how coupled my code was until I tried writing many unit tests for it. If it's hard to instantiate your class without involving N other libraries/objects, your code is tightly coupled. No one on my team would have looked at the code and said "it's highly coupled." The real proof was "can you test this in isolation?" If not, you're tightly coupled.
I had to redesign a lot of bits to succeed, and as another commenter pointed out, the redesigned code really was a lot better. I discovered good design principles in doing it.
The second thing I learned, which may appear to contradict the first: there are always easy ways to make code unit testable. Most of those easy ways are bad and reduce the quality of your code. Forcing yourself not to redesign just "for the sake of writing tests", while still ensuring you have 100% code coverage, will really force you to think hard about your code, architecture, failure points, etc.
So 100% code coverage really doesn't tell you if you have good code. But less than 100% does indicate potential problems.
It's both project and language specific. A trivial example is C++ classes with private methods. Some people use a hack that, when compiling for tests, converts all private methods/attributes to public so that private methods can be tested individually. Please don't do this.
Unit testing C++ is not as easy as in some other languages. The language is fairly rigid. Some people make heavy use of friend classes to assist with testing, but this can be overdone (indeed, many C++ developers are against any use of friend).
I'm not. One of my side projects, for example, is a programming language where I have 100% code coverage.
From a career perspective, I have seeded new infrastructure services with 100% code coverage. Fundamentally, I believe achieving great reliability requires a solid foundation.
I don't claim these services are perfect, but when a bug is discovered I can usually use the logs to figure out what state the program was in and then reproduce the bug with a unit test. That protects future engineers from the same issue.
Now, don't get me wrong: there are LOADS of silly tests in languages like Java written just to make coverage tools happy, like "new ClassThatOnlyHasStaticMethodsInIt()". But the key is that you can alarm on coverage dropping below 100%; once you let the paper cuts build up, it's hard to manage.
I'm a big believer in "slow is smooth, and smooth is fast" when it comes to building services for others.
What I often see is that people want 100% coverage AND want tests to run fast, which often leads to mocks and other techniques. Once you go down that road you end up with brittle tests, and your developers spend a lot of time writing and repairing them.
I'm not against testing, but I feel people waste a lot of time writing tests that are more a liability than an asset. I'm hoping we get better tools soon, and people will look back and wonder WTF we were thinking.