Really? You want to strive to test that logging logs and observability observes? You want to test constructors, getters and setters?
Testing trivial code brings negative value; why would you do that?
> You want to strive to test that logging logs and observability observes?
You're asking this rhetorically, but I often find interesting problems once I start really exercising the latter, like:
- Non-global collectors we forgot to register, or that fail to get registered on some code paths.
- Collectors that get registered twice, or that overlap with other collectors only under some conditions (e.g. connection addresses that are sometimes the same and sometimes not, depending on how your service is distributed).
- Collectors that aren't actually safe to scrape concurrently with the rest of the program; our test suites run with the race detector enabled.
- Metrics that don't follow idiomatic naming conventions (can be found via e.g. `promtool check metrics`).
- Metrics with legitimate, difficult-to-notice bugs. We had an underflow in a metric counting the size of a connection pool, because `Close()`ing a connection twice was explicitly allowed (sketched after this list).
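To make that last item concrete, here's a minimal sketch (all names hypothetical) of how such an underflow happens and the guard that prevents it:

```go
package pool

import (
	"sync"

	"github.com/prometheus/client_golang/prometheus"
)

// openConns tracks the current size of the connection pool.
var openConns = prometheus.NewGauge(prometheus.GaugeOpts{
	Name: "pool_open_connections",
	Help: "Current number of open connections in the pool.",
})

func init() {
	// Registered once at startup; forgetting this is the first
	// failure mode in the list above.
	prometheus.MustRegister(openConns)
}

// Conn is a pooled connection whose Close is documented as safe to
// call more than once.
type Conn struct {
	closeOnce sync.Once
}

// Open checks a connection out of the pool.
func Open() *Conn {
	openConns.Inc()
	return &Conn{}
}

// Close returns the connection. Without the sync.Once guard, a second
// Close would decrement the gauge again and underflow the pool size,
// which is the bug described above.
func (c *Conn) Close() {
	c.closeOnce.Do(openConns.Dec)
}
```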
> Testing trivial code brings negative value
Agreed in the abstract, but if we were good judges of which code was trivial, we'd write a lot fewer bugs in the first place!
That being said, this is also a discussion about coverage, not assertions. Even if I'm not actually checking any of those things, those code paths (and logging) are still getting exercised via any decent E2E/functional/behavioral test suite.
I found the bug when I went to look at a dashboard with that metric on it and noticed it had the wrong name. If Go's Prometheus library had an easy way to run `metric.CurrentValue()`, I would have tested it... but it didn't, so I didn't. And then I had to patch it, release it, and update all my servers. Writing the test (with that API) would have taken less than a second; finding the bug in production and fixing it took an hour.
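For what it's worth, newer versions of client_golang do ship a `testutil` package with roughly this capability. A minimal sketch of the missing one-line assertion, reusing the hypothetical pool code from the earlier sketch:

```go
package pool

import (
	"testing"

	"github.com/prometheus/client_golang/prometheus/testutil"
)

// Regression test for the double-Close underflow: closing a
// connection twice must not decrement the gauge twice.
func TestCloseTwiceDoesNotUnderflow(t *testing.T) {
	c := Open()
	c.Close()
	c.Close() // explicitly allowed

	if got := testutil.ToFloat64(openConns); got != 0 {
		t.Fatalf("pool_open_connections = %v, want 0", got)
	}
}
```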
(That codebase has 100% line coverage, but of course, I know that 100% test coverage is a meaningless metric. That the code ran is a prerequisite to the program behaving correctly, but it also has to do the right thing.)
Arguably you should be testing behaviour rather than getters and setters. If a codepath requires a name formatted correctly, then that forms the basis of a test and an assertion. Otherwise you're introducing fragility by coupling your 'under the hood' stuff with your 'API' (so to speak).
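A hypothetical sketch of the distinction: assert on the formatted name the code path actually requires, not on the setter that stored it.

```go
package metrics

import "testing"

// metricName builds the externally visible name. This is the
// behaviour callers depend on, not the individual fields behind it.
func metricName(namespace, name string) string {
	return namespace + "_" + name
}

// Test the behaviour (a correctly formatted name), rather than
// asserting that a setter stored a field.
func TestMetricNameIsNamespaced(t *testing.T) {
	got := metricName("pool", "open_connections")
	if want := "pool_open_connections"; got != want {
		t.Fatalf("metricName = %q, want %q", got, want)
	}
}
```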
> You want to strive to test that logging logs and observability observes?
Sure, why not? All that requires is running tests with the log level turned up. And it's not unheard of for someone, somewhere in a log statement, to have forgotten to check that the object they're calling `toString()` on isn't null.
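In Go terms, the failure mode might look like this hypothetical sketch, where the bad dereference is only reachable with debug logging enabled:

```go
package handler

import (
	"context"
	"log/slog"
)

type user struct{ name string }

// Name dereferences the receiver, so calling it on a nil *user panics.
func (u *user) Name() string { return u.name }

// handle logs at debug level; u may legitimately be nil for
// anonymous requests.
func handle(logger *slog.Logger, u *user) {
	if logger.Enabled(context.Background(), slog.LevelDebug) {
		// Bug: u.Name() panics when u is nil, but only when debug
		// logging is enabled. A suite that runs with the level
		// turned up exercises this path; one that doesn't never will.
		logger.Debug("handling request", "user", u.Name())
	}
}
```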
You probably don't need to test any of those things, but I would still expect them to be exercised. You can't very well test a class without constructing it.
> You want to strive to test that logging logs and observability observes?
A test that calls code (which counts toward code coverage) does not have to verify that anything specific happened. Often this is a weakness of the test; it doesn't verify what you think it does, for various reasons: visibility, access to resources, etc.
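A hypothetical Go illustration of the gap between coverage and verification: both tests below produce identical line coverage, but only the second verifies anything, and only the second fails.

```go
package mathx

import "testing"

// Add is deliberately buggy for illustration.
func Add(a, b int) int { return a - b }

// Covers the line but asserts nothing: passes despite the bug.
func TestAddRuns(t *testing.T) {
	Add(2, 3)
}

// Same coverage, but this one actually catches the bug.
func TestAddIsCorrect(t *testing.T) {
	if got := Add(2, 3); got != 5 {
		t.Fatalf("Add(2, 3) = %d, want 5", got)
	}
}
```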