Testing is a tool that sometimes makes your life easier. IME, many (not all) tiny subunits really are better tested at that level. You just want to avoid tests that need to be updated for unrelated changes, and avoid writing code that propagates that sort of minutiae throughout the codebase:
> while not improving test quality
The big wins from fine-grained testing are
1. Knowing _where_ your program is broken
2. Testing "rare" edge cases
Elaborating on (2), your code probably works well enough on some sort of input or you wouldn't ship it. Tests let you cheaply cover all four Turkish "i"s and a few Unicode combining marks, empty inputs, a clock that runs backward or advances too slowly or too quickly, and so on. You'll hit some of those cases eventually in prod, where the pressure is high and debugging/triaging is slow, and integration tests won't usually save you. I'm also a huge fan of testing timing-based logic with pure functions operating on state that's passed in: the logic gets exercised better than an integration test could manage, and you never have to wait on anything godawful like an actual futex or sleep.
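To make that concrete, here's a rough sketch (the `slug` helper, its name, and its normalization policy are my invention for illustration, not anything from the thread) of how cheap those edge-case tests become once the normalization lives in a small pure function:

```python
import unicodedata

def slug(name: str) -> str:
    # Hypothetical helper: normalize a user-supplied name into a lookup key.
    # casefold() handles more case-mapping oddities than lower(), and NFC
    # keeps composed vs. decomposed spellings from producing two keys.
    return unicodedata.normalize("NFC", name).casefold()

def test_empty_input():
    assert slug("") == ""

def test_combining_mark_matches_precomposed():
    # "é" as one codepoint vs. "e" + U+0301 COMBINING ACUTE ACCENT
    assert slug("\u00e9") == slug("e\u0301")

def test_turkish_dotless_i_stays_distinct():
    # U+0131 LATIN SMALL LETTER DOTLESS I must not collapse into ASCII "i"
    assert slug("\u0131") != slug("i")
```

Each of those runs in microseconds, which is the point: you're unlikely to reproduce them by poking at a deployed system.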
> makes refactoring and adding new features hard
What you're describing is a world where accomplishing a single task (refactoring, adding a new feature) has ripple effects through the rest of the system, or else the tests are examining proxy metrics rather than invariants the tiny subunits should actually adhere to. Testing being hard is a symptom of that design, and squashing the symptom (avoiding tests on tiny subunits) won't fix any of the other problems it causes.
If you're stuck in some codebase with that property and without the ability to change it, by all means, don't test every little setup_redis_for_db_payment_handling_special_case_hulu method. Do, however, test things with sensible, time-invariant names -- data structures, algorithms, anything that if you squint a bit looks kind of like parsing or serialization, .... If you have a finicky loop with a bunch of backoff-related state, pull the backoff into its own code unit and test how it behaves with clocks that run backward or other edge cases. The loop itself (or any other confluence of many disparate coding concepts) probably doesn't need to be unit tested for the reasons you mention, but you usually can and should pull out some of the components into testable units.
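As a sketch of that last point (the `Backoff` type, its fields, and its policy are all made up for illustration, not a real library): once the arithmetic is a pure function of state plus a caller-supplied clock, the "clock ran backward" case is just another function call --

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Backoff:
    # Hypothetical backoff state pulled out of the retry loop: the loop keeps
    # the I/O and the sleeping, this keeps the arithmetic, so tests never wait.
    base: float = 0.1          # seconds
    cap: float = 30.0
    attempt: int = 0
    next_allowed_at: float = 0.0

    def on_failure(self, now: float) -> "Backoff":
        delay = min(self.cap, self.base * (2 ** self.attempt))
        return Backoff(self.base, self.cap, self.attempt + 1, now + delay)

    def ready(self, now: float) -> bool:
        return now >= self.next_allowed_at

def test_delay_is_capped():
    b = Backoff()
    for _ in range(50):
        b = b.on_failure(now=1000.0)
    assert b.next_allowed_at - 1000.0 == b.cap

def test_clock_running_backward_does_not_unblock_early():
    b = Backoff().on_failure(now=1000.0)
    assert not b.ready(now=500.0)          # clock jumped backward
    assert b.ready(now=1000.0 + b.cap)     # ...and we still recover later
```

The enclosing retry loop stays untested (or gets one coarse integration test), and nothing here ever sleeps.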
The problem is that there is rarely a clear interface for your subunit, so you'll end up refactoring that interface in ways that break tests later. If you're writing yet another string type you can probably come up with a good interface and then write good tests that won't make refactoring hard -- but strings should be a solved problem for most of us (unless you're writing a new programming language). Instead we're working on problems that are less well defined, that only our competitors also work on, so we can't even learn from others.