Despite the niche obsession with 100% test coverage, tests aren't an all-or-nothing thing.
Test what can easily be tested. Architect your code so that business logic and similar core behavior is modular and testable. Don't worry too much about testing the hard stuff, especially if it's not likely to break.
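A minimal TypeScript sketch of that split (the discount rule and names are made-up examples): the business logic lives in a pure function you can test in isolation, and the hard-to-test rendering is a thin shell around it.

```typescript
// Pure business logic: no DOM, no framework, trivially testable.
// (applyDiscount and the tier rates are hypothetical examples.)
export function applyDiscount(price: number, tier: "basic" | "gold"): number {
  const rate = tier === "gold" ? 0.2 : 0.05;
  return Math.round(price * (1 - rate) * 100) / 100;
}

// Thin, hard-to-test shell: only wires the logic to the UI.
export function renderPrice(el: HTMLElement, price: number, tier: "basic" | "gold"): void {
  el.textContent = `$${applyDiscount(price, tier).toFixed(2)}`;
}
```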
IMO there are 3 main purposes for tests, and knowing which one you're doing helps decide if a test is worthwhile, as well as how in-depth the group of tests needs to be:
1) Helping you work through a complicated piece of logic.
2) Encoding some sort of requirement so future refactoring/bugfixes/features - perhaps written by a new developer - don't break that requirement.
3) When fixing a bug, ensuring the bug doesn't reoccur.
Tests that fall under (2) often feel the most useless, but I've found them to be the most useful. They're typically the simple ones that don't feel like they need a test, but years down the line not every developer knows these requirements. Documentation is easily missed or ignored, but a test that's started failing? Sure, there's still a chance they'll just remove/change/skip the test, but they can't ignore or forget it the way they can documentation.
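A (2)-style test often looks almost insultingly simple. A sketch in Jest-flavoured TypeScript, with an invented refund rule and module:

```typescript
import { isRefundable } from "./refunds"; // hypothetical module

// Encodes a business rule, not an implementation detail: refunds are
// never allowed more than 30 days after purchase. Trivial today, but a
// failing test is the one form of documentation a future dev can't skim past.
test("refunds are rejected after the 30-day window", () => {
  const purchase = { date: new Date("2024-01-01") };
  const today = new Date("2024-02-15");
  expect(isRefundable(purchase, today)).toBe(false);
});
```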
Tests that fall under (3) are very similar to (2), except the requirement isn't external or known from the start. These are the ones I've seen people occasionally write while fixing the bug, then remove afterwards so as not to clutter up the test suite. Or they verify the fix manually in a shell and never write a test in the first place (I'm definitely guilty of that). But whatever happened here was just complicated enough that the previous developer(s) missed the conditions that caused the bug, so future changes to this part of the code have a good chance of reintroducing it or something similar. These are worth keeping.
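Same shape as (2), just born from a bug instead of a spec. A sketch (the bug and cartTotal are invented for illustration):

```typescript
import { cartTotal } from "./cart"; // hypothetical module

// Regression test: totals were once wrong when the cart contained a
// zero-quantity line item. The condition was subtle enough to miss the
// first time, so it's subtle enough to reintroduce.
test("zero-quantity line items don't affect the total", () => {
  const items = [
    { price: 10, quantity: 2 },
    { price: 5, quantity: 0 }, // the exact case the original code mishandled
  ];
  expect(cartTotal(items)).toBe(20);
});
```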
Tests that fall under (1) are definitely useful while the code is being written, and typically people want to keep them because the logic is complicated (or they even write them only because of the complicated logic, when they didn't need them to write the code). But I'd say there's a further question here: how likely is this code ever to change? If you didn't write tests with the initial code, it all works, and it's something relatively generic that's unlikely to change... it might not be worth it. If it's likely to change, it could end up falling under (2) or (3) in the future, so it might be worth the detour of writing the tests. And if the tests already exist because you needed them for case (1), it shouldn't hurt to simply not delete them.
(I'm sure there are other purposes that don't fall under these three, but these are the main reasons for tests in my mind.)
Problem is, you often get either an easy integration route, where lib and UI come in one package, or a hard route, where you have to build a UI around the lib.
To save time, you use the full package, but then testing becomes a nightmare.
And the sad thing is, the stuff that's hard to test is crucial to test.
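One way to soften that trade-off, sketched below in TypeScript (PaymentsWidget stands in for the imaginary lib-plus-UI bundle, and its API is assumed): depend on a narrow interface you own, so the crucial logic is tested against a few-line fake and only the thin adapter stays untested.

```typescript
// The shape of the imaginary third-party lib+UI bundle is assumed here.
declare class PaymentsWidget {
  pay(opts: { amount: number }): Promise<{ ok: boolean; error?: string }>;
}

// Your code depends on this narrow interface, not on the monster lib.
export interface PaymentGateway {
  charge(amountCents: number): Promise<{ ok: boolean; error?: string }>;
}

// The only untested bit: a thin adapter around the real bundle.
export class WidgetGateway implements PaymentGateway {
  private widget = new PaymentsWidget();
  charge(amountCents: number) {
    return this.widget.pay({ amount: amountCents });
  }
}

// In tests, the crucial logic runs against a few-line fake instead.
export const fakeGateway: PaymentGateway = {
  async charge(amountCents) {
    return amountCents > 0 ? { ok: true } : { ok: false, error: "invalid amount" };
  },
};
```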
I guess in this case what matters is the estimated length of the product's lifecycle.
I'm currently writing an open-source library in TS, hoping that a UI will pick it up. I may need to write the UI myself in the end, but that will be much easier after I've put in every feature I can think of :)
Then I have to integrate a monster of a library that needs a whole battery of polyfills and does its own thing rendering modals somewhere, and I'm not in the mood anymore.
Mocking that whole thing? Hm, no thanks.