
I guess technically the term "test-driven development" does not specify unit-test-driven development.



This is a common misconception. I practice TDD at work and on side projects and find it very productive. TDD is best done with end-to-end tests (or automated integration tests, whatever you wanna call it). You write an end-to-end test ("I give input A to the entire system, I expect output B"); first the test fails (because the feature is unimplemented), then you implement it and it passes.
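A minimal sketch of that cycle (the function names here are hypothetical, not from any comment above): the test states the spec first, and the implementation exists only because the failing test demanded it.

```python
# Hypothetical end-to-end entry point: the whole "system" is one call.
def calculate_invoice(items):
    # Written *after* test_invoice_total was seen to fail.
    return sum(price * qty for price, qty in items)

def test_invoice_total():
    # Spec-level claim: given input A, the system produces output B.
    assert calculate_invoice([(10.0, 2), (5.0, 1)]) == 25.0

test_invoice_total()
print("ok")
```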

It works because your tests then become the spec of the system as well. If you only write unit tests, there is no spec of the system, only tests of individual modules of your code. That's not very useful, because if you refactor your code and change a module, you have to rewrite its test. In TDD, by contrast, your tests should never be rewritten unless the spec changes (new features are added, new bugs are discovered, etc.). This way, "refactor == change code + make sure tests still pass".
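To illustrate that refactoring point with a toy sketch (both implementations are hypothetical): the end-to-end test is a claim about observable behavior, so swapping out the internals must not require touching it.

```python
import re

def word_count_v1(text):
    # Original implementation.
    return len(text.split())

def word_count_v2(text):
    # Refactored internals; same observable behavior.
    return len(re.findall(r"\S+", text))

def spec_test(system):
    # "If I do X, I expect Y" -- unchanged across refactors.
    assert system("hello brave new world") == 4

spec_test(word_count_v1)
spec_test(word_count_v2)  # the refactor is safe iff this still passes
print("ok")
```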

You're of course free to write unit tests as well, when you see fit, and there's no need to target some religious X% coverage rate. I think coverage targets are cargo-culted, unnecessary, and time-consuming. The crucial thing is that while you're writing new code (i.e. implementing a new feature or fixing a bug), you write an automated test that says "if I do X, I expect Y", see it fail, then see it pass, so that "if I do X, I expect Y" becomes a generic claim about the system that will not change in the future unless the expectation of the system changes.

In other words, the example in this comment chain: "run a game, see 'opcode X doesn't exist', implement X, rinse repeat" is actually how TDD is supposed to work.
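That emulator loop can be sketched as follows (a toy stack VM with made-up opcode names, not any real instruction set): the "opcode X doesn't exist" failure is itself the next failing test, and implementing X is what makes it pass.

```python
class UnknownOpcode(Exception):
    """Raised when the VM hits an opcode it hasn't implemented yet."""

def run(program):
    # End-to-end: whole program in, final stack out.
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":  # added after a run failed with UnknownOpcode('ADD')
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:
            raise UnknownOpcode(op)  # this failure *is* the next test case
    return stack

# "Run the game": the test is the whole program's observable result.
assert run([("PUSH", 2), ("PUSH", 3), ("ADD", None)]) == [5]
print("ok")
```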





