Hacker News
“Tests should be poorly factored”
6 points by alecbenzer on Jan 18, 2019 | 2 comments
This is a piece of advice I saw somewhere once, but I can't remember where (or whether this is exactly how it was worded). Does anyone know of an article that discusses this, or a better-known term for the idea?

The basic idea: Say you have some code like:

  def square(x):
    return x ^ 2
A test like:

  def test_square():
    for inp in [3, 4, 5, 50, 100]:
      assert square(inp) == inp ^ 2
is "well-factored", in that it doesn't have a lot of repetition, but isn't a great a test, since it's basically testing that the function's code does what the function's code does.

A better test would be something like:

  def test_square():
    assert square(3) == 9
    assert square(4) == 16
    assert square(5) == 25
    # ...
because, for one, it would expose the bug (^ is XOR in Python, not exponentiation).
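For what it's worth, you can keep the expected outputs hardcoded without repeating the assert by parameterizing over (input, expected) pairs. A minimal sketch, assuming pytest and that square is in scope:

  import pytest

  # The expected values are written out by hand, not computed,
  # so the XOR bug would still be caught.
  @pytest.mark.parametrize("inp,expected", [
    (3, 9),
    (4, 16),
    (5, 25),
    (50, 2500),
    (100, 10000),
  ])
  def test_square(inp, expected):
    assert square(inp) == expected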



I wouldn't call the first version well factored since "x ^ 2" is repeated. So you could simplify to

    assert square(inp) == square(inp)
...at which point, assuming it's not 3:00 AM, you'll hopefully notice that it isn't actually testing anything and include both the test inputs and the expected outputs.


I meant that the test itself is poorly factored. But even in that case, I'm not sure that noticing square(inp) == square(inp) is silly would push you toward manual input/output lists rather than toward something like square(inp) == inp ^ 2.

This is of course a contrived, very simple example, so it might seem silly, but you can imagine more complex functions where it's less obvious that "computing" the expected output in the test effectively "cheats" it.
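For a concrete (if still made-up) illustration: is_leap_year below is hypothetical, but it shows how recomputing the expected value in the test reproduces the implementation's own misunderstanding, while hardcoded known answers don't.

  # Buggy: misses the century rule (years divisible by 100
  # are not leap years unless also divisible by 400).
  def is_leap_year(year):
    return year % 4 == 0

  # "Well-factored" but circular: recomputes the expectation with
  # the same (wrong) logic, so it passes despite the bug.
  def test_is_leap_year_circular():
    for year in [1996, 1900, 2000, 2023]:
      assert is_leap_year(year) == (year % 4 == 0)

  # Hardcoded expectations taken from the actual calendar catch it.
  def test_is_leap_year_golden():
    assert is_leap_year(1996)
    assert not is_leap_year(1900)  # fails, exposing the bug
    assert is_leap_year(2000)
    assert not is_leap_year(2023)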



