I’ve been thinking about unit tests lately. Here’s an observation I made - I call it the law of partial test coverage:
When your test coverage is less than 100%, the most annoying bugs will be in the untested part.
Why is this? An obvious reason is that untested code is more likely to have regressions. A more subtle reason is that which parts of the code go untested is not random.
Usually there are two kinds of untested code: trivial code and code that is hard to test.
Trivial code is not a problem. For example, in Python, maybe you haven't bothered to write tests for all your `__repr__` methods, or maybe nothing exercises the `if __name__ == "__main__": main()` line. If there's a bug, you'll likely spot it immediately.
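As a minimal sketch of what such trivial, routinely untested lines look like (the class and names here are hypothetical, just for illustration):

```python
class Point:
    """Tiny value class; the __repr__ below often goes untested."""

    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __repr__(self):
        # A bug here (say, swapped fields) would be obvious the first
        # time you print a Point in a REPL or see it in a log line.
        return f"Point(x={self.x}, y={self.y})"


def main():
    print(Point(1, 2))


if __name__ == "__main__":  # typically never exercised by the test suite
    main()
```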
The other kind of untested code is the tricky part: the code that is hard to test. There might be some complex I/O operations involved, for example. If the code is hard to test automatically, it’s often hard to test manually. It might even be hard to understand.
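A hypothetical sketch of what such hard-to-test code can look like, assuming a function that mixes network I/O, parsing, and a disk write (all names are made up for illustration):

```python
import urllib.request


def clean_lines(raw):
    """The pure part: trivially testable in isolation."""
    return [line for line in raw.splitlines() if line.strip()]


def build_report(url, path="report.txt"):
    """Hard to test: an automated test would need a live server
    and a real filesystem, so code like this often stays untested."""
    with urllib.request.urlopen(url) as resp:  # live network call
        raw = resp.read().decode("utf-8")
    lines = clean_lines(raw)
    with open(path, "w") as f:  # writes to the real disk
        f.write("\n".join(lines))
    return len(lines)
```

One common way to shrink the untestable surface is exactly what `clean_lines` does: pull the pure logic out of the I/O so at least that part gets covered.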
This means that you won't have any idea whether your code works after you make changes. You will only find out by running the code in production. There will be bugs.