Automated software testing, part 1: reasoning

Navigation: part 1 | part 2 | part 3 | part 4 | part 5 | part 6


Let's talk about automated software testing. I realize that a lot has already been said on this subject, but it seems that there's still a significant amount of misunderstanding in this important area of software development. I'm going to attempt to shine some light on a few of the core concepts of testing and how they should be applied in a typical, non-formal software project.

It should be noted that this post is not about TDD. TDD and automated testing are undoubtedly related, but there is a world of difference between that practice and what I'm talking about here.

Note to experienced devs: The part 1 and part 2 posts are intended for people who aren't very familiar with automated testing. If you're looking for more in-depth information on specific test types, stick around for part 3.

Why test?

Or, rather, why bother with automated tests? I know, I know, it's the obligatory introductory speech... in written form... but not everybody understands this, so it's important to mention.

There are many reasons why automated tests are valuable. Here are three:

1. Early detection of problems
2. Increased trust in the codebase
3. Easier maintenance and extension

Let's go through these one by one. Early detection of problems is good because the last thing you want is your customers complaining that you released crap. Manual testing helps, of course, but it's far from perfect: many issues can and will be missed, leaving huge gaps in the coverage of your code - gaps in which bugs are bound to occur. Automated testing fills a lot of these gaps. It's also much cheaper than manual testing, since an automated test can be run any number of times essentially for free, while every manual pass requires paying people to do it.
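To make "automated test" concrete, here's a minimal sketch in Python using the standard unittest module. The discount_price function and its expected values are invented for this illustration; the point is simply that, once written, this check runs in milliseconds and can be repeated as often as you like at no extra cost.

    # test_pricing.py -- a minimal automated test.
    # Run with: python -m unittest test_pricing
    import unittest

    def discount_price(price: float, percent: float) -> float:
        """Hypothetical production code: apply a percentage discount."""
        return round(price * (1 - percent / 100), 2)

    class DiscountPriceTests(unittest.TestCase):
        def test_ten_percent_off(self):
            self.assertEqual(discount_price(100.00, 10), 90.00)

        def test_zero_discount_leaves_price_unchanged(self):
            self.assertEqual(discount_price(59.99, 0), 59.99)

    if __name__ == "__main__":
        unittest.main()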

Having automated tests increases your trust in the codebase. When you know that the critically important functionality of your product is tested every time somebody commits code, you feel more at ease. (Tests could be run by a build server when it sees new commits, and they can be run locally as part of a normal build process.) You don't dread releases and the inevitable issues that QA or your users are going to find and complain about. Maybe you even have more pride in your software.

Maintaining and extending the codebase becomes immeasurably better, faster, and generally easier when you have automated tests. Simply put, there is very little, if any, concern that the new feature you're adding (or the bug you're fixing) is going to negatively impact other areas of the code. This is very much related to the aforementioned trust in the codebase: when you're confident that the testing is sufficient, you aren't paranoid about what might happen when you change some obscure value.

When does the above not apply?

The three benefits described above don't magically appear the moment you add automated tests; they only materialize when a lot of things come together. If you have terrible automated tests, they will not detect problems early. If your automated tests are so brittle that changing a single constant breaks half of them, maintenance won't get any easier. If they fail randomly, for no discernible reason, on every third run, you may eventually stop caring about them altogether. And when you don't believe the tests in your codebase are valuable, you won't trust the codebase any more than you would without any tests whatsoever.
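To show what "brittle" can look like in practice, here's a hedged sketch; the greeting function and its wording are invented for the example. The first test pins the output to an exact string, so changing a single constant in the implementation breaks it, while the second asserts only the behaviour that actually matters.

    import unittest

    def greeting(name: str) -> str:
        """Hypothetical production code under test."""
        return f"Hello, {name}! Welcome back."

    class GreetingTests(unittest.TestCase):
        def test_brittle(self):
            # Brittle: freezes the entire string, so tweaking any wording
            # in the implementation breaks this test.
            self.assertEqual(greeting("Ada"), "Hello, Ada! Welcome back.")

        def test_robust(self):
            # More robust: checks the behaviour we care about (the name
            # appears in the greeting) without pinning every character.
            self.assertIn("Ada", greeting("Ada"))

    if __name__ == "__main__":
        unittest.main()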

How do you make it apply?

That's the difficult part, isn't it?

Let's start by looking broadly at the automated testing landscape. What major types or phases are there? You've got unit testing. There's integration testing. And don't forget about system testing. Others exist as well, but for the most part, these are the essential ones every non-toy software project needs.

Check out part 2 for an exploration of these types.
