How to Write Better Test Cases
by Dianne L. Runnels, CQA, CSTE
Interim Technology Consulting
Investing in test cases
What is it worth to improve test cases? What risk would impel you to invest in better test cases? As long
as they cover the software requirements, isn't that good enough? The answer to these questions is that
poor test cases do indeed expose you to considerable risk. They may cover the requirements in theory,
but they are hard to execute and yield ambiguous results. Better test cases produce more reliable results
and lower costs in three categories:
1. Productivity - less time to write and maintain cases
2. Testability - less time to execute them
3. Scheduling reliability - more dependable estimates
This paper describes how to avoid the losses that are inevitable with poor test cases. It will look under the
hood of different kinds of test cases and show where and how to build in the quality that controls risk. It
will give practical advice on how to improve productivity, usability, scheduling reliability, and asset
management. Once you understand the whats and whys of test cases, you can use a checklist of
standards, like the one attached as Appendix A, to identify areas of risk and improve your current and
future test cases.
The most extensive effort in preparing to test software is writing test cases. The incentive to build robust
test cases comes from the likelihood that they will be reused for maintenance releases. Over half of all
software development is maintenance projects. How can you write quality test cases that will deliver
economical testing the first time plus live again as regression tests? Let's get started with the answer by
lifting the hood of a test case and looking at what's inside.
Looking inside test cases
Elements of test cases
For our purposes, a test case is a set of actions with expected results based on requirements for the
system. The case includes these elements:
• The purpose of the test or description of what requirement is being tested
• The method of how it will be tested
• The setup to test: version of application under test, hardware, software, operating system, data files,
security access, time of day, logical or physical date, prerequisites such as other tests, and any
other setup information pertinent to the requirement(s) being tested
• Actions and expected results, or inputs and outputs
• Any proofs or attachments (optional)
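To make these elements concrete, here is a minimal sketch of one way to record them, written in Python
purely for illustration. The TestCase fields and the sample login scenario (requirement SEC-12, user
JSMITH, the locked_users.csv data file) are hypothetical examples, not part of any standard or tool.

    # Minimal sketch of a record holding the test case elements listed above.
    # All field names and sample values are hypothetical illustrations.
    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        purpose: str          # requirement or behavior being tested
        method: str           # how it will be tested
        setup: dict           # version, hardware, OS, data files, dates, prerequisites
        steps: list           # (action, expected result) pairs
        attachments: list = field(default_factory=list)   # optional proofs

    case = TestCase(
        purpose="Verify a locked account cannot log in (hypothetical requirement SEC-12)",
        method="Manual entry through the login screen",
        setup={"app_version": "3.2", "os": "Windows", "data_file": "locked_users.csv",
               "prerequisite": "Account JSMITH locked by test SEC-11"},
        steps=[("Enter user ID JSMITH with a valid password", "Login is rejected"),
               ("Check the security log", "A lockout entry is recorded for JSMITH")],
    )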
These same elements need to be in test cases for every level of testing -- unit, integration, system, or
acceptance testing. They are valid for functional, performance, and usability testing. The "expected
results" standard does not apply to diagnostic or other testing of an exploratory nature. Even diagnostic
testing needs the other elements in its cases. However, if the test measures performance that should fall
within a range, that range is itself the expected result.
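As a brief illustration of a range serving as an expected result, the following Python sketch times a
stand-in operation and asserts that it finishes within two seconds. The operation, the unittest framing,
and the two-second bound are hypothetical examples rather than anything prescribed by this paper.

    # Sketch: a performance expectation expressed as a range, giving even a
    # timing measurement a concrete expected result. Values are hypothetical.
    import time
    import unittest

    def search_catalog():
        # Stand-in for the operation whose performance is being measured.
        time.sleep(0.1)

    class SearchPerformanceTest(unittest.TestCase):
        def test_search_completes_within_range(self):
            start = time.perf_counter()
            search_catalog()
            elapsed = time.perf_counter() - start
            # Expected result: measured time falls within the allowed range (0 to 2 seconds).
            self.assertLess(elapsed, 2.0)

    if __name__ == "__main__":
        unittest.main()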
An alternate description of test cases is that the description, purpose, and setup constitute the case or
specification. The steps to accomplish it are called a script. Yet another view calls the purpose or
description a scenario or use case. These views are all compatible with the quality assessments and
improvements suggested in this paper.
Quality of test cases
There is a misconception that quality of writing is subjective, like looking at a painting, where beauty is in
the eye of the beholder. In fact, quality of writing is objective and measurable. It is simple to set up an
objective checklist, like the one in Appendix A, of the structural elements of test cases -- purpose,