Writing test cases is a common and crucial activity in software testing; that much is a given. What is less clear is what a test case exactly is. Depending on your source, you will find different interpretations of "a set of conditions or variables under which a tester will determine whether an application, software system or one of its features is working as it was originally established for it to do" (Wikipedia).
In essence, a test case is a test product that describes a specific system behaviour; the tester uses it as a script during the actual test execution on an application. There are of course nuances between a test case, a test script, a test scenario, a test suite and the like, but let's not make things more complicated than needed for now: this is the internal kitchen of the tester. At the end of the day, a tester produces a set of test cases ready to "attack" the system under development. That is, of course, if one has decided to prepare test cases up front, as is rather common in waterfall environments. In a more agile context one could opt for exploratory or automated testing, or for no test cases at all!
These test cases are accessible to all other stakeholders in the project. Project managers, analysts and business representatives can track the progress of testing and the quality of the application, together with the detected bugs, largely by means of these designed test cases. A test case is thus really the tester's calling card.
Yet how to write such test cases is far from clear to the testers themselves! It is a recurring discussion, despite the existence of good practices and plenty of tips and tricks. For example: should one put many verifications in a single test case, or spread them over separate test cases? Should cases be high-level or very detailed? Should they require a lot of business knowledge, or be foolproof? Should one mix test types (GUI, functional, integration, ...) in one test case or not? Should one use one or more test design techniques, or even none? So many questions to answer in each test project.
The use of a standard format (often by means of a template) can already address many of these questions. The value of a uniform structure is that all written test cases look alike, even though they test different aspects of an application. This makes it quite easy to "work" with these artefacts: any tester can read or execute them, other stakeholders can understand what it means when a number of test cases still have to be executed, and it is clear in the reporting what a test case stands for. It is the essential means by which a tester assesses the quality of an application.
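To make this concrete, a minimal sketch of what such a uniform test case structure might look like is given below. This is an illustrative assumption, not a prescribed template: the field names (`case_id`, `preconditions`, `steps` with an action and an expected result, `priority`) are hypothetical choices, and real templates vary per project and organisation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestStep:
    """One step of a test case: what the tester does and what should happen."""
    action: str
    expected_result: str

@dataclass
class TestCase:
    """A hypothetical uniform test case structure (fields are illustrative)."""
    case_id: str
    title: str
    preconditions: List[str]
    steps: List[TestStep]
    priority: str = "Medium"

    def summary(self) -> str:
        # A one-line view, useful for progress reporting to stakeholders.
        return f"{self.case_id}: {self.title} ({len(self.steps)} steps, {self.priority})"

# Example instance: a classic login scenario written in this format.
login_case = TestCase(
    case_id="TC-001",
    title="Login with valid credentials",
    preconditions=["User account exists", "Application is reachable"],
    steps=[
        TestStep("Open the login page", "Login form is displayed"),
        TestStep("Enter valid username and password, submit",
                 "User lands on the dashboard"),
    ],
    priority="High",
)

print(login_case.summary())  # → TC-001: Login with valid credentials (2 steps, High)
```

Because every case shares the same shape, any tester can pick one up and execute it, and reporting tools can count steps and priorities uniformly across the whole suite.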
How do testers decide on the right template, or on the right test design process, you might ask next. Well... it depends. It depends on the project context, the application itself, the maturity level of the organisation, the standards to comply with, and so on. Despite these project-specific characteristics, a real tester will luckily be able to read and understand any well-written test case from someone else. Writing a test case is thus certainly not an exact science; a wide range of variations is possible.
So the tester can really make a huge difference by writing good or bad test cases. The coverage these test cases achieve is of course vital, but the way they are written also matters a lot. Hence it can be seen as a real art, and as with all arts, some have a lot of talent for it and others have less. How do you rate your own skills at writing test cases?
Geert Vanhove is a passionate and pragmatic test professional with over 11 years of test experience in multiple domains, who started as a hands-on operational tester and evolved into test coordination and test management roles. Over these years Geert has taken on different tasks such as test analysis, test design, test execution, test coordination, test management, coaching, setting up test processes, process improvement, etc.
Expert Leader Testing Services – Sogeti Belgium