All IT and most business organisations will be familiar with the idea that they have to test systems. A basic tool for this is the test script.
A test script sets out a succession of step-by-step instructions for carrying out a test. Testing should be scripted to an extent commensurate with the risks of not testing properly, yet when you look at either corporate standards or individual projects it's striking how seldom even the minimum standards are met. Do your scripts state the expected result, so you can tell unequivocally whether the test has passed or failed? In my experience, most don't. Nor do they tell the user what preconditions must be satisfied before the test can begin (navigate to…, using these privileges…), or how to check the result, and so on.
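To make the point concrete, here is a minimal sketch of a single well-formed step, with the preconditions, action and expected result recorded together. The field names and values are illustrative assumptions of mine, not any standard schema:

```python
# A minimal sketch of one well-formed test step, held as a plain dictionary.
# All field names and values here are illustrative assumptions, not a standard.
step = {
    "preconditions": "Logged in with 'payments' privileges; on the Payments screen",
    "action": "Enter account no. 12345678 and click Submit",
    "expected_result": "Message 'Payment accepted' is displayed",
}

# Pass/fail is unambiguous only because the expected result was stated up front.
actual_result = "Message 'Payment accepted' is displayed"
print("PASS" if actual_result == step["expected_result"] else "FAIL")
```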
So here is my recipe for a truly complete test script. You won’t want to use it because it’s long and complicated and you can’t see the point of it, but if you can tell me which fields are not needed by anyone with a legitimate interest in your testing, feel free to take them out.
The script falls into five sections:
- Document control
- Set-up information
- Execution
- Outcome
- Result
- Document control
  - Identifier.
  - Test name.
  - Reference no.
  - Parent identifiers.
  - Author.
  - Authoriser.
  - Preparation date.
  - Version.
- Set-up information
  - Planned execution date.
  - Tester.
  - Function.
  - Summary of the test's objective or purpose.
  - Test condition(s) implemented.
  - Positive/negative test?
  - Start point (i.e., navigation to the appropriate screen/process/field).
  - Set-up/initial conditions:
    - User ID/privileges required.
    - Preceding actions/tests to prepare the application.
    - File selection, parameter settings, etc.
    - Preconditions for the test as a whole (e.g., account no., currency, etc.).
- Execution (probably in table format)
  - Step no. (if sequence is significant).
  - Location (screen/form/field name at which testing should begin; enter, for example, "Go to screen…", "Select 'Reports' menu", etc.).
  - Input data:
    - Data to be entered.
    - Option(s) to be selected.
  - Test actions (step by step, checklist-style; e.g., "Enter data", "Select option A", "Click on Submit").
- Outcome
  - Actual test time/date.
  - Actual result.
  - Check boxes (against each test step, to confirm completion).
  - Notes, with a general prompt to record anomalies, unexpected results, unplanned steps, and unusual system behaviour.
  - Narrative/commentary (to support re-runs and regression testing).
  - Sign-off:
    - Tester's name and signature.
- Result
  - Expected result.
  - Method for checking actual against expected, where this is more than simply "check actual vs expected results": automated file comparisons, checks of back-end systems, end-of-day reports, messages, etc., as appropriate (see the sketch after this list).
  - Pass/fail.
  - Cause of failure (e.g., "Comm320 failure", "Data feed").
  - Defect reference field (to locate defect reports, anomalies, etc.; may be needed at both step and script levels).
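And if it helps to see how the five sections hang together, here is the template sketched as a set of Python records. Every class and field name is my own illustrative mapping of the list above, not a standard, and the `check` method stands in for whatever checking method the script specifies:

```python
from dataclasses import dataclass
from typing import Optional

# The five sections of the script mapped onto five records. Every class and
# field name is an illustrative assumption drawn from the list above.

@dataclass
class DocumentControl:
    identifier: str
    test_name: str
    reference_no: str
    parent_identifiers: list[str]
    author: str
    authoriser: str
    preparation_date: str
    version: str

@dataclass
class SetUp:
    planned_execution_date: str
    tester: str
    function: str
    objective: str                 # summary of the test's purpose
    test_conditions: list[str]
    positive_test: bool            # False for a negative test
    start_point: str               # e.g. "Go to the Payments screen"
    privileges_required: str
    preceding_actions: list[str]   # tests/actions that prepare the application
    parameter_settings: dict
    preconditions: list[str]       # e.g. account no., currency

@dataclass
class ExecutionStep:
    step_no: int
    location: str                  # screen/form/field where the step begins
    input_data: str
    options_selected: list[str]
    action: str                    # e.g. "Enter data", "Click on Submit"
    completed: bool = False        # the check box against each step

@dataclass
class Outcome:
    actual_time: str
    actual_result: str
    notes: str                     # anomalies, unexpected results, unplanned steps
    narrative: str                 # to support re-runs and regression testing
    tester_signature: str

@dataclass
class Result:
    expected_result: str
    checking_method: str           # e.g. "automated file comparison"
    passed: Optional[bool] = None
    cause_of_failure: Optional[str] = None
    defect_reference: Optional[str] = None

    def check(self, outcome: Outcome) -> bool:
        # Pass/fail reduces to a mechanical comparison once the expected
        # result has been recorded up front.
        self.passed = outcome.actual_result == self.expected_result
        return self.passed
```

The design point is simply that once the expected result is recorded before execution, pass/fail becomes a mechanical comparison; in practice that comparison may be a file diff or a back-end query rather than string equality.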
Try it. Really, it works.