1. When should testing be done?

· there is no "one place" to do testing.

· testing must be integral with each process.

· products of processes must have defined exit criteria.

· products must be shown to meet their exit criteria.

Note: a product is a document, design, module/object code, system etc.

2. The V-model - a common approach to test management that pairs each development stage with a corresponding test stage - very popular in Germany.

3. Change WILL occur - it MUST be MANAGED.

· change must be applied to the EARLIEST applicable stage - if it's a requirements change then this will affect all other stages.

· subsequent products must be re-tested.

· time must be found to do this!


4. Appoint test specialists for each stage and product.

· avoid people testing/reviewing their own work.

Developing exit criteria - one approach (the right mapping depends on your design approach and methods):

· User acceptance tests from requirements specification - Use cases.

· User system test from user level design - state machine interface model.

· System test from technical design - data/object model.

· Unit test from detailed specification/design - VDM, Z/code.
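The last mapping above can be made concrete at the unit level: a pre/postcondition spec (VDM/Z style) translates directly into executable exit criteria. A minimal sketch in Python, with a hypothetical `int_sqrt` unit:

```python
import math

def int_sqrt(x: int) -> int:
    """Hypothetical unit. Pre: x >= 0. Post: result**2 <= x < (result+1)**2."""
    if x < 0:
        raise ValueError("precondition violated: x must be non-negative")
    return math.isqrt(x)

# Exit criteria derived from the spec become executable checks:
for x in (0, 1, 2, 15, 16, 10**6):
    r = int_sqrt(x)
    assert r * r <= x < (r + 1) * (r + 1)   # postcondition holds

# Error trapping: a precondition violation must be reported, not ignored.
try:
    int_sqrt(-1)
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```

The unit "passes its exit criteria" exactly when all of these checks succeed.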

5. Documenting tests.

Test scripts:

test objective: _______________________________

test environment: _____________________________

test number: _________________________________
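If tests are managed in code rather than on paper, the script template above can be captured as a structured record. A sketch - the field names here are assumptions mirroring the template, not a standard:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestScript:
    # Fields mirror the paper template above; names are illustrative only.
    test_number: int
    test_objective: str
    test_environment: str
    expected_result: str = ""
    actual_result: str = ""
    passed: Optional[bool] = None   # None = not yet run

script = TestScript(
    test_number=1,
    test_objective="Verify login rejects an empty password",
    test_environment="Windows 10, build 1.2.3, empty test database",
    expected_result="Login rejected with an error message",
)
script.actual_result = "Login rejected with an error message"
script.passed = script.actual_result == script.expected_result
```

Keeping scripts as records makes sign-off and result tracking (section 8) mechanical.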

6. Generating tests.

· use exit criteria from specs/designs etc.

· use sample data - make sure there is plenty of it in any databases - stress testing.

· use boundary values - extreme input cases.

· error guessing - what could go wrong when being used - daft entries, sudden rebooting, etc.

· performance testing - check critical attributes - number of simultaneous users, simultaneous actions etc.
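The boundary-value bullet above has a standard recipe: test at each edge of a valid range, just inside it, and just outside it. A sketch, with a hypothetical `accepts_age` unit whose valid range is assumed to be 0..130:

```python
def boundary_values(lo: int, hi: int) -> list:
    """Classic boundary-value selection: each edge, just inside, just outside."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def accepts_age(age: int) -> bool:
    """Hypothetical unit under test: valid ages are 0..130 inclusive."""
    return 0 <= age <= 130

for v in boundary_values(0, 130):
    expected = 0 <= v <= 130
    assert accepts_age(v) == expected, f"boundary case {v} failed"
```

Off-by-one errors cluster at exactly these six values, which is why they repay the small effort of generating them systematically.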

7. Test generation techniques.

7.1. Functional attributes.

Black box: uses specification - can be automated.

· use cases - example interactions by actors.

· state models - user interface/screens - drive system through all paths in the state machine.

· VDM/Z specs - category partition data, test preconditions, test error trapping.

· object models - as above - test methods using category partition etc.

· client supplied samples of data or structured random data for populating databases.
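The category-partition bullets above can be mechanised: choose categories for each input, pick representative choices per category, and take the cross product as test frames. A minimal sketch with hypothetical categories for a search form:

```python
from itertools import product

# Categories and representative choices for a hypothetical search form.
categories = {
    "query":   ["", "single word", "many words"],
    "scope":   ["one database", "all databases"],
    "results": ["none expected", "many expected"],
}

# Each frame combines one choice per category - the skeleton of one test case.
frames = [dict(zip(categories, combo)) for combo in product(*categories.values())]

assert len(frames) == 3 * 2 * 2   # 12 test frames
```

In practice infeasible combinations are pruned with constraints, but even the raw product gives systematic coverage of the specification's input space.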

White box: uses code.

· visit every decision point.

· check every path in code.
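To make "visit every decision point" concrete: here is a unit with two independent decisions, and a white-box test set that takes each branch both ways. Four tests cover all four paths:

```python
def classify(n: int) -> str:
    """Two decision points -> four paths when the decisions are independent."""
    sign = "negative" if n < 0 else "non-negative"
    parity = "odd" if n % 2 else "even"
    return f"{sign} {parity}"

# One test per path: (<0, odd), (<0, even), (>=0, odd), (>=0, even).
assert classify(-3) == "negative odd"
assert classify(-2) == "negative even"
assert classify(3) == "non-negative odd"
assert classify(2) == "non-negative even"
```

Note that path count grows multiplicatively with decision points, so full path coverage is usually only feasible at unit level.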

Test in an exact replica of the client's environment. Do not assume that the client has all the standard libraries that you have (e.g. Visual Basic, Java, etc. are a real problem).

Watch out for problems with printer drivers and other packages that are used - e.g. word processors - make sure the system will work fully in the client's offices.

7.2. Non-functional attributes.

· performance - test under realistic conditions with realistic data and numbers of users, etc.

· check ease of use criteria - try it out on friends with similar backgrounds to the users.

· check ease of installation - you can do it - can anyone else?

· check the user manual - you can understand it - can anyone else?
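A crude sketch of the performance bullet: drive the unit from several simulated users at once and check that every request completes within budget. The unit, user count, and time budget here are all illustrative assumptions:

```python
import threading
import time

def handle_request(payload: str) -> str:
    """Hypothetical server-side unit under load."""
    time.sleep(0.01)               # stand-in for real work
    return payload.upper()

results, errors = [], []

def simulated_user(i: int) -> None:
    try:
        results.append(handle_request(f"request-{i}"))
    except Exception as exc:       # any failure under load is a test failure
        errors.append(exc)

threads = [threading.Thread(target=simulated_user, args=(i,)) for i in range(20)]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

assert not errors and len(results) == 20
assert elapsed < 5.0               # illustrative response-time budget
```

Realistic load tests use production-sized data and real client hardware; this only shows the shape of the check.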

8. Defining the test strategy.

· Assign priorities:

mandatory - must test this aspect;

desirable - should test this aspect;

beneficial - may test this aspect.

· Decide on method:

dynamic testing;

code review;

document review;

a combination of these.

· Document all your testing - sign it off and make sure the results are clear.

· Track any consequential actions - debugging, re-testing - and document these.

9. Unit testing.

· test the flow of information into and out of the module, test local data structures, boundary conditions, independent paths, and error handling paths.


Is the number of input parameters equal to the number of arguments?

Do parameter and argument attributes match?

Is the number of arguments transmitted to called modules equal to the number of parameters?

Are the attributes of arguments transmitted to called modules equal to the attributes of parameters?

Are the unit systems of arguments transmitted to called modules consistent with the unit systems of parameters?

Are the number, attributes, and order of arguments to built-in functions correct?

Are there any references to parameters not associated with the current point of entry?

Are file attributes correct (for I/O units)?

OPEN/CLOSE statements OK?

Buffer size matches record size?

End-of-file conditions handled?

I/O errors handled?
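The I/O items in the checklist translate directly into tests. A sketch for a hypothetical `read_records` unit (one record per line):

```python
import io

def read_records(f) -> list:
    """Hypothetical I/O unit: one record per line, blank lines skipped."""
    return [line.strip() for line in f if line.strip()]

# End-of-file conditions handled? An empty file must yield no records, not crash.
assert read_records(io.StringIO("")) == []

# Normal case, including a blank line.
assert read_records(io.StringIO("a\n\nb\n")) == ["a", "b"]

# I/O errors handled? A missing file must surface a clear error to the caller.
try:
    with open("no_such_file.dat") as f:   # illustrative path, assumed absent
        read_records(f)
    raise AssertionError("expected FileNotFoundError")
except FileNotFoundError:
    pass
```

Using in-memory streams (`io.StringIO`) keeps the EOF and record-format checks independent of the real file system.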


Local data structures should be tested for:

improper typing, erroneous initialisation, underflow, overflow, addressing exceptions, etc.

You may need to write stubs in order to test units in the absence of crucial parts of the system. Make sure these stubs are properly specified and that someone has checked that they are legitimate.
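A stub might look like this: the real payment gateway is unavailable, so the unit is tested against a stub whose behaviour is written down as part of its specification. All names here are illustrative:

```python
class PaymentGatewayStub:
    """Stub for the (missing) real gateway.
    Specified behaviour: approve amounts up to 100.00, decline anything larger.
    This spec must be checked against the real gateway's documented behaviour.
    """
    def charge(self, amount: float) -> bool:
        return amount <= 100.00

def place_order(gateway, amount: float) -> str:
    """Unit under test: depends on any gateway offering a charge() method."""
    return "confirmed" if gateway.charge(amount) else "declined"

stub = PaymentGatewayStub()
assert place_order(stub, 50.00) == "confirmed"
assert place_order(stub, 250.00) == "declined"
```

The danger the notes warn about is visible here: if the stub's "specified behaviour" is wrong, the unit tests pass against a fiction - hence the need for someone to check the stub's legitimacy.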

Check that error messages are understandable and can help the user rather than confuse them.

10. Integration testing.

10.1. Top-down integration.

This uses a partial top-level system and a full collection of functional units to cover some aspect of the system - perhaps related to the use of a particular screen/form. Drivers (temporary top-level control code) and stubs (code to replicate missing units) may be needed to compensate for the missing parts of the system.

10.2. Bottom-up testing.

Here all the bottom-level units are developed and tested first; these are integrated into higher-level units and tested, and the process continues until the full system is available.
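A sketch of the bottom-up sequence: two bottom-level units are tested on their own, then the unit that integrates them is tested, reusing the already-trusted parts. Names and the VAT rate are illustrative:

```python
# Bottom-level units, tested first in isolation.
def parse_amount(text: str) -> float:
    return float(text.strip())

def apply_vat(amount: float, rate: float = 0.20) -> float:
    return round(amount * (1 + rate), 2)

assert parse_amount(" 10.00 ") == 10.0
assert apply_vat(10.0) == 12.0

# Higher-level unit integrating the tested parts, tested next.
def price_with_vat(text: str) -> float:
    return apply_vat(parse_amount(text))

assert price_with_vat("10.00") == 12.0
```

Because the lower units were tested first, an integration failure at the top level points at the glue code rather than at the units themselves.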

11. GUI testing.

Some common problems:

mandatory fields not mandatory,

database not accessible,

data not validated,

wrong fields retrieved by queries,

incorrect search criteria,

field order wrong,

data on screen not current,

control state fails to match window state,

menu options do not match window state.

Be systematic and record what you do.
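The first problems in the list - mandatory fields not mandatory, data not validated - can be caught systematically if the validation layer behind the GUI is testable on its own. A GUI-free sketch with illustrative field names:

```python
def validate_form(data: dict) -> list:
    """Hypothetical validation layer behind a customer form."""
    errors = []
    for field in ("name", "email"):            # mandatory fields
        if not data.get(field, "").strip():
            errors.append(f"{field} is mandatory")
    if data.get("email") and "@" not in data["email"]:
        errors.append("email is not valid")
    return errors

# Mandatory fields really are mandatory:
assert "name is mandatory" in validate_form({"email": "a@b.com"})
# Data really is validated:
assert "email is not valid" in validate_form({"name": "Ann", "email": "nope"})
# A correct form passes:
assert validate_form({"name": "Ann", "email": "a@b.com"}) == []
```

Separating validation from the screen makes these checks repeatable and recordable, in line with "be systematic and record what you do".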

12. Debugging.

This is always difficult, since it is not always easy to relate an error to the code causing it.

Think laterally, if necessary.

Write down your assumptions about the way you think the code is working and show a colleague.

Every bug fix must be tested.

Fixing one bug may introduce another.

Once debugging is done redo the user tests.
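"Every bug fix must be tested" is usually enforced by adding a regression test that reproduced the bug before the fix and passes after it - and re-running the surrounding cases, since fixing one bug may introduce another. A sketch, with a hypothetical off-by-one bug in a paging helper:

```python
def page_count(total_items: int, page_size: int) -> int:
    """Fixed version. The original (hypothetical) bug used
    total_items // page_size, reporting 2 pages for 21 items of size 10."""
    return max(1, -(-total_items // page_size))   # ceiling division

# Regression test capturing the reported bug:
assert page_count(21, 10) == 3
# Surrounding cases, re-run so the fix does not break them:
assert page_count(20, 10) == 2
assert page_count(1, 10) == 1
assert page_count(0, 10) == 1
```

Keeping the regression test in the suite means the same bug cannot silently return later.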

Cautionary thought: testing is never done - there will always be unintended bugs in code. Make sure that they are inconsequential!