Controlled experiments can be used to test the usability of discrete elements of a prototype or simulation of a future system. Representative users are asked to perform tasks with the prototype to help clarify the details of a user requirements specification. The elements should be capable of being tested separately from the product as a whole; for example, input devices and icon sets might be examined.
Typical Application Areas
Useful for testing user behaviour in response to a prototype.
Analysis Methods
User testing often results in a wealth of information, which must be systematically analysed to clarify user requirements. It is important to establish, before embarking on controlled testing, which events will be identified and analysed.
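As a minimal illustration (the event codes and log format below are hypothetical, not prescribed by this handbook), events agreed in advance might be tallied from a session log as follows:

    from collections import Counter

    # Hypothetical event codes agreed before the test sessions begin.
    EVENT_CODES = {"task_complete", "task_abandoned", "error", "help_request"}

    def tally_events(log_lines):
        """Count predefined events in an observation log.

        Each line is assumed to read "<timestamp> <subject_id> <event_code>";
        codes outside the agreed set are ignored.
        """
        counts = Counter()
        for line in log_lines:
            parts = line.split()
            if len(parts) >= 3 and parts[2] in EVENT_CODES:
                counts[parts[2]] += 1
        return counts

    log = ["00:02:40 S01 error",
           "00:03:05 S01 help_request",
           "00:04:55 S01 task_complete"]
    print(tally_events(log))  # Counter({'error': 1, 'help_request': 1, ...})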
Benefits
User trials indicate how users are likely to react to the real system once it is built.
Provides experimental evidence of the problems that users may encounter with the future system.
Enables the design team to compare existing products as a way of considering future options.
Limitations
If the trials are too controlled, they may not give a realistic assessment of how users will perform with the system.
Can be an expensive method, with many days needed to set up the trials, test with users and analyse results. Its inputs to the design process may be too slow for the intended timescales.
What you need
The method of controlled testing depends on recruiting a set of suitable subjects as users of the system. It also requires equipment to run the system and record evaluation data; this may include a video camera, video recorder and microphones. It will also be necessary to set up the tests in a suitable environment, such as a laboratory or quiet area.
Process
To prepare for controlled testing, the following items are required:
1. A description of the test tasks and scenarios.
2. A simple test procedure with written instructions.
3. A description of usability goals to be considered and criteria for assessing their achievement (see section 3.9).
4. A predefined format to identify problems.
5. A debriefing interview guide.
6. A procedure to rank problems.
7. Data recording tools to apply during the test session (e.g. observation sheets) and afterwards (e.g. questionnaire and interview schedules).
8. A distribution of testing roles within the design team (e.g. overseeing the session, running the tests with the user, observing, controlling recording equipment, etc.).
9. An estimate of the number of subjects required (possibly firmed up after the pilot test session).
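For item 9, Nielsen (1993) discusses a simple model in which the proportion of usability problems found by n subjects is approximately 1 - (1 - p)^n, where p is the probability that a single subject encounters a given problem (often taken to be around 0.3). A minimal sketch of such an estimate, assuming that model:

    def proportion_found(n_subjects, p=0.31):
        """Approximate proportion of problems uncovered by n subjects,
        using the 1 - (1 - p)**n model; p is the assumed chance that
        one subject encounters a given problem."""
        return 1 - (1 - p) ** n_subjects

    for n in (3, 5, 8, 12):
        print(n, round(proportion_found(n), 2))
    # With p = 0.31: 3 -> 0.67, 5 -> 0.84, 8 -> 0.95, 12 -> 0.99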
Once the trials have been run, the data is analysed and problems are ranked by severity in an implications report.
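One way to rank problems for the report is to weight each by how many subjects it affected and how serious its impact was. A minimal sketch of such a ranking (the records and the weighting scheme are assumptions for illustration, not part of this handbook):

    # Hypothetical problem records: (description, subjects_affected, impact 1-3).
    problems = [
        ("Icon for 'save' mistaken for 'delete'", 6, 3),
        ("Input device too sensitive for menu selection", 4, 2),
        ("Help text uses unfamiliar terminology", 2, 1),
    ]

    def severity(record):
        # Assumed scheme: severity = subjects affected x impact rating.
        _, affected, impact = record
        return affected * impact

    for desc, affected, impact in sorted(problems, key=severity, reverse=True):
        print(f"{affected * impact:>2}  {desc}  (affected={affected}, impact={impact})")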
Practical guidelines
• Conduct the tests in an informal and friendly atmosphere.
• Allow enough time between sessions for overruns.
• Make arrangements for telephone calls to be taken by someone else rather than interrupting the session.
• Make it clear to subjects that it is the system being tested, not them.
• Make it clear beforehand how much subjects will be paid for the whole session. Avoid flexible payment based on time spent in the session.
• Avoid prompting the user too much during the session.
• If the user does get completely stuck, do not force them to continue; either help them or move on to the next task.
Further information
Lindgaard (1994), Maguire (1996), Nielsen (1993).
Refer to RESPECT deliverable D6.2 for further information on performing controlled tests involving users with impairments and disabilities, as well as elderly and young users.