In the current climate of cost cutting and downsizing, training professionals are under increasing pressure to provide positive, measurable evidence of the bottom-line contribution of training. Yet evaluating the effects and results of training is notoriously difficult. Here are a few thoughts on evaluation, collected from our own experiences, which you may find useful in planning what to evaluate and how to go about doing it.
The purpose of evaluation
It can be argued that the only training which organizations should be providing is training which contributes in some tangible way to both performance improvement and the achievement of business needs and objectives.
Organizations which are working towards the Investors in People (IIP) initiative will be aware that evaluation is one requirement against which their training performance is tested as part of the recognition process:
The whole concept of Investors in People revolves around the idea that an organization which invests sensibly in developing its human resource will be better able to compete, and to meet its objectives effectively.
Expenditure on training, therefore, is an investment for the future - and like any investment, it needs to show a demonstrable return. If training is to be perceived as a worthwhile investment there must be:
Evaluation is the process by which training professionals collect, analyse and present the information needed to prove the relationship between training activity and benefits to the business - in other words, the business value of any training delivered within their organization.
A common mistake which many organizations make is to leave evaluation considerations until after they have planned and delivered their training. For evaluation to be truly effective, the evaluation strategy needs to be planned and designed in conjunction with, rather than after, the training planning and design stages.
Key questions to ask include the following:
Answers to Question 1 will inform the content and design of the training itself.
Answers to Questions 2 and 3 will link the training to identified business needs and objectives.
Answers to Question 4 will inform the design of the evaluation strategy, and the evaluation methods to be used.
Levels of evaluation
Training can impact on the organization at various levels. One of the first steps which training professionals will need to go through when planning their evaluation strategy is to decide what they will be measuring at each level, when they will measure it, and how.
Although it has been around for nearly 20 years, the Kirkpatrick Model (1979) is still the most widely used model for determining and planning different levels of evaluation:
Whilst the Kirkpatrick Model is useful as it stands, it can be further enhanced by the addition of a Level 0 and a Level 5.
Level 1 (Reactions) is concerned with the measurement of people’s immediate attitudes to the training provided:
‘Happy sheets’, feedback during the training and assessments by the trainer of the materials used are most commonly used at this level.
Level 2 (Learning) is concerned with measuring the learning achieved as a result of the training:
Questionnaires, quizzes, and practical tests to check for any change in knowledge, skill or attitude are useful at this level.
Level 3 (Behaviour) is concerned with measuring how actual workplace performance has changed as a result of the training:
Evaluation at this level usually requires the involvement of Line Managers in setting post-training assignments which require and test newly acquired learning, or in observing and giving feedback about changes in day-to-day workplace performance.
Level 4 (Results) is concerned with measuring the extent to which changes in performance have contributed to improved business results or the more effective achievement of business objectives.
Measurement at this level needs to focus back to the identified contribution which the training would make to the performance and/or needs of the organization as a whole. If, for example, one of the reasons for the training was to assist the organization in its need to reduce production costs, then the measurement would need to involve some comparison of pre- and post-training production costs. As with Level 3, there would need to be some involvement from those outside the training function in putting together and providing the required figures for comparison.

Level 0 (Pre-training performance) is important, since it provides the initial benchmark against which training effectiveness can be measured. Pre-training knowledge, skill and attitude checks are invaluable in providing a point for comparison with Level 2 and Level 3 evaluation data.
Level 5 (Return on investment) is also important, since it puts a financial value on having delivered the training, i.e., a cost-benefit ratio: (financial value of the change or effect achieved ÷ cost of achieving it) × 100 = % return on investment.
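The Level 5 calculation above can be sketched in a few lines of code. The function name and the example figures below are illustrative assumptions, not drawn from the article itself:

```python
def roi_percent(financial_value, training_cost):
    """Level 5 return on investment, as described above:
    (financial value of change achieved / cost of achieving it) x 100."""
    return financial_value / training_cost * 100

# Hypothetical example: training costing 20,000 that produced
# measurable savings of 50,000 yields a 250% return.
print(roi_percent(50_000, 20_000))  # prints 250.0
```

Note that the figure is only as reliable as the Level 4 measurement behind it: the "financial value of change achieved" must come from a genuine pre- and post-training comparison, not an estimate made by the training function alone.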
It is important not to get ‘carried away’ with an evaluation effort which is disproportionately greater than the investment made, or the benefit likely to be achieved. The IPD study ‘Making Training Pay’ (1997) suggests that the scope of an evaluation strategy should be carefully weighed against the following considerations:
Training which is critical, uses unproven methods, will be repeated in a roll-out to all staff, and which has been costly to produce, obviously merits a far more extensive evaluation effort than, for example, a one-off course delivered to a small, select group.

© DBA 1998