The Kirkpatrick Model is a widely-used, four-level training evaluation method that benefits both learners and educators by allowing them to understand the value and impact certain training has had on a team.
Donald Kirkpatrick first published his ideas about training evaluation in 1959, but it wasn’t until 1975, when he further defined them in his book Evaluating Training Programs, that they began to command industry attention.
Since then, awareness of his ideas has gradually increased, bolstered by a redefinition and update in his 1998 book, Evaluating Training Programs: The Four Levels.
The rest as they say is history, and today, Kirkpatrick’s Evaluation Model has arguably become the industry standard within the learning and development community.
The four levels of Kirkpatrick’s Evaluation Model are:

Level 1: Reaction
The extent to which learners find the training agreeable, relevant and engaging.
Trainee satisfaction levels are usually assessed using a feedback form, often referred to as a ‘Happy Sheet’.
Verbal reactions and post-training surveys can also be used to assess reactions.
What’s great about this level of assessment is that it’s quick, easy to do and inexpensive.

Level 2: Learning
The increase in knowledge and capability experienced by the learner.
This is usually assessed by comparing the results of tests carried out before and after training.
Assessment can also be done via interview or observation.
Like Level 1, it’s relatively easy to set up and is useful for assessing quantifiable skills.

Level 3: Behaviour
The extent to which learners apply their learning in the working environment.
Compared to Levels 1 and 2, Level 3 requires much more participation and skilled observation from line managers.
Behaviour is assessed via observation and interview over time, to determine whether behaviour has changed, how relevant that change is, and whether it is sustained.

Level 4: Results
The overall impact that the trainee’s performance has on the business or working environment.
This represents a fundamentally different challenge from Levels 1 to 3, which assess individuals: here, the aim is to relate the trainee’s behaviour change to real bottom-line improvements and organisational performance metrics in a credible way.
A unit of change in learning should be directly linked to a specific improvement in a key organisational metric.
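The pre- and post-test comparison used at Level 2 can be sketched in a few lines. This is a minimal illustration only; the names and scores below are hypothetical:

```python
# Hypothetical pre- and post-training test scores (percent correct).
pre_scores = {"Asha": 55, "Ben": 60, "Carla": 70}
post_scores = {"Asha": 80, "Ben": 75, "Carla": 90}

def average_gain(pre, post):
    """Mean score improvement across learners who sat both tests."""
    gains = [post[name] - pre[name] for name in pre if name in post]
    return sum(gains) / len(gains)

print(f"Average learning gain: {average_gain(pre_scores, post_scores):.1f} points")
# → Average learning gain: 20.0 points
```

In practice you would also want to look at the spread of gains, not just the average, since a strong mean can hide learners who made no progress at all.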
It appears a sound and attractive theory, so how does a willing L&D practitioner apply the Kirkpatrick Model in the workplace?
Even though Level 1 evaluations (training feedback forms) are unlikely to raise an eyebrow in the boardroom, they should not be dismissed as lightweight.
They have an important role to play in helping you develop engaging training, without which learning will be impaired, and the higher levels of training evaluation will be compromised.
For example, a lack of post-training learning at Level 2 might be due to poor training delivery that can easily be identified at Level 1.
Key criteria that you’ll be looking to assess in Level 1 can include some of the following examples:
Once you have collected all the data, it’s crucial that you act on it where appropriate, delivering constructive changes based on feedback and suggestions from your learners.
Level 2 learning evaluation looks at knowledge acquisition.
You can also make modifications to Level 2 evaluation processes, by incorporating modern gamification tactics and processes.
By gamifying this part of the evaluation process using leaderboards and badges, you can reward learning and create a healthy sense of competition that boosts learner engagement.
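A basic leaderboard of the kind described above is straightforward to build from quiz results; the scores here are hypothetical:

```python
# Hypothetical quiz scores collected during Level 2 evaluation.
scores = {"Dev": 82, "Elin": 95, "Farid": 88}

# Rank learners highest-first to produce a simple leaderboard.
leaderboard = sorted(scores.items(), key=lambda item: item[1], reverse=True)

for rank, (name, score) in enumerate(leaderboard, start=1):
    badge = " [top scorer]" if rank == 1 else ""
    print(f"{rank}. {name}: {score}{badge}")
```

A real LMS would handle this for you, but the principle is the same: ranked scores plus visible recognition for the leaders.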
Interestingly, the way that you assess and recognise knowledge acquisition in Level 2 can ultimately enhance learning.
The following are examples of what can be used to measure the learner’s knowledge:
This level of learning evaluation comes with significantly greater challenges than Levels 1 and 2.
You’ll need to look at how well your learners have modified their behaviour as a result of the training they received.
Are they applying the learning in practice?
You’ll also need to be aware that behaviour change can only be expected to happen in a conducive environment.
For example, let’s say that you skip Levels 1 and 2, just focus on post-training workplace behaviour, and note that no behaviour modification has occurred.
It would be easy to assume that the training didn’t work and that the learners didn’t learn anything.
This could indeed be the case, but it could also be that learning did take place and the learners are simply not applying it.
There are various reasons why learning might not be applied, such as a manager not allowing staff to use the new knowledge, or not providing opportunities for them to practise it.
The reasons can also be more intrinsic, such as the employee having no desire to apply the knowledge, or lacking the confidence to do so.
An important ongoing enabler of Level 3 evaluation is, therefore, to create a work environment that promotes the application of new learning.
Managers should be actively encouraged to consider linking reward and recognition programmes to applied learning by awarding and publicly praising staff for deploying new skills, techniques, and behaviours.
Managers have a big role to play in creating a learning culture in their organisation that involves longitudinal observation and data collection. However, managers are also under time pressure, so learning professionals may need to devote time to supporting and motivating line managers to prioritise this activity.
In Level 3 you’ll be looking to answer some key questions:
1. Have the learners applied any of their learning?
2. Are learners able to train others with their new knowledge, behaviour, or skills?
3. Do learners seem aware that their behaviour has changed?
For Level 3 evaluation to be successful, you’ll need to get managers on board and make the evaluation process as effortless and straightforward as possible to carry out.
The following are examples of what can be used to measure the learner’s behaviour change:
In practice, the final stage of the Kirkpatrick Model of Evaluation will require the biggest investment of time and resources.
You need to make a credible link between specific training and macro benefits (results in the business) to assess the true organisational impact of that training.
The kind of outcomes you may consider trying to link training to are:
There is still one piece of the training and learning evaluation jigsaw left, added by Jack Phillips, who built an important fifth level of training evaluation.
It shows L&D practitioners how to calculate the ROI of training, using the data gathered from Kirkpatrick’s Level 4 evaluation in a more actionable format.
You’ll need your calculators and basic algebra for this stage of analysis.
The Level 5 evaluation equation looks a little like the following:
ROI (%) = ((£ Benefit of Training – £ Cost of Training) / £ Cost of Training) × 100
Here’s a case study example to help you get a feel for this model.
Let’s say that by introducing a new eLearning system you predict that productivity will increase by 20% over the next two years, yielding an additional £100,000 in profit.
This £100,000 is our £ Benefit of Training.
Then, you need to identify how much your training will cost. Let’s say that your LMS implementation costs are £20,000.
Also, let’s say that 50 coders, each with a charge-out rate of £100 an hour, spend an hour in training, at a total lost-opportunity cost of £5,000.
This means your £ Cost of Training is £25,000.
If we run all these figures through the Phillips equation, you are left with your ROI figure to impress the boardroom.
In this case, it is a 300% ROI, recouping three times the original investment in training.
ROI = ((£100,000 – £25,000) / £25,000) × 100 = 300%
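The worked example above can be reproduced in a couple of lines; the figures are the illustrative ones from the case study:

```python
def training_roi(benefit, cost):
    """Phillips ROI: net benefit of training as a percentage of its cost."""
    return (benefit - cost) / cost * 100

benefit = 100_000        # predicted profit uplift from training (£)
cost = 20_000 + 5_000    # LMS implementation plus lost-opportunity cost (£)

print(f"ROI: {training_roi(benefit, cost):.0f}%")  # → ROI: 300%
```

Wrapping the equation in a function like this also makes it easy to run forecasts for several candidate training investments side by side.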
This is quite a simplified look at the ROI calculation and there are greater levels of detail and refinement that will need to be explored in real-world practice.
As you would imagine, this analysis would ordinarily be deployed retrospectively as confirmation of the effectiveness of your training intervention for purposes of recognition and securing future budgets.
However, the equation also plays a role in planning as it can be used to develop ROI forecasts and projections and enable your organisation to make more informed training investment decisions.
So far, this article has covered how useful the Kirkpatrick Model is and the benefits of using it. Here, we look at some of the potential limitations of using the model for training evaluation.
Self-assessment forms are necessarily subjective, and subjects may be completing them in haste so they can get back to their desks or leave work for the day. In addition, any focus group exercises conducted by the training provider may suffer from the subjects’ natural bias towards pleasing the trainer.
On the other hand, if trainees are given post-study surveys much later on, when it’s more convenient and possible for them to be objective, there is a risk that the training won’t be as fresh in their minds.
Another type of bias that needs to be combatted is the tendency of trainees to rate their experiences based on how they concluded, rather than averaging out their experiences over the whole duration of the training.
When assessments are performed after training, they further eat into the time that trainees must set aside. It can be expensive to run assessments, and not all trainees will perform equally well in standardised tests, since anxiety levels, memory, reading ability and cognitive skill all play a part in how well someone performs during an assessment.
Another way to run such assessments is during the training, with a short quiz after each component or module. Most online training requires participants to complete brief tests on each topic before moving on to the next, to avoid participants having to complete lengthy tests at the end.
Here’s where evaluation can become extremely time-consuming and challenging, since it requires ongoing, periodic observation. A manager may simply not have capacity to engage in this sort of oversight. Even where they do have time, and the necessary enthusiasm to devote themselves to the task, such behavioural studies generate reports which usually require actions to be taken.
There’s also the question of how to avoid skewing the results, should staff become aware they are being assessed. Finally, it requires a level of expertise which a manager might simply not possess. To ensure workplace observations like this are carried out to a high standard, it can be wise to retain the services of the training provider.
As with any experiment, you must move from correlation to causation. Just because an improvement has been noted in a particular work area, doesn’t mean that training was the precise cause. Other potential causes might need to be eliminated first, so that you can prove a tangible connection between training and outcome. This requires expertise and a scientific approach to analysis.
Doing this level of evaluation could again prove time consuming and costly. If it occurs too far in time from the training, there’s a danger its relevancy will be questioned. It’s also possible that corporate strategy may have moved on to other concerns.
Although it has been updated, the Kirkpatrick Model is now more than sixty years old. It was designed for a traditional, active, group training model, whereas a large proportion of training in the 21st century happens individually, during ordinary work activities and virtually.
Despite the caveats presented above, the Kirkpatrick Model for training and development evaluation remains the best we have, and it’s still widely adopted in order to validate training endeavours.
If you’re an L&D professional looking for engaging eLearning content, Skillshub can help you with our off-the-shelf courses or can help you to create bespoke eLearning content.
We also offer an online learning platform for businesses to house all their training materials, that is capable of creating personal learning plans and can be accessed on any device.
If you’re ready to level up your L&D strategy further with the help of an eLearning company, get in touch with us today!
Sean is the CEO of Skillshub. He’s a published author and has been featured on CNN, BBC and ITV as a leading authority in the learning and development industry. Sean is responsible for the vision and strategy at Skillshub, helping to ensure innovation within the company.
Updated on: 7 February, 2022
Originally published: 10 April, 2018