It’s impossible to talk about evaluating the effectiveness of learning solutions without discussing Donald Kirkpatrick. In 1959, Kirkpatrick defined four levels of training evaluation that are commonly used today to guide and categorize learning evaluation activities. In fact, Kirkpatrick’s four levels have become so ubiquitous in the L&D world that people will often simply refer to “level 2 evaluation” without even mentioning Kirkpatrick’s model, assuming the person they’re talking to will recognize it.
In case you’re not familiar with the model, here it is:
1. **Reaction.** What did the learner feel about the learning experience? Was it enjoyable? Did they like the trainer? This level is normally captured by surveys following the training.
2. **Learning.** Did the learner actually learn anything? Did their knowledge and skills improve? This level is normally captured by assessments at the end of the training, and sometimes at the start as well to illustrate a difference. With much e-learning content, level 2 is the only level that’s measured.
3. **Behavior.** Did the learner actually do anything different as a result of the training? For example, if the training was designed to encourage salespeople to spend time discussing the customer’s problems before proposing solutions, do salespeople who completed the training now do that? This level is sometimes evaluated by surveying the learner and/or their manager some time after the training. Often it is not measured at all.
4. **Results.** What was the effect of the training on the business as a whole? For example, has there been an increase in sales? This level can only really be measured by looking at business data relating to the training. This data is normally already captured by the business, but it is often not compared to training data, and L&D departments may not have ready access to it.
A common criticism of Kirkpatrick concerns not the model itself but how it is applied in practice. Organizations generally do well at evaluating levels one and two, but either never get around to levels three and four or aren’t able to evaluate them.
Tin Can makes it easier for you to evaluate at all four levels, especially levels three and four. You can use Tin Can to record learner behavior either by integrating Tin Can directly into business systems to record activity, or by providing mechanisms for learners to record and reflect on their performance. Some organizations, for example, are giving learners mobile apps to photograph or video their work to be assessed by a supervisor or mentor. These assessments can then be compared to data from the learning experience itself to measure the effectiveness of the experience. Business systems also contain data about the impact on the business, and you can use Tin Can to pull this data into a Learning Record Store alongside your data about the learning experience and other evaluation data.
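To make this concrete, here's a minimal sketch of what recording a level 3 (behavior) observation as a Tin Can statement might look like. The statement follows the standard xAPI actor/verb/object JSON structure and uses a standard ADL verb, but the learner, activity ID, and the business scenario are invented for illustration; a real integration would POST this JSON to your LRS's statements endpoint with the appropriate authentication headers.

```python
import json

# Illustrative only: the actor and activity below are made up.
# This records that a salesperson demonstrated the target behavior
# (discussing the customer's problems before proposing solutions).
statement = {
    "actor": {
        "name": "Sally Salesperson",
        "mbox": "mailto:sally@example.com",
    },
    "verb": {
        # A standard verb from the ADL verb vocabulary.
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        # Hypothetical activity ID representing the observed behavior.
        "id": "https://example.com/activities/customer-needs-discussion",
        "definition": {
            "name": {
                "en-US": "Discussed customer's problems before proposing solutions"
            }
        },
    },
}

# Statements travel to the LRS as JSON over HTTP; here we just
# serialize the payload to show its shape.
payload = json.dumps(statement, indent=2)
print(payload)
```

Because statements like this land in the same Learning Record Store as your course-completion and assessment data, the behavior can be queried and compared against the learning experience that was supposed to produce it.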
Collecting data at all four levels allows you to analyze not just the relationship between behavior or results and the training itself, but also the relationship between the levels. Perhaps the training successfully changed behavior and got the sales team focused on the customer’s problems, but that didn’t result in an increase in sales. This finding would challenge the assumption that focusing on customers’ problems was a desired behavior. If you only had the level 4 data, you might have assumed that the training had failed to change behavior, whereas in fact the training worked, but the behavior didn’t. Evaluating at all four levels tells you where the problem lies.
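The cross-level comparison described above can be sketched as a few lines of analysis. This is a hypothetical example: the metrics, thresholds, and figures are all invented, standing in for data you would pull from your LRS and business systems.

```python
# Hypothetical level 3 (behavior) and level 4 (results) data.
# behavior: share of sales calls that open with the customer's problems,
# before and after the training. sales: quarterly revenue.
behavior = {"before": 0.35, "after": 0.80}
sales = {"before": 1_200_000, "after": 1_210_000}

behavior_lift = behavior["after"] - behavior["before"]
sales_lift = (sales["after"] - sales["before"]) / sales["before"]

# Illustrative thresholds: a large behavior change alongside a
# negligible sales change suggests the training worked but the
# targeted behavior doesn't drive the result.
if behavior_lift > 0.2 and sales_lift < 0.05:
    print("Behavior changed but results didn't: question the assumed behavior.")
```

With only the level 4 figures, the same data would read as a training failure; seeing both levels side by side locates the problem in the assumption, not the training.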
Learning and development has needed a practical way to evaluate at all four of Kirkpatrick’s levels for a long time. Now, with Tin Can, putting them into practice is possible.
Watershed LRS is a learning analytics platform that gives you actionable insights from your learning and performance data. Watershed clients are using the LRS to evaluate their learning solutions at all four levels.