I had long wanted to research a PhD and in 2009 I got the chance. It was one of those moments that come along sometimes. I was just about to finish a secondment with a UK government department. This particular department is responsible for spending a lot of taxpayers' money on training. My job was to see whether there was any evidence of a return on that investment. I had spent my year-long secondment looking at what evidence there was and talking to a range of experts and practitioners. Whilst I had found some studies showing the positive impact of training in general, as well as some that were less positive, there did not really seem to be that much around. What I also discovered was that there were few attempts by businesses themselves to evaluate the impact of their investments in training. That seemed odd given that UK companies spend over £40 billion a year training their staff.
It seemed to me that while there was a lot of faith in the benefits of training, there was little evidence to back that faith up. That is where the opportunity to study came in. The department was keen to fund some original research to see what the effect of their investment might be, and I was keen to do a PhD, so off I went.
An economist by background, I naturally started by looking at economic approaches to training evaluation. We will be coming across one of these later in this book, but I started with Cobb-Douglas production functions. Cobb-Douglas production functions, named after their inventors Charles Cobb and Paul Douglas, have been around since 1927 and are still widely used by economists to measure productivity changes, including the impact of training. However, once I started looking properly at them a problem arose, and a big one: they did not work. They are unable, for instance, to directly measure training activity. Where training was shown to have potentially increased productivity, they were not able to explain why it had. Economic approaches to workplace learning also largely focus on the returns on formal qualifications. Much of the learning activity I was interested in did not result in a qualification. In fact, as we will see, the majority of the learning that takes place in organizations is not planned. I needed a different approach.
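For readers who have not come across them, a minimal sketch of the standard two-input Cobb-Douglas form (the textbook version, not the particular specification used in any study mentioned here) is:

$$Y = A\,K^{\alpha}L^{\beta}$$

where $Y$ is output, $K$ is capital, $L$ is labour, $A$ represents total factor productivity and the exponents $\alpha$ and $\beta$ are the output elasticities of capital and labour. Notice that training does not appear as a variable at all in this basic form, which gives a sense of why such models struggle to measure training activity directly.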
Next I decided to look at the methods learning and development (L&D) and human resource (HR) professionals actually use to evaluate training. They should allow clear outcomes to be measured, I reasoned. This inevitably meant looking at Donald Kirkpatrick's (1959) four-level model, which it is fair to say has dominated the field for at least five decades. I also looked at Hamblin's five levels; Warr's Context, Input, Reaction and Outcome (CIRO) model; Kraiger's learning outcome approach; Phillips and Holton's Return on Investment (ROI) model; and a whole lot more. In fact I was staggered by just how many approaches, methods and models are out there. I was also surprised that, although there has been an increase in training evaluation activity since the 1980s, few, very few, organizations actually evaluated the impact of their investment in training.
Jack Phillips's (1991: 5) observation, made over two decades ago, still holds true today: 'when it comes to measurement and evaluation, there appears to be more talk than action'. Perhaps not surprisingly, more than one commentator has described training evaluation as being in crisis (Kirkpatrick and Kirkpatrick, 2010).
Practitioner approaches appeared to be another dead end, so I started looking at the other disciplines, besides economics, that studied workplace learning. There are a lot of them: human resource management, programme evaluation, anthropology, occupational psychology, the sociology of work, learning theory and organizational theory. As I read my way through literally hundreds of academic articles, four things struck me in particular:
1. There is a recent but rapidly growing body of knowledge about what makes workplace learning work.
2. Very little of this knowledge finds its way to where it really matters: those who are responsible for commissioning, planning, organizing, delivering and assessing the value of training (see ).
3. None of the disciplines seemed to talk to each other, and all had different, in fact sometimes conflicting, ways of looking at the same thing (training).
4. If I were a busy L&D or HR professional or manager, I would struggle to apply the techniques used. Here is an example picked at random (the article happens to be on my desk as I write this). It is from an evaluation of a capabilities training scheme: '[t]he results show, that based on the coefficient of determination R² and the goodness of fit determined by the statistic F, from the six regression models analysed only two were significant'. I am not picking on this particular article or its authors; it is actually a very useful piece of research, which considers the impact of training on a range of performance measures including employee commitment. My point is that the article is written for fellow academics in a journal few practitioners probably have access to. This is true for much research, as we will see, and it has created a substantial research-to-practice gap.
By 2011 my PhD had become part-time, as I was spending more of my time actually carrying out evaluations. The focus of my research had also shifted. I was increasingly thinking about how busy practitioners could evaluate their organizations' investment in training in ways that were scientifically robust but also practitioner friendly. This meant, I thought, approaches that are rigorous, in that they tell you something meaningful and reliable about training, but are also applicable, in that they are practicable, recognize the constraints on practitioners and meet the needs of the audience for the evaluation (see ).
While I was working for the government department we organized a seminar to discuss the business case for training. During the seminar quite a senior person from a UK skills agency told me, in so many words, that I was wasting my time (to be fair, he was a bit more polite than that). 'Learning in the workplace is just too complicated to be able to show what its effects have been,' he said. He was right that workplace learning is, like pretty much everything else that goes on in the workplace, complicated. He was wrong, though, to say that this meant we could not evaluate training. Approaches to evaluation that do not grasp this complexity are always in danger of not delivering the goods.
This book is a combination of my PhD research, experience with evaluation and discussions with practitioners. It aims to provide anyone with an interest in training evaluation with the information they need to find out what difference training has made.
List of figures

- Training evaluation
- Complete evaluation
- A workplace learning model
- What to evaluate: be pragmatic!
- What causes positive utility reactions?
- Smiley face rating scale
- Numerical and word rating scale
- Star scale
- Evaluation results
- The learning culture continuum
- Training's impact on job performance: bar chart
- Training's impact on job performance: pie chart
- Learning and development effectiveness ratings
- Word cloud slide
- Variants on Kirkpatrick
- Examples of evaluation approaches
- Organizational evaluation audit
- Decision matrix analysis
- Example of a learning log
- UM's health and safety training evaluation survey
- Observation grid
- Training stakeholder grid
- Example of a stakeholder outcome survey
- Multiple stakeholders