Evaluating Coaching by Kim Stephenson (guest)
In training about, or even discussing, coaching I frequently emphasise that how, what, when etc. are useful questions. They’re non-threatening and can encourage examination of methods, models and ideas.
But why is a lot more powerful – hence it’s often a threat to people.
Asking why can sound like a challenge: “Why (on earth) did you do that?”, “Why would you believe something (stupid) like that?” See what I mean?
But it’s often a good way to get at the motivation and rationale behind something, and that makes it a very powerful tool.
So it is with evaluating coaching. Asking how to do it, or what to do, is useful. But asking why can open a whole different conversation (or can of worms – it’s powerful, remember).
Whether you swear by Covey and “begin with the end in mind” (http://www.amazon.co.uk/Habits-Highly-Effective-People/dp/0684858398), copy Oprah and ask “What do you want?” (https://www.linkedin.com/pulse/simple-life-changing-question-hardly-anyone-can-answer-oprah-winfrey), or agree with me that “why are you going to college?” is a good way to start a book on handling college finance (http://www.amazon.com/Finance-Personal-Making-College-Beyond/dp/1440834369), you end up asking “why are you doing it?”
So why evaluate coaching? What is your end in mind in doing it? What do you want from evaluating it?
There are two dominant reasons for evaluating coaching:
To understand the impact your coaching has, so you can maintain and improve its quality.
To justify the coaching to a client, internal or external.
Quality
If you’ve done a degree (or other research) you’ll know your research “should” be a double-blind, placebo-controlled study with large, stratified samples.
If you’ve studied human sciences, you know that is a lovely idea, but impractical.
You can’t run placebo studies or deceive people about the purpose of the study (the “blind” won’t pass an ethics board), you can’t conceal who is in which group (the “double”), and you can’t lock people in a lab for several months like lab rats to control all the external factors (ethics again!), so you never know whether what you did affected them in the way you expect. And you usually can’t afford a big enough sample, or get one that’s adequately representative; you have to settle for an “opportunity sample”.
One method is to have a “wait group” as a control. You measure the performance of both groups initially, run your coaching intervention with one, measure both again, coach the other, then measure performance a third time. Finding changes if, and only if, a group has had coaching suggests the coaching has worked, because the control group lets you subtract the noise from other performance-improvement initiatives going on at the same time.
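Here is a minimal sketch of that arithmetic in Python; the scores are invented, and a real study would add proper sampling and significance testing:

```python
# Waitlist-control ("wait group") evaluation: all scores are invented.
# Phase 1: group A is coached, group B waits; both measured before and after.
a_pre, a_post = 62.0, 70.5   # mean performance scores, coached group
b_pre, b_post = 61.5, 63.0   # mean performance scores, waitlist group

# Difference-in-differences: subtract the waitlist group's change to strip
# out background noise (other initiatives, practice effects, seasonality).
attributable = (a_post - a_pre) - (b_post - b_pre)
print(f"Change plausibly linked to coaching: {attributable:+.1f} points")

# Phase 2: group B is now coached, then measured a third time.
b_post2 = 71.0
print(f"Waitlist group's change after its own coaching: {b_post2 - b_post:+.1f}")
# If each group improves only after its own coaching phase, that suggests
# (but still does not prove) that the coaching worked.
```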
But even with a big sample you still can’t say it has worked; only that the results suggest it may have done.
Before you start, you also have to find measures of the target behaviours. A common model (derived from training evaluation) is Kirkpatrick’s four-level model: reaction, learning, behaviour and results. The trouble is that while you can coach people to pass tests of knowledge (Kirkpatrick level 2, learning), the higher levels – behaviour (level 3) and bottom-line results (level 4) – are hard to measure at all.
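For reference, here is the model as a small lookup table; the example measures are my own illustrative assumptions, not Kirkpatrick’s:

```python
# Kirkpatrick's four levels; the example measures are illustrative assumptions.
KIRKPATRICK_LEVELS = {
    1: ("reaction", "post-session satisfaction survey"),
    2: ("learning", "score on a test of knowledge"),
    3: ("behaviour", "observed change in how the coachee actually works"),
    4: ("results", "bottom-line impact, e.g. revenue, retention, error rates"),
}

# Measurement gets harder, and attribution murkier, as the level rises.
for level, (name, example) in KIRKPATRICK_LEVELS.items():
    print(f"Level {level} ({name}): e.g. {example}")
```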
How do you know behaviour has changed as a consequence of coaching? You can use a wait group and still not know for sure. You’ll get different degrees of change (even if you can agree on what counts as “behaviour change”) from separate individuals in the same situation (function, company), let alone from widely different individuals in different functions, industries and companies, being coached for different purposes (why again!)
So you can forget doing a meta-analysis (grouping studies together to get bigger numbers and hence greater statistical power), unless you think quality is enhanced by thinking of some numbers and doubling them (in which case your maths skills are not really adequate anyway).
Of course, with sufficient funds, you can create a “file-drawer” study: repeat the study over and over until you get the result you want, publish that one, and file the 10,000 studies that showed the opposite. Ever seen the size of the “old research” files in a large pharmaceutical company?
Generally coaching budgets don’t run to that, but it’s a measure of quantity (of money), not quality (of coaching) anyway!
But then behaviour (level 3) is easy, compared to “bottom line” (level 4).
Like double-blind placebo studies, Return on Investment (ROI) is a gold standard for measuring this sort of thing. ROI is great (I’ve designed and taught it on a finance for non-financial managers MSc): what fun you can have with hurdle rates, internal rates of return, present and future value calculations and so on.
Trouble is, it’s intended for capital projects.
Accounting has conventions that are accepted but, strictly speaking, nonsense; assets don’t depreciate in straight lines, for example. Everybody knows and accepts that these are approximations, and that some figures are guesswork. Most of the time it works well enough (Enron, Northern Rock etc. notwithstanding).
So capital projects like building a widget factory, with known costs, a price for widgets, a time to build (and penalty clauses for cost and time overruns etc.), involve an element of guesswork and approximation, but everybody knows what it is, and you can usably approximate your ROI.
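To see why that works for a factory but not for coaching, here is a minimal sketch of the standard arithmetic; every figure (cost, cash flows, hurdle rate) is invented for illustration:

```python
# Net Present Value (NPV) for a capital project: all figures are invented.
def npv(rate: float, cashflows: list[float]) -> float:
    """Discount each year's cash flow back to today's money and sum."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cashflows))

hurdle_rate = 0.10                      # minimum acceptable annual return
factory = [-1_000_000] + [300_000] * 5  # year 0 build cost, then 5 years of income

value = npv(hurdle_rate, factory)
print(f"NPV at a {hurdle_rate:.0%} hurdle rate: {value:,.0f}")
# Positive NPV (about +137,000 here) means the project clears the hurdle rate.
# The coaching equivalent fails at the second line: you cannot fill in the
# cash-flow list with anything better than guesses.
```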
Now look at an example of coaching managers: you coach them to use a more coaching style with their own staff.
Previously the manager yelled at staff, gave the answer to queries, and in 20 seconds everybody was back at work. Now they say, “Interesting, what do you think you should do?” and discuss it for 20 minutes. Eventually the staff become less reliant on the boss, but in the short run both are tied up for longer, doing less, so productivity goes down. So at what point is the crossover, and how do you know?
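A toy model shows why the crossover is so hard to pin down; every parameter here (query volume, time per query, how fast staff become self-reliant) is an assumption:

```python
# Toy model: cumulative time cost of "just give the answer" vs. coaching style.
# All parameters are invented assumptions.
QUERIES_PER_WEEK = 50
OLD_MINUTES_PER_QUERY = 2     # yell the answer, everyone back to work
NEW_MINUTES_PER_QUERY = 20    # "interesting, what do you think you should do?"
WEEKLY_QUERY_DECAY = 0.90     # staff become 10% less reliant each week

old_total = new_total = 0.0
queries = QUERIES_PER_WEEK
for week in range(1, 301):
    old_total += QUERIES_PER_WEEK * OLD_MINUTES_PER_QUERY
    new_total += queries * NEW_MINUTES_PER_QUERY
    queries *= WEEKLY_QUERY_DECAY
    if new_total <= old_total:
        print(f"Cumulative time costs cross over in week {week}")
        break
else:
    print("No crossover within 300 weeks")
```

With these made-up numbers the cumulative costs break even around week 100; change the decay assumption to 85% or 95% per week and the crossover moves by months in either direction, which is exactly the problem.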
If what you’re coaching is purely mechanical skills, it’s still tricky to work out what comes from what you’ve done. If it’s something abstract and complex such as “managerial skills”, not purely declarative knowledge, it’s impossible to be really accurate.
And you can’t work out a Present Value (PV) for future improvements in “performance” to set against today’s coaching spend: you don’t know when the benefits will start showing through, you can’t say exactly what the behavioural changes are, and you don’t know which bits of them (or how much) come from the coaching, or what they’ll contribute to the bottom line.
ROI isn't designed to do that sort of thing, it includes too many guesses.
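A quick sensitivity check illustrates how far the guesses stretch the answer; every figure and range here is invented:

```python
# Present value of a coaching programme under different guesses.
# All figures are invented; the point is the spread, not the numbers.
from itertools import product

COACHING_COST = 50_000
RATE = 0.10  # assumed discount rate

def pv_of_benefits(annual_benefit, start_year, years=5, rate=RATE):
    """Discount a stream of annual benefits that starts after a delay."""
    return sum(annual_benefit / (1 + rate) ** t
               for t in range(start_year, start_year + years))

annual_benefit_guesses = [10_000, 40_000]   # what is the behaviour change worth?
start_year_guesses = [1, 3]                 # when do benefits show through?
attribution_guesses = [0.25, 1.0]           # how much is due to the coaching?

for benefit, start, share in product(
        annual_benefit_guesses, start_year_guesses, attribution_guesses):
    net = share * pv_of_benefits(benefit, start) - COACHING_COST
    print(f"benefit={benefit:>6}, start=year {start}, "
          f"attribution={share:.0%}: net PV = {net:>10,.0f}")
```

With those guesses, the same programme “proves” anything from a loss of about 40,000 to a gain of over 100,000.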
So if you genuinely want to perform a quality evaluation of your coaching:
Learn how (or buy in some help) to control the variables as far as possible, use a good research design and understand the statistics (there’s a small sketch after this list).
Use a good framework for evaluation.
Understand what it is you’re trying to do: what, exactly, is the coaching supposed to achieve?
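As one small piece of “understanding the statistics”, here is a minimal sketch of a paired before/after comparison using SciPy; the scores are simulated, and on its own this still needs the wait-group design above to say anything about causation:

```python
# Paired pre/post comparison on simulated coachee scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
pre = rng.normal(loc=60, scale=10, size=30)       # scores before coaching
post = pre + rng.normal(loc=4, scale=8, size=30)  # simulated change, with noise

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"mean change: {np.mean(post - pre):+.1f} points")
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value says the change is unlikely to be pure chance; it says
# nothing about whether the coaching caused it. That needs the control group.
```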
Justify
Over the last few hundred years (since Descartes), there’s been a growing belief that logic and reason dictate human thought and action. The assumption is that intuition, “gut instinct” etc. are inferior to logic.
For evaluating the quality of your coaching, for continuous improvement etc., that assumption is useful. Feeling that you’re a good coach, or people saying you are (or paying you), is valueless for evaluation; you need scientific, rational studies, as outlined above.
But if you’re trying to justify coaching to other people, ignore all that. People don’t make decisions logically, from evidence; they make gut-instinct decisions, then look for facts to justify them.
Produce all the data you want: if they don’t want to believe it, they won’t. If information convinced people to behave differently, nobody would smoke, over-eat or drive drunk. People believe what they have decided to believe (creationist Presidential candidates being an example).
I’d suggest reading Influence (http://www.amazon.co.uk/gp/product/006124189X) and Risk Savvy (http://www.amazon.co.uk/Risk-Savvy-Make-Good-Decisions/dp/0241954614) if you want to understand why.
If you go to your company CFO or the Operations Director of a client with data, e.g. an ROI figure, they will make their decision and then use the data to justify that decision, which will be to:
a) buy the coaching, or
b) not buy the coaching.
We all think we’re logical and decide on the data. We don’t. We make the decision, unconsciously and rapidly, then go back and produce a post hoc reason if needed.
It’s been known in sales for millennia – that’s why salespeople sell the sizzle, not the sausage (effective adverts have no meaningful content)!
When you try to justify coaching with logic, ROI, wait-group studies etc., your audience will already have made up their mind and will use the study to support their decision. So if they’ve decided that coaching isn’t the answer, they will use your figures to “prove” it.
The best way to justify to people that coaching is worthwhile is to get them to see the problems of failing to act, the benefits from coaching. Allow them to “own” the problem and decide that they want a solution.
In that case, assuming that you do have a reasonable coaching solution, they will decide that they want it, and will make up or manipulate the figures to “prove” the rightness of their decision.
In conclusion
You can suggest purposes for evaluation other than assessing quality and justifying use – and if you do, you’re back to “why”.
I certainly wouldn’t ignore articles or discussion on “how to” and “what are the techniques for”. But, as with questions in coaching itself, I’d still suggest that “why” is the most powerful question. And since it’s a question to yourself and consequently carries no risk of somebody being offended by implied slurs on their reasoning, I’d seriously consider asking “why am I even thinking about evaluating my coaching” before you start looking at how, what or when to evaluate.
Kim is a former financial advisor and an Associate of the Chartered Insurance Institute (hence the interest in costs, ROI etc.) and now a Chartered Psychologist, coach and tutor/assessor in neuroscience. He’s written two books on the psychology of personal finance and can be contacted on kim@stephenson-consulting.co.uk or via the website, www.tamingthepound.com.