Tony Karrer's eLearning Blog on e-Learning Trends

Tuesday, May 09, 2006

Intermediate Factors in Learning

I apologize in advance for this post, but you are going to have to bear with me as I take you through my statement that:

The most important work we do in corporate learning is to understand Intermediate Factors and how to positively impact them.


I've recently read several comments and posts that talk about "only measuring the outcomes in learning." To point to one, take a look at: From Product Focus to Audience Focus.

In the corporate world we should only really care if the learning is transferred to the job…period! Is the output increased, or of higher quality, because of our learning intervention? This has always been a problem for training departments because we look at everything we do as a product, and we "evaluate" if the product had impact. The approach is totally wrong.


While I completely agree that corporate learning is all about driving human performance that leads to business results, I have a real problem with the implication that you should only measure the outputs.

I've worked on several learning solutions that were tied closely to employee and customer surveys, so I've had a nice opportunity to work with some really geeky, analytic wonks who use things like regression analysis and command serious dollars to figure out what Intermediate Factors ultimately drive results.

For example, we might be working with a bank that knows that loyalty, share of wallet, and customer profitability are the ultimate business measures. But the real question is "What drives these factors?"

A survey research person will create a Causal Model that attempts to explain the Intermediate Factors that ultimately drive these numbers. For example, Satisfaction with the Advice the bank provides, Knowledge of Staff around products, etc., may be Intermediate Factors in their models. Each of these is then tied back to Survey Questions. Then, through analysis of survey responses in comparison to actual numbers, we can determine the correlation between these intermediate factors and the business drivers. This is INCREDIBLY IMPORTANT information!
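To make the idea concrete, here is a minimal sketch of what that correlation analysis looks like. All of the numbers, factor names, and the `pearson` helper are invented for illustration; a real survey analyst would use far larger samples and proper regression models, not six hand-typed data points.

```python
# Hypothetical sketch: correlating survey-measured intermediate factors
# with a business outcome. Every number below is invented.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Imagined survey scores (1-5) for two intermediate factors, per branch,
# alongside each branch's customer profitability index.
advice_quality  = [3.1, 4.2, 2.8, 4.6, 3.9, 2.5]
staff_knowledge = [3.0, 4.0, 3.2, 4.4, 3.7, 2.9]
profitability   = [1.02, 1.35, 0.95, 1.41, 1.20, 0.88]

for name, factor in [("advice quality", advice_quality),
                     ("staff knowledge", staff_knowledge)]:
    r = pearson(factor, profitability)
    print(f"{name}: r = {r:.2f}")
```

A factor that correlates strongly with profitability (r close to 1) is exactly the kind of thing the analysts flag as a driver worth targeting.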

Why is this so important? In most cases, I can't change the ultimate output directly. However, I often can aim my solutions at the Intermediate Factors. I can improve advice, increase knowledge. These are the performance drivers I go after.

Ah, well, the people who talk about "only measuring output" will say that all we've done here is understand the "outputs" and there's definite truth to that argument. I would certainly agree that these are important "outputs" to also measure. In fact, I would say that these can be used to define and measure the performance that we are going for.

However, even if I now recognize that providing quality advice is the performance measure (as measured in survey responses), I still have more work to do in order to determine the drivers of this. As learning professionals, we then spend our time understanding what practices drive this performance. We identify a series of additional intermediate factors (product knowledge, solution selling, etc.) that we need to attack.

The bottom line is that for us to be able to have a systematic method for improving the "output" we need to be able to define all of these intermediate factors AND we need to measure the impact we are having on intermediate factors. Again,
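Measuring our impact on an intermediate factor can be as simple as comparing the factor's survey scores before and after the intervention. The sketch below uses invented scores for a single factor; a real evaluation would need a proper sample size and a significance test, not just a difference in means.

```python
# Hypothetical sketch: did a learning intervention move an intermediate
# factor? Scores are invented survey responses (1-5) on one factor,
# collected before and after the training.

def mean(xs):
    return sum(xs) / len(xs)

before = [2.9, 3.1, 2.7, 3.3, 2.8, 3.0, 2.6, 3.2]
after  = [3.4, 3.8, 3.1, 3.9, 3.3, 3.6, 3.0, 3.7]

lift = mean(after) - mean(before)
print(f"Mean score before: {mean(before):.2f}")
print(f"Mean score after:  {mean(after):.2f}")
print(f"Lift on the intermediate factor: {lift:+.2f}")
```

If the intermediate factor moved and we already know its correlation with the business driver, we have a defensible story for how the solution contributed to the end numbers.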

The most important work we do in corporate learning is to understand Intermediate Factors and how to positively impact them.


Yes, I care about the end result, but unless you can tell me the intermediate factors, how you will impact them, and ideally measure your impact on them, then why should I believe that your learning solution is going to work?

Brent said in his post:

The process is continuous and if our “training solution” is organic, dynamic, and flexible, it is very difficult to measure using the current method of measuring learning products. My point is “who cares”. If we have set up environments that help people collaborate, and support their informal learning, we should see output improvements.


"Who cares"? Well I do. And, actually, the business does. If you create an "organic, dynamic, flexible" learning solution but can't explain how it impacts the end numbers, then: (a) you won't get credit, (b) you won't know if you can repeat it successfully, and (c) you won't know if its really working.

Keywords: eLearning Trends, Informal Learning
