Evidence-Based Management

Module 14 Assess - evaluate the outcome of the decision taken

May 25, 2022 Season 1 Episode 14

This episode accompanies Module 14 of the course, which is about evaluating the outcome of the decision we’ve taken or the solution we implemented. This is the last stage of our evidence-based management process and is vital to ensuring that we learn from what we’ve done.

Assessing outcomes is vital because otherwise, how do we know whether what we did was effective, and how can we learn and develop our approach to decision making? Did we capture a baseline before the decision was implemented? And was the decision implemented as planned? If we assess outcomes without these two things, our conclusions could be highly suspect.

We continue our case study of the large trial at pharmaceutical organization Sandoz (part of Novartis Group), and find out how the D&I interventions were assessed.

Denise Rousseau's 2020 paper: Making Evidence-Based Organizational Decisions in an Uncertain World

After action review paper: https://www.cebma.org/wp-content/uploads/Guide-to-the-after_action_review.pdf 

Host: Karen Plum

Guests:

Eric Barends, Managing Director, Center for Evidence-Based Management
Denise Rousseau, Professor, Carnegie Mellon University
Stefanie Nickel, Global Head of Diversity and Inclusion, Sandoz

Additional speaker, courtesy of CIPD: Niamh McNamara, Global Head of People & Organisation, Novartis

Find out more about the course here:   https://cebma.org/resources-and-tools/course-modules/ 

00:00:00 Karen Plum

Welcome to the Evidence-Based Management Podcast. This episode accompanies Module 14 of the course, which is about evaluating the outcome of the decision we've taken, or the solution we implemented. This is the last stage of our evidence-based management process and it's vital to ensuring that we learn from what we've done. 

I'm Karen Plum, a fellow student of evidence-based management, and in this episode we hear from three of the guests who contributed to episode 13: Eric Barends, Managing Director of the Center for Evidence-Based Management; Professor Denise Rousseau from Carnegie Mellon University; and Stefanie Nickel, Global Head of Diversity and Inclusion at pharmaceutical manufacturer Sandoz, part of the Novartis group.

Also in this episode we'll hear from Niamh McNamara, Global Head of People and Organization at Novartis. 

Let's hear what they have to say about assessing the outcomes of decisions and solutions. 

00:01:11 Karen Plum

In this episode, we will hear more from Stefanie about the trial she ran on some interventions designed to strengthen her organization's approach to diversity and inclusion. You may remember that she explained the background to the trials, and what they did, in episode 13.

But firstly I wanted to link back to one of the resources from the course. It's a paper written by Denise Rousseau in 2020 called Making Evidence-Based Organizational Decisions in an Uncertain World. If you haven't read it, it's absolutely worth your time, and I've put a link in the episode show notes so you can find it easily. 

I asked Denise to outline the key theme of the paper, but believe me, this only hints at the wealth of detail it contains. 

00:01:55 Denise Rousseau

I think the idea - a core idea - is that there are different kinds of decisions that we make. And the decision research highlights that different processes are required to make particular kinds of decisions. The good news is there are only about three different kinds, and we can remember that.

So the first issue is that many decisions are actually kind of routine. We make them in different guises over and over again. I think it was Peter Drucker who said that most management decisions are routine decisions wearing the guise of uniqueness. I love that phrase! So this has to do with making selection decisions, or looking for vendors, or buying new equipment - the features of the decision have a lot of elements in common, over and over and over again.

When you make a certain kind of decision repeatedly, let's call that a routine decision. There is very likely to be a kind of critical path to making that decision well - certain kinds of features will recur over and over again in terms of what you have to know about the kind of equipment you're trying to purchase, what you have to know about the price and the availability, issues of delivery. There are structured features of that relatively routine decision.

If you lay those out, make sure you don't drop any one of them - almost checklist it - you can come up with a pretty consistent, high-quality outcome.

And over time, by routinizing that decision, you'll improve it, because as Frederick Taylor told us, back in 1912, if you want to improve something, you've got to standardize it. Which is just to say you've got to identify its common features so that you keep attention on each of those things, make your decision, then go back and look at your process.

Was there anything I missed, anything I should have added? Over time that routine gets better. And the idea that a lot of what we make decisions on in organizations are repeats tells us that we should probably not be treating them like one-offs - it would be good to have a checklist, a framework, so we don't leave out any important bits. And over time we can look at our decision process, see if things have changed, and decide whether we should update something.

So probably the most common kinds of decisions are those repeats, and most MBA programs give no attention or training to helping managers make routine decisions, even though that might be, you know, 50% of their time. And I think that's unfortunate because, of course, if this were medicine, a lot of attention would go to the frequently recurring, presenting set of symptoms and how they should be treated - because they're a cold, or they're the flu, or hypertension, or this, that or the other.

In management we don't have that, and that means we're wasting a lot of prior experience that we could be relying on to make a good decision. We're also not improving what we could, and we're wasting time treating things as if they're exceptions when they aren't. That's one of the key ideas from that paper.

00:05:12 Karen Plum

Perhaps the issue with routine decisions is that they aren't as interesting, as sexy or as challenging as the ones we perceive to be new or innovative. Perhaps we'd rather consider our decisions unique because we feel we add more value to them. The other thought here is: are we really clear about the problem we're trying to solve? Are we turning a routine decision into something different because it plays into something else we'd like to achieve?

Thinking back to earlier parts of the process, Denise emphasizes that reflection before action is incredibly valuable, and this is borne out by research into decision making. 

00:05:51 Denise Rousseau

One of the key issues we learn from decision making research is that reflection before action is incredibly valuable. I think it might have been Einstein who said that if he had an hour to tackle a complicated problem, he would spend 50 minutes trying to understand the problem and reflecting on it. And then the last 10 minutes trying to work towards a solution to the problem. And I think that proportion is good. 

We're not talking analysis paralysis here - what we are talking about is, before we make assumptions and start down any path, stepping back and considering the alternatives. In evidence-based practice, the first issue is that we try to think about what problems we're really trying to solve, so we don't get so solution-oriented that we go off target. The second issue is that, after we take time to think about what we're trying to make happen, the secret sauce of making good decisions is taking some time to generate alternative courses of action.

The research on decision making that has been done in organizations, in natural settings, highlights that the best decisions an organization has made in the last year or so are those decisions for which alternatives were considered before the choice was made.

The worst decisions are where they went with one path and never considered an alternative - the easily available solution was chosen. The idea here is that there's value in that reflection. Even if you go with your first choice, by reflecting you're more thoughtful about what it could mean, and more thoughtful about the effort it will take, because you've had a comparator - the other solution you considered. And that informs the thinking, both in the decision being made and in the implementation.

00:07:46 Karen Plum

Assuming you took that time and considered alternatives, you'll have implemented your decision in some way or another. 

So now it's time to consider - what happened? Were the outcomes as expected? What can we learn from the experience? For many managers this will be a new step, as very often no assessment is routinely carried out to evaluate decisions and their outcomes, as Eric explains. 

00:08:12 Eric Barends

The Assess part of the evidence-based approach is crucial because otherwise you obviously don't know whether you've made the right decision or whether you solved your problem. People ask all the time, oh, what's actually the evidence for evidence-based management? Well, you can only say when you measure the outcome. And actually that's a more general question: do you, in your organization, assess or measure or evaluate the outcomes of the decisions you make, or the projects you run, or the interventions you did, or the new methods you applied?

Unfortunately, most companies have to answer that question with "no, we don't". And that makes it very, very hard, because then, if you assume there is a problem, you don't know how serious that problem is - you haven't measured it. And the same goes for being able to say something about the way of working, the things you do right now, the status quo: does it work? Well, you only know when you measure it. And the point we always make is: get a baseline. You need to have a baseline, because otherwise you can't take an evidence-based approach.
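
To make Eric's point concrete, here is a minimal sketch in Python. The metric names and numbers are invented for illustration - they are not figures from the Sandoz trial - and the point is simply that the post-measures have nothing to be compared against unless a baseline was recorded first.

```python
# Illustrative only: metric names and values are made up.
baseline = {"psych_safety": 3.1, "meetings_per_week": 2.0}  # measured BEFORE the change
post = {"psych_safety": 3.8, "meetings_per_week": 2.5}      # measured AFTER the change

for metric, before in baseline.items():
    after = post[metric]
    print(f"{metric}: {before} -> {after} ({100 * (after - before) / before:+.0f}%)")

# Without the `baseline` dict there is nothing to subtract from:
# the post-measures alone cannot tell you whether anything changed.
```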

00:09:40 Karen Plum

If you don't know where you started, then you have no way to know, or to demonstrate, what the change has been post-implementation. And part of this evaluation is also understanding whether the decision was executed as planned.

If there have been even subtle changes, those could affect the whole nature of the intervention that was recommended. Maybe the people responsible don't understand why they've been asked to do things in a particular way, so they make adaptations. 

00:10:09 Eric Barends

If a decision is made, or a new way of working is implemented, or a new approach or model is applied, then before you start measuring things, the first step you should take is to ask: was it executed, and was it executed as planned? In terms of implementation, change management and things like that, many things in organizations go wrong, or can go wrong, so it doesn't make sense to measure or assess the outcome if you're not certain, to start with, whether it was indeed executed as planned and in the correct way.

00:10:48 Karen Plum

This may be easy if you've been involved in the implementation, but if you haven't then getting to the bottom of what's being done could be tricky. Implementation is of course fundamentally important to success, and if it wasn't carried out as recommended then drawing conclusions from the outcomes could be misleading or even risky. 

Although some organisations measure a lot of outcomes, lots do not, and this is one of the reasons in my view that change programs receive such a bad press. People so often say “we don't do change well in this organization” and that could be because not only do we fail to use multiple sources of evidence before making a decision, we fail to consider alternatives and then we fail to learn from what we did. We just move on to the next initiative. The need to learn from what we did is paramount. 

One of the core ideas in the module is the after-action review - a genuine method for learning from the implementation of our initiative. Eric told me there's a paper on the CEBMa website about this topic, which is the most downloaded paper from the site, so it clearly resonates with people. There's a link to the paper in the show notes for this episode.

00:12:03 Eric Barends

An after-action review, as we explain in the module, is not your gold standard, and it's not your silver option - it's what you should do as a minimum if you can't do anything else. It's the bare minimum. I think it's the bare minimum in every organization, whatever the decision being made, the change applied or the new working method.

Sit down with your people and do what we refer to as an after-action review. How did we do this? What did we set out to do? What did we do? What can we do better? And I think it's an absolutely correct observation that this is about learning. During an after-action review, everyone is equal. The learning experience matters, and there are no ranks or sensibilities whatsoever.

You should speak up freely, and this is referred to in the scientific literature as psychological safety. It should be psychologically safe to come forward - maybe as the youngest employee with the least experience - and say, this is what I notice, I wonder if we could do X or Y. That's OK. It's not about your position, it's not about the HiPPOs (the highest paid person's opinion) again - everyone can, you know, share his or her experiences, and it's all about learning.

And it's a learning tool, maybe not even so much an assessment tool. I would argue that the assessment and the outcome and the conclusions are fine, but it's even more important to figure out: what can we do better? What can we learn from this for next time?

But again, it's the bare minimum - it's only a post-measure. You should have a baseline - I keep saying this because otherwise you can't say anything. It would be great to have a control group and so on, but if that's too complicated or not feasible, which is often the case in organizations, then as a minimum, do the after-action review.
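
Eric's four questions give the after-action review a simple, repeatable shape. As a purely illustrative sketch - the field names and example answers are ours, not from the CEBMa guide - the output of one review could be captured as a small structured record:

```python
from dataclasses import dataclass, field

@dataclass
class AfterActionReview:
    """One review, structured around the questions Eric lists."""
    set_out_to_do: str                 # What did we set out to do?
    what_we_did: str                   # What did we actually do?
    how_we_did_it: str                 # How did we do this?
    do_better: list[str] = field(default_factory=list)  # What can we do better?

# Hypothetical example, loosely echoing the Sandoz story:
review = AfterActionReview(
    set_out_to_do="Strengthen psychological safety across teams",
    what_we_did="Ran a six-week trial of simple manager interventions",
    how_we_did_it="Managers asked each team member 'how can I support you?'",
    do_better=["Capture the baseline earlier", "Schedule the review up front"],
)
```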

00:14:12 Karen Plum

I think the after-action review is so powerful. The opportunity for everyone to have an equal voice is very affirming. It's a great expression of psychological safety, and on that note, I think it's time we return to our Sandoz story to hear about the outcomes of the trials that Stefanie Nickel shared in episode 13, so first things first, did they start with a baseline? 

00:14:35 Stefanie Nickel

So we started with a baseline, and having a control group, which was randomly assigned, also really helped us understand what the baseline is. And because we do speak about psychological safety a lot in the organization, we wanted to capture that - so I think having a control group is pretty important.

Randomization to the different arms is equally important, to really understand causation - is what we are doing responsible for what we are seeing? That sparked a whole dialogue around what kind of data we are looking at and how we can be more systematic in our evaluation. I also have to say it was interesting that when we talked about having a control group, there was a sense of "are we keeping something good from people?" - which is interesting, right?
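
The logic Stefanie describes - random assignment to arms, plus each team's change from its own baseline - can be sketched in a few lines of Python. The arm names and helper functions here are illustrative assumptions, not the Sandoz protocol:

```python
import random

def assign_arms(team_ids, arms=("control", "individuation")):
    """Shuffle the teams, then deal them into arms in roughly equal numbers."""
    shuffled = list(team_ids)
    random.shuffle(shuffled)
    return {team: arms[i % len(arms)] for i, team in enumerate(shuffled)}

def effect_vs_control(change_by_team, assignment, treatment="individuation"):
    """Mean (post - baseline) change in the treatment arm minus control.

    `change_by_team` maps each team to its change from its own baseline,
    so the comparison uses the pre-measure Stefanie describes.
    """
    def arm_mean(arm):
        values = [c for team, c in change_by_team.items() if assignment[team] == arm]
        return sum(values) / len(values)
    return arm_mean(treatment) - arm_mean("control")
```

With enough randomly assigned teams, other influences average out across the arms, which is what licenses reading the between-arm difference as the effect of the intervention.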

00:15:20 Karen Plum

It's very interesting that Stefanie's HR colleagues felt uncomfortable about people in the control group not having access to the best interventions. Stefanie explained that on the medical side, this is really a non-discussion, because it's accepted that a control group is needed to understand causality. The way this was resolved was to explain that the test was only for six weeks and then the most effective intervention or guidance would be made available. 

So what outcomes have Sandoz seen so far? 

00:15:49 Stefanie Nickel

There was a significant uptake in meetings - more meetings were being scheduled in the group that was in the individuation arm. 25% more meetings, so that's quite impressive. We also saw that overall, the individuation approach really increased psychological safety significantly, and that, I think, is important feedback for managers: you don't have to know all the answers, but please ask your team members "how can I support you?" It's an important signal, right, to say I'm here with you and you can openly share. So that was quite impressive and, I think, very good feedback for managers too.

00:16:28 Karen Plum

It was also important that the trials identified things that weren't effective, so that managers didn't end up spending time on something that didn't work. 

00:16:36 Stefanie Nickel

If we had found that something doesn't work, that would have been equally valuable, because then we can stop doing it. If you test something and see it's not effective, you can take it out of the equation and not ask your managers to spend time on something that doesn't work. And I think there's a huge opportunity for organizations to systematically test recommendations - test them in a way that's easy, that doesn't feel invasive - because it also helps us really focus on the things that are effective.

00:17:04 Karen Plum

Stefanie explained that one of the key things that led to successful implementation was the simplicity of the interventions they trialed. There was no need for an extensive training program to prepare people to adopt the new approach. There was a keenness among managers to be more effective, and to understand, if they only had an hour, how they should spend it.

This resonates with the simplicity theme that we explored in episode 13, as well as needing to get the best bang for your buck. 

00:17:32 Stefanie Nickel

For me, it's a principled commitment to say: rather than just doing things and then measuring something, we want to understand how we can be effective, and that's a very different kind of commitment. And for Sandoz - I mean, we're in the generics business, right? We need to be effective. We want to understand how, with a few interventions, we can drive maximum impact, so it fits our business model.

00:17:55 Karen Plum

So the approach fits with the Sandoz business model and it's also an opportunity to share learnings with other organisations. 

00:18:03 Stefanie Nickel

For the D&I space specifically, we are working on the same topics as many other companies. We're all trying to increase diversity and equity and to make our workplaces more inclusive, so I think there's also an opportunity - which would be a bit more complex, but I think hugely interesting - to do multicenter studies and work across organizations on the same kinds of questions: how can we collaborate to gather data on the same outcomes that we all want to achieve?

I'm presenting our study to other European D&I heads - we meet regularly and talk very openly about things - and I want to propose this: can we put our heads together and, in the sense of multicenter, randomized controlled studies, which are the gold standard in medicine, right, gather the kind of data that guides decisions and makes us all more effective? It's a beautiful thing about the D&I space that people really, really collaborate; we have the same goals.

00:18:56 Karen Plum

I love that organizations are collaborating on these types of initiatives, learning and sharing together for the greater good, making the industry and workplaces better. It's not a topic where we need to be secretive or competitive. 

So I wondered where the initiative is going now, as clearly this wasn't a ‘one and done’ as Denise would put it. 

00:19:17 Stefanie Nickel

We are now reviewing the results with our different leadership teams. It's really about what are we seeing? Why did we approach it this way? What can we learn from this? Very much a dialogue right now, but I think going forward this is something to really look at.

Also, in the context of ESG, there's a bigger discussion - where D&I is just one part - about how companies can be effective. This is seen as value creation, so I think there's a whole dialogue which can be hugely supportive of this kind of thinking, and also of the discussion about what kind of data we need to gather as organizations to understand how to be effective.

00:20:07 Karen Plum

I think it's a great example of evidence-based management in action. What's so impressive is the size and global reach of the Sandoz trial - 7,000 people, 1,000 teams - with the opportunity to understand many different cultures, roles, teams and situations. The interventions were simple to understand and implement, and relatively easy to measure, drawing on feedback questions validated by academic research.

But what if we aren't operating in routine or familiar fields? What if we're innovating? What if there's no academic research, no organizational data? What then? Can evidence-based management still be useful? Eric says yes! 

00:20:52 Eric Barends

Some of the critics of an evidence-based approach would say, well, that's very nice, but it doesn't work for innovations, does it? And then we say: actually, it does work for innovation. First of all, I think the burden is on you to provide evidence that this is indeed an innovation and not just a different way of working. I mean, if it's a different way of working and it does not have a positive impact, or a bigger impact than what we're doing now, it's not an innovation - so measuring things is important there.

But with innovations, if there's no evidence about whether something will work, yes or no - when there is no person in your organization, or in any other organization, who has experience with this, so stakeholders can't help you there; no organizational data, because we have never done this before; and no research - then there's only one step left, and that is indeed to assess the outcome.

Make a decision. Implement the new way of working or innovative model, whatever it is - but please then assess the outcome, because you don't know whether it works. And because there's no evidence, there's no prior probability about whether it works, yes or no. The only thing that's actually left for you to do is assess the outcome, learn from the outcome, make incremental changes and improvements, and start building your evidence-based case.

It's a way of creating evidence - assessing the outcome creates evidence for future decisions, and it should be part of your instrumentarium in an organization, because sometimes there is indeed no evidence available. We know nothing about this new approach, so it's OK to try it, but then I think you have a responsibility to measure whether it indeed worked, yes or no.

00:22:52 Karen Plum

Before we finish, I wanted to share a final story which comes from Novartis, the Group of which Sandoz is a part. Novartis have been working with CEBMa for some time, researching different topics and developing their approach to evidence-based management. 

Over the last few years, they've been researching and implementing interventions to improve their performance management approach. Let me provide a little background, and then we'll hear from Niamh McNamara, now Global Head of People and Organization at Novartis, speaking at a CIPD event in 2021.

So Novartis asked the Center for Evidence-Based Management to research the impact of feedback, meaningful work, recognition and reward, and other drivers of team performance. Taking this together with internal data and practitioner and stakeholder inputs, they developed a diagnostic tool based on validated scales identified through the evidence; pretested it with 125 teams; set up a community of practice to share experiences; partnered with Ashridge Executive Education to build a team effectiveness program; and created a variety of online tools linked to the diagnostic tool.

So this was the application of the evidence, but what about the outcomes? Well, they conducted a randomized controlled trial involving 16,000 people worldwide. And as Niamh explains, they are not resting on their laurels; they're continuing to monitor and adjust as time goes by.

00:24:31 Niamh McNamara

So we have validated our three hypotheses through those rapid evidence assessments that Eric and his team completed for us. We also have data from our organization, which told us that our new approach was significantly more favorable than what we had previously - in fact, only 5% of those 16,000 associates would go back to having performance ratings and the more cascaded performance management approach.

Our new process, our new experience, is called Evolve; it launched this January, and we will continue to gather evidence to review how it is progressing in the organization. We've built a dashboard where we're collecting internal information on our engagement scores; we're looking at behavioral change through how people are engaging with the system we have in place; and we're also looking at more qualitative data through sentiment analysis and by commissioning a number of e-focus groups as well.

So just because we have come through the whole journey of the experiment doesn't mean our data story and our approach to evidence-based management disappear. We're really delighted with the results. We think we're probably one of the first companies to do it at that scale, and at that level in terms of gathering the evidence. But the proof is in the pudding. The early feedback from our associates is very positive, in terms of the resources we've built and the approach we've taken.

In the post-survey, what we saw in terms of meaningful work was that the new approach we put in place brought almost a 40% increase in the engagement of associates in objective setting. And with the feedback model we put in place, we saw a 30% increase in people receiving meaningful and regular conversations with their manager that really helped them to do their job.

00:26:33 Karen Plum

And as Eric commented at the same event, a randomized controlled trial with 16,000 members is truly amazing. 

00:26:42 Eric Barends

It is an excellent example for so many other organizations to understand what an evidence-based approach looks like, because all the elements are there. What are your hypotheses and assumptions? What's the evidence? What can you learn from that? Multiple sources of evidence. OK, apply it - but then measure whether it moved the needle, yes or no.

And not just ask people, so, what do you think, are we doing OK? No - run a randomized controlled trial with 16,000 members. That's amazing.

00:27:14 Karen Plum

So if you've been thinking that this sort of thing never happens in organisations, think again. And think of the impact that working in close collaboration with practitioners and stakeholders has had in this organization. Think of the message it sends about the organization's commitment to improve things. I've found it very inspiring. 

And now, to wrap up this episode, here are some final thoughts from Denise about the importance of assessing outcomes, and about how, in a learning sense, failure is our friend if we operate in a psychologically safe environment.

And in the next episode, we'll consider how you can go on to build the evidence-based management capability in your organization. 

00:27:59 Denise Rousseau

I think one of the interesting issues in making evidence-based decisions is - well, there are two ideas. One is that we need to assess outcomes: if we don't assess outcomes, we can't know how well things are working, so we can't take appropriate action.

But when you assess outcomes, you also recognize where the decision wasn't a success - where failure did occur. Failure is an opportunity to learn, and there's the idea of having a psychologically safe environment in which to say: this didn't go the way we thought, but we're going to turn it into something valuable, or we're going to try different things.

We can't be sure if any of them will work, but one thing we're sure of is that we're going to learn from each pilot we do and take action based on that learning. Failure is our friend - used appropriately, in the context of evidence gathering and evaluation of your decision - because it tells you where to focus your attention or what paths to abandon.