Evidence-Based Management
Module 6: A short introduction to science
This episode accompanies Module 6 of the course, which is an introduction to the scientific literature that can be used as part of our evidence-based management decision making. This module aligns with chapter 5 of the Evidence-Based Management book.
Modules 5, 6 and 7 all focus on the scientific literature, so when you listen to their corresponding podcast episodes, the picture will hopefully become more complete.
In this episode we look at different aspects of the scientific world – what motivates academics to study the topics they research; the pros and cons of lab and field studies; and how to approach academic studies to get the most from them (don’t try to read them cover to cover!). We also discuss the importance of statistical significance and effect sizes in research and their practical relevance in the real world.
Host: Karen Plum
Guests:
- Eric Barends, Managing Director, Center for Evidence-Based Management
- Denise Rousseau, H J Heinz University Professor, Carnegie Mellon University
- Rob Briner, Professor of Organizational Psychology, Queen Mary University of London
Find out more about the course here.
00:00:00 Karen Plum
Hello and welcome to the evidence-based management podcast. This episode accompanies Module 6 of the course, which is the short introduction to science - as part of our understanding of how to use scientific evidence in evidence-based management decision making. There is more on the scientific evidence in modules 5 and 7, and their corresponding podcast episodes.
Module 6 helps us understand more about the scientific literature and how to determine whether study findings are trustworthy and reliable, as they relate to the issue or question we're investigating. The critical thing is, are the findings of practical relevance? Can we use them to understand the relationship between different factors and behaviors that we're evaluating?
I'm Karen Plum, a fellow student of evidence-based management, and in this episode, I'm joined by three of our podcast regulars - Eric Barends, Managing Director of the Center for Evidence-Based Management, Professor Denise Rousseau from Carnegie Mellon University, and Rob Briner, Professor of Organizational Psychology at Queen Mary University in London. Let's hear what they have to say.
When I first saw the title of this module, A Short Introduction to Science, it brought to mind the title of the Stephen Hawking book, A Brief History of Time. I'm not entirely sure why. I think it was because science seemed such a big subject. How could you have a short introduction to something so fundamentally complex?
What became clear was that science is for everyone, you simply have to know how to approach it to identify the nuggets that are useful to you. Like many people, I thought that science is conducted in pursuit of the truth or something absolute, but it really isn't. And Eric explains that science deals in likelihoods and probabilities - things that really aren't used when we discuss decisions made in the management field.
00:02:13 Eric Barends
In science we never talk about certainties. We may give the impression that we know things for certain, but it's always certain to a certain degree. And in daily life we talk about truth and certainties, but never in science. And you may argue, well, but in physics, isn't there a certain level of truth? I mean, isn't gravity a truth? No, it's not - it's still a likelihood or probability. It's a very high likelihood or probability, but it's never 100%.
This is of course something that we should really understand, because evidence-based decision making is often making decisions not on hard truth or proof that something will work, but based on likelihoods and probabilities. And we managers have not learned to think in terms of probabilities and likelihoods - we are hired to make bold claims. OK, let's do this, because then we get this done.
00:03:20 Karen Plum
By contrast, in medicine, doctors talk in terms of probabilities that things will work. That a particular course of action may reduce a risk, for example. But when was the last time a doctor told you that they were certain that something would work? We accept that - we may not like it because we like certainties - but somehow when we discuss interventions in the management field, it's a brave practitioner that says, well, I'm 51% sure this will be successful when their boss is looking for a quick fix.
With this in mind, our search for appropriate scientific literature to support our decision making is all about the pursuit of reliable and trustworthy studies, that show that doing particular things will make an improvement, or that certain actions are associated with outcomes we're interested in. Having identified the appropriate search terms and located a number of promising studies as we discussed in episode 5, now we need to understand how to approach this material so we can determine how appropriate it is to our particular enquiry.
Firstly, I thought it would be useful to ask our guests about academic studies to understand more about why and how academics conduct them. Having discussed constructs in episode 5, I asked Denise Rousseau why some constructs are researched more than others.
00:04:41 Denise Rousseau
The body of evidence associated with any particular construct largely reflects, of course, researcher interest in the construct or the problem that it relates to. And I do think that one of the ways in which science is relatively democratic is that researchers are free, to a great extent, to choose the variables and the factors that they focus on. A lot of research in management is unfunded research by researchers in psychology departments and business schools, and it's their interests that dictate where we go.
Some constructs I think are seen as more practically relevant. A good example of that is psychological safety, which has been found to be related to the uptake of evidence-based medicine practices in healthcare organizations - and now that's, you know, practically relevant. If you can predict the uptake of effective practices based on psychological safety, that's a thing!
On the other hand, I think there are constructs in organizational research that have caught the imagination of scholars because they seem so connected to many of the phenomena of basic interest. So citizenship behavior is a good example of a well-studied construct.
And it's connected to so many things. It's an outcome of how well people are treated by the organization. It's a predictor of the extent to which people in the organization are able to respond well to change or new initiatives, because it's an indicator of goodwill that might exist amongst employees. So some measures are basically friendly and gregarious in relation to other ideas, and so they're commonly studied because they serve a role in a broader set of explanations that the researcher’s interested in developing.
00:06:38 Karen Plum
And to broaden our understanding a little further, here's Rob Briner's take.
00:06:43 Rob Briner
There's two reasons. One is because they're perceived to be easy to measure in some way, so it's easily defined, or you can put it into a Likert scale, it seems like it's easy to do. I think the other reason is because it seems important – now it may be important, it may not - but it may seem quite fundamental to a particular field.
So for example, in organizational behaviour, HRM constructs like job satisfaction, organizational commitment, they’re kind of studied to death. You know you can ask questions about how important those concepts really are, they certainly are appealing and they certainly seem to be kind of easy to measure and also I think sometimes constructs are studied where they are seen to predict lots and lots of different things.
So if you have quite a narrow construct that maybe is only relevant to one or two outcomes, it seems to me to be less studied and to have less appeal than something like, for example, job satisfaction or stress at work, or commitment or work engagement, or employee engagement, which apparently predicts everything.
00:07:43 Karen Plum
Denise explained to me that in American universities, researchers are generally hired to teach students, and in their non-teaching time they conduct research. So it isn't the case that they have a research budget or need to go after funding and it means that they can research topics that interest them. And as Rob explains, what's more likely to improve their career progression and standing in their field.
00:08:06 Rob Briner
I'd say the vast majority is funded by universities, so you're right, academics do get to choose what they research - but how they research it, and perhaps which particular topics, are governed a lot by what they believe they can publish. So of course one of the main drivers for what academics study is their careers and career incentives - it's like everybody in every job. There are the principles that drive you, but then there's a kind of reality of what's actually going to help you get on in this job, to get promoted, to be able to get new jobs.
00:08:39 Karen Plum
I think it's worth recognizing that if you're puzzled as to why there is little research in the area you're interested in, there may be a whole range of reasons why academics aren't conducting that research and not all of them relate to the difficulty of conducting it.
When you look through studies, it's clear that some were carried out with volunteers from the university student body. Some people question whether this type of research - conducted in a lab or controlled setting with students - is transferable to the real world of work. What does Denise think?
00:09:11 Denise Rousseau
The issue we often face with laboratory studies is that when the lab study is the only study that exists on a phenomenon and there hasn't been a field test, we really don't know. But I submit that most practitioners who are looking in the literature are not likely to have a question about the first lab study ever done on some new social phenomenon.
It's likely you came across a lab study because you were interested in goal setting or you were interested in personality effects on decision making or teamwork. And there have been not only laboratory studies on those questions, there have also been field studies on those questions.
And the issue is putting them together. It's not ignoring the lab study, it's seeing - let's say you're doing a literature review and you come across five studies, some in the lab and some in the field, and you pull them together on your practice question - my question to you is not, are the lab studies relevant, but are the lab studies consistent with what you're seeing in the field. My bet - they're going to be consistent.
00:10:17 Karen Plum
This might be in the case of a virtual team where the communication channels are controlled in the lab experiment, whereas in the real world we use lots of different channels, including the so-called back channels, to communicate with each other. But nevertheless, as Denise says, the important thing is whether the lab studies and the field studies appear to be consistent. In terms of field studies, sometimes organizations will approach a university to initiate research. Other times the researcher will reach out to an organization to invite them to participate, perhaps offering to share the results and illustrating how it may be of practical use to them.
00:10:57 Denise Rousseau
Certainly I went out into the field to try to find ways of developing psychological contract measures, looking for organizations that, out of their goodwill, are willing to help. More often than not we make a cold call on an organization and the idea is: I'm interested in doing research, if you would be interested in hosting us, and I'm very happy to identify areas of common interest so that the research that I do will be of use to you.
That I will say has been the most exciting process, because you're looking for areas where you can be of use to them and where they have interests and their interests are telling you important information in the organization about problems or concerns or priorities that then could inform your research question.
00:11:48 Karen Plum
It feels like this is a great win-win situation. Being close to the organization can give you access to information and insights that may help and inform the research, in turn providing more meaningful results for the organization.
But there are other situations where organizations may want to sponsor research, but there may be little chance that the academic can get that study published, or there may be too much of a commercial interest in the outcome, leaving the authors open to challenges around bias. In either case, it's not going to be a career-enhancing piece of research for the academics involved.
00:12:24 Rob Briner
In some cases you'll just go in and do a piece of research and the organization’s not really that involved with it. In other cases, they may want to be very involved in it and the challenge sometimes is doing a piece of work and obviously you want to help the organization, to give something back to the organization, but perhaps there's almost two bits of work going on - one is the bit of research that will get you published, the other is the bit of research that might be useful to the organization.
Now ideally, those things will overlap, and overlap quite a lot, but often they don't - you end up almost measuring or assessing certain things within a field study that you're probably never going to use in terms of publishing scientifically, but you're going to do because they're going to help the organization in some way.
So I think that the difficulties are as you would imagine, in that sense of what's in it for the organization, what's in it for the university or researcher, and making sure that both parties are kind of getting something out of it.
00:13:18 Karen Plum
Moving on, I'm sure everyone struggles with reading academic studies. When I was first exposed to this, I vividly remember sitting staring at a paper, rereading a paragraph that I just couldn't make sense of. So where was I going wrong?
00:13:33 Rob Briner
They're meant to communicate important information. So don't approach them as though they're a Harry Potter book or something. They're not supposed to be interesting. It's a sort of gag I often use with students: when they first start reading articles they'll get out a green or yellow highlighter pen, and in the days when people used paper more, you could look at the papers on their desks and see they'd highlighted about 90% of the article.
I'm kind of saying that's the opposite of highlighting! What you're doing - you're doing it to make sure you've read every word, because it is so hard to read. And that will not help you understand the paper. So what I always say to students is you have to approach reading an academic article with questions. Don't read it like a novel.
00:14:24 Karen Plum
I'm pretty sure I was using a highlighter too, but rather than my inability to read through the paper being an issue of intelligence or focus, it was entirely expected. We are taught how to approach studies during the modules, but how do the experts guide people in this area? Well, nobody says “read it from start to finish”, that's for sure! Here's how Denise and Rob responded.
00:14:47 Denise Rousseau
There also is a skill in reading an article as a practitioner, which is a different competency than reading the article as an academic who might want to replicate that study, or as a doctoral student who's trying to learn from the design. I would say that as a practitioner - where you're interested in the claims being made by the article and their trustworthiness for your question - first and foremost, read the abstract carefully, so you understand what the concepts are and what claims are being made about them. And what kind of research design did they use, and is that design trustworthy, given the claims that they are making? That may be enough for you.
I tell my students: read the abstract, read the conclusion - the practical implications in the final section of the manuscript - and see whether or not it tells you enough to use it as information relative to the question you're asking. If you think it's important, skim the method section to make sure you understand the research design, and ask whether it tracks with the kind of evidence you need for the question you're working on. Is it causal or not? Is it descriptive of the population you're interested in describing or not? And that's enough. Notice you never read through the theory section at the front end, and you didn't read the results. You've read the parts that are useful to you as a practitioner.
00:16:11 Rob Briner
Approach it with questions. What is this about? What is the argument for why they're doing it? What questions are they asking and why? How do they try and answer those questions? What exactly did they do? What did they say they found? And what do they say it means?
Go with a set of questions rather than reading every word. My analogy for this is often to present students with something like a train timetable, and I say to them, OK, here's a spreadsheet of trains going between these two cities over the course of the week. Can you please read it? And they all look very puzzled - how can we? It's just lots of numbers and times and dates - what do you mean, read it?
I say, just read it. And of course they can't read it. And I say, well, that's exactly the same as an academic article. You cannot just read it - you need to have a question. You have to find out the bits you want to know. So if it's, when is the last train from Leeds to London on Tuesday night? Great! Now you go into that spreadsheet of times and dates and places and pull out the information you want - and it's the same active approach with academic papers.
You need to know what information you need, and read it very actively. So that's what I would say. The other thing I would say about reading it is I always encourage students, or anybody reading an academic article, to skim it. And people again feel, oh, I haven't read it properly - and I go, yes, that's fine, but you've started to read it. Skim it, put it down, read another thing. Put it down and go back to it again - it's the idea of it being quite iterative.
00:17:46 Karen Plum
So far from it being just OK to dip in and out of the study, it's recommended. It's more like a repository of information for different audiences to access and use for different purposes. The authors need to make sure the information required by different people can be found in one place.
I love the advice to read it actively, not passively. Seek to answer the questions that you have. Of course, when we're reviewing studies, we can often get flummoxed by the jargon and acronyms. Denise describes this as tribal language - we all use it in our respective professions. In the course we're advised not to get hung up on it. So how do our experts feel about this difference in language? Are they also unable to make sense of some studies?
00:18:34 Rob Briner
Why are academic papers so hard to read? I always say, do you think we find them easy to read? If we move even slightly outside our own fields, most of us find them very difficult to read, and very tedious to read.
And the way academics read an academic article in their field - they won't read every single word, they'll flip around all over it, pulling out bits: what's the hypothesis, what's the method, what are the results, what are the findings - and then go back to the introduction. So they're not reading it like a piece of narrative. And that's a general thing I always say to students - academic papers are not meant to be interesting.
00:19:16 Eric Barends
I would argue that 20% of studies are almost impossible for me to read or to understand. And it's even worse - sometimes I ask a very experienced academic who has done research in that area for decades, and even that academic doesn't understand exactly what is going on.
So it's not you. It's the paper, or it's the authors. But it's like doctors - they have their own lingo, their own jargon; car repair engineers too - every profession has its lingo and its jargon, because it's easy for colleagues in the same profession to quickly understand what it's about. And it's not our language, because we're not academics, we're not researchers, we're practitioners, so it's hard for us.
But it happens to me all the time, and I often make mistakes. I mean, I'm not perfect. Sometimes I read through a paper and I think, oh, this is actually the core of the finding. And then someone else, another reviewer or colleague, reads it and says, no, I don't think so - I think actually they did this. Oh really? Oh wow!
So it's hard. It's hard.
00:20:36 Karen Plum
And I think it's worth emphasizing again, that studies aren't meant to be interesting or entertaining. The difference in language makes it less surprising that managers don't make as much use of academic research as we think they ought to. I've sometimes found a small section in papers called “implications for practice” or “notes for managers”, but these never seem very extensive or insightful. I guess the thing is, they aren't written by people who are experienced practitioners, so why would they be?
I asked Rob if he thought there was a way to provide more insight for practitioners within the papers.
00:21:13 Rob Briner
I think the challenge is, in most of the areas of management research that I'm aware of, the researchers that do it are not motivated by practical implications. That's not why they're doing the research. So when it comes to writing practical implications, they can sound very bland and very generic.
There's a good reason for that. To repeat, it's because they didn't do it because they thought about the practical implications. The second point is that most researchers in management are not practitioners, so even if there might be some practical implications, even though it wasn't motivated by practice, they may not be in a position to do it - say what those practical implications are.
So you know, it's the case that in many areas of management I'm aware of, people may conduct lots and lots of research but have almost zero experience of what a practitioner does in a field that might be relevant to what the researcher is doing. They just don't, because that's not what they do - they are researchers, academics. It's not what they're rewarded for.
00:22:19 Karen Plum
Essentially, we have different people doing different things, with different drivers and motivations. Broadly speaking - I'm sure there will be exceptions - academics don't see it as their role to help practitioners use their research, which is why this course is so helpful: it enables us to make our own determination about what's going to be useful in our particular context.
With that in mind, I wanted to spend some time on significance. Has the study that we're examining identified something significant? And here's where it starts getting tricky. Studies talk about statistical significance, which is different to the way we use ‘significant’ in everyday life, which is often as a synonym for substantial or meaningful. Here's Rob.
00:23:06 Rob Briner
People are always searching for statistical significance, which is hugely problematic in lots and lots of ways, but that's the thing that drives a lot of research. And if it's statistically significant, it's seen to somehow be important. So rather than that, people say, well, hold on, shouldn't we be looking at the effect size? It tells you something about the magnitude of the relationship between things, because that's giving you more sense of, well, is this thing really important?
Statistical significance doesn't say anything about how important it might be. Interestingly, effect sizes kind of don't either, because even if there's quite a strong relationship of a big magnitude between two things, that still doesn't tell you it's practically important.
00:23:45 Karen Plum
This reinforces the importance of context. In terms of navigating study findings, I concluded that we need to look for effect sizes and then consider how practically relevant the findings are in day-to-day life. And to be very careful how we use the word significant so we don't confuse our audience, particularly if we have statisticians present. Using substantial or meaningful seems a safer choice.
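To make the distinction between significance and effect size concrete, here's a minimal sketch - not from the episode, just illustrative arithmetic. With a large enough sample, even a tiny correlation comes out "statistically significant", while the effect size shows it explains almost nothing of practical interest.

```python
import math

def t_from_r(r, n):
    """t statistic for testing whether a Pearson correlation r (sample size n) differs from zero."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

def two_sided_p(t):
    """Two-sided p-value using a normal approximation (reasonable for large n)."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

r, n = 0.05, 10_000                  # a tiny correlation, but a huge sample
t = t_from_r(r, n)
p = two_sided_p(t)

print(f"t = {t:.2f}, p = {p:.2g}")           # p falls well below the usual 0.05 cutoff
print(f"variance explained = {r * r:.2%}")   # yet r^2 is only 0.25% of the variance
```

So the study would be reported as "significant", while the relationship is almost certainly too weak to matter in day-to-day practice - exactly the trap Rob describes.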
To round off the discussion on this module, here are a couple of things I found tricky. The first is the placebo effect. It seems that whenever we deal with people, whatever we do, there's a placebo effect. The moment people think something is going to happen, this will have an impact on their behavior.
Put simply, whatever we do always works, because of the placebo effect. What we need to know, therefore, is that it works better than what we're doing now, or whether it works better than an alternative.
00:24:44 Eric Barends
So in proper research, we always have to take the placebo effect into account by having a comparison. That's not easy. It's a very important requirement that there is a control group, that there is a comparison, because otherwise we will be led astray by the placebo effect and can't say with any certainty whether the effect is because of the placebo effect or because of the intervention we did.
So control group is the official term, and often the control group is doing nothing. So you control for the intervention or variable, or whatever you're examining, but you could also make a comparison with an alternative intervention or variable.
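Eric's point boils down to simple arithmetic. The numbers below are hypothetical, purely to illustrate why the comparison matters: both groups improve (the placebo effect), so only the difference between them tells you what the intervention itself contributed.

```python
# Hypothetical before/after gains on some outcome measure (e.g. an engagement score).
control_gain      = 5.0   # control group: no real intervention, yet people still improve
intervention_gain = 7.0   # group that received the actual intervention

# Without a control group, a study would report the full gain as "the effect".
naive_effect = intervention_gain

# With a control group, the placebo/background improvement is subtracted out.
adjusted_effect = intervention_gain - control_gain

print(f"naive effect: {naive_effect}, adjusted effect: {adjusted_effect}")
```

The same subtraction works when the comparison is an alternative intervention rather than "doing nothing" - you're then estimating how much better one option is than another, which is usually the decision a manager actually faces.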
00:25:40 Karen Plum
I also developed a bit of a blind spot about the term control group. I got it fixed in my head that it only meant doing nothing, as opposed to either doing nothing or doing something different with a second group or multiple groups. So that's something for me to pay attention to when looking at studies in the future.
And that concludes our exploration of this short introduction to science. What I learned is that there are many things I need to have on my radar when I'm reviewing scientific literature. In a way, you need to be a bit of a detective to figure out some dimensions of the study methodology and whether the findings are substantial enough to warrant your time and attention. None of this is presented to you on a plate. There's no substitute for careful considered application of common sense, something that can get lost in the rush for answers. Well it does for me anyway!
I'll wrap up this episode with a few words from Denise, which I think neatly sums up our journey in understanding the scientific literature.
00:26:42 Denise Rousseau
When I first got into graduate school, I went to the library of my small town thinking, I'm going to get a PhD in psychology, this is going to be great! I reached for a bound volume of the Journal of General Psychology on the shelf, because I was going to read a psychology article. I pulled it off the shelf, opened it up to the first issue in that particular bound volume, and read the first article.
I read the first paragraph and I understood every third word. And nothing else. And I thought, my God, what have I gotten myself into? So when you first start reading academic articles, you have to go through the crust, just like that naive undergraduate who hadn't yet joined the island with the natives who spoke the language they speak.
But the language is accessible to us if we learn the words that are important for the conversations we want to have. And the conversations that a practitioner wants to have with the research literature are different from the conversations that the academics who are doing the research want to have with each other.