Transcript for The Eco-Normalization Model: A New Framework for Evaluating Innovations

Below is a transcript of the following Academic Medicine Podcast episode:

The Eco-Normalization Model: A New Framework for Evaluating Innovations
November 22, 2021

Read more about this episode and listen here.

Toni Gallo:

Hi everyone. I’m Toni Gallo. I’m a staff editor with the journal. Every year Academic Medicine publishes the proceedings of the annual Research in Medical Education sessions that take place at the AAMC’s Learn Serve Lead meeting.

Toni Gallo:

This year, there were on demand presentations of the RIME papers and live Q&A sessions with the authors. Those recordings are still available through the Learn Serve Lead platform if you are registered for the meeting, and registration will remain open through the end of December.

Toni Gallo:

The complete RIME supplement is available now to read for free on academicmedicine.org, and this year it includes the research and innovation abstracts as well as the research and review papers. I’ve been talking to some of the RIME authors on this podcast about their medical education research and its implications for the field.

Toni Gallo:

In September, I spoke to Drs. Mahan Kulasegaram and Jesse Burk-Rafel about using machine learning in residency applicant screening. In October, I spoke to Drs. Javeed Sukhera, Taryn Taylor, Nicole Winston, Tim Mickleborough, and Tina Martimianakis about the experiences of trainees and physicians from minoritized communities in the U.S. and Canada. You can find both of those episodes in our archive.

Toni Gallo:

For the third and final conversation this year, I’m joined today by Academic Medicine assistant editor and RIME committee member, Dr. Dan Schumacher. We’ll be talking to Dr. Deena Hamza about her paper entitled “Eco-Normalization: Evaluating the Longevity of an Innovation in Context.” I’ll put the link to that paper in the notes for this episode. Let’s start with some introductions. Dan, do you want to go first?

Dan Schumacher:

Yes. I’m Dan Schumacher. Thank you everyone for listening. I’m a pediatric emergency medicine physician and medical education researcher at Cincinnati Children’s Hospital in the U.S.

Toni Gallo:

Thanks. Deena?

Deena Hamza:

Hi everyone. My name is Deena Hamza. I am an implementation and health professions education scientist. I’m also the Research and Evaluation Lead for Postgraduate Medical Education at the University of Alberta in Canada. I am the Vice Chair of CAME, which is the Canadian Association for Medical Education Foundation.

Toni Gallo:

Well, thank you both for joining the podcast today. Our discussion is going to focus on the new evaluation model that Deena and her co-author present in their RIME paper. We’ll talk about why and how they developed this model and the way that they envision it being used in medical education. With that, I’m going to turn it over to Dan to give us some background for today’s conversation.

Dan Schumacher:

Yeah. I’m really looking forward to this conversation because I think this is a really important paper. All too often, when we’re looking to implement something new in medical education, in our programs, we don’t think about what meaningful, good, successful implementation would look like. More importantly, we don’t have an eye toward how that implementation will succeed over the longer term and become part of our normal practice, if you will. I think this paper hits that right on the head, and I love that.

Toni Gallo:

That’s a great lead in to our first question and Deena, I was wondering if you could just talk a little bit about the gaps that you see in program evaluation. Dan kind of touched on some of them, but what were the challenges that you set out to address with this work?

Deena Hamza:

I think it’s my diverse experiences that led to identifying certain gaps. Prior to academia, I was in industry. I have a design background; I worked in tech and in law enforcement simulation training, and then in academia as a clinical researcher and, more recently, as a medical education scientist. That’s led me to, I want to say, three or four prominent observations or areas of curiosity.

Deena Hamza:

The very first is, with innovation, why are people assumed to be… Not just the people, but the system. Why are the people and the system assumed to be pre-givens? The people are going to do what we ask them to do, and the system is going to accept or be compatible with this innovation. I guess that depends on the industry, based on my experiences: where does the power lie?

Deena Hamza:

Two other questions that follow from that are: how do certain innovations last? I think when we look in the literature, sometimes there’s this abundance of evidence that an innovation has longevity, but really it shouldn’t. And so, when we think of things like de-implementation and how de-implementation is often very political, it’s questions around that. Why does it have to be that way? Why can’t we just evolve and change? Versus innovations that are long lasting and should be long lasting: what are the critical ingredients they have that make them long lasting?

Deena Hamza:

Another is this idea that, when we innovate, it’s inherently beneficial. We assume that what we’re doing is going to be of benefit. It’s going to improve the status quo. It’s going to build our competitive edge. It’s going to do something good, and so often there’s this promissory discourse. We have all these promises of what the innovation is going to accomplish. With that, we don’t often evaluate in a way that explores undesirable outcomes.

Deena Hamza:

I’m using the word undesirable specifically instead of unintended, and that’s based on my experiences as a clinical researcher. As an example, I was part of a group that would design, implement, and evaluate interventions for youth with substance misuse and mental health issues. We would say, “There was a reduction in symptoms of depression and anxiety.” An unintended consequence of this innovation was that it built better relationships with their parents than with their peers. There was always this positive lens rather than really exploring, is there something undesirable associated with this innovation?

Deena Hamza:

I think with this promissory discourse, we need the balance of what is called, in the evaluation world, dark logic, where we’re holding explicit space to look at undesirable outcomes. That’s an area of curiosity because we don’t tend to do that. Within the literature, we know that 0.2% of innovation studies actually look at unintended or undesirable outcomes, which is a problem. Those are the gaps, drawn from my experiences and tied back to the literature, that have driven the development of this framework.

Dan Schumacher:

Wait, this is rich. I feel like your answer to that first question could be a commentary for your paper that you could write for the journal.

Deena Hamza:

I love this idea. Do you want to write it together, Dan?

Dan Schumacher:

I don’t think I’m smart enough to do that. … You’ve already touched on this a little bit, but why did you choose to focus on the longevity of an innovation?

Deena Hamza:

That was a purposeful use of a different term than sustainability. In practice and experience, and also in the literature: there was a recent review by Moore and colleagues in 2017 that looked at 209 original papers and found 24 different definitions of what sustainability means in the implementation science literature. Ultimately, the most prominent definition of sustainability is about resources, human and financial capital, and that relates to the in-practice discussions that we have about sustainability.

Deena Hamza:

What we wanted to focus on is, aside from those resources… Because resources are going to be limited, we know that. Resources are either going to come to an end or they’re going to be directed elsewhere. It’s not an endless pot, it’s time limited, so how does an innovation continue in the long term? That’s why we chose the term longevity: to get away from that contemporary or common understanding of the term related to resources like capital.

Toni Gallo:

I thought it was interesting that you mentioned de-implementation. I think we focus so much on the implementation part of innovations and that’s what most of the literature’s about. I don’t know if I can think of any papers that are about how… like the de-implementation piece. I think that’s really interesting. Could you talk a little bit about how you think about that part of the process?

Deena Hamza:

We actually do see things being de-implemented, and I often refer to the tech industry. For example, there are things like the Segway, which has been de-implemented because it just didn’t meet its mark. It was intended to reduce bottlenecks in traffic jams and it never made it to that stage. It didn’t meet those grand aspirations that they had intended at the outset.

Deena Hamza:

I think it was in 2015 or something like that, there was a photographer on a Segway who ran over Usain Bolt at the Olympics. It was just this uproar in the tech industry. It was like, “We need to de-implement this.” And so, we do see de-implementation happening, but I think it depends on the industry. I’ll come back to that earlier point about where the power lies, where we assume people are going to accept the innovation and the system is going to be compatible.

Deena Hamza:

When we look at innovations where the people have the power, which is often the case in tech: we want the people to buy into this, we want them to invest in and use this product. That’s often where we see de-implementation. Whereas when we have an innovation where the power is not with the people but with someone else, like a health authority, or someone with authority over the people, and the innovation is using, let’s say, public funds, with the fiscal responsibility associated with that, I think de-implementation becomes much more political and less common.

Toni Gallo:

I wanted to get into the methodology of your paper. You used a critical review, so maybe you can tell us what that is, what you did, and why you chose the methodology that you did.

Deena Hamza:

Our goal from the outset with this paper was to propose a new way forward related to… At the outset, it was really about evaluation, but really this framework touches on design, implementation, and evaluation. We thought about using a more systematic approach, like what you would typically see with a systematic review or a scoping review, to capture evidence of the existing frameworks. But the purpose of that is more about reporting the lay of the land, let’s say with a scoping review, and because I’m already embedded in that field, I didn’t think it would accomplish our goal of proposing something new. That’s not what a scoping review or a systematic review is intended to do.

Deena Hamza:

Not only that, but with systematic and scoping reviews… I’m just using those two, but there are all sorts of other types of reviews out there. With just those two as an example, one of the pillars is for an audience to be able to replicate what was done. We were looking for a methodology that would allow us to be flexible and creative, to design and piece things together in a way that could propose that new way forward. Those are the elements of a critical review, and so that’s why we chose that approach.

Deena Hamza:

We identified existing frameworks that specifically focused on the longevity of change; that was our starting point. Not just implementation and the processes of implementation, but beyond that. We identified the three models that became the starting point for the development of our framework. We felt that the critical review allowed for that approach.

Toni Gallo:

Could you just tell us a little bit about what a critical review is? I think it’s maybe a methodology that people aren’t as familiar with. I think a lot of people know what a systematic review is or a scoping review. Could you just tell us the mechanics of what a critical review is?

Deena Hamza:

Our approach with the critical review, based on an article by Grant and Booth in 2009, was to identify, between ourselves, between Glenn and me, different frameworks and to have a discussion about which ones would be appropriate for our goals and which pieces or elements could fit into the development of our new framework. We did ask other colleagues for their perspectives on what would be appropriate to include. We did search the gray literature. Again, I always draw on the tech industry, which doesn’t necessarily have the same… I don’t want to say prominent, but the same focus on the academic literature. It tends to be in things like Forbes, written more for their audience.

Deena Hamza:

We looked into the gray literature as well, and based on that, we found the critical ingredients that aligned with what we wanted to accomplish. I also identified gaps that we wanted to address in our own model. It started out with what these other frameworks include, but from our perspective that was not quite sufficient, and so that’s how we built the eco-normalization model.

Dan Schumacher:

Yeah, as you talk about this more, I think this is a good example of how medical education is really doing a great job of bringing in different types of reviews right now. I’ve seen this more in recent years, and this is an example of something that’s not a systematic review. You’re really speaking to the benefits of this type of review, right?

Dan Schumacher:

At the beginning of our conversation, you talked about how you bring rich experience from outside of health care and outside of medicine, and it seems like that really informed the lens you bring to this. It’s not just that you know where to draw different ideas from outside our field; you can also add your lens and prior knowledge in interpreting them and putting together a model that is more beneficial than something that’s simply mechanistic, like more of a systematic review.

Dan Schumacher:

As we talk about the model that you developed, I wonder if we could actually turn to that. For people who have not been able to look at the paper yet and will hopefully look at it after listening to this podcast, can you describe the components of your model? Also, can you speak a little bit to why you focused on the interactions of the components rather than the components alone? For those who will look at the paper after this, this is a teaser for Figure 1 in the paper.

Deena Hamza:

The components of our framework can be divided into 2 sections. The first is how the different components of what we call the ecology of change (the innovation, the system, and the people doing the work) interact with our grand aspirations for the change. I say grand aspirations as things that we don’t often have a conflict over.

Deena Hamza:

We don’t disagree that we want housing security and food security. When we’re thinking about HPE, health professions education, we want patient care to be optimal, and we want learners to have a personalized experience that is rich and fruitful for them. We don’t disagree on that, and those are the grand aspirations. It’s when we translate those grand aspirations into a series of practices and activities that we start to have differing opinions or differing experiences.

Deena Hamza:

The very first part, whether we’re designing, implementing, or evaluating, is to look at how the innovation interacts with those grand aspirations of change. Will it lead to that? That’s often where our evaluation starts and ends: did we meet the outcomes or the impact that we intended? That’s the interaction between the innovation and the grand aspirations, but we also have to start looking at the people doing the work.

Deena Hamza:

If we’re thinking about longevity, then for the people doing the work and the grand aspirations: is there a conflict between the grand aspirations and the local aspirations of the people doing the work? Are they too grand for the people on the ground to see value in or invest in? Then we look at the system and the grand aspirations: are there conflicts or is there compatibility, and where is change needed? Then we look at the secondary interactions. Those were the primary interactions, the first half. The secondary interactions are how the innovation and the system interact with one another, how the system and the people doing the work interact with one another, and how the people doing the work and the innovation interact with one another.
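
For readers who think in code, the structure described above can be captured in a few lines. The sketch below is purely illustrative and is not taken from the paper; the names and phrasing are assumptions. It simply enumerates the three components of the ecology of change, the grand aspirations, and the six interactions (three primary, three secondary) that the model directs an evaluator to examine.

```python
# Hypothetical sketch of the eco-normalization model's structure as described
# in this episode. All names here are illustrative, not taken from the paper.
from itertools import combinations

# The "ecology of change": the three components that interact with one another
# and with the grand aspirations for the change.
ecology = ["innovation", "system", "people doing the work"]
grand_aspirations = "grand aspirations for the change"  # e.g., optimal patient care

# Primary interactions: each component of the ecology against the grand aspirations.
primary = [(component, grand_aspirations) for component in ecology]

# Secondary interactions: the components of the ecology interacting with one another.
secondary = list(combinations(ecology, 2))

# An evaluation plan in this framing examines all six interactions,
# not the components in isolation.
for a, b in primary + secondary:
    print(f"Examine the interaction between: {a} <-> {b}")
```

Presumably the 6 guiding questions that come up later in this conversation map onto these interactions, though that mapping is an assumption here rather than something stated in the episode.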

Deena Hamza:

One of the things that I find really interesting is that when we evaluate, and also when we implement, we’re looking at the innovation and the people doing the work, that interaction. But there’s almost this unspoken rule, and again this pre-given of people as passive recipients: the people just need to do this work, and if there’s a problem with the people doing the work, it’s a lack of knowledge and skills. It’s not necessarily that way. Before I go into why the interactions, Dan, I just want to touch on that a little bit further with an example.

Deena Hamza:

I’m just coming out of a 3-year study at my institution looking at competency-based medical education, that promissory discourse, and the failure-to-fail phenomenon. A lot of it is really interesting when you’re talking to faculty and trainees and explicitly asking them, “Do you need more faculty development? Do you know how to do the practices and activities of competency-based medical education related to authentic feedback?” We know that this is a problem, not just in my institution but elsewhere.

Deena Hamza:

What we found, as the most prominent example, is the difference between the 4 and the 5: “I can’t give the 5, I give the 4.” The response, or the solution, is often, “Well, you need more faculty development, because you just don’t get it; there’s a difference between the 4 and the 5.” In my conversations… and I guess I’m giving away some of the findings before the paper is released early next year, but that’s okay, all for Academic Medicine.

Deena Hamza:

One of the findings is that what we’re really asking people to do is ignore either perceived or actual consequences associated with the system. If we’re not looking at that systems piece, we’re going to constantly have this conflict where people don’t feel comfortable giving the 5. Even though we have an adaptation where we put, in brackets, “I didn’t need to be there, in theory,” it still did not solve the problem, because we’re not looking at the system aspect, the interaction between the people doing the work and the system.

Deena Hamza:

Another example is about the authenticity of feedback and how some faculty have concerns about how this data is going to be used in the future. Not only are they told, as in the example of the 4 versus the 5, “Ignore consequences,” but here we’re also saying, “Overlook these unknowns, just do what the innovation’s telling you to do. Just pretend that that issue you have there, it’s just not there. Maybe you just need faculty development, that’s all that you need.” I think that’s an issue when we’re trying to work toward culture change and this value change within the people on the ground doing the work that will ultimately affect the longevity of the change.

Deena Hamza:

Coming back to your question, though, Dan, about why the interactions and not the components alone: with this model that we created, we’re avoiding assumptions. We’re eliminating the assumptions of traditional stage models. We’re acknowledging that, when we implement something new within a context, there are interrelationships, not only between organizational levels.

Deena Hamza:

The system is like an onion; there are so many levels there. But also the innovation processes and the people doing the work. We’re acknowledging that there’s this continuous, mutual influence between the change we’ve introduced into the system through the innovation and all these other aspects. Really, we’re also acknowledging that different parts of, as I said, the ecology of change (the innovation, the people doing the work, and the system) are at different stages in the innovation process at the same point in time.

Deena Hamza:

Let’s say the people doing the work and the innovation are trying to work together, but the system’s not yet keeping up. We want to capitalize on what we’re learning from these interactions when we evaluate, and then use the momentum from the cascading effects to drive those changes. Drive the changes in the system rather than just saying, “Well, the system’s static and that’s it.” Well, it has to evolve; everything has to evolve. Maybe not at the same time, but it has to evolve over time.

Dan Schumacher:

Yeah. I think that’s one of the most important pieces of the model you’ve developed. Oftentimes when we’re trying to roll out an innovation, when we think about the system and the people who are doing the work, we think, “Well, obviously the system should want to do this, it’s a good innovation, and the people should want to do it, it’s a good innovation.” The most we do is give a nod to how we can convince the system and the people doing the work that that’s true. We don’t really think about meeting the system and the people doing the work where they’re at and giving them equal footing in rolling out this innovation. I think it’s such an important piece of your model.

Deena Hamza:

That’s a great, great point that you make, Dan, because this ties into our broader conversation about equity, diversity, and inclusion. When we’re innovating, who are we bringing to the table? We can no longer innovate in a way that’s siloed, because that will ultimately hinder the longevity of an innovation. Or have we not included the people and the system in the design in the first place, which then leads into that?

Deena Hamza:

It’s thinking about a proactive rather than a reactive, or slowly reactive, approach. I often say we want to be actively reactive at this stage, but ultimately we want to move, to evolve, into a space where we are proactive in the way that we design, implement, and evaluate.

Dan Schumacher:

Yeah. Building on this and you already alluded to this a little bit, but how do you envision your model being used and by whom?

Deena Hamza:

I’ve been using this model for the past 3 years in my evaluation work at the University of Alberta, using it to guide some of the questions or directions and to identify gaps in my evaluation related to really focusing on the people doing the work and their interaction with the innovation. But what are some of the system issues that come up that maybe I’m not attending to? Are there undesirable outcomes coming out in my research?

Deena Hamza:

As an overall framework, I’ve been using it for competency-based medical education. My vision is to operationalize it, and that’s underway. I’ve built an interdisciplinary group, and we are going to be piloting the development, not only of this framework but of its evolution. This is an innovation: the framework is an innovation, it’s going to evolve over time, it’s not static, so I’m not immune to my own critiques about the design, implementation, and evaluation of innovations. Our interdisciplinary group is trying to operationalize this framework into a series of questions, so that someone who is new to evaluation, or an early-career researcher, can use it with guided questions rather than the broader framework that you see here.

Deena Hamza:

One place we’re piloting this is with an Indigenous community and with an engineer who specializes in Indigenous engineering, to look at water safety. That project is about water safety in an Indigenous community. Here, we’re bringing many different people to the table in a governance structure to make decisions about how this water safety is going to be accomplished. We’re going to use this framework to take a look at the governance structure itself.

Deena Hamza:

In addition to competency-based medical education, we have another team member from engineering who’s looking at competency-based professional training of engineers, so we’re going to pilot with that group as well. We’re also looking at the Internet of Things, project management using automation: what are some of the unintended, I like the word undesirable, undesirable outcomes of this type of innovation?

Deena Hamza:

You can see from my strategy, I’m bringing in diverse people, I’m bringing in diverse perspectives. We have a medical sociologist, we have an evaluation scientist. My Co-PI is Dr. Betty Onyura from the University of Toronto, so it’s a pan-Canadian project to operationalize this tool. Ultimately, my goal is that it would be used for any type of innovation, any type of design, implementation, and evaluation, not just health professions education.

Toni Gallo:

Do you have any suggestions for readers who are picking up this model and want to use it in their own practice? Do you have some suggestions or recommendations you would give them for getting started?

Deena Hamza:

I would start with the 6 guiding questions and build additional questions off of those. As you ask the 6 guiding questions that are in the framework, other questions will arise from the conversations you have related to the innovation. Let’s say you’re interviewing faculty or creating a survey for trainees: with these guiding questions, ultimately more questions will come forward. And you can reach out to me and I can help you build your evaluation plan, because there are some people internationally who have connected with me based on this framework and said, “I really like this framework. How can I ask certain questions?” Or, “These are our goals, how do we build it?” And we build it from there together until this is operationalized.

Toni Gallo:

Dan, any final questions?

Dan Schumacher:

I don’t think so. This has been a great conversation actually.

Toni Gallo:

I completely agree. The lens that you look at evaluation through, and bringing in work from other fields and industries, I think that’s really interesting. And how you think about all of the different pieces that go into an innovation, I think that’s going to be really helpful for folks as they’re thinking about the whole life cycle of their work.

Deena Hamza:

You know what? I’m curious to see how people experience this tool. I often tell people that to design an innovation requires a lot of vulnerability, to then receive that… I want to say critique; some people just criticize, and that’s not where we want to be. We want critique in order to improve, because we’re all working, or I would hope that we’re all striving, for the same goal: to be better and to accomplish things together.

Deena Hamza:

I think it also requires that vulnerability from the people who provide critique, because maybe you’re critiquing it and you’re identifying something that doesn’t quite work, but you have to be vulnerable enough to say, “I just don’t know what the solution is, but I’m identifying that there’s this problem here.” If we bring people to the table and we’re both engaging in a way where we’re open to one another’s feedback, that’s really how we’re going to evolve the innovation. If I’m closed off and saying, “This is my innovation, it’s perfect, that’s it,” we’re not going to evolve to be better. I think that vulnerability has to be on both ends.

Dan Schumacher:

I have to tell you, a number of times in this conversation you have called out the use of words quite intentionally, and I love that. Like you just talked about the difference between criticizing and critique: critique being a very good thing, but criticizing not being very helpful. You’ve done this a couple of other times in this conversation as well. I too am someone who really focuses on the intentional use of words, picking words that really mean what we want them to mean and avoiding words that don’t.

Deena Hamza:

Thanks, Dan. I think it’s important to be really clear about those intentions. Even when we write, in HPE we’re moving more fully in this direction of reflexivity, identifying your worldview and sharing that with readers so people can understand your perspective or why you define a word a certain way. For example, integrity, integrity of implementation: as an implementation scientist, that to me means fidelity; they’re synonymous. We have to be very intentional about how we define things and how we use them, not only in conversation but even in our writing.

Toni Gallo:

As an editor, I’m all for this. This is good. Deena, any final comments that you want to share with everybody?

Deena Hamza:

I’m just so excited that this paper was chosen to be discussed on this podcast. Dan, I couldn’t have wished for a better person to ask me these questions. I was waiting for this, so this has been a great experience. As I said, I’m really eager to receive feedback about the eco-normalization model. I want people to try it out and say, “This didn’t really work,” or, “The system, what can we change and what can’t we change?” That’s actually a direction of my research right now: what’s malleable and what’s not malleable, and then what are the cascading effects of that? Do those go back on the people or on the innovation, and how do we move from there?

Deena Hamza:

I think the more people try out this framework, the more we’re going to build evidence and improve it and evolve it over time.

Toni Gallo:

Well, thank you both so much for joining the podcast today. This was a great conversation. I’m really excited to share it with our listeners. Everybody can find Deena’s paper, which lays out the model that she discussed here, in the RIME supplement, which is available on academicmedicine.org right now. You can go find that for free. Thanks everyone.

Toni Gallo:

Remember to visit academicmedicine.org to find the complete RIME supplement, as well as the latest articles from the journal and our archive dating back to 1926. You can also access additional content, including free eBooks and article collections.

Toni Gallo:

Subscribe to Academic Medicine through the subscription services link under the Journal Info tab, or visit shop.lww.com and enter Academic Medicine in the search bar. Be sure to follow us and interact with the journal staff on Twitter at @acadmedjournal and subscribe to this podcast anywhere podcasts are available. Be sure to leave us a rating and a review. When you do, let us know how we’re doing. Thanks so much for listening.
