Ask the Editors: Striving for Clarity in Designing and Reporting Quantitative Research

On this episode of the Academic Medicine Podcast, the journal’s editors–Colin West, MD, PhD, Yoon Soo Park, PhD, Jonathan Amiel, MD, and Gustavo Patino, MD, PhD–join host Toni Gallo to share practical guidance for designing and reporting quantitative research. They share tips for success and flaws to avoid around designing your study, using descriptive and inferential statistics, and analyzing and presenting your data. While the advice in this episode comes from the editors of Academic Medicine, much of it also applies to designing and reporting quantitative research for other journals and publications.

This episode is now available on Apple Podcasts, Spotify, and anywhere else podcasts are available.

A transcript is below.


Transcript

Toni Gallo:

Welcome to the Academic Medicine Podcast. I’m Toni Gallo. Last month, I spoke to some of the journal’s editors about designing, conducting, and writing up your qualitative research. We talked about how to get started; how to align your research question, your methodology, and your data; and how to effectively use reporting guidelines. And then, we dispelled some common myths about reporting qualitative research. If you haven’t listened to that episode yet, definitely go back and find it in our archive. This week, for the second episode in our Ask the Editors series, I’m joined by the journal’s editors who are experts in quantitative research. We’ll get into study design and data analysis and presentation, and we’ll share some tips for success and flaws to avoid. And again, we’re going to end our episode with some listener questions today. So to get us started, I’d like to do introductions.

Colin West:

Welcome everyone to the podcast. I’m Colin West. I’m a general internist and biostatistician at Mayo Clinic. I was a statistical editor for Academic Medicine in years past, and I’m currently one of the deputy editors for the journal.

Gustavo Patino:

Hi, everyone. Thank you, Toni, for having us. My name is Gustavo Patino. I’m at the Western Michigan University Homer Stryker School of Medicine, where I am the associate dean for undergraduate medical education. At Academic Medicine, I’m one of the assistant editors.

Yoni Amiel:

Hi everybody, I’m Yoni Amiel. I’m a professor of psychiatry and senior associate dean for innovation in health professions education at Columbia University in New York. And I’m an assistant editor at the journal as well.

Yoon Soo Park:

I’m Yoon Soo Park. My background is in statistics and measurement science. I’m based at the University of Illinois College of Medicine. Within Academic Medicine, I serve as the consulting statistical editor. Really, really excited about this episode.

Toni Gallo:

I’m excited about it too. Thank you all for being here. So I’m going to turn it over to you, Yoon Soo. I know you’re going to give us a little bit of context for our discussion today and talk about why we wanted to cover this topic.

Yoon Soo Park:

Thank you. The overarching theme for this episode is to provide practical guidance for quantitative studies, with perspectives coming from the journal’s editors. And I think there are several factors that make practical guidelines particularly useful in quantitative studies. As we all know, the field of medical education and health professions education is quite complex, in that we are a vastly interdisciplinary field. We have colleagues coming in from medicine, psychology, humanities, and other backgrounds, which brings with it standards from multiple disciplinary domains. In addition, there are also unique aspects to our discipline, particularly aspects that relate to methodology. All of this put together really prompts us to identify the foundational aspects of quantitative studies that are essential for reporting standards and rigor. And so, in this episode, we will be providing practical tips, for example, areas for authors to prioritize and methodological flaws to avoid, in addition to answering general questions from the audience.

Toni Gallo:

So you all, in addition to being editors for Academic Medicine, are reviewers and researchers yourselves. So this is something that you do in your own practice. I thought we could start with just some areas that you want authors to pay attention to. So what should folks prioritize as they’re conducting and writing up their research? Colin, you want to get us started?

Colin West:

So each of us is going to speak to a specific aspect of the quantitative research process. And I’m going to start us off by just giving a couple of broad tips relating to descriptive statistics. These will be general, because I think it’s important to step back and think about why we’re reporting or analyzing things the way that we are. So the first tip is really try, as an author and a researcher, to give readers a complete picture of who participated in your study. So as you think about presenting demographics and other descriptive data, and information on the outcome experiences of your participants, really use this section to provide the context for your study. That may sound a little surprising, because it seems more qualitative than quantitative in some respects, but it’s really important to provide that stepping-back context and to let the numbers, the data, share that with the people that you’re trying to reach with your work.

Which leads me into my second tip, which is think about how you’re summarizing your key results, keeping in mind that the data are really telling a story, and your job is to bring that story out, in as accurate a way as possible. So how do you do that? How do you make sure the characters are clear? How do you make sure your readers understand what’s involved? That means including all the main variables, making sure the definitions are clear. If you have quantitative variables, thinking about your measures of central location, like what are the means, the medians? What are your measures of variability? But using those statistics to really help your readers understand the context of the larger story. I’m going to hand it over to Yoni to say a little bit about data presentation now.
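
As an illustration of the kind of descriptive summary Colin describes, here is a minimal Python sketch. The scores, sample size, and variable names are entirely hypothetical, invented only to show one way of reporting the mean with a standard deviation alongside the median with an interquartile range.

```python
import numpy as np

# Hypothetical exam scores from a made-up 10-participant workshop study.
scores = np.array([68, 74, 81, 77, 90, 62, 85, 79, 73, 88])

print(f"n = {scores.size}")
print(f"Mean (SD): {scores.mean():.1f} ({scores.std(ddof=1):.1f})")  # ddof=1 gives the sample SD
print(f"Median (IQR): {np.median(scores):.1f} "
      f"({np.percentile(scores, 25):.1f} to {np.percentile(scores, 75):.1f})")
print(f"Range: {scores.min()} to {scores.max()}")
```

Reporting both the mean/SD and the median/IQR lets readers judge the shape of the distribution rather than relying on a single summary number.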

Yoni Amiel:

Thanks, Colin. So when I think about data presentation, I think about the tables and the figures that authors use to convey, as Colin said, the story of the research project and the report. So generally, you want to present data demonstrating how well your study aligned to your plans. So as Colin referred to, thinking about participation numbers, rates at each stage of data collection, subject demographics, when they’re pertinent, and then, you want to be thoughtful about how you present your outcomes, which are going to depend on your study design.

So a couple of thoughts on that. First, you can generally expect readers and reviewers to be familiar with methods that provide data on one group, so descriptive data on one group, or compare outcomes between two groups, like pre-post or case-control or matched designs. These are fairly typical and familiar. And so, for these types of studies, you want to present your results in a way that helps your readers understand their significance and their generalizability. So practical or clinical significance matters. Sharing the overall gross finding is critical, because people are going to try to take a look at what you’re reporting, to see if it matters to their practice. And when relevant, statistical analyses are also really important. So reporting your P values, your confidence intervals, effect sizes are all helpful so that people can contextualize the work and understand it. That’s the kind of direct advice on the most common types of studies.
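
To make that reporting advice concrete, here is a hedged Python sketch for a simple two-group comparison. The data and group labels are hypothetical; the point is simply that a mean difference with a 95% confidence interval, a P value, and an effect size such as Cohen's d can be reported together so readers can judge both statistical and practical significance.

```python
import numpy as np
from scipy import stats

# Hypothetical post-test scores for a made-up control and intervention group.
control = np.array([70, 72, 68, 75, 71, 69, 74, 73, 70, 72])
intervention = np.array([74, 78, 73, 80, 76, 75, 79, 77, 74, 78])

t, p = stats.ttest_ind(intervention, control)

# Cohen's d using the pooled standard deviation.
n1, n2 = len(intervention), len(control)
pooled_sd = np.sqrt(((n1 - 1) * intervention.var(ddof=1) +
                     (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
diff = intervention.mean() - control.mean()
d = diff / pooled_sd

# 95% confidence interval for the mean difference (pooled-SD standard error).
se = pooled_sd * np.sqrt(1 / n1 + 1 / n2)
ci_low, ci_high = stats.t.interval(0.95, df=n1 + n2 - 2, loc=diff, scale=se)

print(f"Mean difference = {diff:.1f} (95% CI {ci_low:.1f} to {ci_high:.1f}), "
      f"t = {t:.2f}, p = {p:.3f}, Cohen's d = {d:.2f}")
```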

But I would like to say that, when your study design is more intricate and includes methods such as regression analyses, cluster analyses, analyses across groups and across time, then it’s helpful to present high level data, as we discussed before, but in some kind of tabular format. And then, to offer a flow chart or a graphical representation of the findings superimposed on a study design, so that people can really understand how this worked and how you were thinking about the data as you were collecting and analyzing it. When you have them, large raw data sets are important for reviewers in particular, but those should really be included as supplemental digital content, since they’re likely to be too in depth for a general readership. So I’ll stop there and transition to Yoon Soo.

Yoon Soo Park:

Thanks, Yoni. So for my practical tips, I’m going to talk about analysis and statistical inference, or implications, specifically when writing a manuscript. And I want to focus on 2 areas: first, the data assumptions, and second, the analytic components. So for data assumptions, it’s important to clearly state what they are and how they’re being used. Data assumptions are essential in any study, because we know that every study has some limitations and specific characteristics that relate to the data. And these data assumptions also prompt the types of analyses that are possible in our projects and in our studies.

For the second part, which is the analysis, it’s important to remember that keeping the analysis simple and straightforward is key. We do not necessarily need to engage, for example, in all possible analyses with that data, nor do we need to use the most sophisticated type of approach. I tell my colleagues that sometimes the simplest analytic approach could be the best, because it satisfies the data’s assumptions and makes the study inferences and implications straightforward.
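
One way to picture this advice about letting assumptions drive a simple analysis is the sketch below. The ratings are invented, and the check-normality-then-choose-a-test heuristic is just one simple (and admittedly debated) illustration of letting a stated assumption determine the analysis, not a prescription.

```python
import numpy as np
from scipy import stats

# Hypothetical ratings from two small, made-up groups.
group_a = np.array([3, 4, 4, 5, 3, 4, 5, 4])
group_b = np.array([2, 3, 3, 4, 2, 3, 3, 2])

# Shapiro-Wilk as a rough normality check (low power with samples this small).
_, p_a = stats.shapiro(group_a)
_, p_b = stats.shapiro(group_b)

if p_a > 0.05 and p_b > 0.05:
    # Normality assumption looks tenable: a simple t test.
    stat, p = stats.ttest_ind(group_a, group_b)
    test = "independent-samples t test"
else:
    # Otherwise fall back to a nonparametric alternative.
    stat, p = stats.mannwhitneyu(group_a, group_b)
    test = "Mann-Whitney U test"

print(f"{test}: statistic = {stat:.2f}, p = {p:.3f}")
```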

Also, just one last practical tip is to check that the analysis aligns with the research questions, so that the results are presented accordingly. And so, there is a flow between the research questions, the methods section, and then the results section of the manuscript. Next, I’ll turn it over to my colleague, Gustavo.

Gustavo Patino:

Thank you, Yoon Soo. So I’m going to be talking about study design, and I’m going to build on everything that Colin, Yoni, and Yoon Soo just mentioned and say that your research question must always be front and center when you’re thinking, OK, what kind of study am I going to do. And everything has to flow from there. So the first thing is to pick outcomes that really answer the question that you’re asking. Very often, we see manuscripts that have a very good question, but then, the things they end up measuring and comparing have very little relevance or are unable to answer that question. And that is a big flaw. So always start with, what do I want to know? With that, pick the right outcomes. And just as Colin was saying, you have to tell a story. What is the easiest way to tell a story? As Yoon Soo mentioned, try to tell the story as directly as possible.

Every time you’re doing any statistical analysis, creating a research study, you are creating a model. And I really like that saying that all models are wrong, but some of them are good. You’re never going to be able to capture the whole world in your model. You’re never going to be able to account for everything in your study, but what are the things that are most important for you to know, to get to that answer? Again, a lot of people hear all these buzzwords in the media about machine learning, hierarchical regression, quantile regression. They’re like, you know what, I want to do one of those studies. Again, start with the question and pick the right design that will answer that question in the easiest way possible. Especially if this question is being answered for the first time, it’s perfectly OK to start with something simple and then, little by little, start building on it. It doesn’t have to be in this paper, but it can be a research program that builds, over and over, on those basic blocks.

Toni Gallo:

So I have a question for all of you. You kind of touched on this. You talked about you’re telling a story across your whole paper, and I think something that authors struggle with sometimes is the discussion section and how to go from reporting the results, those findings, to the implications or what do those numbers actually mean. So I wonder if you could each share how you think about writing a discussion section or how you think … when you’re reviewing a paper, when you’re thinking about how to go from “here are the numbers that came out of this study” to “what does this actually mean? What can readers take away?” How do you think about that part of a paper?

Colin West:

It’s a great question, Toni, and I’ll share a structure for a discussion that I actually learned during residency training from one of my mentors. And I came out of graduate school thinking I knew how to write a paper, and I rapidly learned I knew nothing. This structure has served me well for more than 20 years, and I think it would help authors if they followed it, putting their own individual stamp on it as needed. And so, my discussion sections generally try to have 4 main content areas, and each of these is fairly brief. The first is restating for your readers, usually in the first paragraph of the discussion, what are the key results? You’ve just presented several paragraphs and tables of results. How do you help your reader identify what are the key findings of this study?

The second part of the discussion is, how are these relevant in the background of the larger literature? How do they connect with what we already know? And what do they add to the literature? That gets a little bit at some of the implications, but if you can anchor this in what’s already known, the implications in the added value actually become easier to write about. Because you’ve got that, back to that word context, you’ve got the context and you’re keeping yourself reminded of that.

The third section is an honest discussion of the limitations, which sometimes leads to open doors to here’s where the research needs to go next, because not every study’s going to be perfect. That’s not possible, and not every study can answer every question that’s relevant. So you can open the door to say, how do we build on this?

And then, finally, a concluding paragraph that really ties everything together. Again, why does this work matter, very briefly? It’s not perfect, that’s fine, but how does this help move the field forward? That brief sort of 4 point outline, which is usually not more than 6 total paragraphs, allows for a fairly focused discussion to get at some of what you were asking about, about placing your work into the larger context. Curious if my colleagues have additional thoughts or approaches that they use.

Yoon Soo Park:

Actually, I follow a very similar format to the one Colin just mentioned, and I think that’s such great advice, because it really structures a manuscript. It really brings everything home. Maybe just 2 other things, in my sort of personal style. Particularly when we’re getting to the part of interpreting the results, I try to extract, based on the empirical data, the results that we want to illuminate and magnify, so that we can put some of them at center stage. At the same time, we want to place them within the literature, because the field is moving in a certain direction, and we’re trying to contribute, we’re trying to advance the knowledge of the field. So put all of that into place, based on the analysis that you’ve just done. And then, of course, follow Colin’s outline for the next steps, which are the limitations, which will prompt future studies as well, and then the conclusion. So I really like that 4-step process as well, just to echo Colin’s points.

Gustavo Patino:

Yeah, and I think the outline that Colin gave is very useful. I will reiterate what he mentioned. In the first paragraph, you don’t have to go over every single result of the paper, like what are the key points, the most important things that you want to highlight?

Yoni Amiel:

Maybe if I can add just one more piece, thinking about this really as a reviewer and as an editor: I really appreciate both the structure that the group floated out there and when authors remind us of their research question and tie what they found back to that question. For me, that really sets up the limitations section, where sometimes authors are overly critical of themselves. I find this to be one of the most interesting sections, because sometimes the results are different than the question or the hypothesis would have suggested. And so, for authors then to reflect on what was surprising to them in their findings and whether that reflects any limitations, or maybe advances them toward conclusions about what other questions might be coming up, I find that to be really, really powerful and interesting and generative. So I appreciate a focus there.

Toni Gallo:

Thank you. OK, I have one more question related to this, and that’s about incorporating tables and figures and other kinds of exhibits. Most research reports will include something, and I think it might be difficult for authors to figure out, What do I put in the text? What do I put in my tables and figures? How do I best present my data, so that I’m giving you a comprehensive picture of my findings? But I’m not just listing 3 or 4 pages of just number after number after number. So do you have any recommendations for the best way to do that? Or how authors should think about that balance of, you know, the text versus the tables and other exhibits?

Yoni Amiel:

So maybe I can start with a couple words here, because I think it ties to the data presentation advice, both to what we talked about already and then a little bit to what we may get to a bit later on. You want to make sure that your tables and your figures are able to stand alone. And so, to the degree that you can take a look at how they tell the story that you mean to tell and make sure that you’re not requiring your readers to go too much back and forth between your tables and your figures, particularly because often, your tables and your figures are what will be abstracted into slides that people use in other presentations and share with other folks as shorthand for what your paper meant. And so, I would make sure that those figures and tables can be used for this purpose. And then, in the text obviously, you can be a lot more nuanced about the details of what you’re trying to present there.

Gustavo Patino:

I’ll throw out the way I see it, and I would be interested to hear what everyone thinks about it. So when I’m thinking of the study design, I actually start with, what are the arguments that I want to make to convince you of the answer to the question? And for each of them, it’s like, Is this going to be a table or is this going to be a figure? And what would those tables and figures look like? And that helps me think, this is the data that I’m going to need to collect so it’s a proper design. That helps guide everything, and then, the text of the paper is just going to explain how we got to those tables and figures and what the key takeaways are from each of them.

Yoon Soo Park:

Yeah, I agree with everything that my colleagues, Gustavo and Yoni, just talked about. I wanted to focus on some aspects that relate to the analysis, because they also have implications for how we present the data. So the type of analysis certainly contributes a great deal to how we present, but so does the type of variable. If we’re dealing with continuous variables, or with non-continuous or categorical variables, that also, surprisingly, makes a big impact on how we present our results. So that’s something that I always tell my colleagues as well: think about the type of variables.

And then, just one other thing to add, particularly in quantitative studies: there is a wide range of statistics out there, and not all of our readers know all of them. So whether in tables or in the narrative of the manuscript, I always tell my colleagues to explain the particular statistic that’s being used and what its implications are. If it’s used in a table, it would be really nice to have that explanation as a footnote, what the statistic means and how it applies to the finding as well. That part of making the results friendly to interpret actually makes a big difference for both the reviewers and, I think, particularly for the readers of the manuscript.

Colin West:

This is great advice from the entire group here really. And I would just emphasize that your tables and your figures, I think, as Yoni said, are really part of how you tell the story. And so, you need to use them as a tool. And because you’re doing that, that also is a reminder that you don’t need to repeat everything that’s presented in a table in the text or vice versa. They complement one another. So although the tables and figures do need to stand alone, they don’t need to be redundant with the full text. A picture’s worth a thousand words. You don’t need to have the figure and a thousand words describing everything that’s in your table.

And then, the complement to that piece of advice is think about what the digestibility of your table or figure is. We sometimes get tables that are 7 pages long, I wish that was an exaggeration. It’s not. That’s not publishable. And even if it was, it’s not easy for your readers to understand. They’ve got to wade through so many rows of information to understand and pick out what’s important and what’s not. So use your tables and figures to tell the story in an honest way, to break it up for your readers a little bit. Some text, some graphic images. That helps the accessibility of the content that you’re trying to share.

Toni Gallo:

Thank you all. This is great advice. So I know you want to talk a little bit about some flaws that authors should avoid when they’re writing up their quantitative studies. So I’ll turn it back over to Colin to kick off this section.

Colin West:

Yeah, so I think the common thread through a lot of what we’ve spoken about already is really a focus on clarity, and my flaws to avoid are along that same theme. So the first flaw that comes up commonly is sharing results that leave readers guessing about who was in the study or what you actually observed. And so, as you think about your results, really try and step out of your researcher mode, because no one knows your study better than you do. But imagine yourself as a reader who doesn’t know anything about your study. Is it clear who you studied, what you found? Are the descriptive statistics supporting that? So that you are, again, getting to that story in a way that’s as clear as possible.

Sort of a sub-tip here is make sure you consider who might not be represented in your data: non-responders in studies, incomplete and missing data, and maybe underrepresented groups, whether from a sociodemographic standpoint or from a clinical or education audience standpoint. Who do your data not include? Who do they not represent? It’s helpful to include that. The flip side is, if you don’t consider that, you don’t help your readers understand the scope of your work and who it applies to. You’re leaving them guessing about how far to take the results, how far to extrapolate them potentially to other audiences.

Yoni Amiel:

So on data presentation, I’ll flip this a little bit as what to aim for, but I think you’ll hear the flaws to avoid within that. So first, I would just strongly suggest that you double check that the data that you present conforms to the journal’s guidelines. This is a relatively straightforward check, but very, very useful. Because the guidelines essentially aim to make sure the tables and figures can stand alone, as we talked about, and present the information that helps readers judge how the study itself performs. So how representative for the samples, whether the study was sufficiently powered, et cetera, and then help to infer the practical and statistical significance of the findings. So that’s the first check. Looking at the guidelines.

Second is really getting the balance of the data right. As Colin said, sometimes tables can go on and on and on in ways that are really not digestible for the readership, but I would say that too little data in your tables is probably riskier than too much data, especially for the purposes of review. So here, I would recommend 2 checks that could be helpful as you’re preparing to present your data. First, you can look at the PowerPoint files included with research reports in the journal that present data from studies similar to yours that you’ve found compelling. These are ones that have been through careful editing and conform to the journal’s guidelines, so they should be good examples for you. Second, once you have a draft, I strongly recommend sharing the table or figure with a trusted colleague who’s not that familiar with your study and seeing what questions they have. These two checks should help optimize your data presentation for review and for a general readership.

The last note I’d offer is that there’s far less consistency in supplemental digital content. So there you can feel free to provide more rather than less data, of course ensuring that it’s all accurate.

Yoon Soo Park:

And I just wanted to follow up on the thread of clarity. When it comes to the manuscript process, particularly when we get to the analysis section or when we’re talking about the inferential statistics, a lot of times, because of the nature of the analysis that’s involved, the description can get quite complex. It could be overly complex. And so, one of the tips I give my colleagues is to avoid making it overly complex. We want to make sure, yes, there’s complexity, because we want there to be rigor, but we also want to make sure that the analysis section is clear. We want to know why you did specific things in the analysis. How do specific analyses relate back to the research question? So that clarity, even when you are doing a sophisticated analysis, is so important. And yet, it’s one of those very challenging parts that I see when reviewing manuscripts. So again, clarity with respect to analysis.

The second part has to do with sample size and the type of inferences that can be made. In quantitative studies and inferential statistics, sample size is deeply related to the type of inferences that we make, just because of the nature of the approach. And this is widely known in that, in the health professions, we don’t have big, big sample sizes to deal with. What becomes then important is to be clear about the sample sizes that you’re using and to be clear about the limitations. That also has implications on the unit of analysis. So for example, if we are doing analyses that relate to programs and then learners who are embedded within programs, those are different units of analysis. So how are you clarifying those types of analysis within your manuscript? All of those things will have implications. And so, these are quite relevant to discussing the analysis.
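
As an illustration of the unit-of-analysis point, the sketch below fits a random-intercept (mixed-effects) model for hypothetical learners nested within programs. The data, variable names, and model choice are invented for illustration, and with so few toy clusters the fit may well produce convergence warnings.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical learners nested within three made-up programs.
df = pd.DataFrame({
    "score":        [72, 75, 70, 78, 80, 83, 79, 85, 74, 77, 76, 82],
    "intervention": [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1],
    "program":      ["A", "A", "A", "A", "B", "B", "B", "B", "C", "C", "C", "C"],
})

# Learners are the unit of analysis, but the clustering by program is
# modeled explicitly with a random intercept for each program.
model = smf.mixedlm("score ~ intervention", data=df, groups=df["program"])
result = model.fit()
print(result.summary())
```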

Gustavo Patino:

A flaw that I want to highlight regarding study design is that we build a lot around statistical significance, but as Yoon Soo was saying, the sample size is going to have a big effect on your results. And differences that might not have been significant when you had 60 subjects might become statistically significant when you have 2,000. But you, as the researcher, have to step back and say, OK, even if I have statistical significance, is the difference that I’m seeing between the groups important for practice? Is it practically relevant for the field? One example would be, if we’re looking at scores on a scale, and a change of one point becomes statistically significant because of the sample size that you have, you do see a difference. But is that going to be practically useful for other people in the field when they apply it? So don’t just take the statistical significance as dogma and say, we found it. That’s it. The study shows a difference. Always go back to, what is the question you’re trying to answer? And in answering that question, are you answering it in a way that is helpful for everybody else in the field?
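
Gustavo's one-point example can be simulated directly. In the hedged sketch below, the true difference between two hypothetical groups is a single point against a standard deviation of 10; as the sample grows, the P value tends to drop below 0.05 while the effect size stays small.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_point_difference(n_per_group):
    # Simulated scores where the true group difference is one point (SD = 10).
    a = rng.normal(70, 10, n_per_group)
    b = rng.normal(71, 10, n_per_group)
    _, p = stats.ttest_ind(a, b)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    d = (b.mean() - a.mean()) / pooled_sd
    print(f"n = {n_per_group:>5} per group: p = {p:.3f}, Cohen's d = {d:.2f}")

one_point_difference(60)     # usually not statistically significant
one_point_difference(2000)   # often p < 0.05, yet the effect size stays around 0.1
```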

The other thing I’ll mention very quickly is, as Yoon Soo and Yoni have been mentioning, you have to strike a balance between clarity and completeness. And definitely take advantage of the ability to use those supplemental materials. One thing I would say should go into the supplemental materials: with the availability of statistical software, authors might be tempted to just say, oh, we used Stata, we used SPSS. But each type of statistical analysis in each of those software packages has a lot of different options. And for replicability, and to allow us to be consistent across the field with practices, we should consider making the scripts that we used for that software available as supplemental materials too.
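
To give a sense of what such a supplemental analysis script might look like, here is a minimal, hypothetical Python example. The file name, column names, and analysis options are invented; the point is that recording software versions, the random seed, and the exact options used makes the reported analysis reproducible.

```python
"""analysis.py: a minimal, hypothetical supplemental analysis script."""
import sys

import numpy as np
import pandas as pd
import scipy
from scipy import stats

SEED = 20240101  # fixed seed so the bootstrap below is reproducible
rng = np.random.default_rng(SEED)

# Record the exact software versions behind the reported results.
print(f"Python {sys.version.split()[0]}, numpy {np.__version__}, "
      f"pandas {pd.__version__}, scipy {scipy.__version__}")

# De-identified data file assumed to accompany the supplement.
df = pd.read_csv("deidentified_scores.csv")
intervention = df.loc[df.group == "intervention", "score"].to_numpy()
control = df.loc[df.group == "control", "score"].to_numpy()

# The pre-specified comparison, with its options spelled out
# (two-sided Welch t test, equal variances not assumed).
t, p = stats.ttest_ind(intervention, control, equal_var=False)
print(f"Welch t = {t:.2f}, p = {p:.3f}")

# Bootstrap 95% CI for the mean difference, using the fixed seed above.
boot = [rng.choice(intervention, size=intervention.size).mean() -
        rng.choice(control, size=control.size).mean() for _ in range(2000)]
print(f"Bootstrap 95% CI for the difference: "
      f"{np.percentile(boot, 2.5):.2f} to {np.percentile(boot, 97.5):.2f}")
```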

Toni Gallo:

This is all great advice for authors, things they should think about as they’re writing up their work. But I want to turn to reviewers for a second. Do you all have advice for reviewers, as they’re looking at a paper and trying to decide, “Is this something that the journal should publish? What kind of quality is this?” What advice do you have for reviewers?

Yoon Soo Park:

We’re so grateful to our reviewers at the journal, and I think the significance and the contribution of the work and the study would be something that reviewers will look to. In addition, I would say that the methodological rigor, as we’re talking about in this episode, will be the accompanying partner in supporting whether that significance and that contribution make sense. And when I look at the reviewer feedback, when we get the different reviewers’ feedback, I try to see and balance their feedback as it relates to the significance of the work and then the supporting evidence that they perceive is coming from this work.

Even if it’s a very cutting-edge study, if the methods are not there, if the rigor is not there, and if it requires the authors to completely redo or redesign the study, those types of feedback will be helpful for editors, because that will make it difficult for the manuscript to advance, for example. Also, sometimes I get tips from the reviewers that a paper may need some further statistical review, for example. That will be very helpful for editors to make a judgment on the rigor of the study itself.

Colin West:

I want to just build on that last point that Yoon Soo mentioned, because it is very helpful when reviewers can provide methodological insight into the rigor that a study offers. But it’s also perfectly OK to say, “Hey, this is a complex analysis that’s outside my area of expertise, and I can’t comment on the statistical rigor or accuracy of these methods.” That is actually just fine. And it gives us information to make sure that we can obtain reviews or consultation or maybe from a quantitative standpoint, that’s one of those that we bring Yoon Soo into, as our consulting statistical editor, to do a deeper dive into a specific technique. So as a reviewer, you don’t need to feel like you’re an expert on every aspect of a paper that you’re reviewing. Just communicate with us where you feel comfortable and where you feel like something’s a little bit outside your area. And we can then incorporate reviewer comments across the multiple reviewers that we have for every submission.

Yoni Amiel:

Maybe building on that a tiny bit, sometimes reviewers feel like they have to be the experts, and so just being clear about if there’s an area in which you, as a reviewer, are feeling not comfortable or not fully confident, but you think that there might be something there, just flagging that for us is really helpful. And luckily, the folks around this podcast and others and the journal can take a second look, and we often will use one another’s expertise to get a second look or a third look, when there are questions that we flag as editors or our reviewers flag.

Toni Gallo:

I want to turn now to some listener questions. So we asked our readers and we asked on social media if anyone had questions for our editor team related to conducting and reporting research. So I have 2 that I want to put to you today, and the first is around coming up with ideas for research projects. And we actually touched on this last month on our qualitative episode, but how do you all come up with ideas for your work? Or where do you start? Do you start with a research question? How do you come up with those ideas for your own work?

Colin West:

Maybe I’ll jump in with the first piece of this. And in a way that might seem counterintuitive, I very rarely start a research enterprise with a fixed question in mind. I think the background, the fertile soil in which good research develops, is being intellectually curious. This is really about your attitude toward new knowledge, and then, because you’re interested in and passionate about a field, you can follow up on what interests you. So that first piece is you have to be intellectually curious to pursue questions. And then, from there, gaining some experience in searching the literature to see what’s already been done that might relate to the area you’re interested in helps you understand where the gaps are that new research might fill. And then, you start looking at, “Well, OK, what are my educational or clinical questions? How do they link with what I’m curious about? And is there a gap there that a new study might want to fill?” And I think that’s a starting point.

Gustavo Patino:

Yeah. And if I can build on what Colin said, the most important thing is to find what topic you are passionate about. What topic do you really love learning about? Many times, people might try to do research in a field because “Oh, this is a field in which it’s faster to produce papers.” And they might need it for their next level of training or professional development. But so much of research is going in circles, trying to figure out where things went wrong, tinkering, that you have to do it around something that you are curious and passionate about. So always start with, what do you love? What do you want to learn more about?

Yoon Soo Park:

Exactly. And I just also wanted to focus on one specific aspect, which is local studies, because a lot of times, I get questions from my colleagues: “This is a small-scale study. I’m curious whether a topic around this local project may be worth exploring further.” And I say, “It could be.” If you have a local study, and the question it addresses is an issue or a challenge that your colleagues at other institutions are facing as well, that’s a question worth pursuing. And even if it’s not yet a question that’s articulated in the field, it may become one.

So the second element to this is the timing of the question. If there’s a specific momentum in the field, a direction it is heading that we are all curious about 3 to 5 years from now, it may be worth thinking about it and studying it right now. And that’s the perfect fertile ground for local studies to help inspire building a program of research. So local studies that are smaller in scale are so important, because they help build the next generation of questions for the field as well.

Yoni Amiel:

And if I can throw in one final idea, working at academic medical centers, we’re never short of innovation and projects, whether to solve a problem that we’re encountering or to try to do something new and better than something good that we’re doing. And so, as you’re engaged in that kind of process, either as a learner, as a teacher, as someone who supports students, thinking about the overall program evaluation for how to think about whether what you are doing is working and how it’s working. And then, within that, thinking about specific research questions can really make the work come alive. And so, you can do a lot of really interesting scholarly work and research while you’re trying to innovate, improve, solve problems.

Toni Gallo:

So we have another question from a learner, and it’s kind of about, which comes first: Do you build your methodological and analysis skills first? Or do you reach out to a mentor or colleagues to help you first? So what would you recommend for maybe learners or folks who are just getting started?

Gustavo Patino:

I think it goes back to, what do you like learning about? What are you passionate about? If you’re passionate about a topic and then you find a question and you really want an answer, OK, then go learn the tools to answer that question. If you discovered that, along the way, that you love the field of statistics and study design, then that’s also a great thing to pursue. But always for me, it all starts with, what do you like?

Colin West:

Yeah, I think this goes back to that passion and intellectual curiosity piece. That comes first. Any mentor is going to want to work with a learner, at any stage of development, who is intellectually curious, interested, passionate, regardless of their skillset, because that passion is going to drive them to grow their skillset. And hopefully, the mentor can connect them with the resources to grow their knowledge, to get the analytical support that they need. I think the activation energy of feeling like you need to be an expert in analysis before you can even reach out to a mentor is not necessary. And I think it’s something people sometimes think they need, and it’s a barrier. It’s an unnecessary barrier. Bring your interest and your motivation forward, and the right team will help you thrive, building on that intellectual curiosity.

Yoni Amiel:

I love that, Colin. And thinking about this from the perspective of a mentor, what is often really useful is when somebody comes with their passion and their interest and has also reflected a little bit on what they’re bringing to the project. So what do they know, what preexisting knowledge do they have? And then, what do they really want to grow and develop through the project? Because that helps me think about mentoring, kind of using my teacher perspective of, “What can I share? What resources can I connect the junior investigator with to make this a success, not only for this specific project, but really from a mentoring and career development perspective?”

Yoon Soo Park:

And just to add, I get asked this question quite often. What kind of stats courses should I take? What kind of quantitative approaches? And because the field is so unique, I think what’s probably more important, again, just to synthesize the collective thought here, is the passion around the particular area, and more specifically, for the learner, the trainee, to spend some time looking at the literature and really understanding how the story is being told. Because in our field, in medical education, we have a specific way the story is told in the journals, in terms of what methods we use and how the results are presented. That flow and being familiar with the literature, I think, is super important, and that will actually make understanding some of the methods-related techniques much easier down the line.

Toni Gallo:

Well, thank you. And I want to encourage our listeners, keep an eye out. We’re going to do some more of these Ask the Editor calls, so pay attention to our social media and some other journal sources. You might see another call in the future.

So as we close today, I want to turn it all back to you for any final thoughts or last pieces of advice that you want to share with our listeners.

Colin West:

I would just say, you’ve heard a thread coming through this really of 2 things. Number 1, strive for clarity in your presentations, and the second is follow what you’re most interested in, what you’re passionate about. And those 2 pieces are cornerstones of success in scholarship. From a resources standpoint, I would encourage all of our listeners to look on the Academic Medicine homepage. There are a host of resources, both for authors and for reviewers, in how to think about research manuscripts, including the quantitative assessment and methodological recommendations, which are a great starting point for people who have additional questions.

Yoon Soo Park:

Two quick thoughts to follow up here. The first is to make your writing clear and friendly to the readers, so that the presentation of the statistics and the interpretation becomes much more accessible. And the second is to keep things as simple as you can. Simple is beautiful in quantitative analysis. Simple is straightforward. Simple helps get from the research questions to the answers.

Yoni Amiel:

One thing that I can add is to think about a project as a stepping stone to an overall program of research. And so, thinking about the overall trajectory of your work as a series of small projects can help reduce the barrier to entry or how anxious people get about getting started. Starting small is fantastic and actually more likely to create findings that the rest of us find incredibly useful.

Gustavo Patino:

For my tip, I’m going to build on something that Yoni said before: having collaborators and sharing your ideas with them. Definitely take time to build your network, and take time to just go spend time with your collaborators, sharing ideas, bouncing around crazy plans. Some of them will pan out, some of them will not, but you’ll keep learning. By the time you submit your paper, behind it there’s a whole network of collaborators supporting your endeavor, even if they are not listed as authors; they have given you input, they have helped you with questions. So definitely, don’t feel like you have to do everything by yourself.

Toni Gallo:

I think that’s a great point to end our conversation on. I want to thank you all for being here today and for sharing all of these great tips and strategies, and I want to encourage our listeners, if you haven’t listened to our qualitative episode, definitely go back and find that one too. Because you’ll find other information to help you, if that’s the type of research that you’re doing. So thanks everyone.

Yoni Amiel:

Thanks, Toni.

Gustavo Patino:

Thank you, Toni.

Toni Gallo:

Remember to visit academicmedicine.org to find the latest articles from the journal and our complete archive, dating back to 1926. You can also find additional content, including the author resources we mentioned today, as well as free eBooks and article collections. Subscribe to Academic Medicine through the Subscription Services link under the Journal Info tab or visit shop.lww.com and enter Academic Medicine in the search bar. Be sure to follow us and interact with the journal staff on Twitter at @AcadMedJournal, and subscribe to this podcast anywhere podcasts are available. Leave us a rating and a review when you do. Let us know how we’re doing. Thanks so much for listening.