On this episode of the Academic Medicine Podcast, discussing Academic Medicine’s and MedEdPORTAL’s new policy guiding the use of AI tools in the peer review process are Academic Medicine editor-in-chief Laura Roberts, MD, MA, MedEdPORTAL editor-in-chief Lauren Maggio, PhD, MS(LIS), Academic Medicine associate editor Krisztina Fischer, MD, PhD, MMSc, and AAMC director of journals Mary Beth DeVilbiss. They provide an overview of the journals’ new policy and use a series of common peer review scenarios to explore what’s appropriate, what’s not, and what you should think about before using AI as a reviewer.
This episode is now available through Apple Podcasts, Spotify, and anywhere else podcasts are available.
A transcript is below.
Check out the resources discussed in this episode:
- Fischer K, DeVilbiss MB, Maggio LA, Roberts LW. Artificial intelligence tools in scholarly publishing: Guidance for peer reviewers. Acad Med. 2026;101:237-241.
- Academic Medicine Use of Artificial Intelligence tools in peer review policy
- MedEdPORTAL Use of AI Tools in Peer Review policy
- Use of AI tools in Academic Medicine submissions policy
- Use of AI tools in MedEdPORTAL submissions policy

Transcript
Toni Gallo (00:03):
Welcome to the Academic Medicine Podcast. I’m Toni Gallo. Earlier this year, Academic Medicine and MedEdPORTAL introduced a new policy guiding the use of AI for peer review. In the March issue, Academic Medicine editor-in-chief Dr. Laura Roberts, MedEdPORTAL editor-in-chief Dr. Lauren Maggio, Academic Medicine associate editor Dr. Krisztina Fischer, and the AAMC’s director of journals Mary Beth DeVilbiss wrote an editorial explaining this new policy and the principles supporting it. You can find the links to the new policy and the editorial in the show notes for today’s episode. I’m joined by Laura, Lauren, Krisztina, and Mary Beth today to talk through a series of scenarios related to using AI for peer review, rooted in Academic Medicine’s and MedEdPORTAL’s new policy. We’ll get into what’s acceptable, what’s not, and what you should think about before using AI as a reviewer. And while our conversation is going to be focused on Academic Medicine’s and MedEdPORTAL’s policy, I encourage you to think about the general principles when you’re reviewing for other journals too. But always remember to check a journal’s website for their specific policies and requirements before reviewing or submitting. With that, let’s start with introductions.
Laura Roberts (01:24):
Sure. My name is Laura Roberts. I serve as the editor-in-chief of Academic Medicine, and I also am the department chair of the Department of Psychiatry and Behavioral Sciences at Stanford University.
Lauren Maggio (01:36):
Hi everyone. My name is Lauren Maggio and I am the editor-in-chief of MedEdPORTAL. I’m also a professor and director of research in the Department of Medical Education at the University of Illinois College of Medicine.
Krisztina Fischer (01:50):
Hi everyone. My name is Krisztina Fischer. I’m the faculty director of the Master’s in Medical Sciences and Medical Education Program at Harvard Medical School, and I also serve as one of the associate editors at the journal.
Mary Beth DeVilbiss (02:02):
And I’m Mary Beth DeVilbiss. I’m the director of journals at the AAMC, so I work very closely with Laura for Academic Medicine and Lauren for MedEdPORTAL, managing the staffs of those journals and working on policies and priorities like the AI tools policies.
Toni Gallo (02:21):
Thank you all for being here today. I want to start with just a little bit about the policy that we’ll be talking about. So Laura, Lauren, can you tell us what’s in this policy? Why did you decide that this was the time that we needed something like this for the journals?
Laura Roberts (02:37):
Thanks so much, Toni. AI has just taken the world by storm over these recent months, and it’s arriving at a time when reviewers for journals were already dealing with tremendous burdens, a lot of work, everybody’s squeezed for time. The demands on reviewers are just immense. And so there’s a natural pressure for reviewers to want to be efficient with their time. They also want to be well-informed and want to be able to reach out to the literature and broaden their expertise. And so AI would naturally be attractive in terms of being efficient and then helping people stretch and deepen their expertise. So to me, it’s very natural that people would want to rely on AI in doing peer reviews, but it has a lot of risks. And so we felt like it was important to come forward with a framework for thinking through how to use AI responsibly when performing peer reviews.
(03:39):
And we wanted to… because the field is moving so fast and the technology is moving so fast, we didn’t want to set up hard and fast rules. We thought an ethical framework and some principles would be the most useful for our colleagues and for us in trying to make decisions around the use of AI in peer review. So we came up with a framework that really places emphasis on professional accountability. Embedded in that are ideas of quality and excellence, integrity, and respect for the ownership of the intellectual work of authors. A second principle was confidentiality, and there the concern was really a deep respect for the confidentiality of, say, participants in studies that might be included in original research manuscripts; by using these AI tools, that confidentiality could potentially be broken. Third was fair mindedness and avoidance of harm. And then fourth was transparency and disclosure about AI use. So we advanced that framework, which we thought would have valuable and robust principles to help us, but also would be flexible enough to meet whatever the next development in the technology might bring.
Lauren Maggio (04:55):
And just to piggyback on what Laura said, I think the timelessness of the principles that we’ve put forward will really help us as we move forward with this. One of the things I wanted to point out is that both of the journals will be using this policy, so we came together to synchronize around what we thought those core principles are and how they can be applied. So very much looking forward to getting this out.
Laura Roberts (05:19):
May I just jump in? Part of the idea for this editorial came from Krisztina, it was Krisztina’s original idea. So Toni, if you don’t mind, I’d like to ask Krisztina, what made you appreciate this issue so early? We discussed this, I don’t know, gosh, more than a year ago, so you were very, very sensitive to this. Maybe you could comment on that.
Krisztina Fischer (05:43):
Absolutely. So the reason this was interesting to me was that these AI tools are so available and so attractive for many uses, and without guidance on what is okay and what is not okay to do, it’s sort of an unknown. With the best intentions, one would like to explore how to use these tools. And reviewing articles is part of our job, which takes a lot of time, and we want to do a good job. So if there is a tool that can help with that work, that is desirable. However, it’s important to be aware of the unintended consequences of this kind of use.
Toni Gallo (06:29):
The policies and the editorial are all available now through the journals’ websites. So I encourage all of our listeners to go and find all of those and read the full policies and read the editorial for more information.
(06:44):
But I thought we could use our time today to talk through some common scenarios, questions I think many of our reviewers might be asking themselves as they’re reviewing and thinking about acceptable uses of AI. So I’m going to pose some scenarios and questions to you all, and I’m hoping you can give me some advice on how this reviewer should think about the situation they’re in. So the first one is: I’m reviewing a manuscript. Can I upload that manuscript to an AI tool?
Krisztina Fischer (07:15):
So I think even if you remove the names of the authors or the institutions, or you use a closed AI system, uploading any part of the manuscript under review violates confidentiality. And it’s not just about anonymity; it’s the loss of control and the loss of trust. Once we upload anything into an AI system, we can’t be certain how that information is then stored and logged and reused. So in general, peer review depends on protecting unpublished ideas and data, and that should be true in the digital space as well. So the short answer is that a reviewer should never upload any part of the manuscript into any AI tool.
Laura Roberts (08:04):
I want to reinforce that. In fact, we had a submission to our journal, to Academic Medicine, where a reviewer disclosed, they were transparent, that they weren’t an expert in the topic, so they loaded the paper into an AI tool. So there were really two problems there. One is, as Krisztina has mentioned, this original manuscript, someone’s intellectual work, is no longer protected in any way. It just got loaded into something where there’s no control. But secondly, you really shouldn’t be doing reviews if you don’t have expertise. AI is not a replacement for the expertise or judgment of a reviewer. And that, I think, was another motivator for me behind this policy: people were beginning to use it, not ill motivated in any way, but it was still possible to make mistakes without really understanding how the technology itself works. So I think the short answer, Toni, is people should not be uploading original manuscripts into AI tools, even closed-system university resources.
Lauren Maggio (09:12):
And I would just expand somewhat because of the nature of MedEdPORTAL. We’ve been saying manuscript, but we actually mean tables, figures, appendices, all of the things that we would consider to be the intellectual property of the authors.
Toni Gallo (09:25):
So what about, rather than uploading anything directly to the AI tool, asking it a question like: I’m reviewing a paper on this topic, what should I look for in this kind of manuscript? Or asking it about a methodology, or maybe about something related to the type of program in the paper. Is that okay?
Krisztina Fischer (09:48):
If it’s limited, I think that would be appropriate use, especially if you have a general question, as you just mentioned, Toni: what are some of the things to look for in a qualitative study, or what are the limitations of a survey design, or what are some common errors in survey development? I think that would be appropriate.
Lauren Maggio (10:12):
And I would agree with Krisztina. However, when I’m thinking about reviewing for different journals, I often will go to that journal’s website, where they usually have a wealth of resources on ways in which you can effectively review their publication types or different types of studies. I know both MedEdPORTAL and Academic Medicine have reviewer centers with a lot of those resources. So yes, you could use AI, but I would say we have carefully curated resources to try to make your job a little bit easier, and they’re going to point you right in the direction of what the editors are looking for.
Toni Gallo (10:49):
Along the same lines, are there other types of questions that would be acceptable to ask an AI tool as you’re reviewing?
Krisztina Fischer (10:57):
I can imagine that you could get a checklist, although I agree with Lauren that most journals provide detailed guidance on what a reviewer should be looking for when reviewing an article. But sometimes you would like more guidance, and I would imagine that an AI tool could generate a checklist for a review that is rather general but can serve as a guide for a reviewer.
Laura Roberts (11:26):
I also think someone might be able to use AI to identify relevant literature that would help a reviewer put some context and framing around a new submission. I think there are a lot of very appropriate uses of AI, as long as they don’t, as Krisztina and Lauren say, violate the boundaries of professional accountability or undermine the trust that we have in the whole process of peer review.
Toni Gallo (11:54):
Okay, so those scenarios looked at the actual reviewing of a paper. Now you have your review. Let’s say I want to upload that to an AI tool to get help with the language, the way I’ve written my instructions, or just to ask, is this an appropriate review for this type of paper? What about that? Can I upload my comments to an AI tool?
Lauren Maggio (12:17):
Yeah, so that’s fair game, for you to upload your own content, and I think that can be very helpful. A lot of people use it for polishing language or sometimes for organizing their thoughts. So I think that’s fair game. But one of the points that Laura made is, if there’s something you maybe don’t feel comfortable with and you’re asking the AI to give you that kind of a check, that’s probably a good indication that maybe you should not be reviewing that paper, or, if you feel uncomfortable with a method or a theory, that you should just tell us as the editors: I don’t feel comfortable with this, maybe you should get a methods check. I think that transparency is more important. We don’t want you to be reviewing, or feeling that you need to go out to AI, in those areas that we’re hoping you’re going to be expert in to provide that peer review.
Laura Roberts (13:03):
I think it’s a nice opportunity or opening for me to comment. The editors really do read the reviewer comments carefully and appreciate that dialogue with the reviewers. And it’s perfectly okay. I mean, we often see things like, oh, you should have a statistics person look at this piece, or I can comment on this comfortably, but I feel like this area is outside of my scope of knowledge and expertise. And sometimes we’ll get an additional review or we’ll have other ways of helping to bring in appropriate expertise to evaluate that paper. But that self-reflection and that comment to the editors is extremely valuable when reviewers choose to do that.
Toni Gallo (13:44):
Laura, you mentioned one case where a reviewer had disclosed that they used AI. Have you all seen in the course of reviewing papers for both journals an increase in the number of reviewers who have mentioned AI or how has that come up so far in the review process?
Lauren Maggio (14:01):
I can speak for MedEdPORTAL. We have not seen many. And I think, again, this is what spurred our discussion about this. The first one we had was quite interesting. Again, they were fully transparent, and they gave us their raw comments and then the comments that AI had helped them restructure and polish. And we as a journal team took a step back and said, oh, we really need to sit down and think about how we want this to come in. Do we give the authors both? Do we give them one and make an acknowledgement that the reviewer had used AI? At first we weren’t really sure, and we haven’t seen that many to know how we’re going to handle all of the different situations, which is why, again, I think it helps to have these kind of timeless principles that we can turn back to as we continue to iterate on the policy.
Mary Beth DeVilbiss (14:52):
And I think what we know from some research around the use of AI tools by authors is that more authors are using AI tools and are nervous about disclosing that use, because there’s a perception that there’s some kind of stigma, or that they’re going to get in trouble, or that it’s going to reflect negatively on their work. We were wondering if maybe that was happening in the peer review space as well; we weren’t asking people to disclose before we started talking about this policy and putting mechanisms in place for people to disclose. So we felt it was important to really articulate what our expectations are and give people the space to share those disclosures. And we hope with this conversation, too, that we can emphasize that there isn’t a penalty here. We’re trying to take a measured approach. And we understand that these are very powerful tools when used to support an individual’s personal expertise and peer review work, but they need to be support tools and not a replacement for people’s own work.
Toni Gallo (16:05):
So how would you like reviewers to disclose their AI use? I think in your editorial, you have some examples, but what’s important to include in that disclosure? Where should reviewers be putting this? Should they be telling just the editors or the authors too? What advice do you have for reviewers around how to disclose?
Laura Roberts (16:29):
I mean, I think it’s evolving, but at this point we’re asking what tools you have used. We are also interested in what the prompt was. We’re relying on a professional attestation approach in that disclosure, and it would be very, very helpful if people are descriptive about what they did so that we can really evaluate it. So the tool, the prompt, the intention, and a description are where I land on that.
Lauren Maggio (16:58):
And I would add taking full responsibility, just as you would take full responsibility when you’re an author: you’ve checked your work; if you’ve used it for references, you’ve made sure those references exist; if you’ve changed things around, it still makes sense to you as a reviewer. So taking those extra steps to make sure you stand behind what you’re submitting and you’re taking responsibility, I think, is important. And I think we may start to see more disclosures with this policy, but also because when you submit your peer review now, you are going to be asked to attest to whether you used AI and whether you uploaded any material. So you will get that prompt when you go to submit your peer review.
Krisztina Fischer (17:38):
And by doing this, we would like to understand how the community is using AI in the peer review process and build transparency, and ultimately trust, within our community around how to use AI. And as this is an evolving discussion, these rules might change in the future, but understanding the limits of AI at the moment is very important.
Laura Roberts (18:03):
Also, we are not using AI to synthesize the comments of reviewers. We’re not using AI to look for AI in our authors’ and reviewers’ materials. I believe that other journals are doing that, or will be doing it by tomorrow, because again, they’re looking for the same efficiencies and ways of synthesizing material. But at this point, we’re really relying on human beings, our editors and our editorial team overall, to put the information together and weigh the feedback and guidance that we’ve gotten from reviewers. We’re not deploying AI tools in that way in our journal at this time.
Toni Gallo (18:44):
You all are scholars in addition to editors, have you used AI in your own scholarship? Whether it’s as an author, as a reviewer. Are there examples that you can give from your own work that might be helpful for listeners?
Krisztina Fischer (19:00):
I use AI for organizing ideas and maybe getting started with literature searches. I’ve had very good experiences with that. I also use AI to recommend titles, and I don’t necessarily take the titles that the tool suggests, but it gives me great ideas. So I have found that AI tools can be very, very useful in that respect.
Lauren Maggio (19:24):
I have a similar approach to Krisztina, using it to sometimes help me organize my thoughts. I’ve started to do research on AI, and so now I’m very upfront about how we use and test the different tools. One of the things that’s become interesting, the more I’ve started to integrate AI into my practice, is that it’s very rare that I work alone as an author. We’re almost always working in a team, and I’ve found it really important to have conversations upfront and early with my team members about how we as a team want to use AI, because at the end of the day, we’re all on the hook for the article, and if some people don’t feel comfortable with it, I want to honor that. And if some people are using it and perhaps feel nervous about admitting it, opening up that dialogue early enables us to have some of that transparency. I’m excited to use it, but I realize some people I write with may not be. So again, having that conversation early is really important.
Laura Roberts (20:20):
I have a hilarious story where a member of my team at Stanford, I think wanting very much to please me and decrease my work burden, put in a prompt to have an editorial written in the voice of Laura Weiss Roberts and showed it to me. Oh my God, first of all, it was terrible. So then it made me wonder, are my editorials that bad, if that’s what it trained on? But that led to a very meaningful conversation on our team about appropriate use of AI for performing work tasks. And I know in my day job, I work with wonderful finance people and people implementing academic affairs work where there’s a lot of narrative that has to be built. I think AI has been enormously helpful in terms of synthesizing information, doing comparative financial analyses, and even developing drafts of material, very efficient for these long, time-sensitive writing tasks that people have to do.
(21:31):
But it is amazing how many errors get introduced or how the tone is not correct or the emphasis. These issues of emphasis, which are very subtle judgments in these professional roles, are very problematic with these AI tools. Now, maybe the AI tools will get better, but there’s a piece of the human judgment that I just don’t think is going to be readily replaced with AI. So I think we should view this as a collaboration with these tools and use of these tools, but it’s really ultimately, especially in these professional roles, going to rely on our professional judgment.
Mary Beth DeVilbiss (22:10):
And that’s really the spirit of peer review too. I mean, we invite experts to evaluate these submissions. We want their professional judgment, and so making space for them to get support and help organize their thoughts perhaps, those are all appropriate and really exciting uses of AI, but we need to preserve that professional judgment and accountability for that judgment.
Toni Gallo (22:37):
A few of you have mentioned people might be uneasy about admitting that they are using AI. What would you tell those folks who might be like, I don’t know if I should, I’m just using it to maybe fix the grammar in my review. Do I really have to disclose that? Are the editors going to judge me differently because they know that I’ve used AI? What would you say to those people?
Krisztina Fischer (23:01):
I would say that they need to disclose, and the editors will never judge them. And by disclosing, you build trust in the community. And I think this is the most important part of this evolving discussion: you want to handle a manuscript the same way you would wish your own manuscript to be handled. And if you don’t feel comfortable disclosing how you used the AI, perhaps you also wouldn’t want your own manuscript to be handled with an AI tool that is not disclosed later on. So I think transparency is very important in this discussion.
Mary Beth DeVilbiss (23:38):
In our editorial, we included a table that presents a few common ways reviewers might think of using AI, or might encounter AI, in the process. And we walk through whether each is appropriate or not, and, if so, how to disclose it. So hopefully that helps clarify some of those questions as well.
Lauren Maggio (24:00):
So I’ve started to do some qualitative research in which we are speaking to authors who have declared that they’ve used AI. And one of the things that’s been most striking to me, when I ask them how it feels, is that they tell me they feel afraid or scared. And I understand that; there are high stakes to any publication, and we have very low acceptance rates right now. So there’s a desire not to do anything that you think might jeopardize your publications. And so I think we have to think about this. We talked about it a little bit in this session today, but going forward, I know as an editor-in-chief, I don’t want people to feel afraid or scared, especially our junior scholars who are now coming into the field and becoming members of it. We want to ensure that they feel welcome and that they don’t have any kind of anxiety because they’ve used a tool and they’re about to disclose it, as we’re requiring them to do.
Toni Gallo (24:53):
Anything else anyone wants to leave listeners with?
Laura Roberts (24:58):
Just to amplify this concept that Krisztina alluded to. At my home institution, there’s been a lot of work in AI and in developing AI, but also in reflecting on the ethical repercussions of its use. A very wise and thoughtful group of people came up with the idea of a golden rule, which Krisztina mentioned: if you are uncertain about a use of AI, try to take a different perspective and think about how you would like your own work treated in the community of colleagues. That sometimes does help clarify a sensitivity or a problem that you’re not seeing from one point of view; if you think about how you would like your own work treated, it becomes very, very clear. So taking that step and following the golden rule is probably good advice for us all, for any use of these technologically based tools.
Lauren Maggio (25:52):
And I would just encourage people that this is a changing space. So as you’re reviewing, as you’re authoring, always be looking at the journal website to see where they stand at that moment. Because it may change. You may start writing your paper and by the time you’re ready to submit, there might be something different. But again, I think if you adhere to these golden rules, you’re going to be in good shape.
Laura Roberts (26:17):
And with all emerging technologies, there are always these edge cases. Sometimes they’re low stakes. For example, we had a submission not too long ago, an author piece, where the disclosure was, well, I wrote this essay and I used AI to try to come up with good titles for it. So that’s a little bit of an edge case, right? Because it’s totally based on your original work, but part of your original thinking ought to be the title for your original work. So generating ideas there, I think, is a little bit ambiguous. On the other hand, they disclosed it, and so it gave us the opportunity to think about this kind of edge case. And so I think another function of disclosure is that it helps us all get smarter about this. We all need to learn together, and it’s not meant to be stigmatizing, or for people to be judgmental or unkind about it. This is brand new, and we need to learn together how to formulate an approach to the edge cases. So disclosure helps with that too.
Toni Gallo (27:21):
Anything else?
Laura Roberts (27:22):
Yeah, I think we should end with thanks to our reviewers, and again, with appreciation for how hard the work is. We know people are doing this at two in the morning. We know they could be doing something better, or more personally fulfilling, on their Sunday afternoon, and yet they do this generous act of professionalism, contributing to the rigor and quality of the work, the scholarship and education materials, all the things they do to support their colleagues throughout the field. And so again, without being in any way judgmental, it’s natural that people would look for ways to be efficient and to be smarter and to do their work well. But we are very mindful of the potential risks and thought it would be important to have this conversation about the ethical use of AI in peer review.
Toni Gallo (28:10):
Well, thank you all for being on the podcast today. I want to encourage our listeners to check out Academic Medicine’s and MedEdPORTAL’s AI policy. We also have a policy for authors, so go look at that too if you’re thinking of submitting. Check out the editorial. Everything is available now on the journals’ websites academicmedicine.org and mededportal.org. Thank you.
Laura Roberts (28:33):
Thanks, Toni.
Krisztina Fischer (28:33):
Thank you.
Mary Beth DeVilbiss (28:34):
Thank you.
Toni Gallo (28:36):
Also available on the journals’ websites are the latest publications, the full archive, and additional resources for authors and reviewers. Be sure to follow both journals and interact with the journals’ staff on LinkedIn, and remember to subscribe to this podcast anywhere podcasts are available. Leave us a rating and a review when you do. Let us know how we’re doing. Thanks so much for listening.