Later this month, Academic Medicine will post its published ahead-of-print articles for the June 2014 issue. To tide you over until then, here’s a preview of a research report by Sarang Kim and colleagues.
Searching for Answers to Clinical Questions Using Google Versus Evidence-Based Summary Resources: A Randomized Controlled Crossover Study
Sarang Kim, MD, Helaine Noveck, MPH, James Galt, EdM, Lauren Hogshire, MD, Laura Willett, MD, and Kerry O’Rourke, MLS
The authors sought to compare the speed and accuracy of answering clinical questions using Google versus evidence-based summary resources.
In 2011 and 2012, 48 internal medicine interns from two classes at Rutgers University Robert Wood Johnson Medical School, who had been trained to use three evidence-based summary resources, performed four-minute computer searches to answer 10 clinical questions. Half were randomized to initiate searches for answers to questions 1 to 5 using Google; the other half initiated searches using a summary resource. They then crossed over and used the other resource for questions 6 to 10. They documented the time spent searching and the resource in which the answer was found. Time to correct response and percentage of correct responses were compared between groups using the t test and generalized estimating equations.
Of 480 questions administered, interns found answers for 393 (82%). Interns initiating searches in Google used a wider variety of resources than those starting with summary resources. No significant difference was found in mean time to correct response (138.5 seconds for Google versus 136.1 seconds for summary resource; P = .72). Mean correct response rate was 58.4% for Google versus 61.5% for summary resource (mean difference −3.1%; 95% CI −10.3% to 4.2%; P = .40).
The authors found no significant differences in speed or accuracy between searches initiated using Google and those initiated using summary resources. Although summary resources are considered to provide the highest quality of evidence, improvements to their speed and accuracy are needed.