Visiting Scholar Validates Online Tool to Assess Diagnostic Performance

On April 16, 2019

Physicians strive for diagnostic acumen. It is a fundamental skill in the practice of medicine. In today’s world of medical education, however, much of diagnostic performance is inferred, rather than measured. As an example, physicians in training are observed during rounds to determine their ability to diagnose patients. But what if diagnostic performance could be measured using a scalable, practical, and objective tool? Could this tool serve double duty for medical education and physician assessment? 

As an ABMS Visiting Scholar in the 2017-2018 Class, Souvik Chatterjee, MD, now an attending physician at MedStar Washington Hospital Center in D.C., set out to validate a new method of assessing diagnostic performance that uses brief, open-ended case simulations.* Dr. Chatterjee believes the time has come to incorporate online educational delivery methods and assessment tools into standardized medical education. “We’re assuming that observing individual trainees doing rounds is the best way to evaluate them,” he said. “While direct observation is an invaluable tool, it could be augmented with objective assessments. We need to look more carefully at how to use technology to advance skill sets for training, education, and assessment.”

Dr. Chatterjee and his colleagues conducted a retrospective cohort study of 11,023 unique attempts to solve case simulations on an online software platform, the Human Diagnosis Project (Human Dx). The attempts were made by 1,738 practicing physicians, residents (internal medicine, family medicine, and emergency medicine), and medical students across the country who voluntarily used the Human Dx software between Jan. 21, 2016, and Jan. 15, 2017. Three measures were used to assess diagnostic performance: accuracy, efficiency, and a combined score called Diagnostic Acumen Precision Performance (DAPP), which reflects the Institute of Medicine’s emphasis on diagnosis that is both timely and accurate. The Human Dx app already tracked accuracy and efficiency, but the study authors created the DAPP score because a combination of the two is needed to gauge how well clinicians diagnose cases, he said. Users were then analyzed by level of training and by affiliation with one of the top 25 medical schools as ranked by US News and World Report. The study was published in JAMA Network Open on Jan. 11.
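The article does not spell out how accuracy and efficiency are combined into the DAPP score. Purely as an illustrative sketch, and not the study’s actual formula, one way to blend per-case accuracy with time-based efficiency into a single 0-to-100 score is shown below in Python; the function name, weights, and time cutoff are all hypothetical.

    # Illustrative only: a hypothetical composite of accuracy and efficiency.
    # This is NOT the DAPP formula used in the Human Dx study.
    def composite_score(correct: bool, seconds_taken: float,
                        max_seconds: float = 300.0,              # assumed time cutoff
                        accuracy_weight: float = 0.7) -> float:  # assumed weighting
        """Blend diagnostic accuracy and speed into a 0-100 score."""
        accuracy = 1.0 if correct else 0.0
        # Efficiency credit decays linearly with time spent and bottoms out at zero.
        efficiency = max(0.0, 1.0 - seconds_taken / max_seconds)
        return 100.0 * (accuracy_weight * accuracy + (1.0 - accuracy_weight) * efficiency)

    # Example: a correct diagnosis reached in 2.5 minutes scores 85 under these assumptions.
    print(composite_score(correct=True, seconds_taken=150.0))

Under these assumed numbers, a correct diagnosis reached quickly scores higher than one reached slowly; the study’s actual weighting of accuracy against efficiency may differ.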

Overall, attending physicians had higher mean accuracy, efficiency, and DAPP scores than residents, interns, and medical students. Attending physicians affiliated with an institution in the US News and World Report top 25 had higher mean DAPP scores than nonaffiliated attending physicians; similarly, affiliated resident physicians had higher mean DAPP scores than their nonaffiliated peers.

The study found that individuals with more training demonstrated higher diagnostic performance. More importantly for Dr. Chatterjee’s purposes, it suggested that the DAPP score may be a valid measure of that performance. “Validating this tool is one step closer to realizing that we can use low-fidelity online simulation as an assessment tool for different aspects of care,” he said.

“We found that it takes, on average, two to three minutes to solve a case,” noted Dr. Chatterjee, who applied for the Visiting Scholars Program because of his interest in physician assessment and leadership. At that pace, a physician could work through roughly 160 to 240 cases in an eight-hour day. By comparison, a physician on rounds may spend an entire day caring for eight to 20 patients. “And I might not get a diagnosis that day for any of those patients,” he said. “It could take a week or longer.” The online simulation tool, in contrast, provides immediate feedback on each case, Dr. Chatterjee said. “It’s a low-fidelity environment, but the cognitive examples are replicated from the clinical world, so you’re going through the same diagnostic process, just at an accelerated pace,” he added.

This type of tool could be useful for physician assessment, Dr. Chatterjee noted, and he believes it would appeal to many clinicians. “It’s easy to do and doesn’t take up that much time,” Dr. Chatterjee said, “plus it provides useful educational content and learner-specific feedback.” Users choose real-life cases, posted by other physicians, that align with their practice. The Member Boards could evaluate existing cases on the Human Dx app, which are categorized by specialty, or create their own series of cases as part of continuing certification programs and/or continuing medical education activities, he said.

In the meantime, Dr. Chatterjee is already thinking about how to create high-fidelity online simulation for critical care that includes an interdisciplinary team of physicians, nurses, and respiratory therapists. “A higher fidelity environment would allow us to look at knowledge, the diagnostic process, team-based care, and the ability to perform under stress,” he explained. Assessing how well such a high-fidelity online simulator correlates with real-world performance would be another step toward validating these tools.

“We are trying to study the best way to educate and assess physicians in a way that will provide useful information and feedback,” Dr. Chatterjee concluded. “Physicians are eager for more feedback on their performance; we need to figure out how best to provide that to them.”

Learn more about the ABMS Visiting Scholars Program and who should apply.


* Dr. Chatterjee’s Visiting Scholars project, The Human Dx Project: An Objective Assessment of Diagnostic Reasoning, was supported by a grant to the ABMS Research and Education Foundation from the Gordon & Betty Moore Foundation.
