Diagnostic reasoning as artificial intelligence emerges: a distributed cognition framework

AI and diagnostic reasoning
Cite this article as:
Rohlfsen, C. Diagnostic reasoning as artificial intelligence emerges: a distributed cognition framework, First10EM, March 4, 2024. Available at:
https://doi.org/10.51684/FIRS.134305

This is an invited guest post by Dr. Cory Rohlfsen (@CoryRohlfsen) based on an interesting Twitter thread of his from a few months back.

Dr. Rohlfsen is a hybrid internal medicine clinician at the University of Nebraska Medical Center. He splits his time between hospitalist duties and primary care clinic. He is passionate about fostering a community of teaching excellence for future academicians. As a core faculty member, he serves on the Curriculum Competency Committee and reviews resident milestones as part of their progress towards graduation. He is also the director of the first competency-based, inter-professional health educator track in the United States. His scholarship interests include trainee-centered approaches to competency-based medical education (CBME) and innovative assessment practices.


After nearly despairing over how Artificial Intelligence (AI) will one day replace us, I realized it just isn’t so, but not for the reasons typically cited (e.g. compassion, relationships, empathy). Many have imagined what the future of medicine will look like with AI as co-pilot. It’s only a matter of time before a Turing Test of patient outcomes proves AI’s superiority when in the driver’s seat. From that point on, will doctors even be in the cockpit or will we be left peering through the proverbial dust of the runway as AI takes flight? Which generation of doctors can expect this to happen in their lifetime?

Since much of our identity as physicians is tied up in the cognitive domain, it’s normal to wrestle with these questions, particularly as our roles and responsibilities evolve. If you’re reading this as an experienced attending physician, imagine the disproportionate impact such disruptive technology might have on a trainee whose identity is actively being forged.1 As an internist practicing both primary care and hospital medicine, I moved through all stages of grief, bargaining with a few downward spirals of doomsday thinking on the path to acceptance. Eventually, a more optimistic view emerged – a perspective I’m thrilled to share with you today.

Sure, doctors will always have an upper hand with our physical, healing presence. But that’s not what this post is about. We also have a distinctive COGNITIVE trait to offer – one that took over 10,000 years of evolution to develop.

Because AI applications in medicine are innumerable, the scope of this post will be limited to diagnosis. While this proposed framework could also apply to a broader swath of management reasoning and medical decision making (including ethics, judgment, uncertainty, etc.), let’s start small with diagnostic reasoning – a strictly cognitive domain.

Let’s first acknowledge that our analytical brains glorify hypothesis-driven reasoning and problem solving. We are drawn to what we can measure, study, and improve. But in doing so, we risk neglecting our biggest contribution to medicine.

Yes, AI will outperform average doctors in information-rich environments.2 It will process higher volumes of data with greater speed, higher fidelity, and tireless aptitude. But only a fraction of diagnoses come to light in this domain. Most diagnoses spring forth from information deserts.

Recall that ~80% of diagnoses are captured during the history.3 Even if a diagnosis can’t be confirmed until an exam, lab, biopsy, or imaging study is complete, the largest advance in hypothetico-deductive inquiry usually occurs during the patient interview. This also happens to be where “humanity” prevails and shines brightest.

Even since the discovery of mirror neurons,4 humans still don’t know what to call this “super power.” Some call it experience, pattern recognition, situational awareness, emotional intelligence, or action learning. For the remainder of this post, I’ll call it “situated cognition.”

Whatever it is, we have a superior SEARCH function in information deserts.5 With heightened sensitivity to recognize socially encoded cues & patterns (separating signal from noise), we are programmed to navigate these deserts even if we’re not aware we’re doing it.

Remember those mall maps back in the day? The ones that you went up to when you were lost and needed to find the exit closest to your car. The good ones signaled “YOU ARE HERE” with a big red star.

For the mall maps that didn’t have that big red star, what was the harder task?

  • Task 1) Finding yourself on the map (the search) or…
  • Task 2) Mapping your route to the destination once oriented

We know the SEARCH task for that elusive star is way more taxing! Once oriented, it’s not hard to find your way. This search task represents the information desert and we all navigate it differently. Novices might scan “left to right” (taking in every detail to avoid missing the star). But experts usually have a strategy – even if it’s a subconscious one.

Search strategies may include:

  1. Looking in the center of the map (classic histology tactic on a standardized test)
  2. Scanning for a key (looking for symbols) or…
  3. Orienting to a well known landmark (e.g. food court)

The point is that a cognitive search is different from analytic reasoning because it’s context dependent, situated in a unique time & space, and often subconscious (prompting split-second, reflexive decisions). When adding humans to the mix, it’s even more variable and “fuzzy” – this is where “situated cognition” becomes so important.5

As tacit data unfolds, a pattern emerges and only then does the data become codified into information. Only then can hypothesis generation start to occur (as the amorphous problem surfaces with a glimmer of clarity). Like being lost in the mall, a map only helps you if you know where you are. Our super power as humans is orienting to that problem – a human problem in all its biopsychosocial wonder.

For clinicians or clinician educators who have relegated history taking to anything less than a skill (and a thrill), I hope this post is re-invigorating, because every clinical problem starts as an information desert – like a mall map without a star. Novices will search with inferior strategies, but experts will leverage their situational awareness and experience. A pre-clinical student might memorize a “comprehensive review of systems,” whereas a third-year student learns that some questions are more pertinent than others. Likewise, a fourth-year medical student aims to emulate the postgraduate trainee by being hypothesis-driven in their data collection. But there is a higher level of patient interviewing, one that I call “situated search and hypothesis reformation.”

If trained to take a targeted history within a hypothetico-deductive framework, we’ll be fully competent to capture most diagnoses. The piece that’s missing to become a full-fledged expert is a feedback cycle of instinctual, situational cues from the interview that inform (and reform) a hypothesis-driven line of questioning. Put simply, mastery in interviewing a patient requires sensitivity to subtle, non-verbal cues – the thing we evolved to be REALLY good at.

A pause of equivocation.

A hint of sarcasm.

A grimace of displeasure or pain.

Or loss of eye contact when the history starts to go fuzzy.

Each of these moments represents a window of opportunity to SEARCH. That’s when the expert diagnostician says, “I noticed a big sigh after mentioning ____, can you tell me more about that?”

Master diagnosticians will find the oasis in the desert. Through feedback cycles of “sensitized hypothetico-deductive inquiry,” they find the problem with absolute precision where no one else was looking.

Since this SEARCH function rarely gets talked about, it may help to contrast it with the analytical brain. Because analytics is a form of information processing (1 + 2 = 3), it functions well in information-rich environments with a high signal-to-noise ratio and finite possibilities. AI is already showing immense promise here. Situated cognition (“sit cog”), on the other hand, thrives in information deserts with “fuzzy” signal(s) and infinite possibilities.

Sit cog knows “how” even if it doesn’t know “what” or “why.”5 As such, it’s going to be decades before AI competes with humans in this domain.6 Thankfully, it’s not a competition (and it shouldn’t be). Combining these two highly evolved forms of cognition results in a complementary system of problem solving that is superior to using either one in isolation.7-8

In other words, from a “distributed cognition” perspective, capturing the “net sum” of signal is valuable irrespective of how it’s collected or encoded.7 Traditionally, our imaginations have been tickled by how AI will complement humans in diagnostic reasoning to achieve the best possible patient care. But have we fully imagined how humans will complement AI on this journey?8

Yes, our professional identities will evolve as some cognitive tasks are “offloaded.” True, AI will outpace and outperform our analytic capabilities. That said, our ability to search through information deserts is a uniquely human trait that will be a “super power” for calibrating AI for generations to come.

In summary, human cognition has evolved over thousands of years to decipher socially encoded information including non-verbal cues and subtle nuances in speech, tone, or demeanor. These elements are crucial in defining clinical problems with precision, wherein a patient’s history, expressions, and environment can provide essential insights. However, as the problem representation matures, and more information becomes available, the role of AI gains prominence. AI’s capacity to process vast datasets and provide statistical analyses enables clinicians to identify less intuitive patterns or be sensitized to disease base rates or inconsistencies that might elude human perception. Placed side by side, the situated cognition of humans working in collaboration with the analytical capabilities of AI will prove to be a powerful engine for diagnostic reasoning.

Now ask yourself… what will be the rate limiting step in closing gaps of diagnostic inaccuracy or imprecision 30 years from now? I’m betting on humanity. Why? Because human problems (especially the unspoken ones) demand that humans are at the helm.

References

  1. Jussupow E, Spohrer K, Heinzl A. Identity Threats as a Reason for Resistance to Artificial Intelligence: Survey Study With Medical Students and Professionals. JMIR Form Res. 2022; 6(3).
  2. Bergerum C, Petersson C, Thor J, Wolmesjö M. ‘We are data rich but information poor’: how do patient-reported measures stimulate patient involvement in quality improvement interventions in Swedish hospital departments? BMJ Open Qual. 2022;11(3).
  3. Cooke, G. A is for aphorism: Is it true that “a careful history will lead to the diagnosis 80% of the time”? Australian Family Physician. 2012; 41(7).
  4. Heyes C, Catmur C. What Happened to Mirror Neurons? Perspect Psychol Sci. 2022; 17(1):153-168.
  5. Kirsh D. Problem Solving and Situated Cognition. The Cambridge Handbook of Situated Cognition. 2009; 264-306.
  6. Krishna R, Lee D, Fei-Fei L, Bernstein MS. Socially situated artificial intelligence enables learning from human interaction. PNAS. 2022; 119(39).
  7. Merkebu J, Battistone M, McMains K, McOwen K, Witkop C, Konopasky A, Torre D, Holmboe E, Durning SJ. Situativity: a family of social cognitive theories for understanding clinical reasoning and diagnostic error. Diagnosis (Berl). 2020; 7(3):169-176.
  8. Rajkomar A, Dhaliwal G. Improving diagnostic reasoning to improve patient safety. Perm J. 2011; 15(3):68-73.

