Evidence based medicine is still the best kind of medicine

A recent podcast has caused a bit of a stir among the nerdiest of my friends. This post, in large part, is a response to that podcast, but more than that, it is a discussion of the role of science and evidence in modern medicine, so you might find it interesting even if you haven’t heard the podcast I am talking about.

The podcast was an episode of The Accad and Koka Report called Beyond EBM: Case-based reasoning and the integration of clinical knowledge. Although I think the discussion has a few logical inconsistencies and mischaracterizes evidence based medicine, it is worth a listen. At its core, it is a discussion by two thoughtful clinicians that touches on some legitimate philosophical questions about applying science in medicine.

I am going to be critical of a number of the ideas and arguments raised in that podcast, but I want to be clear that that criticism is not directed at the individuals making the arguments. I don’t know either of the people on this podcast, but after listening to them talk, I respect them. They clearly care about medicine. They clearly care about their patients. They want to have intelligent conversations about the philosophy of medicine. So although I disagree with a lot of what they say, I think the very fact that they are having this conversation indicates that they are among the best of our profession.

Let me start by saying that evidence based medicine, as it is currently practiced, is far from perfect. Financial interests bias scientific results to the point that they are often unusable. Single answers from (frequently flawed) meta-analyses are often treated as infallible, and translated into thoughtless guidelines and core measures. A gap exists between textbook descriptions of EBM and its general practice. There are also difficult theoretical and philosophical questions about how to apply generalized results from clinical trials to individual patients. These are all valuable criticisms, but in my mind they don’t outweigh the tremendous value of evidence based medicine ideally practiced.

Evidence based medicine, fundamentally, is a set of tools designed to help us think critically. EBM is a technique, not a final answer. Not THE answer, as it is often portrayed in flawed guidelines co-opting the title of EBM. Like any tool, it is important to know the limitations of EBM. Thoughtful criticisms are important and welcome. Thoughtful criticisms are, really, a foundation of science.

But we shouldn’t throw the baby out with the bathwater. Although much of what was said in the podcast has a basis in valid theoretical concerns, I worry that the comments could be interpreted as a complete dismissal of science. I have seen similar arguments used to reject science or evidence completely. These same arguments are often used by quacks: ‘If science is flawed, we should just reject it. Buy my all natural healing oil instead.’ This is clearly wrong. Discussion of the flaws in aircraft design does not prove the existence of flying carpets.

Rejecting science is bad for our patients. I think evidence based medicine is still the best kind of medicine.

The EBM strawman: Ignoring expertise and values

The podcast discussion begins by creating a definition of evidence based medicine that I don’t think represents evidence based medicine. They state that “although there are many thoughtful proponents of evidence based medicine who will say things other than clinical research matter – pathophysiologic reasoning, clinician experience, certainly the input from goals and values of individual patients. They’ll say that those things matter, but they don’t spend any other time talking about them.” To their credit, they say that we might see this as a strawman, which is entirely correct, because it is a strawman.

Clinical judgement and patient values are integral to true evidence based medicine. They are at the forefront of my practice. If you look through the posts on this site, you will find that the conclusions almost always point to holes in the data that must be filled with judgement and value decisions. The conclusion of every talk I have ever given is that data must be assessed through the lenses of clinical expertise and patient values. Every episode of the Skeptics Guide to Emergency Medicine finishes with discussions about clinical applications (a combination of data and clinical expertise) and involvement of the patient. The conclusion of every lecture in the Best Evidence in Emergency Medicine course is: “It all depends”. That is, the interpretation of the data all depends on the context of the patient in front of you, your judgement, and their values.

Evidence based medicine, at its very core, requires the assessment of evidence by experienced clinicians in the context of real patients. Evidence based medicine cannot be practiced by statisticians or librarians (although statisticians and librarians are incredible resources). Clinical judgement and patient values are not separate from evidence based medicine.

“Evidence based medicine is not ‘cookbook’ medicine. Because it requires a bottom up approach that integrates the best external evidence with individual expertise and patients’ choice, it cannot result in slavish cookbook approaches to individual patient care. External clinical evidence can inform, but can never replace, individual clinical expertise, and it is this expertise that decides whether the external evidence applies to the individual patient at all, and, if so, how it should be integrated into a clinical decision.” (Sackett 1996)

Obviously, I think it is wrong to use this argument to dismiss evidence based medicine. However, I think it is also a valid criticism. Not everyone practices evidence based medicine as I define evidence based medicine. (I have been involved in many lively debates about just how many physicians actually practice EBM. I will say I am more optimistic than many of my colleagues). People do (incorrectly) use meta-analyses (no matter what the quality) to argue right and wrong; to dismiss clinical judgement. Highly biased RCTs are touted as final answers, to the exclusion of all other forms of evidence. Guidelines are used to prescribe rather than guide care.

These practices are done under the heading of evidence based medicine, but they aren’t evidence based medicine. They are bad practices rebranded. It is a lot like labelling comments about “shocks and breaking ribs” as an end of life conversation. They share a similar veneer, but they aren’t the same. Just because a practice is often done poorly doesn’t mean the value of that practice, when done well, should be rejected.

Another important question raised was: how good are we at discussing the process of integrating scientific evidence with clinical judgement? This is a great question. I agree that proponents of evidence based medicine don’t spend as much time talking about this step as we spend talking about the critical appraisal of the evidence. There are a number of reasons for that.

First, and most importantly, I think proponents of evidence based medicine are different from practitioners of evidence based medicine. It is easy to get these two groups confused, and therefore misunderstand the core practice of EBM. There is a relatively small group of physicians – who you could call the EBM proponents, methodologists, or “EBM nerds” – who spend a lot of time studying methodology, and therefore dissecting papers and discussing the results. They talk a lot about evidence, and they are a fantastic resource, but it would be a mistake to describe what they are doing outside patient care areas as evidence based medicine. They are analyzing evidence and ideally providing good knowledge translation.

The true practitioners of evidence based medicine are physicians actively seeing patients. They use the evidence, sometimes appraised personally, but often digested and summarised by the smaller group of methodologists, to guide their practice. They strive to understand the evidence well enough to know how it applies to the patient in front of them, and when it might not. Then, they apply their expertise to make informed medical decisions. These are evidence based medicine practitioners. (Again, it is unclear exactly how common these true evidence based practitioners are in the real world, but I know many.)

The argument that EBM practitioners don’t talk about how to integrate evidence and experience, when viewed from this standpoint, is simply wrong. The methodologists might not spend as much time discussing this, but practitioners of evidence based medicine spend almost all their time discussing how to integrate scientific evidence with expertise and patient values. This is what we do when we learn medicine in residency. This is what we do when we gather at conferences. This is what we do when we talk with our colleagues. This is the practice of medicine.

The bottom line is that we spend a massive amount of time discussing how to integrate evidence with expertise and values. That discussion is just found in different forums because it is not solely the purview of the “EBM nerd”. I think this is an appropriate and useful division of labour. Methodologists provide special insights into the science, but discussions about expertise and values are the purview of practicing physicians. Sometimes, a single physician fills both roles, but that isn’t necessary.

Another reason we spend less time talking about clinical judgement in public forums like podcasts, blog posts, or papers is that there are an infinite number of potential clinical scenarios. We can’t possibly discuss them all. We can’t anticipate them all. I can’t know that your next PE patient will be pregnant, with a history of a subdural hemorrhage, and an allergy to heparin. All I can do is discuss the best available evidence on managing PE. You have to provide the clinical judgement. (That being said, I definitely think there is a role for more widespread analysis and discussion of the application of clinical judgement. I am a huge fan of explicitly analysing thinking and continuously trying to improve my judgement.)

Finally, there is a practical reason for the focus on critical appraisal. It is the weak point. It is the low hanging fruit. In general, physicians remain incredibly uncomfortable with the basic scientific concepts required to appraise evidence. You spend all of medical school and residency developing your judgement, but almost no time learning science. We give lip service to it in medical education, but in my medical school training, it was essentially nonexistent. For every hour spent learning scientific methodology, we spent at least 1,000 learning physiologic factoids.

I would love to spend more time discussing the complexities of integrating experience and patient values with the available evidence. I am sure most of my colleagues would. But before you can get to that level, you have to have a solid understanding of science. You have to know what data is worth paying attention to, and what the limitations are. The evidence based medicine community still spends a lot of time discussing basic critical appraisal skills because those skills are not even close to being universal in medicine.

However, I will finish by saying that things are not as dire as they are made out to be on the podcast. Although I am not aware of any specific method or algorithm to combine expertise with evidence, at the core of every EBM discussion is a reference to clinical judgement. Read anything by Josh Farkas, Rory Spiegel, Ken Milne, or Jerry Hoffman and you will see medicine described not as a black and white world ruled by evidence, but as land of grey where we must be guided by judgement.

Are we too limited in our definition of evidence?

One of the key criticisms of evidence based medicine is that we are too limited in our definition of what counts as evidence. Are we too focused on quantitative clinical research? This is a fantastic and complex question. It probably warrants its own blog post. The best answer I have is: “it depends”.

As I mentioned in the intro, EBM terminology is frequently abused. Many practices are labelled “evidence based” without actually being what I recognize as evidence based medicine. Because they sit at the top of a pyramid, meta-analyses are praised, even if they combine multiple low quality studies. RCTs with awful methodology are elevated simply because they are RCTs. Although these practices are common, and should be criticised, they aren’t EBM. On the other hand, it is hard to argue that we are too focused on clinical research when so many studies exist suggesting that clinical research is widely ignored (antibiotics prescribed for viral illness or head CTs ordered for everyone who has touched their head).

We probably are, at least at times, too narrow in our definition of evidence. I am a trained qualitative researcher. I value qualitative methodology. There is no doubt that qualitative methods are under-represented in medical discussions.

However, one has to consider the type of knowledge generated by qualitative research and the intended audience. Although there are some qualitative studies that provide excellent insights for the practicing physician, qualitative methodology in general is a type of science that is best at generating ideas (or new hypotheses) rather than confirming or validating those ideas. In an ideal world, ideas generated through good qualitative studies would subsequently be tested and explored through a variety of quantitative methodologies. (Which then generate new questions for qualitative research). For busy practicing clinicians, reading qualitative studies with questionable immediate clinical impact might not be a worthwhile investment of time.

In the same vein, the podcasters are very critical of hierarchies of evidence. We all know the diagrams they are referring to: the meta-analysis sits at the top, just above the RCT, with other forms of data falling in a ranked order below. This is another strawman of sorts. Although I agree that these hierarchies are widely discussed, no true practitioner of evidence based medicine uses these hierarchies. We all know that one good RCT can easily trump a bad meta-analysis. We all believe that smoking causes cancer despite only having observational data to support that claim. We all use clinical judgement at work – probably more often than we ever use a meta-analysis. There are clearly misunderstandings about these core EBM concepts, and those misunderstandings need to be addressed, but that doesn’t undermine the value of EBM.

The tough question ultimately is: what counts as good evidence? Dr. Tonelli is right that RCTs are not always superior to clinical experience; that meta-analyses don’t automatically trump basic pathophysiology. There is no clear demarcation of the best evidence to use in every situation. EBM doesn’t offer a clear answer, although neither does Dr. Tonelli.

I don’t believe that the difficult question of which type of evidence is best in which scenario invalidates science. I wholeheartedly reject the idea of extreme relativism in medicine. Some philosophers will argue that every observation is as valid as all others, because all observations are inherently subjective. Although such extreme relativism is seductive and internally consistent, it is unhelpful and dangerous in medicine. The human brain is flawed and biased. Science has specifically developed methods to deal with those inherent flaws. Some forms of observation, because of their ability to control those biases, are simply more reliable than others.

What counts as evidence, or what types of observation we should rely on when making medical decisions, remains an open philosophical question. Look at the debates around a topic like thrombolytics for stroke, and it is clear that we frequently disagree about what counts as high quality evidence. This is not to say that all science is relative, or that any stance is tenable. These debates tend to center around topics where the most appropriate scientific answer is probably still “we don’t know”. That doesn’t mean that the questions are unanswerable, just that we have yet to make sufficient reliable observations to draw dependable conclusions. (With thrombolytics for stroke, a few RCTs – a proper replication of NINDS – would probably settle the debates.)

One of my major concerns is that, in the podcast, the speakers seem to reject the idea that some forms of observation are more reliable than others (although it is unclear exactly how extreme their relativism is, as they still seem to value some people’s observations over others). In particular, there are two forms of observation that they think should be treated on equal footing with clinical trials: physiologic reasoning and individual experience.

Physiologic or mechanistic reasoning

In the podcast, they are very supportive of the use of mechanistic reasoning in medicine. They recognize that mechanistic reasoning can lead us astray (and repeatedly has), but they are quick to write these errors off. They characterize the EBM skepticism of mechanistic reasoning as being “anecdotal”. They state that EBM practitioners have simply handpicked a few examples, such as the CAST trial, where mechanistic reasoning has failed. (Echt 1991) They imply that there are a large number of cases in which mechanistic reasoning has been successful, but that those cases are simply being ignored by EBM proponents, although they don’t provide any examples of situations in which mechanistic reasoning succeeded. More importantly, their argument is just as anecdotal.

I don’t believe that there are only a small handful of cases in which mechanistic reasoning has failed in medicine. Over and over again, we see practices flourish because of surrogate outcomes, only to find out that they never improved patient oriented outcomes. Surrogate outcomes are a kind of mechanistic reasoning. Statins definitely lower cholesterol, but we are finding that in more and more groups of patients, they don’t improve outcomes. We have diabetes medications that lower glucose, but don’t change cardiovascular outcomes. The evidence nerds have been pointing this out for a long time, but much of the medical community, guided primarily by mechanistic reasoning, seems as surprised as the general public when this fact is raised on NPR or in the New York Times.

Mechanistically, stents should help in stable coronary artery disease, but they don’t. (Stergiopoulos 2014) Mechanistically, thrombolytics should help in stroke, but they probably don’t. Mechanistically, amiodarone should save lives in VFib arrests, but it doesn’t. (Kudenchuk 2016) Mechanistically, arthroscopic knee surgery should improve pain, but it doesn’t. (Brignardello-Petersen 2017) Mechanistically, casts should help with buckle fractures, but they aren’t needed. (Jiang 2016) Mechanistically, tight glucose control should help our ICU patients, but it doesn’t. (Finfer 2009)

This could go on for a long time. Admittedly, I am still only providing anecdotal evidence that mechanistic reasoning is often faulty in medicine. I am sure you could find counter-examples demonstrating that mechanistic reasoning helps. (Although the only way that you could prove that benefit is through a clinical trial, which sort of undercuts the value of the mechanistic reasoning.)

Interestingly, I think I can make a very convincing argument against mechanistic reasoning using mechanistic reasoning. Human physiology is a very tightly controlled homeostasis. Any physiologic change is immediately balanced through a wide variety of mechanisms, with a large variety of consequences. Trying to predict the outcome of a specific drug just because we know its mechanism of action ignores this homeostasis, the large number of physiologic pathways that will counter the effect of this drug, and the large number of unintended consequences any drug will have in such a complex system. Mechanistically speaking, it seems almost impossible to use mechanistic or physiologic reasoning to accurately predict outcomes at the organism level.

On a more serious note, I think we have pretty good empiric evidence that physiological reasoning doesn’t work. Consider the vast number of chemical compounds that are studied every year as potential pharmaceuticals. Most don’t make it beyond the level of animal testing. Of those that do, there are again failures in phase 1, phase 2, and phase 3 trials, such that the vast majority of compounds tested fail to make it to market. But why were these compounds being tested in the first place? Because mechanistically they should work. Because in a test tube they did work. The entire pharmaceutical pipeline is substantial, quantitative evidence that physiologic reasoning is unreliable.

Another great example is the “parachute trial”, in which the authors looked at medical practices that were described as “parachutes” when justifying the argument that they should not be subjected to clinical trials. (Hayes 2018) In other words, it was so obvious from a mechanistic standpoint that these therapies would work that science was unnecessary. It turns out a number of these practices had been studied, and the results were positive only ⅓ of the time. These were, in the opinions of published experts, the absolute best examples of mechanistic reasoning, and they mostly failed.

Does this mean that we should completely ignore physiologic reasoning? Should we stop teaching physiology in medical school? Absolutely not (although I think spending a little more time on critical appraisal and a little less on the Krebs cycle would serve us very well). Physiologic reasoning is still important in evidence based medicine. I can reject homeopathy out of hand, because there is no plausible mechanism through which it could work. We shouldn’t waste our time or money studying it. Physiological reasoning is essential in developing new hypotheses for clinical research. Similarly, when we find ourselves in the all too familiar vacuum of evidence, physiology is a reasonable, but imperfect guide. However, I would be wary of following that guide too far.

“In my experience”

One of the major forms of evidence that they argue is discounted by evidence based medicine is clinical experience. Do I believe experience should be automatically discounted? Of course not. I completely agree with the podcasters that experience and clinical expertise play important roles in medicine. However, experience can be dangerous. The human brain is inherently biased. Our memory is flawed, and experience doesn’t have the benefit of comparators or controls. Experience is important, but complicated.

A lot has been written about the shortcomings of human memory and experience. Good lay summaries of this research can be found in 2 books by David McRaney: You Are Not So Smart and You Are Now Less Dumb. Thinking, Fast and Slow by Daniel Kahneman is another good option. I have a series on cognitive errors in medicine here, and a catalog of research biases that can be found here.

The quick version is that experience is a flawed source of knowledge in medicine. Many of the conditions we treat are self-resolving, which can give the false impression that we helped a patient when we actually did not. Even conditions that are not self resolving tend to wax and wane. Patients only seek medical care when they feel their worst, so by simple regression to the mean, they are likely to improve after we see them. Without a control group, we are apt to see these improvements as wins, building a faulty expertise in which ineffectual, or even potentially harmful, practices propagate. (Consider antibiotics used for viral URTIs, which could be strongly supported by both physician and patient experience.) These errors are made worse by our faulty collection and recall of information, as is evidenced in the large number of commonly described biases.
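
If you want a sense of how convincing this illusion can be, a toy simulation makes it obvious. The sketch below is purely my own illustration (the symptom scores, thresholds, and fluctuation sizes are invented numbers, not data from any study): patients whose symptoms simply fluctuate, and who only come to see us on their worst days, appear to improve after the visit even though nothing was done.

```python
import random

# A minimal, made-up simulation of regression to the mean: each patient's
# symptoms fluctuate around a stable baseline, but they only visit us on an
# unusually bad day. With no treatment at all, the follow-up score is lower
# on average, which is easily mistaken for a treatment effect.
random.seed(1)

def symptom_score(baseline):
    return baseline + random.gauss(0, 2)  # day-to-day noise around the baseline

apparent_improvement = []
for _ in range(10_000):
    baseline = random.uniform(3, 7)            # the patient's "true" average severity
    visit_day = symptom_score(baseline)
    if visit_day > baseline + 2:               # only the worst days prompt a visit
        follow_up = symptom_score(baseline)    # no intervention is given
        apparent_improvement.append(visit_day - follow_up)

print(f"Average untreated 'improvement': "
      f"{sum(apparent_improvement) / len(apparent_improvement):.1f} points")
```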

At its worst, experience is used as an anti-scientific tool. In my experience, the only times that I have ever heard the phrase “in my experience” used in a medical setting were times when clinicians wanted to ignore good evidence. When I presented the data that PPIs probably don’t help emergency department patients with GI bleeds, instead of having a discussion about the evidence, the conversation was simply shut down by a senior GI specialist stating that “the evidence doesn’t matter, because in my experience they work”. When I have tried to reduce my patients’ suffering by using topical anesthetics for corneal abrasions, evidence was ignored and “experience” was loudly stated. Every discussion I have had about the shortcomings of thrombolytics for stroke has been countered by neurologists’ “experience” of seeing a patient recover in front of their eyes (aka treating a TIA).

How should experience be incorporated into clinical decision making? This is a complex discussion that in itself requires a great deal more research. The podcasters are entirely correct that this is an under-appreciated topic. My quick take is that experience should be used to fine-tune evidence based answers, not to reject evidence altogether.

All too often, as illustrated by the examples above, experience is used to overrule science. If you are ignoring the best evidence in 100% of your patients based only on experience, you are making a mistake. Conversely, if you are slavishly applying a single evidence based answer to 100% of your patients, without any nuance, you are also making a mistake.

Consider the management of a sick septic patient. Part of the modern, evidence-based management of sepsis is an IV fluid bolus. If you decided, based solely on your clinical experience, that you were going to completely eliminate the use of IV fluid in your sepsis patients, you would be wrong. However, blindly flooding every patient with a 30 mL/kg bolus is also wrong. Even though there is evidence supporting a 30 mL/kg bolus, depicting that as the “EBM practice” is cartoonish and incorrect. A clinician practicing evidence based medicine will know the evidence supporting the 30 mL/kg bolus, including its shortcomings, inclusion and exclusion criteria, and specific harms and benefits. Armed with that knowledge, the clinician starts with the best evidence based answer (a bolus) and then considers the patient. How might this patient be different than the patients in the trials? Is she more likely to be harmed? Are there other options? Thus, the practice of evidence based medicine leads to a smaller bolus in a patient with CHF despite no specific study supporting that practice.

Evidence based medicine does not provide a specific method for integrating experience with evidence. It is not frequently talked about in published articles, although I would argue that it is widely discussed in evidence based medicine, in venues like grand rounds, conferences, and at the bedside. That is a shortcoming of evidence based medicine, but not a fatal flaw.

Importantly, despite suggesting an alternative to EBM, this podcast also doesn’t provide us with any method for integrating experience and evidence. They note that experience can lead us astray as well as helping us. They note some people use their experience well, but others use it poorly. But they don’t give us any mechanism to determine which is which. (This is a complex subject, and remains a weak point of both evidence based practice and any alternative proposed).

When Semmelweis was struggling to get others to accept his experience that antisepsis was important, it was countered by the experience of many other long practicing physicians of the time. How can we judge whose experience matters? The only answer I can see is through science. We are now on team Semmelweis for no reason other than the fact that his experience was ultimately supported by science.

Evidence based medicine as a driver of too-early adoption?

At one point, the podcasters seem to blame evidence based medicine for the overly rapid adoption of some questionable practices. They give the examples of tight glucose control in critical care and activated protein C as practices that were pushed too fast after early positive trials.

I don’t think it is fair to blame EBM for the rapid adoption of these practices. A core principle of EBM is replication. We are never happy with a single trial. 20 years later, I am still waiting for a replication of NINDS.

It is difficult to know, in retrospect, exactly why these practices spread so quickly. I expect that they spread primarily because of eminence based medicine (a small group of experts loudly supported them), supported by mechanistic reasoning (they “should” work). However, it is possible that pseudo-EBM practices were used. What we see labelled as EBM in the real world is often unlike EBM as it is envisioned and discussed. We frequently see single papers touted as providing “THE final answer”, potentially with reference to those EBM hierarchies. This is a misuse of EBM. Evidence based medicine teaches complexity, not black and white; it relies on replication, not simplicity and speed.

Guidelines are not evidence based medicine

Guidelines are one of the most misunderstood concepts in evidence based medicine. In the podcast, they discuss a study in which clinicians who were deviating from suggested guidelines had to provide their reasoning for deviation. 93% of the time, the reasoning was deemed clinically appropriate. They use this fact to argue that clinicians are appropriately using their expertise to stray from evidence based practice, but they are confusing evidence based practice with guideline based practice.

Clinical guidelines are not synonymous with evidence based medicine. In fact, clinical guidelines are not necessarily even evidence based. I know of no group of clinicians that takes a stronger stance against clinical guidelines than my evidence based colleagues.

Guidelines are often awful (which will be the topic for a future blog post). They are often misused. They should be criticized. Ideally, the entire guideline industry should be revised.

At their absolute best, clinical guidelines provide a valuable synopsis of the medical evidence, including its shortcomings, and the inherent uncertainty, but that is not evidence based medicine. Evidence based medicine takes that summarized data and applies it to the specific patient, using clinical expertise and patient values. Hence, the study cited actually seems to describe a success of evidence based medicine.

Competition is an awful mechanism to judge doctors

Although it is a minor point in the podcast, I think it needs to be addressed. Because the podcasters are arguing against evidence based medicine, they can no longer use science as a measure of good and bad clinicians. However, they still recognize that there is a wide variety of medical practice; that there are good and bad doctors. How do you judge quality in the absence of science? Their proposed solution is to rely on competition: patients can choose their doctors, and ultimately we should trust them to choose the good doctors.

Aside from the fact that it clearly doesn’t apply in emergency medicine, it is an awful idea. Patients currently choose homeopaths in huge numbers. Patient choice has very little to do with medical quality. (Patient satisfaction is actually associated with increased mortality.) (Fenton 2012) To abandon science as the criterion of quality is to abandon what sets the medical profession apart from the vast world of medical quackery. No thank you.

** Of course, it still makes sense to pay attention to patient choice. It is not a good judge of clinical practice or science, but being kind and compassionate are important parts of medical care. If patients, given the choice, have no interest in seeing you, that is important feedback that shouldn’t be ignored.

Clinical trials are not prescriptive or sufficient

The podcasters do recognize that clinical trials are incredibly important for the practice of medicine, but they emphasize that they are not prescriptive and not sufficient for making clinical decisions. I think from my comments above it should be clear that I agree with them. If they were, medicine could be practiced by robots.

Evidence based medicine, as a set of tools to help us think critically, does not treat trials as prescriptive and sufficient. However, in the real world, we often see trials being treated as prescriptive and sufficient (often with the claim that one is practicing evidence based medicine). This practice should be criticised, but it shouldn’t be confused with evidence based medicine.

However, I worry that this concept that clinical trials are not prescriptive is actually taken too far in current real world medical practice. We seem to use it far too often to ignore good clinical research. The biggest problem in medicine right now is not a tyranny of evidence, but the exact opposite. Every day, we ignore high quality evidence to the detriment of our patients. Viral URTIs get antibiotics. Stable coronary artery disease gets stents. Trivial head injuries get CT scans. You don’t have to look any further than the embarrassingly weak recommendations that are included in the Choosing Wisely Campaign to know that clinicians ignore high quality evidence every day. Although I agree that clinical trials should not be prescriptive, I think the bigger problem currently is that that concept is widely abused by clinicians, resulting in the neglect of high quality evidence.

The proposed alternative is actually just evidence based medicine

They end this podcast with a discussion of Dr. Tonelli’s proposal for an alternative system to evidence based medicine. His proposal:

  • Start with the evidence.
  • Then ask yourself, what do I know from experience? Is there anything different about this patient or this context? (I do think he overvalues mechanistic reasoning at this phase.)
  • Then you talk to the patient. Do they have a different perspective on this question?
  • Then you try to integrate all this information into a clinical decision.

The good news is that we agree. As far as I can tell, he has just described evidence based medicine as a replacement for evidence based medicine.

Aside from my greater skepticism about physiologic reasoning, Dr. Tonelli and I probably practice medicine very similarly. Why, then, have I bothered to write this incredibly long blog post? Although the final product is fine, and many of the criticisms are valid, I think that those criticisms could be misunderstood or taken to an extreme in which, much like society at large right now, we lose trust in science. And I think that would be incredibly harmful to our patients.

Is evidence based medicine perfect?

Of course not. I am not arguing that evidence based medicine, as it is currently practiced, is perfect medicine. There are a number of valid criticisms, and plenty of room for improvement.

Science, at its core, is a set of tools aimed at systematically and objectively studying the world. There are reasonable philosophical questions to be raised about just how objective we can be, and how we can tell that we are being objective. (I reject the extreme philosophical stances of relativism, not because there is a clear logical pathway out of extreme relativism, but because such philosophical stances are not practically helpful. Sure, we could be living in the Matrix, but it is silly to spend much time thinking about it. On the other hand, I think it is easy to demonstrate the practical benefit of a belief in objective science.)

A more concerning criticism is that our scientific tools can be used, and are being used, to distort the truth. Medical science is currently a mess. Large corporations perform exercises that look a lot like science, but in fact are carefully designed advertisements where the outcome is predetermined by biased methodology. There is a reasonable argument to be made that evidence based medicine has been hijacked. (Ioannidis 2016) However, the solution is not to give up on a set of tools that are incredibly valuable for patient care, but rather to take back the ship and throw the pirates overboard. (Ioannidis 2017)

One difficult question is how we can move from the generalised data in scientific trials to specific patients in front of us. No single patient is accurately represented by an RCT. We talk about numbers needed to treat, but it is impossible to know whether the patient in front of us will be the 1 helped, or among the 99 who weren’t. There is no easy rule to know whether the science applies.
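
For readers who like to see the arithmetic behind that gap, here is a minimal sketch using purely illustrative event rates (not taken from any particular trial): the number needed to treat is simply the reciprocal of the absolute risk reduction, and it describes the group, never the individual in front of us.

```python
# A minimal sketch of the number needed to treat (NNT), with invented,
# purely illustrative event rates. NNT = 1 / absolute risk reduction: it
# tells us roughly how many patients we treat for one additional good
# outcome, but it cannot tell us which patient will be the one who benefits.
def number_needed_to_treat(control_event_rate: float, treatment_event_rate: float) -> float:
    absolute_risk_reduction = control_event_rate - treatment_event_rate
    return 1 / absolute_risk_reduction

# e.g. a bad outcome in 10% of control patients versus 9% of treated patients
print(number_needed_to_treat(0.10, 0.09))  # ~100: 1 patient helped, 99 treated without benefit
```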

However, unless you plan on retreating to a bizarre philosophical stance of extreme relativism, in which science is no longer accepted, this is fundamentally a practical problem. We can let the philosophers worry about whether knowledge can ever be trusted. In medicine, we simply want to do what is best for our patients. Good evidence indicates when there is a net benefit to a treatment. That doesn’t mean that every patient will be helped, but that if we treat 100 patients, we expect more to be helped than harmed. Because we don’t know which patients will be helped and which will be harmed, when there is strong evidence of benefit, our default position should be to treat (e.g. aspirin in STEMI). When there is strong evidence of harm, our default position should be not to treat (e.g. bloodletting). However, we know that trials are not perfectly applicable to patients, so before applying the default option, we apply judgement, and speak with our patient to determine their values, allowing us to occasionally alter the default management. If you are straying from a default that has strong evidence too often, you are practicing bad medicine. If you never stray, you are also practicing bad medicine.

Perhaps the biggest concern for EBM is that, despite the claims of a strong PR campaign in this podcast, it might not be practiced very widely. There is a large amount of non-evidence based medicine practiced today. The science skills necessary to appraise the literature are often not well taught in medical school. Many physicians will go years between reading medical studies or engaging with the literature. Classically, we are told that it can take as much as 17 years for high quality evidence to reach the bedside. (Morris 2011) Patients still get antibiotics for their runny noses and CT scans for PE despite being low risk and D-dimer negative. Maybe my description of the EBM practitioner is more of a noble aspiration than an accurate description.

Speaking from my experience in emergency medicine, there are a large number of true evidence based practitioners, as I described them, currently practicing medicine. Follow the Twitter conversations around epinephrine in cardiac arrest or thrombolytics in stroke and you will find a large community of clinicians actively engaging with the literature, bringing their experience and expertise to the conversation, and ultimately preparing to make high quality, individualized, evidence-based decisions in the management of their patients.

Could this practice be more widespread? Absolutely, but that is not an argument against evidence based medicine. Do people still use dial up internet? They do, but broadband is better.

There are many other practical problems with evidence based medicine.

  • The sheer volume of evidence produced is far too great for any single physician to consume. (This might be fixable if academic promotions were based on the quality rather than quantity of work produced.)
  • The concept of EBM is often used to support strict funding measures. (It should be clear from everything above that this is not actual evidence based medicine, but another example of EBM being hijacked. We should actively resist these efforts.)
  • EBM concepts and terminology are being used as part of a practice that is clearly not evidence based medicine. Complexity and critical thinking are dismissed in service of simplistic, biased, and often selfish claims. (This will probably continue to occur until we have legitimate science methodology education in medical school.)
  • Science has been distorted by money. (Absolutely. We need a complete overhaul so that studies are never funded by people with vested interests in the outcome.)
  • Evidence based guidelines don’t capture the nuance of complex patients. (I completely agree. But guidelines are not evidence based medicine. Guidelines should guide care, and when done well can describe the best evidence, as well as its shortcomings and uncertainty. Guidelines should not mandate care. See above.)
  • There is a replication problem. (Absolutely. This is a combination of a number of the above problems. Poor quality studies with bias from the outset make this problem worse. However, a core principle of science is replication. One study is never enough. We should always demand more, and temper our excitement in the meantime.)
  • We often over-generalize the results of clinical trials. For example, just because low tidal volume ventilation decreased mortality in ARDS doesn’t mean it is a panacea in all conditions, and doesn’t confirm that it is the best technique, nor that 6 mL/kg is better than 7 mL/kg. (This is not a failure of evidence based medicine, but of our teaching of science. Methodologists and EBM nerds are generally the first to warn against indication creep.)

I would love to continue the conversation about these problems with EBM, and the various possible solutions. However, it is important to note that none of them undermine the incredible value that EBM provides for our patients.

Conclusion

Evidence based medicine is not data. It is not journal articles. “Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research. By individual clinical expertise we mean the proficiency and judgment that individual clinicians acquire through clinical experience and clinical practice. Increased expertise is reflected in many ways, but especially in more effective and efficient diagnosis and in the more thoughtful identification and compassionate use of individual patients’ predicaments, rights, and preferences in making clinical decisions about their care.” (Sackett 1996)

Evidence based medicine is still the best kind of medicine.

It is currently flawed and imperfectly practiced. There is room for improvement. But it is still the best kind of medicine.

Other FOAMed

You can hear an expanded version of Dr. Tonelli’s views in this grand rounds talk:

A huge thanks to Chris Carpenter, Jerry Hoffman, and Ken Milne for their guidance and input on my various drafts of this post.

References

Brignardello-Petersen R, Guyatt GH, Buchbinder R, et al. Knee arthroscopy versus conservative management in patients with degenerative knee disease: a systematic review. BMJ open. 2017; 7(5):e016114. [pubmed]

Echt DS, Liebson PR, Mitchell LB, et al. Mortality and morbidity in patients receiving encainide, flecainide, or placebo. The Cardiac Arrhythmia Suppression Trial. The New England journal of medicine. 1991; 324(12):781-8. [pubmed]

Fenton JJ, Jerant AF, Bertakis KD, Franks P. The cost of satisfaction: a national study of patient satisfaction, health care utilization, expenditures, and mortality. Archives of internal medicine. 2012; 172(5):405-11. [pubmed]

Finfer S, Chittock DR, et al. Intensive versus conventional glucose control in critically ill patients. The New England journal of medicine. 2009; 360(13):1283-97. [pubmed]

Hayes MJ, Kaestner V, Mailankody S, Prasad V. Most medical practices are not parachutes: a citation analysis of practices felt by biomedical authors to be analogous to parachutes. CMAJ open. 2018; 6(1):E31-E38. PMID: 29343497 [free full text]

Ioannidis JP. Evidence-based medicine has been hijacked: a report to David Sackett. Journal of clinical epidemiology. 2016; 73:82-6. [pubmed]

Ioannidis JPA. Hijacked evidence-based medicine: stay the course and throw the pirates overboard. Journal of clinical epidemiology. 2017; 84:11-13. [pubmed]

Jiang N, Cao ZH, Ma YF, Lin Z, Yu B. Management of Pediatric Forearm Torus Fractures: A Systematic Review and Meta-Analysis. Pediatric emergency care. 2016; 32(11):773-778. [pubmed]

Kudenchuk PJ, Brown SP, Daya M, et al. Amiodarone, Lidocaine, or Placebo in Out-of-Hospital Cardiac Arrest. The New England journal of medicine. 2016; 374(18):1711-22. [pubmed]

Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med. 2011; 104(12):510-520.

Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn’t. BMJ (Clinical research ed.). 1996; 312(7023):71-2. [pubmed]

Stergiopoulos K, Boden WE, Hartigan P, et al. Percutaneous coronary intervention outcomes in patients with stable obstructive coronary artery disease and myocardial ischemia: a collaborative meta-analysis of contemporary randomized clinical trials. JAMA internal medicine. 2014; 174(2):232-40. [pubmed]

Cite this article as:
Morgenstern, J. Evidence based medicine is still the best kind of medicine, First10EM, October 1, 2018. Available at:
https://doi.org/10.51684/FIRS.6313

 
