Thrombolytics for stroke: The evidence

Thrombolytics for stroke: undoubtedly the biggest controversy in emergency medicine. Also, the topic of this week’s Emergency Medicine Cases Journal Jam podcast. Rory Spiegel, Anton Helman, and I take a deep dive into the evidence. Why would we do this? No, it isn’t just that we have too much time on our hands. The Journal Jam podcast exists because we truly believe it is important to understand why we do what we do, both to ensure we are always providing the best care for our patients and so that we can explain that care to our patients. The evidence for (or against) thrombolytics is important precisely because the topic is so controversial. You will hear arguments on both sides. So will your patients. It is only through a familiarity with the studies, their strengths, and their weaknesses that you will be able to decide for yourself what the evidence really shows and guide your patients to the best decision for their circumstances.

What follows are the notes I made while preparing for the podcast. First, I review the major randomized controlled trials looking at thrombolytics for stroke. That is followed by a discussion of the things I think are important to consider when trying to interpret this data. (Many folks might want to skip straight to this discussion section.)

The Major RCTs

These are the major RCTs in chronological order.


Multicentre Acute Stroke Trial–Italy (MAST-I) Group. Randomised controlled trial of streptokinase, aspirin, and combination of both in treatment of acute ischaemic stroke. Lancet (London, England). 1995; 346(8989):1509-14. PMID: 7491044

This is a randomized, multicenter, open-label, controlled trial




Hacke W, Kaste M, Fieschi C. Intravenous thrombolysis with recombinant tissue plasminogen activator for acute hemispheric stroke. The European Cooperative Acute Stroke Study (ECASS). JAMA. 1995; 274(13):1017-25. PMID: 7563451

This is a randomized, multicenter, double-blind, placebo controlled trial




NINDS study group. Tissue plasminogen activator for acute ischemic stroke. The New England journal of medicine. 1995; 333(24):1581-7. PMID: 7477192 [free full text]

A randomized, double-blind, placebo controlled trial








Outcome (good outcome at 3 months)   tPa    Placebo   P value
Barthel index                        50%    38%       0.026
Modified Rankin scale                39%    26%       0.019
Glasgow outcome scale                44%    32%       0.025
NIHSS                                31%    20%       0.033




Hommel M, Cornu C, Boutitie F, Boissel JP. Thrombolytic therapy with streptokinase in acute ischemic stroke. The New England journal of medicine. 1996; 335(3):145-50. PMID: 8657211 [free full text]

A multicenter, randomized, double-blind, placebo controlled trial




Donnan GA, Davis SM, Chambers BR. Streptokinase for acute ischemic stroke with relationship to time of administration: Australian Streptokinase (ASK) Trial Study Group. JAMA. 1996; 276(12):961-6. PMID: 8805730

This is a randomized, double-blind, multicenter, placebo-controlled trial.



Hacke W, Kaste M, Fieschi C. Randomised double-blind placebo-controlled trial of thrombolytic therapy with intravenous alteplase in acute ischaemic stroke (ECASS II). Second European-Australasian Acute Stroke Study Investigators. Lancet (London, England). 1998; 352(9136):1245-51. PMID: 9788453

A multicenter randomized, double-blind, placebo controlled trial




Clark WM, Wissman S, Albers GW, Jhamandas JH, Madden KP, Hamilton S. Recombinant tissue-type plasminogen activator (Alteplase) for ischemic stroke 3 to 5 hours after symptom onset. The ATLANTIS Study: a randomized controlled trial. Alteplase Thrombolysis for Acute Noninterventional Therapy in Ischemic Stroke. JAMA. 1999; 282(21):2019-26. PMID: 10591384

A multicenter, placebo controlled, double-blind, randomized trial




Clark WM, Albers GW, Madden KP, Hamilton S. The rtPA (alteplase) 0- to 6-hour acute stroke trial, part A (A0276g): results of a double-blind, placebo-controlled, multicenter study. Thrombolytic therapy in acute ischemic stroke study investigators. Stroke. 2000; 31(4):811-6. PMID: 10753980 [free full text]

A multicenter, placebo controlled, double-blind, randomized trial



Hacke W, Furlan AJ, Al-Rawi Y. Intravenous desmoteplase in patients with acute ischaemic stroke selected by MRI perfusion-diffusion weighted imaging or perfusion CT (DIAS-2): a prospective, randomised, double-blind, placebo-controlled study. The Lancet. Neurology. 2009; 8(2):141-50. PMID: 19097942 [free full text]

A multicenter, placebo controlled, double-blind, randomized trial




Hacke W, Kaste M, Bluhmki E. Thrombolysis with alteplase 3 to 4.5 hours after acute ischemic stroke. The New England journal of medicine. 2008; 359(13):1317-29. PMID: 18815396 [free full text]

A multicenter, placebo controlled, double-blind, randomized trial




Sandercock P, Wardlaw JM. The benefits and harms of intravenous thrombolysis with recombinant tissue plasminogen activator within 6 h of acute ischaemic stroke (the third international stroke trial [IST-3]): a randomised controlled trial. Lancet (London, England). 2012; 379(9834):2352-63. PMID: 22632908 [free full text]

A multicenter open-label, randomized, controlled trial




In Japan, tPa was licensed at a dose of 0.6 mg/kg. Registry data seemed to indicate a decrease in intracerebral hemorrhage without a sacrifice in efficacy. This led to the following RCT comparing standard-dose and low-dose tPa:


Anderson CS, Robinson T, Lindley RI. Low-Dose versus Standard-Dose Intravenous Alteplase in Acute Ischemic Stroke. The New England Journal of Medicine. 2016; 374(24):2313-23. PMID: 27161018 [free full text]

This is a multicenter, randomized, open-label, non-inferiority trial. It had a 2×2 design, with a subset of the patients also being randomized to aggressive blood pressure control. I focus on the tPa dose comparison here:



Further Reading

Here is a nice table from Ken Milne that summarizes these papers in a colour coded fashion:

Trying to understand these studies

There is no simple summary of these studies. If there was, there probably wouldn’t be any controversy. There are some things we need to know if we are going to try to make sense of this research to guide our practice and speak to our patients.

The outcomes

All of these studies use neurologic scoring systems or scales as their primary outcomes. To understand the studies, it is important to understand the scales. The most common scale used is the modified Rankin scale (mRS), so I will delve into it in a bit more detail. Other scales used include the Barthel index, the NIH stroke scale (NIHSS), and the Glasgow outcome scale.

The modified Rankin scale has 7 categories:

0 – No symptoms at all
1 – No significant disability despite symptoms; able to carry out all usual duties and activities
2 – Slight disability; unable to carry out all previous activities, but able to look after own affairs without assistance
3 – Moderate disability; requiring some help, but able to walk without assistance
4 – Moderately severe disability; unable to walk without assistance and unable to attend to own bodily needs without assistance
5 – Severe disability; bedridden, incontinent, and requiring constant nursing care and attention
6 – Dead

At first glance, the scale seems pretty straightforward, but humans are complex and real-life examples blur the lines. What counts as being unable to carry out all previous activities? How much help is “some help”? I am in my thirties with no neurologic problems, but I require a lot of help getting through my life. My grandfather’s days are spent sitting in front of the TV smoking. My grandmother looks after him completely. He could have a stroke resulting in a dense paralysis of the upper extremity, and as long as he had one hand to smoke and use the remote control, you wouldn’t know the difference. So is he “able to carry out all his usual activities, despite some symptoms”, or does he “require some help, but able to walk unassisted”? All of us have good days and bad days, and our score here could vary widely.

Most examples will be more subtle than that, but these are elderly patients with comorbidities and existing disabilities. Furthermore, there are complex social interactions that influence one’s sense of disability, especially after an acute illness. The score one is given might depend on the time of day, day of the week, or even who you happen to be talking to.

This is borne out in the literature. When more than one person scores the same patient, even trained neurologists frequently arrive at different results. See, for example, this paper (Quinn 2009), in which the kappa value for the modified Rankin scale was only 0.46 (poor agreement).

The bottom line is that the categories on these scales are not black and white. There is a subjectivity present, which is especially important in unblinded trials, or if blinding was broken.


Blinding

Leaving aside the open-label IST3, the majority of the trials here were blinded. Unfortunately, there are some potential problems with that blinding. If you have ever used tPa, you know it has a somewhat foamy appearance in the syringe. These trials used saline as the placebo, and clinicians might have been able to tell the difference between the two in the syringe. More importantly, if you have ever looked after a patient given tPa, you know this drug is almost impossible to blind. Nurses may note excess bleeding from IV sites or gingival bleeding. Changes in routinely drawn lab values might also make it obvious which group a patient had been allocated to.

Obviously, we are always concerned about blinding in trials. However, it is especially important if the outcome being measured is not objective. I trust the mortality differences reported here, even if the groups became unblinded. However, as discussed above, the primary outcome in all these trials relied on scoring systems that involve subjective assessment. The difference between a modified Rankin scale of 1 and 2 is not obvious at the best of times, but especially not if the data is collected from family members over the phone (or by mail) who may have been unblinded.


Defining intracranial hemorrhage

Every trial has a different definition of intracranial hemorrhage. When summarizing the data, I tried to include the definition that I think is most important: large symptomatic bleeds. If you are comparing the bleeding rates from different trials, or at your own hospital, make sure you read the definitions carefully. Honestly, though, I don’t worry as much about the bleeds themselves, because what I want to know is their clinical consequences, and those will be captured in the death and disability numbers.

Some statistics

What is a p value? You might be surprised to find that the topic is hotly debated, and a mere mention of p values can get a statistician far more excited than you ever thought possible. I am not going to wade into that morass for now, but it is important to know what we mean when we say a study is “statistically significant”. (If, for whatever reason, you want to read a little more about p values, this is an excellent article.) In the eyes of the statistician who popularized the p value, Ronald Fisher, the p value is an informal way to judge whether data is worthy of a second look. The p value doesn’t define truth. The foundation of science is replication, and the p value was only intended to tell us which studies are worth replicating. It is also worth noting that the p value of 0.05 has no special meaning. We use it in medicine because we had to choose something, and it is relatively practical for biologic experiments. In physics, however, the generally accepted threshold is 0.0000003. (See Fatovich and Phillips 2017.)

Most importantly, the p value can only be judged in the context of the scientific literature. Much like diagnostics in our clinical practice, to properly interpret a p value, we must know the pretest probability. Unfortunately, there is no clear method to judge a study’s pre-test probability. In general, we know the chance that a new medication will help patients is low, because the vast majority of trials of new medications are negative. If you start with a general pretest probability of 5% for any new treatment (it’s probably not that high), you might increase that slightly before NINDS was started, because we knew that thrombolytics work for MI, and the pathophysiology is similar. On the other hand, we have to bring our estimate back down somewhat because there were already 2 negative trials of the same therapy for stroke.

If we were to assume a 10% pretest probability that thrombolytics would work in NINDS (and I think that is being generous), applying a p value of 0.025 (the average of the 4 primary outcome p values), you would get a post-test probability of 77%. (This assumes you ignore the problems with NINDS and just take the numbers as they are.) A 77% post-test probability is good. Based on this result it is now more likely than not that thrombolytics work, but there is still a 23% chance that they don’t work. Clearly, this study should be replicated. A second statistically significant result, starting with this new pretest probability, might be enough to convince us that thrombolytics work. With a 77% post-test probability, it might even be reasonable to use this experimental treatment while you are waiting for further research to be completed. What is clearly inappropriate is to stop all further study and declare the treatment to be the standard of care.

(The most tenuous part of these calculations is determining the pretest probability. Based on the fact that at least 90% of newly tested medications fail to show benefit, I think the numbers here are too high. However, if you think I was being too conservative, and want to make the pretest probability that tPa would work in NINDS 25%, the post-test probability is still only 91%. Higher, and more convincing, but still in need of replication. You can play around with these numbers using this calculator.)
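For those who want to see the arithmetic laid out, here is a minimal sketch of the calculation. This is entirely my own illustration using the standard “post-study probability” formula (prior odds multiplied by study power, weighed against the false positive rate alpha). The 80% power is my assumption, not a figure from NINDS, but with it the formula reproduces the post-test probabilities quoted above almost exactly.

```python
# Post-test probability that a "statistically significant" finding is true.
# Sketch only: the 80% power is an assumption, not a NINDS figure.
# power * prior_odds = chance of a true positive finding;
# alpha = chance of a false positive finding.
def post_test_probability(pretest: float, alpha: float, power: float = 0.8) -> float:
    prior_odds = pretest / (1 - pretest)
    true_positive = power * prior_odds
    false_positive = alpha
    return true_positive / (true_positive + false_positive)

# A 10% pretest probability and p = 0.025 (the average NINDS p value):
print(round(post_test_probability(0.10, 0.025), 2))  # 0.78 (the post quotes 77%)

# The more generous 25% pretest probability:
print(round(post_test_probability(0.25, 0.025), 2))  # 0.91
```

The exact post-test number shifts a little depending on the power you assume, which is one more reason to treat it as a rough guide rather than a precise statement of truth.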

This is a fundamental rule of science that we often forget in medicine: research needs to be replicated. P values don’t tell us the truth; they just alter our post-test probability. Consider sepsis protocols. Consider therapeutic hypothermia. The stories are very similar. In both of those cases, much like with thrombolytics for stroke, we allowed our clinical practice to get somewhat ahead of the evidence. That is understandable, because we want to help our patients, but we must learn from these (not unexpected) reversals. Where is the validation study for thrombolytics?

The fragility index

The fragility index is a powerful and intuitive statistical concept. The index tells you how many patients in a study would have had to have a different outcome in order for the study to become “not statistically significant” (to have a p value above 0.05). A fragility index of 100 tells you that 99 patients could have slipped from a good to a bad outcome and the trial would still have been statistically significant. A fragility index of 1, on the other hand, tells you that if even a single patient had a different outcome, the trial would have been reported as negative instead of positive. It is a powerful tool, because it gives you a sense of how easily random chance could have changed the results of a trial.
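To make the concept concrete, here is a small sketch of how a fragility index can be computed (my own illustration, not the exact algorithm used in any published fragility analysis): move control-group patients from a bad outcome to a good outcome one at a time, recomputing a two-sided Fisher’s exact test each time, until the p value climbs above 0.05.

```python
from math import comb

def fisher_two_sided(a: int, b: int, c: int, d: int) -> float:
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]
    (a/b = good/bad outcomes with treatment, c/d = good/bad with placebo),
    using the standard 'sum of all tables at least as extreme' method."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x: int) -> float:
        # Hypergeometric probability of x good outcomes in the treatment row
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)

def fragility_index(good_tx, n_tx, good_ctl, n_ctl):
    """How many control patients must flip from a bad to a good outcome
    before the trial stops being 'statistically significant'?"""
    flips = 0
    while fisher_two_sided(good_tx, n_tx - good_tx,
                           good_ctl + flips, n_ctl - good_ctl - flips) < 0.05:
        flips += 1
        if good_ctl + flips > n_ctl:
            return None  # never loses significance
    return flips

# Tiny made-up trial: 7/10 good outcomes with treatment vs 1/10 with placebo
# (p is about 0.02). A single changed outcome erases statistical significance.
print(fragility_index(7, 10, 1, 10))  # 1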

How does this help us when looking at the stroke literature? Given that the foundation of our current practice is NINDS, it would be good to know if NINDS was likely to give us the same results if replicated. The fragility index can give us a sense of the robustness of the results. Unfortunately, NINDS had 4 primary outcome measures, so there isn’t just a single fragility index. However, the results are very similar. For 3 of the primary outcomes, the fragility index was 3. In other words, if 3 extra people in the control group had had a good outcome, the trial would have been statistically negative. (For the fourth primary outcome, the fragility index is 4). Clearly, this is a small number. Random chance (not to mention the various sources of bias in the trial) could easily have turned this trial from positive to negative. Therefore, it would not be surprising if a replication of NINDS turned out to be negative. (At this point, I would probably be more surprised at the trial being run than I would be to find out it was negative).

Josh Farkas talks about a related concept, called the instability index, which he suggests tells you how much imprecision there is in the final outcome based on imbalance between groups, post-randomization crossover, and loss to follow up. You are better off just reading his post here. For NINDS, he calculated the instability index as 6.5, meaning we shouldn’t be surprised if the equivalent of 6.5 patients changed outcomes were the trial repeated. Compared to the fragility index of 3, this tells us NINDS is unlikely to give us the same results if it were replicated. (Please note, unlike the fragility index, which is a well recognized statistical tool, the instability index is just a (useful) product of Josh’s imagination.)

There is one other study supporting tPa use: ECASS 3. The fragility index? One!!

Baseline imbalances

There are two positive trials listed above. In both of those trials, the placebo group had a higher NIH stroke score than the tPa group when patients were enrolled. This isn’t anything nefarious. It is just something that happens by chance when you are dealing with small trials. Unfortunately, the single largest predictor of outcome in strokes is how severe the stroke is at baseline. These trials did not measure how much you improved, but instead asked how many patients were functionally independent at the end of the trial.

Imagine we were testing two different pain medications. Drug A takes patients’ pain from an average of 7/10 on arrival to 3/10 at 1 hour. Drug B reduces pain from 8/10 to 4/10. If we are interested in the change in pain, both drugs reduce pain by 4/10, and we would consider them equivalent. However, if we discovered that patients consider any pain score of 3 or less to be “minimal pain”, we might instead ask: “How many patients had a pain score of 0-3 at the end of the trial?” If we set that as our primary outcome, the baseline imbalance between the two groups makes Drug A seem superior to Drug B.

Because the placebo groups in both NINDS and ECASS 3 had sicker patients to begin with, it is not surprising that fewer of those patients were independent at 3 months.

Conflicts of interest

Pharma runs our studies. We understand that, but we also know that pharma-run studies have a much higher likelihood of being positive than studies run by non-conflicted sources. There is clear evidence of bias in the way these studies were written up (ignoring primary outcomes and emphasizing secondary outcomes in conclusions). Aside from IST3, all of the major RCTs had significant industry involvement. It is hard to know how much industry involvement affects the results, but until we can fix the underlying scientific system, we have to account for this inherent bias when interpreting published studies. This, of course, is not a problem unique to studies of thrombolytics for stroke, but it is nevertheless evident in these trials.

A common misconception: Rapid recovery

When discussing thrombolytics with other doctors, I hear a lot of anecdotes. The people who are convinced that tPa works almost never suggest that NINDS was a great study, nor do they point out a promising analysis of the data. The refrain is almost always the same: I pushed tPa and the patient got better in front of my eyes.

Witnessing such an event is indeed powerful. We all want to help our patients, and in these cases it seems like the medication being pushed saved the patient. Unfortunately, it is a mirage. Thrombolytics simply don’t work that way. In none of these trials did patients improve immediately. NINDS part 1 was specifically designed to look for improvement at 24 hours, and there was none. Thrombolytics may provide some long term benefit, but there is no evidence here that they have an immediate impact. (There was a difference with tPa, but it did not reach statistical significance. NINDS may simply have been underpowered.)

I know people don’t like data. So, instead, consider the many other stroke anecdotes. You are called urgently to a room to assess a patient with a dense hemiplegia. You rapidly activate the stroke protocol and the patient is whisked off to the CT scanner. You meet them back in the room after a rapid review of the images, ready to discuss the harms and benefits of tPa, only to discover that their symptoms are resolving. I have seen this hundreds of times. Sometimes, their symptoms have resolved before I can even order the CT. We tend to ignore these patients, because we were not the saviours – because we don’t get the credit – but patients rapidly resolving on their own are far more common than patients rapidly resolving after tPa. If a stroke patient’s symptoms resolve in the first 24 hours, whether or not you gave tPa, that is called a transient ischemic attack.

I should also mention, it isn’t really clear how thrombolytics could have an effect at 3 months but not at 24 hours. This is one of the facts that leads me to believe that much of the difference we are seeing in NINDS was due to baseline imbalance between the groups. (At 24 hours, they measured a 4 point improvement on the NIHSS – so it wouldn’t matter where you started. At 3 months they measured how many patients were doing well, in which case it really matters how sick you were at the outset.)

How do these studies compare to thrombolytics in MI?

There were over 60,000 patients in the MI studies, as compared to about 10,000 with stroke. All of the MI studies were positive. Thrombolytics improved mortality in MI. Every thrombolytic agent worked. The thrombolytics worked early and late. None of this is true for thrombolytics in stroke.

Of course, the major difference is that thrombolytics only worked in one specific type of MI: the STEMI. By defining this narrow population, they were able to ensure benefit (2.5% decreased mortality) despite a very narrow therapeutic window (1% major bleeding). We have not identified any such subgroup for stroke.

Why not include meta-analyses here?

There is probably too much clinical heterogeneity here to simply pool the results together. With different inclusion criteria, timing, definitions, and agents used, it isn’t clear that a single number can summarize this data. On the other hand, it is inappropriate to simply ignore data based on retrospectively chosen criteria.

Meta-analyses are great for increasing statistical power, but do nothing to help us with bias. The various flaws discussed above are simply compounded when trials are combined. The larger sample size doesn’t get us any closer to the truth.

Combining trials together also makes larger trials more important. Unfortunately, in the stroke literature, the largest trial, contributing almost half of all patients in current meta-analyses, is the deeply flawed, open-label IST3 trial. Allowing such a biased trial to overpower the others doesn’t make much sense.

Stopping negative trials early and effect on meta-analyses

Another problem with meta-analyzing the data here is the imbalance created by stopping only negative trials early. The trials stopped early should have enrolled a total of 3,900 patients based on their initial study protocols. (I include MAST-I in this group.) Instead, they enrolled only 1,827 patients. The roughly 2,000 missing patients from the negative trials are more than were actually enrolled in the positive trials (NINDS and ECASS 3) combined. The result is a significant imbalance.

Imagine that I wanted to prove to you that my basketball team was an excellent 3 point shooting team, being able to make 50% of their shots. We run a few trials. Player 1 misses the first 2 shots and we decide to stop the trial early “because of statistical futility”. Player 2 also misses the first 2 shots and we stop the trial early. Player 3 makes 7 out of 10 tries. Player 4 makes 1 of 4 shots, but our statistician tells us this is unlikely to reach significance, so we stop. Finally, player 5 makes 6 out of 10 shots. Is this team a good three point shooting team? Three of the five players were awful, shooting 0% or 25%. However, in total, the team has attempted 28 shots, and made 14 of them. By eliminating the poor shooting of 3 players, we created an unbalanced sample that was able to “prove” that my team can shoot 50%. Although this is a relatively simplistic example, it gives you a sense of the problem of combining results when all the negative trials were stopped early.
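The basketball analogy can be pushed one step further with a toy simulation (entirely my own illustration, with made-up numbers): when the underlying ability varies from trial to trial and poor starts trigger early stopping, pooling all the attempts systematically overestimates the average ability, because the good performers contribute far more attempts to the pooled total.

```python
import random

random.seed(42)

# Each "player" (trial) has a different true shooting percentage, drawn
# between 10% and 60% (average 35%). Anyone who misses their first 4 shots
# is stopped early "for futility"; everyone else shoots all 20 attempts.
made = attempted = 0
for _ in range(20_000):
    p = random.uniform(0.10, 0.60)               # this trial's true rate
    shots = [random.random() < p for _ in range(20)]
    if sum(shots[:4]) == 0:                      # stopped early: 0 for 4
        attempted += 4
    else:                                        # ran to completion
        made += sum(shots)
        attempted += 20

# The pooled rate lands around 0.38, noticeably above the true average
# of 0.35, even though no individual shot was altered.
print(f"pooled shooting rate: {made / attempted:.3f}")
```

No single trial was falsified here; the inflation comes purely from the weakest performers being cut off after a handful of attempts, which is exactly the imbalance created when only the negative trials are stopped early.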

Why do we ignore the negative trials?

One of the greatest scourges in modern medicine is publication bias: negative trials never see the light of day. This is not (as far as we know) the case when it comes to thrombolytics for stroke, but for some reason we have decided to ignore the negative trials anyway.

The current stroke literature, as outlined above, includes 4 papers stopped early for harm or futility, 6 negative trials, and 2 positive trials. That is almost exactly the distribution you might expect by chance when studying an intervention with no effect. Just because we decided to paint a bullseye around the two positive trials doesn’t mean that we actually hit the mark. (The Texas sharpshooter fallacy.)

Does agent matter?

Three of these trials (ASK, MAST-Italy, and MAST-Europe) used streptokinase. All three were negative trials. In fact, all three were stopped early due to harm. The question is: should these three trials be treated differently because streptokinase is somehow different from tPa? It isn’t clear what the answer should be. There are theoretical reasons that streptokinase might differ from tPa, but also theoretical reasons to think it shouldn’t. There is no difference between the various thrombolytic agents in the management of STEMI, and it is rare in medicine to see true differences between medications of the same class. However, the outcomes with lytics in stroke are clearly different from those in STEMI, so it is not easy to extrapolate from that literature. If you compare the outcomes of these three trials to the outcomes of the tPa trials, it is hard to see any clear differences, although the mortality numbers with treatment are highest in these three trials. The Cochrane reviews have not identified a difference between the agents. (See Wardlaw 2014.)

Does time matter?

Although there is some convincing evidence that harms increase the later that tPa is given, it is not clear that the “time is brain” mantra is based in science. Physiologically, it never made much sense, as neurons die 3-6 minutes after their blood supply is lost – orders of magnitude different from the 180-270 minute timeframes we are talking about. This Cochrane review concluded that the current data does not support a significant difference in outcomes between the 0-3 and 3-6 hours groups. IST3 provides us with the rather implausible result that patients presenting less than 3 hours after symptom onset benefit, those between 3 and 4.5 hours are harmed, and those in the 4.5-6 hour time frame are again helped by tPa. Maybe, for patients presenting at 3.5 hours, we should wait a bit?

Although it is certainly possible that early treatment results in earlier reperfusion to an ischemic penumbra that is not yet dead, there is another explanation that would explain the better outcomes seen in earlier presenters: selection bias. A patient seen at 90 minutes might still be a TIA which will self-resolve, but by 3 hours that is less likely. Similarly, a patient at 90 minutes might still be post-ictal or having a migraine, but the longer you wait, the more of these self-resolving stroke mimics will resolve. The result is that more patients in the early group are likely to have TIAs and stroke mimics. These patients will, of course, have much better 3 month outcomes than patients having strokes, making earlier treatment erroneously look more effective than later treatment.

The idea that patients treated early might naturally be expected to fare better is important when interpreting NINDS. As part of the NINDS protocol, there had to be an even distribution between patients in the 0-90 minute group and the 90-180 minute group. What that means is that, even among the patients in the 0-3 hour window, the majority were excluded from this trial (because so few patients show up in the first 90 minutes). The result is a very select group of patients presenting, on average, much earlier than patients present in real practice. Consequently, we should expect these patients to fare better than the patients we actually see.

Another common mistake: hemorrhage versus good outcome

Often, when skeptics of thrombolytics discuss this topic, the potential benefit of tPa is weighed against the 6% rate of symptomatic intracerebral hemorrhage. I think this is a mistake when you consider the primary outcomes here. These trials looked at death and disability at 3-6 months. That is a reasonable, patient-centered outcome (aside from the subjectivity mentioned above). The harms of an intracerebral hemorrhage are included within that outcome. So it is not a balance of whatever benefit you think these studies show against the harms of hemorrhage; it is the overall benefit despite the harms of hemorrhage.

That isn’t to say there isn’t harm from these medications. I think these trials fairly consistently demonstrate an increased risk of early death. That is a harm. The risk of death trends back towards neutral with time, but that is expected. Run any trial long enough, and the mortality rate will be 100% in both groups. In a group of older stroke patients, with multiple comorbidities, we should expect a number of patients to die in both groups, independent of tPa, which tends to make the groups seem more similar. So there is probably harm here, and the benefits are uncertain, but we should not set up a false balance between functional benefit on one hand and bleeding on the other.


Conclusion

The thrombolytics debate isn’t about numbers or statistics. This isn’t a question that can be answered simply by dissecting these trials (believe me, I have tried). The reason that this issue is still debated is all about the reliability of the data.

Stroke is a devastating condition and every clinician wants to do everything in their power to help their patients. Unfortunately, good intentions are not enough, and it is generally our sickest patients in whom we need to be most careful about the delicate balance between doing good and doing harm. I have read all this literature through more times than should ever be done. I can’t tell you for sure whether thrombolytics work. Physiologically speaking, they are clearly doing something, as is evidenced by the increase in bleeding. There is a hint at benefit throughout a number of these papers, but that has to be tempered by the various sources of imbalance and bias in this literature.

My guess is that there must be some subgroup of patients who benefit, balancing out harms in others. (Although I am not absolutely certain that there is any benefit here.) Unfortunately, our current approach is akin to giving thrombolytics to all chest pain patients, or at least to any patient with a positive troponin. In that population, lytics fail. We don’t have an ST-elevation equivalent to guide us in stroke. My biggest concern is that the push to define tPa as the “standard of care” has robbed us of the important research that would have discovered this subgroup.

Bottom line? I don’t know. If NINDS were replicated today, I would put the odds at between 4:1 and 9:1 against the same results. (In other words, I think there is about a 10-20% chance that if the same protocol were run, we would see the same results.) I think we clearly need more research. I think basic philosophy of science and statistical tenets tell us that we must attempt to replicate NINDS. Or maybe this whole debate will simply disappear as endovascular therapy becomes the new norm. More on that next time…

How do I discuss this with my patients?

I don’t work at a stroke center, and because of EMS bypass protocols this isn’t a conversation I have frequently. I tend to say something like:

“There is a treatment we sometimes use for stroke that is supposed to break down the clot causing the stroke. The treatment is controversial, and you will probably hear different things from different doctors. The issue is that out of 13 major trials, only 2 have shown benefit, and both of those trials have some problems, and they were both paid for by the people who make the drug. There are some risks that we’re certain about: about 1 in 12 patients will have severe bleeding resulting in worse neurologic outcome. Despite that risk, in the best case scenario, about 1 in 10 people given this drug early will have a noticeable improvement in their function after 3 months. Unfortunately, it isn’t clear how reliable the science has been, and we don’t know which patients have the greatest chance at benefit or harm. The choice to receive this medication remains up to each individual patient.”

Other FOAMed

theNNT: Thrombolytics for stroke

SGEM: Episode 85 Won’t get fooled again (tPa for CVA)

St. Emlyn’s: Kicking against the prick: Systematic Review of stroke thrombolysis

FOAM Cast: ACEP tPA policy; Dr Jerome Hoffman on ACEP’s tPa clinical policy

Life in the Fastlane: The Use of Thrombolysis as a Treatment for Acute Stroke

EMCrit: tPA for ischemic stroke debate

If you have some time, you can watch a true expert, Jerry Hoffman, talk about these issues:

A special thanks to Dr. Ken Milne for reviewing the discussion section of this post to ensure I wasn’t making any major errors.

Cite this article as: "Thrombolytics for stroke: The evidence", First10EM blog, July 10, 2017.