Statistics is not my strong suit, so I confess that I find reading about these massive human trials of drugs and other treatments to be more of a chore than a pleasure. Each study may use a different population group, a different drug dosage, a different end point, and a different time to the end point.
Furthermore, most of these trials are supported by drug companies, and I don't trust the results. There are many ways to manipulate the data to make small differences sound like large differences, or to explain away results that aren't what you wanted.
And you can't trust the headlines written by popular science news services like Science Daily or those in general medical magazines. They'll just restate the conclusions emphasized by the authors of a particular study and then reiterate the background of the topic. In many diabetes news stories, more space is devoted to explaining the difference between type 1 and type 2, giving the numbers now suffering from these conditions, and describing the "obesity epidemic" than to describing what's new.
Three Science Daily headlines about Avandia (rosiglitazone), from studies reported at the recent American Diabetes Association meeting in Orlando, Florida, illustrate how difficult it is for us to know what is really going on. They were as follows:
1. No Link Between Diabetes Drug Rosiglitazone and Increased Rate of Heart Attack, Study Finds.
2. Type 2 Diabetes Medication Rosiglitazone Associated With Increased Cardiovascular Risks and Death, Study Finds.
3. New Meta-Analysis Demonstrates Heart Risks Associated With Rosiglitazone.
What's going on here?
First note that the first headline mentions "heart attack," the second refers to "cardiovascular risks and death," and the third refers to "heart risks." None say what the risks are compared to, and cardiovascular risks or heart risks could refer to a lot of different things. I suspect most people wouldn't delve deeply into the details and would interpret these headlines simply as (1) Avandia good, (2) Avandia bad, and (3) Avandia bad.
Let's start with study number 3. This is a meta-analysis, and like many other people, I don't trust meta-analyses. What they do is try to take a lot of small studies in which the results weren't statistically significant and pool them all together so that the results become significant.
This is because statistical significance depends on both the magnitude of an effect and the number of people in the study. So, for example, let's say you were testing a drug called SugarDown among 300,000 people matched to 300,000 controls not taking the drug, and 200,000 of those taking it saw their A1cs decrease by at least 1 point while only 100,000 of the control subjects reached this end point. That is, twice as many people taking the drug had the desired result.
But suppose you gave the drug to only 3 people matched to 3 controls, and 2 of those taking the drug reached the end point while only 1 control did. This might suggest that the drug was worth trying in a larger trial, but even though twice as many people taking the drug had the desired result, just as in the larger trial, the results would obviously not be significant. They could simply be due to chance.
When the results are more extreme -- let's say all 3 people taking the drug dropped dead 15 minutes after taking it and none of the controls did -- then there's less chance that the results are due to chance alone, as the sketch below illustrates.
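To make this concrete, here's a minimal sketch in Python using the made-up SugarDown numbers above (nothing here comes from a real trial), showing how the same 2:1 ratio goes from meaningless to overwhelming as the sample grows:

```python
# Hypothetical counts only: the "SugarDown" example from the text.
from scipy.stats import chi2_contingency, fisher_exact

# Tiny trial: 2 of 3 on the drug reach the end point vs 1 of 3 controls.
_, p_tiny = fisher_exact([[2, 1], [1, 2]])
print(f"3 per arm, 2:1 ratio: p = {p_tiny:.2f}")      # 1.00: pure chance

# Even the extreme case: all 3 drug patients die, no controls do.
_, p_extreme = fisher_exact([[3, 0], [0, 3]])
print(f"3 per arm, extreme:   p = {p_extreme:.2f}")   # 0.10: suggestive, not proof

# Huge trial, same 2:1 ratio of responders.
_, p_huge, _, _ = chi2_contingency([[200_000, 100_000], [100_000, 200_000]])
print(f"300,000 per arm:      p = {p_huge:.1e}")      # effectively zero
```

Notice that even 3 deaths versus none only reaches p = 0.10: a tiny sample can't prove much of anything, no matter how dramatic the result looks.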
These are obviously extremes; most studies have more realistic numbers. But no trial is perfect. Too many study patients make the studies too expensive, and too few patients make the results unreliable.
A meta-analysis tries to overcome these limitations.
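To show what that pooling actually looks like, here's a rough sketch of the standard fixed-effect, inverse-variance method, with study results I invented for illustration (they are not from the Avandia literature): five studies, none significant on its own, combine into one significant pooled estimate.

```python
# Invented log odds ratios and standard errors for five small studies.
import numpy as np
from scipy.stats import norm

log_or = np.array([0.35, 0.28, 0.40, 0.22, 0.31])  # each study's estimate
se     = np.array([0.30, 0.28, 0.33, 0.27, 0.29])  # each study's uncertainty

# Alone, every study misses significance (all p > 0.05).
print("individual p-values:", np.round(2 * norm.sf(np.abs(log_or / se)), 3))

# Pool with inverse-variance weights: more precise studies count more.
w = 1.0 / se**2
pooled = np.sum(w * log_or) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
p_pooled = 2 * norm.sf(abs(pooled / pooled_se))
print(f"pooled OR = {np.exp(pooled):.2f}, p = {p_pooled:.3f}")  # ~0.02: significant
```

The arithmetic is legitimate when the pooled studies really are comparable; the trouble comes in deciding which studies get pooled, as described next.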
The problem is that unlike the studies themselves -- which are usually "double-blinded," meaning that neither the patients nor the researchers know who got the real drug and who got the placebo -- the researchers doing the meta-analyses have the results of all the trials in front of them.
They are able to pick which ones to use. This can be difficult when each study has a different end point and different parameters. And even a researcher without an ulterior motive might have unconscious biases that would result in rejection of one study that had an undesired result and the use of another that had a desired result.
With those caveats, here's what this study, by Steven Nissen and Kathy Wolski, published in the Archives of Internal Medicine, concluded: "Eleven years after the introduction of rosiglitazone, the totality of randomized clinical trials continued to demonstrate increased risk for myocardial infarction [heart attack] although not for cardiovascular or all-cause mortality. The current findings suggest an unfavorable benefit to risk ratio for rosiglitazone."
In other words, in this meta-analysis of rosiglitazone (Avandia) studies, more patients had heart attacks when on rosiglitazone, but no more died.
Study number 2, published in JAMA, which was not a meta-analysis, concluded that "Compared with prescription of pioglitazone, prescription of rosiglitazone was associated with an increased risk of stroke, heart failure, and all-cause mortality and an increased risk of the composite of acute myocardial infarction (heart attack), stroke, heart failure, or all-cause mortality in patients 65 years or older." [Italics mine]
So this study was limited to people over 65, and the results were compared with those of patients getting a similar drug, pioglitazone, not with those of patients not getting any of this type of drug. It's always possible that some drug might have a positive effect on some outcome compared with no drug, but it might have a less positive effect than another drug, so compared with the second drug the results would appear to be negative.
Their conclusions also refer to stroke, heart failure (not heart attack), and all-cause mortality. They found no increase in heart attack rates, but then they lump heart attacks together with the other end points and say there was an increase in this composite result!
That's like saying eating beans causes a lot of gas, and a composite consisting of people who ate beans, crackers, and chicken soup had more gas than people who ate none of these things. Many people reading that statement would avoid crackers and chicken soup before important meetings, even though the crackers and chicken soup had no effect on gas production.
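Here's a toy version of that problem in numbers (counts I invented to mimic the pattern; they are not from the JAMA study): one component doesn't differ between arms at all, yet it rides along inside a "significant" composite.

```python
# Invented event counts per 5,000 patients in each arm.
from scipy.stats import chi2_contingency

N = 5000  # hypothetical patients per arm

def p_value(events_a, events_b):
    """Two-sided chi-square p-value comparing event counts between two arms."""
    table = [[events_a, N - events_a], [events_b, N - events_b]]
    return chi2_contingency(table)[1]

#                                 drug  comparator
print(f"heart attack:  p = {p_value(250, 248):.2f}")   # ~0.96: no difference
print(f"stroke:        p = {p_value(250, 150):.1e}")   # real increase
print(f"heart failure: p = {p_value(300, 200):.1e}")   # real increase
# Composite = anyone with at least one of the three events.
print(f"composite:     p = {p_value(758, 575):.1e}")   # also significant
```

The composite is genuinely significant, but it tells you nothing about heart attacks on their own: the beans are doing all the work.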
The conclusion in the Abstract of the article doesn't specifically mention that the drug had no significant effect on heart attack rates. It just mentions the significant effect on stroke, heart failure, and all-cause mortality and the effect on the composite index.
Some busy physicians quickly reading this composite conclusion in the Abstract (and many people don't take the time to read an entire article, assuming the Abstract summarizes it correctly) might conclude that heart attack rates were increased along with the other end points.
This article has other flaws. They mention in the Introduction that 7 previous studies showed that rosiglitazone increased heart attack rates, but they don't say 7 out of how many or increased compared with what. Seven studies out of 8 would be one thing; 7 studies out of 50 would be another.
The authors noted that most studies that showed an increase in heart attacks with rosiglitazone (again, they don't say increased compared to what) were done in younger patients (54 to 65 years old), whereas their study was in people older than 65 years.
This is another source of confusion for people trying to decide whether or not to use a drug. What helps or harms in one patient population might not do the same in another population.
Study number 1 seems to contradict the other two. It has not yet been published but was presented at the ADA meeting. According to this study, rosiglitazone had no effect on heart attack rates or mortality. But the researchers also used a composite outcome -- heart attack, mortality, and stroke -- and said rosiglitazone reduced this composite outcome.
But only stroke rates actually decreased, by 64%, whereas the "rates of heart attack and death on their own showed no significant difference between those who took rosiglitazone and those who did not." Once again, the composite outcome is confusing, and people may come to an erroneous conclusion.
This study was limited to patients with diabetes and existing cardiovascular disease. Some got revascularization for their cardiovascular disease. Some got insulin or metformin instead of rosiglitazone.
And the results reported at the ADA meeting were from a post-trial analysis of the results from the BARI-2D trial, which was not designed to test the safety of rosiglitazone. Hence the drug was not randomly assigned. Reanalyses of studies designed to test something else are somewhat questionable.
So is rosiglitazone safe to take? The evidence is not clear-cut. The FDA will soon meet to discuss the safety issue, presumably taking into account other studies in addition to the three discussed here.
But these three studies are a good example of the muddle we have to deal with when results of large clinical trials are published: confused statistics, biased authors with ties to drug companies, and different patient groups, comparisons, and end points.
I wonder how many bad drugs are on the market because of confusing clinical studies. So too, I wonder how many good drugs might have been dropped from the pipeline because of equally confusing clinical studies.
Evaluating risks vs benefits is not simple, and the best choice for a large population is not always the best choice for an individual patient. You might be allergic to a drug that helps most patients. Conversely, a drug that harms most patients might be wonderful for you.
All this is one reason that controlling our blood glucose with good food and exercise should always be the first choice. But this isn't always enough. Then we and our physicians have to evaluate which drugs will work best for us.
It is not a simple task.
Gretchen, you are not alone when it comes to statistics. I am happy you chose to write about this. Even the blogs posted today don't seem all that consistent. When are people going to wake up and realize that many of these studies only tell us what the company wants us to know or hear?
I know I mention studies, but I do this so that the reader can read what I read. Still, I do not like 99 percent of the studies or believe the conclusions put forth.
Even my cardiologist admitted that many of these studies are questionable. He said he was a drug rep before he went to medical school, so he knows the approaches the drug people can use.
Despite this, he supports a mainstream approach to heart disease, which is mostly based on these drug-company-sponsored trials.
Good stuff, Gretchen.
In a rational world, the burden of proof is on those who assert the safety of these drugs.
The statistical sleight-of-hand masquerading as proof tells you what you need to know about the ethics & intentions of the "researchers", and the drug companies they work for...
jack
Let's face it, Jack, everyone in the world wants to make a buck, and ethical people often finish last.
But it's sad when innocent patients who aren't able to research things themselves end up taking a dangerous drug.
We also have to remember that even "harmless" drugs can cause harm in some people. Better not to take any if we can help it, but I take a lot.
I remember when I was first Dx'd and didn't want to take metformin "because no one knows how it works." My doctor said, "That's true. But we do know what the side effects of high blood sugar are, and it's not a pretty picture."
We're always having to weigh risks vs benefits. Of course we should be doing the same with food too, but very few people do.
Here is a good discussion of study size and statistics: http://healthcorrelator.blogspot.com/2010/07/china-study-with-large-enough-sample.html
This is what we all need to know about Avandia; a thorough study. You did a fabulous job writing this article and showing the reality behind it. The FDA should also do complete research on Avandia to reveal the truth and the actual cause of this issue.