Here is the study abstract:
...
Thoughts, comments?
I can't get access to the full paper, but according to this the study showed the following results:
• Vitamin E led to a 22 percent reduction in the risk of heart attack.
• Vitamin E led to a 27 percent lower risk of stroke.
• Vitamin E led to a 9 percent lower risk of death from cardiovascular disease.
• Vitamin E led to a 23 percent lower combined risk of heart attack, stroke, and cardiovascular-related death.
• Vitamin E and vitamin C together lowered the risk of stroke by 31 percent.
These results seem significant, and totally at odds with the authors' conclusions.
According to the article, the reason this differs so much from the reported results is that the authors, for some reason, included in their results the people who didn't consistently take the vitamins!
Why would they do this? It makes no sense to me.
Can someone who can get the entire article verify this?
As soon as I saw the study abstract, I copy/pasted it here. After reading what you posted from that page, I decided to examine the full text. I just got back from my almost-daily workout and I've got an endorphin rush going, so what the hey. It's best to be as careful as possible, and I try not to come to any conclusions myself; that's what meta-analyses are for.
In fact, health policy now is not determined by one study or another, but rather by particular systems of meta-analysis. It's far too easy for biases to play a role in the findings of any single (randomized) study (let's say a pharmaceutical firm funded the study, the researchers did a sloppy job, made an error in transcription, there was an undisclosed or hidden conflict of interest, etc.).
If you really want to learn how modern health care decisions are made these days, please read Systems to Rate the Strength of Scientific Evidence: Evidence Report/Technology Assessment: Number 47. I will copy/paste the introduction for you to evaluate:
Introduction
Health care decisions are increasingly being made on research-based evidence rather than on expert opinion or clinical experience alone. Systematic reviews represent a rigorous method of compiling scientific evidence to answer questions regarding health care issues of treatment, diagnosis, or preventive services. Traditional opinion-based narrative reviews and systematic reviews differ in several ways. Systematic reviews (and evidence-based technology assessments) attempt to minimize bias by the comprehensiveness and reproducibility of the search for and selection of articles for review. They also typically assess the methodologic quality of the included studies—i.e., how well the study was designed, conducted, and analyzed—and evaluate the overall strength of that body of evidence. Thus, systematic reviews and technology assessments increasingly form the basis for making individual and policy-level health care decisions.
Throughout the 1990s and into the 21st century, the Agency for Healthcare Research and Quality (AHRQ) has been the foremost Federal agency providing research support and policy guidance in health services research. In this role, it gives particular emphasis to quality of care, clinical practice guidelines, and evidence-based practice—for instance through its Evidence-based Practice Center (EPC) program. Through this program and a group of 12 EPCs in North America, AHRQ seeks to advance the field's understanding of how best to ensure that reviews of the clinical or related literature are scientifically and clinically robust.
The Healthcare Research and Quality Act of 1999, Part B, Title IX, Section 911(a) mandates that AHRQ, in collaboration with experts from the public and private sectors, identify methods or systems to assess health care research results, particularly "methods or systems to rate the strength of the scientific evidence underlying health care practice, recommendations in the research literature, and technology assessments." AHRQ also is directed to make such methods or systems widely available.
AHRQ commissioned the Research Triangle Institute—University of North Carolina EPC to undertake a study to produce the required report, drawing on earlier work from the RTI-UNC EPC in this area.1 The study also advances AHRQ's mission to support research that will improve the outcomes and quality of health care through research and dissemination of research results to all interested parties in the public and private sectors both in the United States and elsewhere.
The overarching goals of this project were to describe systems to rate the strength of scientific evidence, including evaluating the quality of individual articles that make up a body of evidence on a specific scientific question in health care, and to provide some guidance as to "best practices" in this field today. Critical to this discussion is the definition of quality. "Methodologic quality" has been defined as "the extent to which all aspects of a study's design and conduct can be shown to protect against systematic bias, nonsystematic bias, and inferential error."(Ref. 1, p. 472) For purposes of this study, the authors hold quality to be the extent to which a study's design, conduct, and analysis have minimized selection, measurement, and confounding biases, with their assessment of study quality systems reflecting this definition.
The authors do acknowledge that quality varies depending on the instrument used for its measurement. In a study using 25 different scales to assess the quality of 17 trials comparing low molecular weight heparin with standard heparin to prevent post-operative thrombosis, Juni and colleagues reported that studies considered to be of high quality using one scale were deemed low quality on another scale.2 Consequently, when using study quality as an inclusion criterion for meta-analyses, summary relative risks for thrombosis depended on which scale was used to assess quality. The end result is that variable quality in efficacy or effectiveness studies may lead to conflicting results that affect analysts' or decisionmakers' confidence in findings from systematic reviews or technology assessments.
The remainder of this summary briefly describes the methods used to accomplish these goals and provides the results of the authors' analysis of relevant systems and instruments identified through literature searches and other sources. They present a selected set of systems that they believe are ones that clinicians, policymakers, and researchers can use with reasonable confidence for these purposes, giving particular attention to systematic reviews, randomized controlled trials (RCTs), observational studies, and studies of diagnostic tests. Finally, they discuss the limitations of this work and of evaluating the strength of the practice evidence for systematic reviews and technology assessments and offer suggestions for future research. The authors do not examine issues related to clinical practice guideline development or assigning grades or ratings to formal guideline recommendations.
I was telling my friend George (from MySpace) about these systems, and I tried to emphasize the following issue with him, as I will now here -- that these systems aren't determined by one dude or dudette sitting up in his or her office arbitrarily deciding how to "run" things. It's teams at 13 Evidence-based Practice Centers. Here's some introductory information regarding these centers and what they do:
Evidence-based Practice Centers
Synthesizing scientific evidence to improve quality and effectiveness in health care
Overview
In 1997 the Agency for Health Care Policy and Research (AHCPR), now known as the Agency for Healthcare Research and Quality (AHRQ), launched its initiative to promote evidence-based practice in everyday care through establishment of 12 Evidence-based Practice Centers (EPCs). The EPCs develop evidence reports and technology assessments on topics relevant to clinical, social science/behavioral, economic, and other health care organization and delivery issues—specifically those that are common, expensive, and/or significant for the Medicare and Medicaid populations. With this program, AHRQ became a "science partner" with private and public organizations in their efforts to improve the quality, effectiveness, and appropriateness of health care by synthesizing the evidence and facilitating the translation of evidence-based research findings. Topics are nominated by non-federal partners such as professional societies, health plans, insurers, employers, and patient groups. Go to http://www.ahrq.gov...c/epctopicn.htm for topic nomination procedures. Federal partners often request evidence reports and should contact the EPC Program Director for more information.
For details on the EPC program for current and potential partner organizations, go to the EPC Partner's Guide.
Centers
In June 2002, AHRQ announced the second award of 5-year contracts for EPC-II to 13 Evidence-based Practice Centers to continue and expand the work performed by the original group of EPCs. Most of the second group of EPCs were part of the initial set. However, EPC-II brings in three new institutions to the program—the Universities of Alberta, Minnesota, and Ottawa—while MetaWorks® and the University of Texas-San Antonio have concluded their respective contracts as two of the original EPCs.
Three of the EPCs specialize in conducting technology assessments for the Centers for Medicare & Medicaid Services (CMS). Go to: http://www.ahrq.gov/clinic/techix.htm for more information.
One EPC concentrates on supporting the work of the U.S. Preventive Services Task Force (USPSTF). Go to: http://www.ahrq.gov...ic/uspstfix.htm for more information.
The current EPCs are located at:
• Blue Cross and Blue Shield Association, Technology Evaluation Center
• Duke University1
• ECRI1
• Johns Hopkins University
• McMaster University
• Oregon Health & Science University2
• RTI International—University of North Carolina
• Southern California
• Stanford University—University of California, San Francisco
• Tufts University—New England Medical Center1
• University of Alberta, Edmonton, Alberta, Canada
• University of Minnesota, Minneapolis, MN
• University of Ottawa, Ottawa, Canada
1 EPCs that focus on technology assessments for CMS.
2 EPC that focuses on evidence reports for the USPSTF.
For contacts and additional information about the current participating EPCs, go to: http://www.ahrq.gov...c/epcenters.htm.
One or two studies, no matter how well they may be conducted, are merely one or two data points in the data pool.
Anyways, back onto the topic at hand -- a report claimed:
• Vitamin E led to a 22 percent reduction in the risk of heart attack.
• Vitamin E led to a 27 percent lower risk of stroke.
• Vitamin E led to a 9 percent lower risk of death from cardiovascular disease.
• Vitamin E led to a 23 percent lower combined risk of heart attack, stroke, and cardiovascular-related death.
• Vitamin E and vitamin C together lowered the risk of stroke by 31 percent.
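Just to keep the arithmetic straight before we dig in: a "percent reduction in risk" like the ones above is simply 1 minus the relative risk (RR). A minimal sketch restating the report's claims (the numbers are the report's, the conversion is just arithmetic):

```python
# A reported "X percent reduction in risk" corresponds to a
# relative risk (RR) of 1 - X/100.
claims = {
    "heart attack (vitamin E)": 22,
    "stroke (vitamin E)": 27,
    "CVD death (vitamin E)": 9,
    "combined end point (vitamin E)": 23,
    "stroke (vitamins E + C)": 31,
}

for name, pct in claims.items():
    rr = 1 - pct / 100
    print(f"{name}: {pct}% reduction -> RR = {rr:.2f}")
```

Note that the last one works out to RR = 0.69, which is exactly the relative risk the full text reports for the vitamin E + ascorbic acid stroke interaction, so at least that claim traces cleanly back to the paper.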
Actually (I've got the full paper in front of me now):
No differences were seen in the primary end point by randomized vitamin E assignment (RR, 0.94; 95% CI, 0.85-1.04 [P = .23]) (Table 2 and Figure 2), with no significant variation in the relative risk over time. We found a nonsignificant 16% reduction in total stroke, with a 21% reduction in ischemic stroke (P = .06) and an increase in hemorrhagic stroke based on small numbers. There was an overall 10% reduction in the combination of MI, stroke, and CVD death, with a nonsignificant decrease (P = .08) in benefit over time. [b]No difference in total mortality by vitamin E group was found.[/b]
So it seems these are non-significant reductions (i.e., they could plausibly be due to chance). However, the bottom line is in bold above -- stroke is often what kills you -- and no difference in total mortality was found.
However, you also may note:
Censoring participants on noncompliance led to a significant 13% reduction in the primary end point (RR, 0.87; 95% CI, 0.76-0.99 [P = .04]).
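A quick way to see why one of these results counts as significant and the other doesn't: a 95% confidence interval that includes 1.0 (no effect) corresponds to P > .05. A minimal sketch using the two RRs quoted above:

```python
# A relative risk is non-significant at the 5% level when its
# 95% confidence interval includes 1.0 (the "no effect" value).
# The (RR, CI low, CI high) values below are from the quoted passage.
results = {
    "vitamin E, primary end point": (0.94, 0.85, 1.04),
    "vitamin E, censored on noncompliance": (0.87, 0.76, 0.99),
}

for name, (rr, lo, hi) in results.items():
    significant = not (lo <= 1.0 <= hi)
    label = "significant" if significant else "non-significant"
    print(f"{name}: RR={rr}, 95% CI {lo}-{hi} -> {label}")
```

The first interval spans 1.0 (hence P = .23), while the censored one tops out at 0.99, just under 1.0 (hence P = .04).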
The author of that news report seemed to leave out the 14% increase in CVD mortality from beta carotene in this study (an interesting selective focus), though again there was no difference in total mortality:
BETA CAROTENE
There was a nonsignificant 14% increase in CVD mortality in the active group, with a significant decline over time in the effect on CVD deaths (P = .04) but no difference in total mortality. When participants were censored on noncompliance, the effect on the primary end point remained nonsignificant (RR for major vascular disease, 1.09; 95% CI, 0.96-1.24 [P = .18]), but an increase in CVD mortality appeared to emerge (RR, 1.48; 95% CI, 1.08-2.02 [P = .02]).
Once again, when censored on noncompliance (i.e., as I understand it, excluding follow-up after participants stopped taking their assigned pills), an INCREASE in CVD mortality appeared to emerge.
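To illustrate what censoring on noncompliance can do to a relative risk, here's a toy sketch with entirely made-up numbers (not from the study); the point is only that dropping noncompliant participants changes both the numerator and denominator of the treated arm, which can push a null result toward significance in either direction:

```python
# Hypothetical illustration (made-up numbers, NOT from the study):
# intention-to-treat (ITT) keeps everyone as randomized; censoring on
# noncompliance drops participants who stopped taking their pills.

def relative_risk(events_treat, n_treat, events_ctrl, n_ctrl):
    """RR = risk in the treated group / risk in the control group."""
    return (events_treat / n_treat) / (events_ctrl / n_ctrl)

# ITT: all 1000 randomized participants per arm counted
itt_rr = relative_risk(90, 1000, 100, 1000)

# Censored: 200 noncompliers (who happened to have more events)
# removed from the treated arm
censored_rr = relative_risk(60, 800, 100, 1000)

print(f"ITT RR:      {itt_rr:.2f}")       # 0.90
print(f"Censored RR: {censored_rr:.2f}")  # 0.75
```

This is also why ITT is the conventional primary analysis: randomization only guarantees comparable groups if you keep everyone where they were randomized, and compliers tend to differ from noncompliers in ways that have nothing to do with the pill.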
Finally:
COMBINATIONS OF ANTIOXIDANTS
There were no significant 2- or 3-way interactions among the agents for the primary end point. The effects for each of the combinations of active agents compared with the group with all 3 placebos is shown in Figure 3. There were also no interactions for the secondary end points of MI or cardiovascular death. For stroke, we found a significant 2-way interaction between ascorbic acid and vitamin E (P = .03). Those in the active groups for both agents experienced fewer strokes compared with those in the placebo group for both agents (RR, 0.69; 95% CI, 0.49-0.98 [P = .04]) (Figure 3).
I might suggest a remedial lesson on what primary end points mean in research; I could probably use one myself.
Please correct me if I am wrong. I am not a doctor.
However, let's take a quick look at the comment from the full text:
In this large-scale randomized trial among women at high risk for CVD, we found no overall effects of vitamin E, ascorbic acid, or beta carotene on the primary end point of major vascular disease over a long-term follow-up of more than 9 years. These null results are consistent with the majority of trials of these antioxidants in both primary and secondary prevention. When combinations of agents were examined, there were no significant interactions, except for a possible reduction in stroke among those taking both active ascorbic acid and active vitamin E. In contrast to a recent meta-analysis of antioxidant supplements,16 we found no detrimental effects of any of these agents on total or CVD mortality.
So it seems the only significant finding the full text adds beyond what we already knew from the abstract is "a possible reduction in stroke among those taking both active ascorbic acid and active vitamin E."
Further thoughts or comments?
Take care.
Edited by adam_kamil, 15 August 2007 - 07:47 AM.