EyeNet Magazine

Clinical Update: Comprehensive
Clinical Trials: How Significant Are Their Results?
By Richard Trubo, Contributing Writer

The clinical trial, an engine of medical progress since the 19th century, is often considered the authority of record in medicine. In fact, many physicians feel comfortable using the findings of trials as a primary guide in their practice decisions.

The modern clinical trial was born in 1948, when a study of streptomycin for the treatment of tuberculosis became the first to control for confounding variables by both randomizing patients to different study arms and masking the arms so that neither patients nor researchers knew who received which treatment until the study was completed. This design has become the gold standard in medical research and is intended to produce results that are verifiable and “statistically significant.”

But statistical significance may not be everything, if you ask George L. Spaeth, MD, director of the glaucoma service at Wills Eye Hospital and professor of ophthalmology at Jefferson Medical College. He feels that statistics can sometimes be misleading. His analyses of concepts supporting clinical trials and statistical significance have been the basis of frequent lectures at ophthalmology meetings.

What Counts, Who Counts
On the one hand, Dr. Spaeth doesn’t discount the value of controlled studies. “They’re enormously important in providing critical information,” he said. But he urges ophthalmologists to look carefully at “what statistics really mean and how to apply them. Any result from a clinical trial—no matter how conclusive it may seem—always has to be interpreted in terms of common sense and whether its findings are appropriate for a particular patient.”

In general, the results of a randomized, controlled clinical trial are considered statistically significant if the differences observed between study populations could have occurred by chance alone less than one time in 20 (expressed as a P value of < 0.05). In other words, as important as a statistical observation may be, “it only tells us the likelihood that the observed finding was not due to chance,” said Dr. Spaeth. “It is an indication that there’s something different from normal there, and that it may or may not affect a person’s health. That’s quite different from something that is clinically significant and, in fact, does have an effect on a patient’s well-being.”
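The “one time in 20” convention can be illustrated with a simulation. The sketch below (a minimal stdlib example, not from the article) runs many hypothetical two-arm trials in which there is no real treatment effect at all, tests each with a simple two-sided z-test assuming a known spread, and counts how often P falls below 0.05. By construction, roughly 5 percent of these no-effect trials still come out “statistically significant.”

```python
import math
import random

random.seed(1)

def two_sample_p(a, b, sigma=1.0):
    # Two-sided z-test assuming a known standard deviation (a deliberate
    # simplification for illustration; real trials use more careful tests).
    n, m = len(a), len(b)
    z = (sum(a) / n - sum(b) / m) / (sigma * math.sqrt(1 / n + 1 / m))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

trials = 2000
false_positives = 0
for _ in range(trials):
    # Both "arms" are drawn from the same distribution: no true effect.
    arm_a = [random.gauss(0, 1) for _ in range(50)]
    arm_b = [random.gauss(0, 1) for _ in range(50)]
    if two_sample_p(arm_a, arm_b) < 0.05:
        false_positives += 1

rate = false_positives / trials
print(round(rate, 3))  # close to 0.05, i.e., about one trial in 20
```

This is exactly Dr. Spaeth’s point: a significant P value bounds the role of chance, but it says nothing by itself about whether the difference matters to a patient.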

According to Dr. Spaeth, when clinicians look at the findings of a trial they should ask if the numbers are clinically, and not just statistically, meaningful.

Historical perspective. By some accounts, clinical trials have come to ophthalmology relatively late compared with cardiology and many other medical specialties. As recently as 1998, a study of leading journals by William C. Steinmann, MD, MSc, found that the proportion of randomized clinical trials published in the three leading ophthalmologic journals was 50 percent lower than in the American Journal of Cardiology.¹ In more recent years, the emphasis on trials in ophthalmology has increased dramatically.

Not perfect? “Today, when you say that something is supported by a randomized clinical trial, everyone assumes that it has to be true,” said Kuldev Singh, MD, MPH, professor of ophthalmology and director of the glaucoma service at Stanford University. “While randomized trials have the greatest potential to educate physicians and improve medical care, they also have the potential to mislead both physicians and patients,” he said.

Clinical trials can have flaws that are difficult for even the sophisticated reader to detect, said Dr. Singh. The design, conduct and interpretation of these studies are subject to both random and systematic errors (the latter are generally referred to as “bias”). For reasons like this, Dr. Singh advises approaching the medical literature with “healthy skepticism.”

At the same time, the problem with many journal articles is not their statistical analyses, argued Dr. Steinmann, professor of medicine and director of the Center for Clinical Effectiveness and Prevention at Tulane University. “The problems are with conclusions that aren’t supported, or a subgroup analysis that doesn’t make sense. Or it’s emphasizing means as opposed to proportions. There are many factors to consider when evaluating these trials.”

No Patient Is a Population
When Dr. Spaeth delivers his lectures on trials and statistical significance, his most important message is that doctors treat individuals, not populations. “People are highly variable,” he said. Nevertheless, Dr. Spaeth noted, many of his colleagues rely on population statistics and apply them to individuals. 

To make his point, he said, “even when a particular patient has an intraocular pressure or a cup-disc ratio that is not within the average range, it doesn’t mean that the person’s condition is going to worsen or [that he or she] even needs treatment,” although clinicians may be inclined to treat these patients.

To Treat or Not to Treat?
Caption: A cup-to-disc ratio may be outside the normal range but still not warrant treatment.

Risk can grow larger, yet remain low. Dr. Spaeth often reviews data indicating that a patient whose IOP is 25 mmHg has about a fivefold greater risk of eventually developing glaucoma damage than an individual with an IOP of 10 mmHg. However, he stressed, the same studies show that the overall chance of damage in the 25 mmHg group is still only 5 percent.

“So if you’re treating everyone whose pressure is 25 because of their greater risk, you’re still going to unnecessarily treat 95 percent of this patient population,” said Dr. Spaeth. “You’d have to treat about 20 patients with pressures of 25 in order to benefit a single patient.
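Dr. Spaeth’s “about 20 patients” figure is the number needed to treat (NNT), which follows directly from the 5 percent risk he cites, under the best-case assumption (implicit in the quote, not stated in the article) that treatment prevents every case of damage:

```python
# Arithmetic behind the quote: assuming a 5% untreated risk of glaucoma
# damage at IOP 25 mmHg, and the hypothetical best case that treatment
# prevents all such damage.
risk_untreated = 0.05  # overall chance of damage in the 25 mmHg group
risk_treated = 0.0     # hypothetical: treatment prevents every case

arr = risk_untreated - risk_treated  # absolute risk reduction
nnt = 1 / arr                        # number needed to treat

print(int(nnt))            # 20 patients treated per patient who benefits
print(1 - risk_untreated)  # 0.95: fraction treated who would never have been damaged
```

If treatment prevented only some cases rather than all, the absolute risk reduction would shrink and the NNT would rise above 20, making the case for blanket treatment even weaker.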

“We live in a culture in which doctors are expected to be all-knowledgeable and achieve a good result every time,” said Dr. Spaeth. “Patients don’t want their medical condition to worsen. And we’re sued if we miss things. So there’s every reason for us to choose treatment.” Even so, he stresses, in patients whose IOP is 25, the consequences of not treating are quite benign. At some later point, if field loss, cupping of the optic nerve or other early signs of glaucoma are detected, then treatment can be started at that time.

Nuance Among the Numbers
When the findings of a randomized trial do not reach statistical significance, investigators often blame an insufficient sample size or inadequate statistical power. But even with a large sample size, statistically significant conclusions can still be clinically insignificant. “So, for example, if 10,000 patients are enrolled in a particular trial, just a small difference in an outcome variable may turn out to be significant, even though it may not be clinically meaningful,” said Dr. Spaeth.

Dr. Singh agrees. Particularly with very large trials, “statistical significance can overstate the case,” he said. “Conversely, in small studies, statistical significance may not be achieved, even though clinically important differences may exist.”
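The large-trial side of this point can also be simulated. In the hypothetical sketch below (illustrative numbers, not from any cited study), two arms of 10,000 patients differ in mean pressure by only 0.2 mmHg, a difference far too small to matter clinically, yet a simple z-test on the full sample flags it as highly significant:

```python
import math
import random

random.seed(2)

def p_two_sided(z):
    # Two-sided p-value for a standard normal z statistic
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

n = 10_000
sigma = 3.0  # assumed within-arm spread, in mmHg
# Hypothetical trial: the true between-arm difference is only 0.2 mmHg.
arm_a = [random.gauss(15.0, sigma) for _ in range(n)]
arm_b = [random.gauss(15.2, sigma) for _ in range(n)]

diff = sum(arm_b) / n - sum(arm_a) / n
z = diff / (sigma * math.sqrt(2 / n))

print(round(diff, 2))           # a fraction of a mmHg
print(p_two_sided(z) < 0.05)    # True: statistically significant anyway
```

With n this large, the standard error shrinks to a few hundredths of a mmHg, so even a clinically trivial difference clears the P < 0.05 bar; a small trial with the same true effect would likely miss significance entirely, which is Dr. Singh’s converse point.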

Whose patients participated? Before making clinical decisions based on the results of trials, physicians also need to weigh the makeup of the study population itself. Were the subjects in the trial representative of patients who commonly present in your own practice, so that the results are generalizable to your own patient population? 

Do your homework. Some clinicians believe that it’s important to look beyond clinical trials and consider other published reports. Many times, said Dr. Steinmann, “I prefer a good case series over a bad randomized, controlled trial. Generally, there’s a lot of truth in case series. In fact, the idea that randomized clinical trials are the only answer would negate 90 percent of the recommendations in ophthalmology.”

Dr. Singh echoed some of those sentiments in an editorial in the Journal of Glaucoma. “Although many practitioners believe that our glaucoma practice patterns in the 21st century are more ‘precise’ or ‘evidence based’ than those of our predecessors, the degree to which our ability to care for patients has been improved by the results and interpretations of recent studies is open to debate,” he wrote.²

The literature doesn’t write itself. Some physicians may believe that once a trial is completed, there is only one way to present the data. But, in fact, there are a number of ways to look at large sets of data. It’s helpful for readers to know that the preconceived notions and hopes of investigators to find certain outcomes may be one potential influence on how the data are presented, said Dr. Steinmann.

When Dr. Steinmann surveyed the editors of 120 medical journals, including the major ophthalmology journals, and asked them to rank the characteristics they were seeking in their editorial board members, expertise in subject content was at the top of the list, and expertise in statistics and research methodology was at the bottom. “When we asked for a ranking of the criteria for selecting manuscript reviewers, the results were similar,” he said.

Dr. Steinmann believes that in physician training, more attention needs to be paid to developing skills for critically reviewing the medical literature. “I’d like to see critical appraisal added to the curriculum,” he said. “Are you able to look at a study, understand the strengths and weaknesses of its methodology, and interpret the evidence? I’m much more concerned about those kinds of factors than the statistics alone.”

______________________________
1 Steinmann, W. C. Yearbook of Ophthalmology (St. Louis: C. V. Mosby, 2001).
2 J Glaucoma 2004;13:87–89. 

Variables, Design, Analysis: A Topical Discussion of Trials for AMD
At the Annual Meeting in Chicago, Edgar L. Thomas, MD, will discuss the challenges of translating study results into clinical practice in “How to Critically Evaluate Clinical Trials for Age-Related Macular Degeneration: Comparisons, Contrasts, and Pitfalls.”

Dr. Thomas’ presentation will take place Monday, October 17, from 4:30 to 5:30 p.m. at McCormick Place. Onsite event fee is $35.
