Winter 2023 Editorial - Mistakes


    Occasionally, a student or resident asks me about the sorts of mistakes that physicians make.

    My answers vary with the context. However, I am apt to mention that the most common mistakes in science usually relate to ascertainment bias. But what are the most common mistakes in medicine? I’ve pondered the question and thought that this forum of senior ophthalmologists would be an interesting place to consider the most common serious mistakes physicians make with their patients.

    For the purposes of argument, I’m going to propose that we exclude the mistakes we might make in the OR. Nor is it so interesting to consider our common failings in not making the earliest diagnosis of an insidious disease (like glaucoma) or in giving steroids to steroid responders. I want to take a broader, more philosophical view in this article by looking for system failures. I think the biggest mistake we commonly make is to order a test that wasn’t warranted.

    In fact, I have written about this before. For example, I published an article, “Neuro-ophthalmology Safer Than MRI” (Sadun AA, Chu ER, Boisvert CJ. Ophthalmology. 2013 Apr;120(4):879). But that article was rather specific and technical and not sufficiently philosophical.

    I also published a short essay for laymen online on Quora. That article is basic and too unsophisticated for our group of senior physicians. So, I’ll attempt a more sophisticated answer in this forum.

    Several years ago, the Institute of Medicine of the National Academy of Sciences (a nonpartisan group of world-class physician-scientists) recommended not doing mammograms for women under the age of 50, as long as they did not have a family history or other indication of being at high risk for breast cancer. You may remember that the press and politicians went crazy. I heard a congresswoman (over 50 herself) say this was a cold attempt by the government to save itself money at the expense of citizens, and that if it were her family, she’d insist on getting the mammogram. That was either insincere or idiotic. Such a test in a low-risk patient would have put her at statistically greater risk and done her a disservice.

    Similar things have happened with PSA screening for prostate cancer. Even its early advocates are just beginning to understand the problem. There are important roles for mammograms and PSA testing, but not in the low-risk patient. By the way, would you get a PSA on a woman or a mammogram on a man? Maybe our foolish congresswoman would have. Context matters, and not just in such extreme cases.

    Simply stated, more testing is not the panacea that has been advertised. The likelihood of a positive test being meaningful requires us to estimate and then integrate the pretest probabilities in a Bayesian analysis (a statistical model named for the English mathematician Thomas Bayes). I could try to show you that Bayesian analysis with a bunch of numbers, but it would be ugly. It hasn’t worked out well for me at several professional presentations. So here is my best attempt at a qualitative answer to a quantitative problem (which I admit is not ideal).

    When you do a test, the results may not be accurate. There is a chance it could give you a false negative (you have cancer but the test doesn’t find it), a false positive (you have no cancer but the test says you do) or a hyper-diagnosis (something is there and it’s real, but it’s irrelevant). Most laboratory tests provide their sensitivities and specificities so you know the false negative and false positive rates, but those numbers apply only to a population comparable to the one previously tested. The classical studies used to calculate these numbers were done against a gold standard in which there were many real cases of the disease. So, if you want to assume their numbers, you need to match their population of patients. This means you need a reasonable index of suspicion of the disease, comparable to that of the people on whom the test was validated. But if you test people without a good pretest probability of the disease, you will be misled.
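    For readers who do want the numbers, here is a minimal sketch of the Bayesian arithmetic (the function and the illustrative test characteristics are mine, not drawn from any particular assay):

```python
# Bayes' rule: probability of disease given a positive test result.
def post_test_probability(pretest, sensitivity, specificity):
    true_pos = sensitivity * pretest               # diseased and test-positive
    false_pos = (1 - specificity) * (1 - pretest)  # healthy but test-positive
    return true_pos / (true_pos + false_pos)

# The same hypothetical test (90% sensitive, 95% specific) applied to
# patients with very different pretest probabilities:
for pretest in (0.50, 0.10, 0.001):
    post = post_test_probability(pretest, 0.90, 0.95)
    print(f"pretest {pretest:6.1%} -> post-test {post:.1%}")
# pretest  50.0% -> post-test 94.7%
# pretest  10.0% -> post-test 66.7%
# pretest   0.1% -> post-test 1.8%
```

    The identical positive result is nearly diagnostic in the high-suspicion patient and nearly meaningless in the low-suspicion one; nothing about the test changed, only the population it was applied to.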

    Here’s an example: Pregnancy tests have a specificity of 98% when done in women of childbearing age. That’s pretty good. If you are a 30-year-old woman who missed her period and the test comes back positive, start buying baby clothes. But if you gave a pregnancy test to 100,000 men, it would claim that many were pregnant. It would not be so funny if you used the test to decide on doing something risky, like surgery. But that’s what happens when we screen for cancers: if you test enough people without enough suspicion of cancer, you’ll end up doing a lot of surgery without merit. And all surgeries carry some risk.
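    The male pregnancy test is the extreme case of the same arithmetic, and it takes only a line or two to check (the 98% specificity is from the example above; everything else follows from it):

```python
# Screening 100,000 men: the prevalence of pregnancy is zero, so every
# positive result is a false positive.
n_tested = 100_000
specificity = 0.98
false_positives = n_tested * (1 - specificity)
print(f"{false_positives:.0f} 'pregnant' men")  # 2000 'pregnant' men
```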

    Sure, a breast biopsy isn’t that dangerous. Let’s say it goes fine. But the pathology reports also have errors. So, there is a small probability that a benign breast lump ends up being reported as malignant. What follows then will be a big surgery and often awful chemotherapy. Now we’re talking serious morbidities and even a small chance of mortality. But if you started with low enough pretest probabilities, the odds of such mortality and morbidity would actually be higher than the chances that you missed some early cancers by not doing enough testing. The answers are in the math. People have done this number crunching.
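    As a toy version of that number crunching, consider the following cohort calculation. Every number in it is invented purely to show the shape of the argument; none of it is clinical data:

```python
# Screen 100,000 truly low-risk people and follow the cascade.
cohort = 100_000
pretest = 0.0001           # assumed pretest probability of cancer
sens, spec = 0.90, 0.95    # assumed screening test characteristics
path_error = 0.01          # assumed rate of benign lumps read as malignant

cancers = cohort * pretest                    # 10 real cancers
false_pos = (cohort - cancers) * (1 - spec)   # ~5,000 false alarms
unnecessary_surgery = false_pos * path_error  # ~50 big surgeries on healthy people
caught_early = cancers * sens                 # 9 cancers found sooner

print(f"{unnecessary_surgery:.0f} healthy people harmed vs "
      f"{caught_early:.0f} cancers caught earlier")
# 50 healthy people harmed vs 9 cancers caught earlier
```

    At a low enough pretest probability, the people swept into unnecessary major surgery outnumber the cancers the screening could ever have caught.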

    The Institute of Medicine did it and recommended when mammography should be done (absent family history, starting at age 50), and it had nothing to do with saving Medicare dollars. But the politicians and the press and the talking heads never did get it. Most patients didn’t get it. Most doctors today don’t get it. And the testimonials only work one way (the error of ascertainment bias). Whenever someone picks up something early on a scan, they tell the patient that they’ve dodged a bullet with early diagnosis. Those who got unnecessary surgery never learn that they took undue risks. We never hear testimonials from the cases of hyper-diagnosis leading to unnecessary surgery leading to death and morbidity. These are chalked up as the unfortunate consequences of breast cancer.

    This frustrates me since so many of my referring physicians don’t get it. Commonly, I inherit patients with abnormal MRIs that I’m asked to work up. Only the MRIs should never have been done in the first place. Maybe the patient had a tension headache, or a slip and fall, or just talked the physician into ordering the scan. The patient and/or doctor said, “Better to be safe than sorry,” or “You can’t be too careful.” But that’s the point. In medicine you CAN be too careful. If the patient just had a headache and no neurological indications for the MRI, the resulting imaging can’t be compared to the images in textbooks showing MRI pathology, as those were obtained from patients who had symptoms. Apples and oranges. I’m often stuck explaining to the patient that what was found on the MRI is real but is not the cause of their problem. It may have been there all their life, and so it probably shouldn’t be removed. That’s a very hard sell. To patients. And to doctors.

    There’s another problem with screening tests called “lead-time bias.” Suppose we compared the death rate from prostate cancer in the U.S. vs. the U.K. (with its National Health Service). It turns out that the rates are almost identical. But if we compare the five-year survival statistics between the two countries, we are astonished to discover that five-year survival is about 82% in the U.S. and only 44% in the U.K. How is that possible? The answer is that the PSA screening widely used in the U.S. leads to earlier diagnosis but does NOT delay the date of death. The first five years after diagnosis in the U.S. simply happen earlier in the course of the disease. All that screening did not improve the main outcome: deaths averted or delayed. People with prostate cancer in the U.S. and the U.K. died at the same ages despite the big differences in the use of PSA. That’s lead-time bias.
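    A toy simulation makes lead-time bias concrete (the ages below are invented for illustration):

```python
# The same four patients, the same deaths; only the date of diagnosis moves.
patients = [
    # (age at screening diagnosis, age at symptomatic diagnosis, age at death)
    (66, 72, 75),
    (68, 73, 76),
    (70, 76, 78),
    (64, 71, 73),
]

def five_year_survival(dx_index):
    survivors = sum(1 for p in patients if p[2] - p[dx_index] >= 5)
    return survivors / len(patients)

print(f"screened:    {five_year_survival(0):.0%} five-year survival")
print(f"symptomatic: {five_year_survival(1):.0%} five-year survival")
# screened:    100% five-year survival
# symptomatic: 0% five-year survival
# ...yet every patient dies at exactly the same age either way.
```

    But in the present discussion, the focus is on the issue of making a diagnosis based on inappropriate testing or imaging.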

    As mentioned above, I published an article about just such a case. The patient had a brief episode of diplopia that in my hands was easily diagnosed as due to a decompensated fourth nerve palsy. Only the neurologist had seen the patient first and misdiagnosed her as having a third nerve palsy, so he ordered an MRI that revealed what appeared to be a small posterior communicating artery aneurysm. The patient was scheduled to have an emergency craniotomy.

    When I reversed that recommendation, as the MRI finding was clearly not associated with any of the patient’s symptoms, the neurosurgeon told me that by not doing immediate surgery for what he thought was an aneurysm, “you just killed your patient.” That was a horrific thing to hear. In other words, I had decided that her MRI showed an infundibulum, present since her birth, and not an aneurysm, though the two look identical on MRI. When I finally published the paper, it was 25 years later and she was still fine despite not having had the supposedly critical surgery. Explaining this to the patient was a very difficult conversation, although to her credit, she understood it.

    On this subject, I’m resigned to the fact that some of my readers won’t get it. They have absolute faith in tests like MRIs. I say that they are praying to a false icon, assuming that an image can’t be wrong because it shows a structural lesion. But it’s the dysfunction that matters more than the structure. I often encounter medical students who are astonished that I wouldn’t want to do the test. Why am I emulating the ostrich that buries its head in the sand? I grant that it’s a subtle argument to make. Most modern tests are fantastic when used judiciously and in context, but dangerous when used out of context and too indiscriminately. Even in this era of fantastic imaging and other laboratory testing, I teach that symptoms trump signs, and signs trump tests. But I suspect that this is a vanishing perspective. And my attitude will soon rest in peace once artificial intelligence (AI) plays a serious role in managing patient workups.

    For those interested in sharing these explanations with patients, especially regarding lead-time bias, I recommend a video by British cardiologist Dr. Rohin Francis, “The Epidemic of Fake Disease,” which does a very good job with these difficult concepts.