
We’ve all experienced it. We get our car repaired, and the service manager says, “Please recommend us on Yelp.” How many times this year have you been contacted by an ophthalmology program asking you to “vote for us” on Doximity or U.S. News and World Report? Are most of us any better at ranking residency programs or departments of ophthalmology than we are at assessing car repair?
Many physicians complain that online reviews of their own professional services are unrelated to quality and that the movement to “transparency” is actually enticing patients to make health care decisions based on subjective and often marginally relevant factors. This plays out on a larger scale, too; for example, resident applicants use the Doximity Navigator, which is based largely on such rankings.
U.S. News ranks hospitals in 16 different fields, including ophthalmology. In 12 of these specialties, rankings are “determined mostly by data,” with a minor reputational component. In 4 specialties (including ophthalmology), rankings are “determined entirely by reputation, based on a survey.”1
In 2016, based on a national survey of hundreds of ophthalmologists, the #1 ranked program had a score of 62.8, while the #6 program scored 13.5. Really? There’s that much difference? I consider myself pretty knowledgeable about academic institutions in our profession, but I am uncomfortable “ranking” the top departments and refuse to participate.
Do these popularity contests have economic value? A lot of people check them, enough that U.S. News can sell advertising on the website. As of March 16, the 3 most prominent advertisers were 2 large ophthalmology departments—and California Psychics, which described itself as “the most-trusted source of accurate psychic readings by phone,” with a “Talk Now” button. All part of evidence-based decision-making.
But people remember these rankings. The U.S. News website advises patients that consulting the rankings “may be in order if your care calls for special expertise. …” One study found that patients remember them up to 2 years after the original publication.2 Another study showed that rank changes can engender a 5% change in patient volume (and revenue).3
One problem inherent in all these reputation-based rankings is the heterogeneity of criteria. What are we ranking: overall clinical care, faculty quality, clinical research, teaching environment, personal relationships, or pure old-fashioned “street credibility”? Do we implicitly believe that bigger departments offer better residency training than smaller ones? Do we take it on faith that success in clinical research and publishing translates into success as a teacher of residents and as a mentor? There are large programs where teaching is a relatively low priority and small programs with great teachers.
I recommend that we individually, through our academic institutions, and as a profession consider the following:
- Encourage U.S. News/Doximity to include quantitative inputs in addition to “reputation” in compiling their ophthalmology rankings. These data exist.
- Encourage U.S. News/Doximity to ensure that the national ophthalmology community is appropriately sampled and to better define the criteria for “reputation.”
- Don’t use your vote as a way to reward friends or alma maters. People assume that these rankings are meaningful and may give them undue weight in life-changing decisions.
- Educate medical students about the limitations of these rankings as navigational aids in choosing interviews or making a final residency rank list.
We must recognize these websites for what they are—popularity contests that accumulate individually determined qualitative preconceptions and generate a pseudo-quantitative rank list. Is it simply better to click the “Talk Now” button for $1 per minute?
___________________________
1 health.usnews.com/health-care/best-hospitals/articles/faq-how-and-why-we-rank-and-rate-hospitals.
2 Hibbard JH et al. Health Aff (Millwood). 2005;24(4):1150-1160.
3 Pope DG. J Health Econ. 2009;28(6):1154-1165.