Opinion

    Harmful or Helpful? AI’s Potential in Ophthalmology



    By Ruth D. Williams, MD, Chief Medical Editor, EyeNet


“I’ll be back” is one of the most famous one-liners in movie history. Declared by Arnold Schwarzenegger’s character in the 1984 film The Terminator, the line belongs to a cyborg assassin sent back to 1980s Los Angeles from a dystopian 2029. Forty years ago, we were already thinking about artificial intelligence (AI) gone awry. And now, only a few years shy of 2029, the medical community is grappling with AI’s potential in medicine—both harmful and helpful—following the launch of OpenAI’s ChatGPT, Google’s Bard, and Microsoft’s revamped Bing. Introduced in November 2022, ChatGPT is already stimulating thoughtful discussions about its role in scientific writing and prompting updated policies in the ophthalmic publishing space.

ChatGPT (Chat Generative Pre-trained Transformer) and other AI chatbots are being used in academia in increasingly creative ways to write cover letters, letters of recommendation, syllabi, and opinion pieces like this one (but I’m writing this the old-fashioned way). These tools can, for example, describe a fundus photo in detail in the documentation box of the medical record. I would not be surprised if the next generation of programs includes ophthalmic applications that leverage AI-based diagnostics—similar to the current systems that autonomously diagnose diabetic retinopathy—to process images in addition to text. (GPT-4 even offers work-life balance perks. It can recommend dinner recipes based on a photo of the contents of your refrigerator—a practical application for busy ophthalmologists.)

The explosive growth of AI chatbot technology has engendered new applications (and ethical dilemmas) that many never would have contemplated only a few months ago. Moreover, it highlights the importance of ensuring AI is used in a responsible and trustworthy manner.

Just months after it was launched, ChatGPT was listed as a coauthor on several scientific papers, leading journal editors and scientific publishers—including Nature, Science, and Elsevier—to develop policies around the use of generative AI in scientific writing. Generative AI can be used in the writing process to “improve the readability and language of the work,” as described in “Publishing Ethics” on the Elsevier website.1 Note that it is imperative for authors to disclose the use of AI and AI-assisted technologies in the manuscript.1

Ophthalmology editor-in-chief Russ Van Gelder pointed out to me that chatbots are the next step in the evolution of writing tools. He added that “ChatGPT is disruptive because it crosses the Turing test threshold. That is, it displays intelligence nearly indistinguishable from that of a human.”

Most journals, including the family of Academy journals, do not allow ChatGPT or other AI-assisted technologies to be listed as an author. Russ explained that all authors are responsible for the data in the paper and that a chatbot cannot attest to the veracity of the data and the analysis. Furthermore, because the human authors typically can’t identify the sources ChatGPT uses to generate content, it’s plausible that plagiarism is occurring. Who is responsible for that?

Russ also emphasized that chatbots must not be used to interpret data or draw scientific conclusions, which raises the question of how this technology might be used in academic papers. ChatGPT can assist in conducting a literature review and can generate summaries of relevant studies and highlight key findings, but with limitations: its trustworthiness is questionable, it does not give references, and it sometimes makes things up. ChatGPT can also be used to correct grammar, punctuation, and spelling errors. It can suggest synonyms or alternative words to improve the clarity and precision of the text.

I asked ChatGPT how it could be used to write scientific papers. Its response was thorough but inconsistent with the existing policies of Ophthalmology. For example, ChatGPT suggests that it can be trained on scientific data and used to perform data analysis, such as predicting outcomes or identifying patterns. To its credit, ChatGPT did give some good advice to authors regarding its own credibility. It said, “While ChatGPT can be a useful tool, it should not be used as a substitute for human analysis and critical thinking.” And on a lighter note, ChatGPT generated an entertaining list of times the catchphrase “I’ll be back” has reappeared in our cultural lexicon since The Terminator first played in theaters.

As our ophthalmic community conjures new ways to use chatbots, there is sure to be further discussion on the use of AI-assisted technologies. No doubt, “We’ll be back.”

    ___________________________

    1 www.elsevier.com/about/policies/publishing-ethics#Authors. Accessed April 3, 2023.