
The Utility of ChatGPT in Diabetic Retinopathy Risk Assessment: A Comparative Study with Clinical Diagnosis [Response to Letter]


Received 24 January 2024

Accepted for publication 24 January 2024

Published 1 February 2024, Volume 2024:18, Pages 313–314

DOI https://doi.org/10.2147/OPTH.S461186



Keerthana Raghu,1,* Tamilselvi S,2,* Chitralekha S Devishamani,1 Suchetha M,2 Ramachandran Rajalakshmi,3 Rajiv Raman1

1Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Chennai, Tamil Nadu, India; 2Centre for Health Care Advancement, Innovation, and Research Department, Vellore Institute of Technology, Chennai, Tamil Nadu, India; 3Department of Diabetology, Ophthalmology and Epidemiology, Madras Diabetes Research Foundation & Dr. Mohan’s Diabetes Specialities Centre, Chennai, Tamil Nadu, India

*These authors contributed equally to this work

Correspondence: Rajiv Raman, Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Chennai, Tamil Nadu, India, Email [email protected]




Dear Editor,

We are writing in response to Elanda Fikri’s letter regarding our recently published article titled “The Utility of ChatGPT in Diabetic Retinopathy Risk Assessment: A Comparative Study with Clinical Diagnosis” in Clinical Ophthalmology.

Firstly, we wish to express our gratitude to Fikri for their insightful feedback on our study. We appreciate the recognition of our work in utilizing ChatGPT-4 for generating medical reports and evaluating its prediction accuracy against clinical diagnosis using anonymized data. The interest and discussion our study has sparked are both encouraging and invaluable.

Regarding the limitations Fikri points out, we concur with the concerns about using a research version of ChatGPT. The potential to generate nonsensical or incorrect information, owing to limitations in its training data and algorithms, is a valid point. The current version of ChatGPT acquires its knowledge primarily from publicly available text and is not specifically trained for medical tasks, which limits its understanding of complex medical scenarios.1

To enhance the capabilities of current Large Language Models (LLMs) such as ChatGPT, further training with detailed knowledge of eye anatomy, physiology, and disease is essential. Incorporating specialized ophthalmological literature and clinical guidelines into the training process could transform LLMs into effective tools for ophthalmic applications. However, much of this specialized material is not publicly accessible, which limits the scope of LLM training. This underscores the need for clinicians to critically review and verify model outputs against accurate and current medical information.2

Ethical considerations in healthcare research were paramount in our study. We adhered to strict ethical guidelines to ensure patient confidentiality and data privacy. Although ChatGPT is not yet fully HIPAA compliant, we implemented rigorous anonymization procedures to safeguard patient privacy: all data used were thoroughly de-identified, making it impossible to trace them back to individual patients. Nevertheless, the ethical concerns surrounding data privacy and the integration of LLMs like ChatGPT into clinical routines warrant careful consideration and must be addressed diligently.

Despite these current limitations, our study's use of ChatGPT allowed us to explore the potential of language models in predicting diabetic retinopathy risk from clinical and biochemical parameters. This novel approach paves the way for future research. We agree with the suggestion that further optimization is necessary and are committed to enhancing the capabilities of ChatGPT in future studies. Collaboration among healthcare professionals, technology vendors, and researchers will be crucial for improving the accuracy and reliability of these models.

Thank you for the opportunity to discuss our study further. We are committed to contributing to the advancement of medical knowledge and improving patient care through responsible AI usage.

Disclosure

The authors report no conflicts of interest in this communication.

References

1. Nori H, King N, McKinney SM, Carignan D, Horvitz E. Capabilities of GPT-4 on medical challenge problems. arXiv. 2023. doi:10.48550/arXiv.2303.13375

2. Betzler BK, Chen H, Cheng CY, et al. Large language models and their impact in ophthalmology. Lancet Digit Health. 2023;5(12):e917–e924. doi:10.1016/S2589-7500(23)00201-7
