The Increasing Role of Artificial Intelligence in Health Care: Will Robots Replace Doctors in the Future?
Authors Shuaib A, Arian H, Shuaib A
Received 17 June 2020
Accepted for publication 29 September 2020
Published 19 October 2020 Volume 2020:13 Pages 891–896
Checked for plagiarism Yes
Review by Single anonymous peer review
Peer reviewer comments 2
Editor who approved publication: Dr Scott Fraser
Abdullah Shuaib1,†, Husain Arian1, Ali Shuaib2
1Department of General Surgery, Jahra Hospital, Jahra, Kuwait; 2Biomedical Engineering Unit, Department of Physiology, Faculty of Medicine, Kuwait University, Kuwait City, Kuwait
†Dr Abdullah Shuaib passed away on July 21, 2020
Correspondence: Ali Shuaib
Biomedical Engineering Unit, Department of Physiology, Faculty of Medicine, Kuwait University, Kuwait City, Kuwait
Tel +965 24636786
Email [email protected]
Abstract: Artificial intelligence (AI) pertains to the ability of computers or computer-controlled machines to perform activities that demand the cognitive function and performance level of the human brain. The use of AI in medicine and health care is growing rapidly, significantly impacting areas such as medical diagnostics, drug development, treatment personalization, supportive health services, genomics, and public health management. AI offers several advantages; however, its rampant rise in health care also raises concerns regarding legal liability, ethics, and data privacy. Technological singularity (TS) is a hypothetical future point in time when AI will surpass human intelligence. If it occurs, TS in health care would imply the replacement of human medical practitioners with AI-guided robots and peripheral systems. Considering the pace at which technological advances are taking place in the arena of AI, and the pace at which AI is being integrated with health care systems, it is not unreasonable to believe that TS in health care might occur in the near future and that AI-enabled services will profoundly augment the capabilities of doctors, if not completely replace them. There is a need to understand the associated challenges so that we may better prepare the health care system and society to embrace such a change – if it happens.
Keywords: artificial intelligence, technological singularity, health care system
Imparting intelligence and human-like feelings into machines and inanimate objects has been a fascinating concept since time immemorial. The term artificial intelligence (AI), however, was first introduced by John McCarthy in 1956.1 AI mainly relies on analyzing large databases, recognizing interactions, cross-matching complex symptoms and signs, and developing algorithms to resolve problems.2 In the last few decades, there has been an exponential increase in peer-reviewed publications on AI, reflecting the recent surge of interest in this area.3
The use of AI in health care is poised for many-fold growth.4 Current applications include disease diagnostics, drug development, the personalization of treatment, supportive health services, and gene editing. The use of AI in medicine can be classified into visual and physical domains. Visual AI covers areas such as electronic medical records, outpatient appointment reminders, and health tracking applications, whereas physical AI encompasses tasks such as robotic surgeries and robotic drug dispensaries.
The capabilities of AI-enabled systems have increased significantly in the recent past. Technological singularity (TS) is a hypothetical future point in time at which AI is expected to surpass human intelligence.5–11 Ray Kurzweil predicted it would occur between the years 2040 and 2050.5 TS in health care, in a strict sense, would imply the replacement of medical practitioners with AI-enabled robots and peripheral systems. A futuristic example of TS in health care appeared on the American science fiction television show Star Trek: Voyager (1995 to 2001), which portrayed an AI hologram of a physician who treated the spaceship’s crew. With the current pace of technological advances in the field of AI, the debate on whether AI will replace humans has shifted from a fictional landscape to a realistic one. We need to carefully weigh the possible socio-legal and ethical impacts of such advances and prepare society to embrace such changes when they happen. In this paper, we discuss some of the successes, opportunities, and challenges associated with the integration of AI in health care.
One of the major motivations for using AI in health care is to simulate or enhance the efficiency of human clinicians.12 Indeed, its use has resulted in several advantages, such as improved diagnostic and treatment accuracy, increased efficiency, and reduced costs. Conversely, replacing activities conventionally conducted by humans with AI-enabled machines could eliminate the emotional contact that plays a major role in the patient-doctor relationship.2,13,14 Given the technical, emotional, and legal complexities associated with the field of medicine, there is growing concern about the effectiveness and impact of AI in medical practice.4,12 It is worth pondering what, in the event of TS, would become of the patient-centered model of health care that rests on compassion and trust.
AI uses sophisticated algorithms that learn how to react to a situation and/or resolve a problem by processing large amounts of data. Its general function falls into two categories, namely machine learning (ML) and natural language processing (NLP). In an ML process, AI collects and analyzes structured data such as diagnostic images and gene traits. NLP focuses on collecting unstructured data or information from electronic medical records and research journals.1,5 These processes are collectively used to produce clinical decisions. To process the data, AI systems commonly use algorithms based on artificial neural networks (ANN), fuzzy expert systems (FES), evolutionary calculations (EC), and hybrid intelligent systems (HIS).
Drawing an analogy from biological neural networks, an ANN is a computer program that simulates aspects of human thinking, such as learning from previous experiences and retrieving information from them. An ANN observes examples of solutions to previous problems, collects information from those examples, processes this information through learning algorithms, and develops a response. Unlike a fixed, pre-programmed system, an ANN learns, reasons, and responds much as humans do. The quality of its results depends on the amount of data it processes: a large amount of data enables a better learning process.
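The learning cycle just described (observe examples, process them through a learning algorithm, develop a response) can be illustrated with a minimal feed-forward network trained by backpropagation. This is a sketch, not any system the article discusses: the XOR toy data, network size, learning rate, and iteration count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "examples of solutions to previous problems": the XOR function,
# a classic task that is not linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units; weights start random and are shaped by learning.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(10000):
    # Forward pass: develop a response from current knowledge.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: adjust weights to reduce the prediction error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;  b1 -= lr * d_h.sum(0)

print(np.round(out.ravel(), 2))  # four predictions, one per input row
```

After training, the predictions should approach the targets [0, 1, 1, 0], showing how the quality of the response improves with repeated exposure to examples.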
An FES is a rule-based AI that is programmed with expert knowledge in a specific field and that mimics the response of an expert. The advantage of an FES is that the knowledge of experts becomes perpetually more accessible. EC is a computer program inspired by biological evolution that provides an optimal or near-optimal solution. Essentially, EC processes data and information, suggests suitable solutions, and evolves as more data become available while eliminating unsuitable solutions. An HIS involves the synergistic use of some or all of the previously mentioned systems.
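EC’s cycle of suggesting solutions, keeping suitable ones, and eliminating unsuitable ones can be sketched with a toy evolutionary algorithm. The one-max objective, population size, and mutation rate below are illustrative assumptions, not part of any clinical system.

```python
import random

random.seed(42)
TARGET_LEN = 20

def fitness(candidate):
    # "Suitability" of a solution: number of 1-bits (the one-max problem).
    return sum(candidate)

def mutate(candidate, rate=0.05):
    # Random variation: flip each bit with small probability.
    return [bit ^ (random.random() < rate) for bit in candidate]

# Start from a random population of candidate solutions.
population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(30)]

for generation in range(200):
    # Selection: keep the fitter half, eliminating unsuitable solutions...
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    # ...then refill the population with mutated copies of survivors.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]

best = max(population, key=fitness)
print(fitness(best))  # near-optimal: close to TARGET_LEN
```

Because the fittest candidates survive unchanged each generation, the best solution can only improve, mirroring EC’s evolution toward an optimal or near-optimal answer.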
In recent years, AI-based applications have been adopted in different clinical specialties. In ophthalmology, for instance, an AI algorithm-based grading system was developed to assess fundus photographs of diabetic patients and to determine who should be referred to an ophthalmologist.12 This system was approved by the US Food & Drug Administration (FDA) in 2018.15 In the area of surgery, one of the most noticeable developments has been the Da Vinci robotic surgical system, which was approved by the FDA in 2000.16–18 Several thousand units of this system are in use across the globe for complex urological, gynecological, gastrointestinal, and other surgical procedures.
AI is finding applications in an increasing number of clinical areas, and a significant amount of research is devoted to developing new ones. The authors of a recent review synthesized three previously reported systematic reviews on the use of AI in the management of lower back pain.19 The study’s authors underscored the potential of ML in back pain management via the sub-classification of diagnosis.
Computer vision (CV) is another emerging application of AI, involving image processing, pattern recognition, and response.3,20,21 It is useful in several medical areas but is envisaged to be most promising in radiology and pathology, which involve a great deal of image processing. In such areas, CV can be used to differentiate benign lesions from malignant ones. In an interesting example, CV is being explored to reduce colorectal cancer-related mortality: a standard fully convolutional network has been shown to yield better diagnoses than the conventional approach of visually assessing polyp malignancy.21,22 With the increasing availability of large amounts of visual medical data, advanced processing algorithms, and superior storage devices and cameras, the use of CV in medicine is expected to continue growing.21
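At the core of the fully convolutional networks mentioned above is the convolution operation itself. The sketch below applies a hand-coded edge-detecting kernel to a tiny synthetic image; the image and kernel are illustrative, not a diagnostic model.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: the building block of convolutional networks."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output pixel is a weighted sum of a local image patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A synthetic 6x6 "image" with a bright square in the middle.
img = np.zeros((6, 6))
img[2:4, 2:4] = 1.0

# A Sobel-style kernel: responds where intensity changes left to right,
# i.e. at vertical edges such as the border of a lesion.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

edges = conv2d(img, sobel_x)
print(edges.shape)  # (4, 4)
```

In a trained network, the kernel weights are learned from labeled images rather than hand-specified, and many such filters are stacked in layers.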
In several clinical settings, AI models can be used to predict the risk of postoperative complications.23 The accuracy of such systems is generally better than that of conventional approaches. For example, supervised algorithms, which learn from data labeled by humans and then classify or make predictions on new, unseen data, have been reported to predict sepsis about one day before its onset with a high area under the receiver operating characteristic curve. Whether such an algorithm can be used to guide an effective therapeutic intervention, however, remains to be established. As peripheral electronic and clinical workflow systems advance and become effectively integrated with these models, their use in routine clinical practice will become common.24
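The receiver operating characteristic (ROC) metric mentioned above can be illustrated on fully synthetic data: a simulated risk score that is, on average, higher in patients who later develop sepsis. All numbers are fabricated for illustration; the area under the curve (AUC) is computed via its rank-statistic (Mann-Whitney) equivalence.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated composite risk scores (no real patient data): scores are,
# on average, higher for patients who go on to develop sepsis.
septic = rng.normal(loc=2.0, scale=1.0, size=200)      # label 1
non_septic = rng.normal(loc=0.0, scale=1.0, size=800)  # label 0

scores = np.concatenate([septic, non_septic])
labels = np.concatenate([np.ones(200), np.zeros(800)])

def roc_auc(scores, labels):
    # AUC equals the probability that a randomly chosen positive case
    # is ranked above a randomly chosen negative case (Mann-Whitney U).
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    rank_sum_pos = ranks[labels == 1].sum()
    return (rank_sum_pos - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(round(roc_auc(scores, labels), 2))  # well above the 0.5 chance level
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect ranking of cases over non-cases, which is why the metric is a standard summary for early-warning models.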
Several medical disciplines involve the analysis of large amounts of data (numeric, text, or imaging) and their combinations.25 Our current diagnostic and therapeutic potential is, in many ways, limited by our biological, cognitive, and sensory abilities. AI, along with other technological advances, is poised to change this landscape. A fitting example in this regard is radiomics, a quantitative medical imaging tool capable of extracting key information that is imperceptible to the human eye. By using efficient data extraction and mathematical analysis, radiomics attempts to quantify image intensity, shape, or texture, which can then be correlated with a specific clinical attribute.26 These AI-enabled advances are expected to revolutionize the field of medicine as we know it today.
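Radiomics’ quantification of intensity, shape, and texture can be sketched as simple first-order feature extraction from a synthetic image containing a segmented “lesion”. The image, region, and feature names here are illustrative assumptions; real radiomics pipelines compute hundreds of standardized features from clinical scans.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic 2-D "scan": background noise plus a brighter circular "lesion".
img = rng.normal(0.0, 0.1, size=(64, 64))
yy, xx = np.mgrid[:64, :64]
lesion = (yy - 32) ** 2 + (xx - 32) ** 2 <= 10 ** 2  # circle of radius 10
img[lesion] += 1.0

roi = img[lesion]  # region of interest: intensities inside the segmentation

features = {
    # First-order intensity statistics.
    "mean_intensity": float(roi.mean()),
    "intensity_range": float(roi.max() - roi.min()),
    # A simple shape descriptor: lesion area in pixels.
    "area_px": int(lesion.sum()),
    # A crude texture proxy: intensity variation inside the lesion.
    "texture_std": float(roi.std()),
}
print(features)
```

Each such number is a candidate quantitative biomarker that can be correlated, across many patients, with a clinical attribute such as malignancy or treatment response.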
AI can also be used to improve health management systems. NLP can be used to analyze clinical text and reports; for example, it can conduct a fine-grained analysis of a patient’s extensive previous health records and retrieve a history of previous infections, side effects, family history, and other details relevant to the current condition. All parts of this system can be easily automated, and information can be retrieved almost instantaneously.
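The record-mining workflow described above can be sketched as simple rule-based extraction over a fabricated clinical note. The note text, field names, and regular expressions are illustrative assumptions; production clinical NLP uses trained models rather than hand-written patterns, but the pipeline shape (free text in, structured fields out) is the same.

```python
import re

# A fabricated free-text note (illustrative only -- no real patient data).
note = """
History: Penicillin allergy noted in 2019. Previous infection: urinary tract
infection treated 2021. Family history: father with type 2 diabetes.
Current medication caused side effect: mild nausea.
"""

# Hypothetical field names mapped to hand-written extraction patterns.
patterns = {
    "allergies": r"(\w+) allergy",
    "previous_infections": r"[Pp]revious infection:\s*([^.]+)",
    "family_history": r"[Ff]amily history:\s*([^.]+)",
    "side_effects": r"side effect:\s*([^.]+)",
}

# Turn unstructured text into a structured record, field by field.
record = {field: re.findall(pat, note) for field, pat in patterns.items()}
print(record["allergies"])  # ['Penicillin']
```

Once the note is reduced to structured fields, retrieval and aggregation across a patient’s entire history become near-instantaneous database operations.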
Along similar lines, high-throughput screening, bioinformatics, and the medical records of many patients can help in developing AI methods for drug design and development.13 Nanorobots can be used for targeted drug delivery, and softbots (autonomous programs acting on behalf of a user) have been proposed as psychotherapeutic avatars to detect early emotional disturbances in young people. Other applications have been suggested as well; several of these approaches have been reported to work better than human interventions.27
Having discussed current applications of AI, we now shift our focus to the feasibility of TS and its potential social, legal, and ethical ramifications.28 The societal and ethical impacts of AI demand serious reflection. In the event of a medical error or a claim of malpractice, would the AI itself be held legally, or even criminally, liable? AI models are based on learning from and interpreting data, which poses obvious concerns regarding the privacy and autonomy of patients. An interesting question is, after TS, to what extent, and how, will AI use these data? Should this be decided by the AI itself or by a being of lesser intelligence, such as a human?
Certainly, it does not seem wise to give decision-making authority to a being of lesser intelligence. From a practical standpoint, nonetheless, it seems important to examine to what extent life-and-death decisions can be left to AI. Such decisions are an integral part of medical practice, and based on today’s socio-ethical values, it seems unlikely that AI will be given full control. Furthermore, can the ethical dilemmas posed to humans by advances such as gene editing and the creation of embryos be left to the discretion of AI, even if it achieves TS and becomes more intelligent than humans? If we accept the omniscience of AI and leave gene-editing decisions to machines embedded with its capabilities, the next stage would be allowing AI to decide to modify genes and create super-humans. If doctors are completely replaced by AI and AI is given full autonomy, the possibilities are almost endless and perhaps even incomprehensible today.7
If we analyze purely from a feasibility perspective, we need to shift our focus to the inputs that AI needs to function and ask whether these data are available in health care. To illustrate the recent triumph of AI over humans, a fitting comparison point is the defeat of the human champion in the board game Go by an AI based on deep reinforcement learning. The model first learned from games played by Go experts and subsequently developed into a completely autonomous system. Similar approaches have the potential to transform medical decision making, but the question is whether we have high-level medical data as an input for AI models. Given the rarity and complexity of many human diseases, it seems that such a comprehensive set of data does not exist. Furthermore, each clinical decision involves several subtle complexities that may not be properly captured in previous experiences. How AI can adapt to such unforeseen scenarios and provide solutions remains to be seen. As of now, there is no high-level evidence supporting the efficacy of autonomous AI-enabled machines. Since their use involves life-and-death scenarios, it is doubtful that a machine will ever, even after achieving TS, be given full autonomy to execute such decisions. There is no doubt, however, that intelligent, autonomous machines have the potential to augment medical professionals. Their use is therefore likely to increase, and with it, the quality of medical care.
If we assume that all technological and data-gathering challenges are addressed and AI has reached the point of TS, many other challenges, beyond the purely technical, would still need to be addressed before human clinicians could be replaced with AI robots. One of the major challenges is empathy.29 Empathy, the ability to understand and share the feelings of another, develops from self-awareness and through interactions with other members of society.29 We believe that for AI to succeed in the health care system and to achieve TS in health care, it has to embrace empathy.
The proponents of AI recognize this problem, and a great deal of effort is being devoted to making AI more humane by imparting artificial empathy (AE).29 One of the efforts in this direction has been AI-enabled, empathetic mobile app technologies. In a recent study of such technologies, the authors reported that close to 70% of users found the app helpful and encouraging.14 In another study, scientists compared clinical judgment with the MySurgeryRisk algorithm for preoperative risk assessment; the authors reported that the performance of this algorithm was significantly better than that of physicians making initial risk assessments.20 All these pieces of evidence suggest that AI is a useful tool for enhancing the functionality of clinicians rather than a substitute for a human doctor.10–12,30,31
It must be acknowledged that the process of integrating AI with health care practices also poses challenges, such as reduced employment opportunities for humans and liability for errors. These are not unique to AI but have always been associated with medical practice in some form or fashion. As the use of AI in health care matures, optimal solutions will likely be devised. There are still too many unknowns; however, if we draw projections from the information available today, it seems that TS, when it happens, will not be absolute (ie, it will not lead to the complete removal of human intervention from medical care). Brailas et al aptly pointed out that AI and humans form a bio-techno-social system in which each participating actor coevolves.32 The future, therefore, cannot be accurately predicted based on the capabilities and understanding of humans and society as they exist today. As AI evolves, so will the other actors in this bio-techno-social system. Notably, acknowledging the concerns associated with AI, the World Health Organization (WHO) has pledged to address the ethics, governance, and regulation of AI in relation to health care decisions.33
As a counterthought, Melanie Mitchell draws attention to the quality of “intelligence” in AI.34 She illustrates that several AI programs lack key aspects of human understanding and that it is dangerous to give AI too much autonomy, especially without addressing its limitations. We are quite optimistic that AI will become widespread and more intelligent in the near future. It is more likely, however, that even in the era of TS, AI will not fully replace doctors but rather enable them to focus on the more productive and important aspects of patient care.35 Health care professionals need to increasingly adopt AI-enabled technologies and adapt to such changes, with a focus on improving patient care while retaining humanistic values, ethics, and motivations.
AI in medicine and health care is a rapidly growing field with enormous potential. TS in the health care system could occur in the near future, depending on the pace of technological and computational growth. There is the possibility that AI could substitute for human doctors in many medical activities; however, such a replacement will not be absolute. Human doctors will continue serving patients with capabilities augmented by AI. As AI evolves, clearer guidelines will emerge on its integration with medical practice. AI in health care will need a substantial component of AE to truly achieve singularity. All AI-enabled services or devices in health care, no matter how advanced, will always be guided by the core principles of humanity and patient-centered care.
AI, Artificial intelligence; ML, Machine-learning; NLP, Natural Language Processing; ANN, Artificial Neural Networks; FES, Fuzzy Expert Systems; EC, Evolutionary Calculations; HIS, Hybrid Intelligent Systems; FDA, US Food & Drug Administration; AE, Artificial Empathy; WHO, World Health Organization.
All authors contributed to drafting or revising the article, gave final approval of the version to be published, and agree to be accountable for all aspects of the work.
There is no funding to report.
The authors declare there are no conflicts of interest.
1. Kostic EJ, Pavlović DA, Živković MD. Applications of artificial intelligence in medicine and pharmacy – ethical aspects. Acta Medica Medianae. 2019;58(3):128–137. doi:10.5633/amm.2019.0319
2. Jiang F, Jiang Y, Zhi H, et al. Artificial intelligence in health care: past, present and future. Stroke Vasc Neurol. 2017;2(4):230. doi:10.1136/svn-2017-000101
3. Tran BX, Vu GT, Ha GH, et al. Global evolution of research in artificial intelligence in health and medicine: a bibliometric study. J Clin Med. 2019;8(3):360. doi:10.3390/jcm8030360
4. Gerke S, Minssen T, Cohen G. Ethical and legal challenges of artificial intelligence-driven health care. In: Artificial Intelligence in Health Care.
5. Solez K, Bernier A, Crichton J, et al. Bridging the gap between the technological singularity and mainstream medicine: highlighting a course on technology and the future of medicine. Glob J Health Sci. 2013;5(6):112–125. doi:10.5539/gjhs.v5n6p112
6. Boyles RJM. A case for machine ethics in modeling human-level intelligent agents. Kritike. 2018;12(1):182–200. doi:10.25138/12.1.a9
7. Braga A, Logan RK. The emperor of strong AI has no clothes: limits to artificial intelligence. Information. 2017;8(4):156. doi:10.3390/info8040156
8. Guliciuc V. Pareidolic and uncomplex technological singularity. Information. 2018;9(12):309. doi:10.3390/info9120309
9. Mantatov V, Tutubalin V. Sustainable development, technological singularity and ethics. Eur Res Stud J. 2018;21(4):714–725. doi:10.35808/ersj/1239
10. Gomes da Costa DLP. Reviewing the concept of technological singularities: how can it explain human evolution? NanoEthics. 2019;13(2):119–130. doi:10.1007/s11569-019-00339-2
11. Smolin VS. The prospects of the mankind in the era of technological singularity. Epistemol Philos Sci. 2020;57(2):192–207. doi:10.5840/eps202057230
12. Loh E. Medicine and the rise of the robots: a qualitative review of recent advances of artificial intelligence in health. BMJ Leader. 2018;2(2):59. doi:10.1136/leader-2018-000071
13. Gilvary C, Madhukar N, Elkhader J, Elemento O. The missing pieces of artificial intelligence in medicine. Trends Pharmacol Sci. 2019;40(8):555–564. doi:10.1016/j.tips.2019.06.001
14. Inkster B, Sarda S, Subramanian V. An empathy-driven, conversational artificial intelligence agent (Wysa) for digital mental well-being: real-world data evaluation mixed-methods study. JMIR Mhealth Uhealth. 2018;6(11):e12106. doi:10.2196/12106
15. United States Food and Drug Administration (FDA) News Release. FDA permits marketing of artificial intelligence-based device to detect certain diabetes-related eye problems. 2018. Available from: https://www.fda.gov/newsevents/newsroom/pressannouncements/ucm604357.htm.
16. Aruni G, Amit G, Dasgupta P. New surgical robots on the horizon and the potential role of artificial intelligence. Investig Clin Urol. 2018;59(4):221–222. doi:10.4111/icu.2018.59.4.221
17. Panesar S, Cagle Y, Chander D, et al. Artificial intelligence and the future of surgical robotics. Ann Surg. 2019;270(2):223–226. doi:10.1097/SLA.0000000000003262
18. Dasgupta P. New robots – cost, connectivity and artificial intelligence. BJU Int. 2018;122(3):349–350. doi:10.1111/bju.14496
19. Tagliaferri SD, Angelova M, Zhao X, et al. Artificial intelligence to improve back pain outcomes and lessons learnt from clinical classification approaches: three systematic reviews. NPJ Digit Med. 2020;3(1):93. doi:10.1038/s41746-020-0303-x
20. Brennan M, Puri S, Baslanti TO, et al. Comparing clinical judgment with the MySurgeryRisk algorithm for preoperative risk assessment: a pilot usability study. Surgery. 2019;165(5):1035–1045. doi:10.1016/j.surg.2019.01.002
21. Gao J, Yang Y, Lin P, et al. Computer vision in health care applications. J Healthc Eng. 2018;5157020.
22. Korbar B, Olofson AM, Miraflor AP, et al. Deep learning for classification of colorectal polyps on whole-slide images. J Pathol Inform. 2017;8:30. doi:10.4103/jpi.jpi_34_17
23. Schinkel M, Paranjape K, Panday RSN, et al. Clinical applications of artificial intelligence in sepsis: a narrative review. Comput Biol Med. 2019;115:103488.
24. Yuan KC, Tsai LW, Lee KH, et al. The development an artificial intelligence algorithm for early sepsis diagnosis in the intensive care unit. Int J Med Inform. 2020;141:104176. doi:10.1016/j.ijmedinf.2020.104176
25. Vellido A. Societal issues concerning the application of artificial intelligence in medicine. Kidney Dis. 2019;5:11–17. doi:10.1159/000492428
26. Timmeren JV, Cester D, Lang ST, Alkadhi H, Baessler B. Radiomics in medical imaging – “How-to” guide and critical reflection. Insights Imaging. 2020;11:91. doi:10.1186/s13244-020-00887-2
27. Rehm IC, Foenander E, Wallace K, et al. What role can avatars play in e-mental health interventions? Exploring new models of client-therapist interaction. Front Psychiatry. 2016;7:186.
28. Leenes RE, Palmerini E, Koops BJ, Bertolini A, Salvini P, Lucivero F. Regulatory challenges of robotics: some guidelines for addressing legal and ethical issues. Law Innov Technol. 2017;9(1):1–44. doi:10.1080/17579961.2017.1304921
29. Bringsjord S. Ethical robots: the future can heed us. AI Soc. 2008;22:539–550. doi:10.1007/s00146-007-0090-9
30. Portnoff AY, Soupizet JF. Artificial intelligence: opportunities and risks. Futuribles. 2018;426(5):5–26. doi:10.3917/futur.426.0005
31. Shestakova IG. To the question of the limits of progress: is singularity possible? Vestnik Sankt-Peterburgskogo Universiteta, Filosofiia I Konfliktologiia. 2018;34(3):391–401.
32. Brailas A. Psychotherapy in the era of artificial intelligence: therapist panoptes. Homo Virtualis. 2019;2(1):68–78. doi:10.12681/homvir.20197
33. Goodman K, Zandi D, Reis A, Vayena E. Balancing risks and benefits of artificial intelligence in the health sector. Bull World Health Organ. 2020;98(4):230–230A. doi:10.2471/BLT.20.253823
34. Mitchell M. Artificial intelligence hits the barrier of meaning. Information. 2019;10(2):51. doi:10.3390/info10020051
35. Loftus TJ, Filiberto AC, Balch J, et al. Intelligent, autonomous machines in surgery. J Surg Res. 2020;253:92–99. doi:10.1016/j.jss.2020.03.046