DataCures - The Risks of AI in Healthcare

Undeniably, the industrialized world is in the midst of a technological revolution strongly driven by AI. One of the vital questions is: what are the risks of AI in healthcare? Whether or not the adoption of these systems is inevitable remains a matter of debate, but the facts speak for themselves. At present, a significant amount of resources is being invested in the research and development of AI.

The data processing capacity that makes AI in healthcare attractive

The data processing capacity of AI makes it attractive to most industries, and it has also gained the attention of medical professionals and the healthcare industry. However, introducing new methods, procedures, or frameworks in medicine has always been a challenge. Medicine is more than just a multimillion-dollar industry: it is a science that, at its heart, seeks to care for patients and restore their health as much as possible.

The Risks of AI in healthcare

That is why implementing AI in healthcare presents a series of risks that must first be weighed against the benefits. One risk is the impact of AI-driven technologies on the healthcare labor sector. When the deep learning boom began around 2010 and research demonstrated the predictive accuracy of computer vision systems applicable to radiology and similar branches of medicine, many worried that certain types of medical professionals were going to be replaced by machines.

Yet, so far, no major labor displacement has occurred. The concern did, however, bring to light a less obvious risk for medical professionals: the risk of deskilling.

Deskilling, in a broad sense, is a phenomenon widely studied within economic theory. It describes the situation in which a particular skill, or set of skills, disappears or declines in quality because of the introduction of new, advanced technologies. For example, the skill of using Morse code has almost entirely disappeared as modern methods of communication developed.


Is deskilling one of the risks of implementing AI in healthcare?

Deskilling is not necessarily an adverse consequence that must be avoided. It can be the logical result of emerging technologies that bring great benefits. However, two elements shake this assumption:

  1. The speed at which models are developing.
  2. The reliability of these models.

Historically, a shift in skills happens during a transition period that sometimes requires a generational change: a new generation of professionals grows up with the new technologies and, as a result, finds them more intuitive to use. AI, however, is advancing at such speed that it is difficult to adapt organically. Human societies take time to enact profound change, and this is also the case with skill shifts.

Additionally, we must consider that the most precise models, those that fall under the umbrella of machine learning, are difficult to scrutinize. The problems of low explainability and transparency seem to interfere with fundamental values of medicine such as trust, autonomy, and informed consent. In the scenario where a model gives a wrong answer, it would seem preferable that the physicians in charge of making the final decisions, especially in diagnosis and treatment, remain skilled enough to recognize the mistake and make the right call.

In my previous article about deskilling, I identified two ways in which this phenomenon is present in medicine: moral and technical deskilling.

The latter refers to what was mentioned previously: technical skills are replaced by automation. Moral deskilling, by contrast, "occurs when a moral agent loses the ability to make appropriate moral judgments and lacks the skills of moral decision-making due to overreliance on technological developments." (p. 53)

The Improvement of AI in Healthcare Is Non-stop

The notion of deskilling, however, is not easily measurable. There is no gold-standard metric that evaluates medical skills at a given point in time and can be tracked over the years. Nevertheless, it is reasonable to assume that a technology whose primary use is data processing and the automation of specific tasks will have an impact on the clinical workflow. Over time, it could render some tasks obsolete, leading to the disappearance of a skill.

This is not to say that we should stop trying to make the best of AI-driven technologies in healthcare, or that we should panic about unskilled medical personnel. Instead, we have the responsibility of making sure that necessary skills are preserved and cultivated, while using AI to make workflows more efficient and to assist medical professionals in making better, more evidence-based decisions.

Leslye Dias is a PhD candidate in applied ethics at the Ruhr-University Bochum. Her work focuses on bioethics and the ethics of artificial intelligence in healthcare.
