Three Complex Questions to Ask when Utilizing AI in Medicine

Artificial intelligence is making a considerable impact on the medical sector.

For instance, recent demonstrations have shown computers matching or outperforming their human physician counterparts at diagnosing heart rhythms, reading pathology specimens, identifying troublesome moles, and interpreting X-rays.

Also, last month, the FDA gave the green light to the first AI system of its kind intended for diagnosing diabetic retinopathy.

These innovations have triggered tremendous hype and hope about the potential for artificial intelligence systems to diagnose alongside doctors, or on their behalf.

There’s no doubt artificial intelligence holds a lot in store for the healthcare sector. However, certain psychological, ethical, and legal issues ought to be considered first. Here are the three main questions that need to be addressed.

Will patients trust computers to make the right judgment? One of the enigmas of machine learning is that the more sophisticated a neural network becomes, the harder it is to understand the exact weightings and combinations of variables the system used to reach a decision.

As such, accuracy may increase while transparency declines. The same is true of the human brain.

Put plainly: will you be okay with the answer if a doctor tells you that you have cancer because the computer said so?

Who gets sued when the computer makes a misdiagnosis: the software engineer or the doctor? The liability system in the United States has grown around the idea of the doctor as the trusted specialist in charge of diagnosis.

Nonetheless, if artificial intelligence systems are more accurate than clinicians, then following the algorithm's determination will always be the statistically better option.

In that case, you can argue either that the physician should make the final call for the patient, or that the AI system should, since it holds more experience and patient information. Will juries or patients accept the defense that "the computer made me do it"?

Where did the computer obtain its medical knowledge or expertise? The machine learning that powers artificial intelligence (AI) relies on the large volume of data used to train it to accomplish various tasks.

This raises several questions: which patients received the CT scans that are fed into the data sets? How are they grouped, and who labels the images as normal or abnormal? The answers to these questions expose the imperfections of human decision making.

Given such questions, it is fair to ask whether structural biases in medical practice are baked into what the computer understands as truth. All artificial intelligence systems are expected to have a garbage-in, garbage-out issue.

They may also have a bias-in, bias-out problem that ought to be addressed, especially when they obtain their knowledge from the real world. Soon, you may begin to wonder whether your physician uses AI in his or her work. If so, that's a good thing, since human plus machine is better than either alone.

Source: WSJ
