Three Complex Questions to Ask when Utilizing AI in Medicine

Artificial intelligence continues to make a considerable impact on the medical sector.

For instance, recent demonstrations have shown that computers can match or outperform human physicians at diagnosing heart rhythms, reading pathology specimens, identifying troublesome moles and interpreting X-rays.

Also, last month, the FDA gave the green light to the first AI system of its kind for diagnosing diabetic retinopathy.

These revolutionary innovations have triggered tremendous hype, and hope, about the potential for artificial intelligence systems to diagnose alongside doctors or on their behalf.

There’s no doubt artificial intelligence holds a great deal in store for healthcare. However, psychological, ethical and legal issues still have to be considered. Here are the three main questions that need to be addressed.


Will patients trust computers to make the right judgment?

One of the enigmas of machine learning is that the more sophisticated a neural network becomes, the harder it is to understand the exact weightings and combinations of variables the system used to reach a decision.

As a result, accuracy can rise while transparency declines. The same is true of the human brain.
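To make that trade-off concrete, here is a minimal sketch, assuming scikit-learn and entirely synthetic data with made-up feature names: a plain linear model exposes one readable weight per input variable, while even a small neural network spreads the same signal across layers of weights that no longer correspond to individual variables.

```python
# Minimal sketch (hypothetical data, not a real clinical model).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # made-up inputs, e.g. age, blood pressure, cholesterol
y = (0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

linear = LogisticRegression().fit(X, y)
print("linear coefficients:", linear.coef_)  # one weight per variable, easy to inspect

net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000).fit(X, y)
print("network weight shapes:", [w.shape for w in net.coefs_])
# (3, 16), (16, 16), (16, 1): hundreds of weights whose combined effect on any
# one prediction is hard to read off directly, even for this tiny model.
```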

Put simply, will you be satisfied with the answer if a doctor tells you that you have cancer because the computer said so?

Who gets sued when the computer makes a misdiagnosis: the software engineer or the doctor?

The liability system in the United States has grown up around the idea of the doctor as the trusted specialist in charge of diagnosis.

Nonetheless, if artificial intelligence systems are more accurate than clinicians, then deferring to the algorithm's determination will, statistically, always be the better option.

You could argue either that the physician should make the final call for the patient, or that the AI system should, since it draws on far more experience and patient data. Either way, will juries or patients accept the defense that "the computer made me do it"?

Where did the computer obtain its medical knowledge or expertise?

The machine learning that powers artificial intelligence (AI) depends on the large volumes of data used to train it for each task.


That raises several questions: which patients received the CT scans that were fed into the data sets? How were they selected and grouped? And who labeled the images as normal or abnormal? The answers to these questions expose the imperfections of human decision making.

Looking at such questions, it is fair to ask whether structural biases in medical practice get baked into what the computer understands as truth. Every artificial intelligence system can be expected to have a garbage-in, garbage-out problem.
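As a rough illustration, the following toy sketch, assuming scikit-learn and entirely synthetic data, trains a simple model on historical labels in which half of the positive cases in one patient group were missed; the model then reproduces that gap in its own predictions.

```python
# Toy sketch (purely synthetic data and hypothetical groups, not a real study).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 4000
group = rng.integers(0, 2, size=n)         # two patient groups (illustrative only)
severity = rng.normal(size=n)              # true underlying disease severity

true_disease = (severity > 0).astype(int)  # ground truth: same rule for both groups
labels = true_disease.copy()
# Biased historical labels: half of the positive cases in group 1 were missed.
missed = (group == 1) & (true_disease == 1) & (rng.random(n) < 0.5)
labels[missed] = 0

X = np.column_stack([severity, group])
model = LogisticRegression().fit(X, labels)
pred = model.predict(X)

for g in (0, 1):
    sick = (group == g) & (true_disease == 1)
    print(f"group {g}: share of truly sick patients flagged = {pred[sick].mean():.2f}")
# Group 1 is flagged far less often: the bias in the training labels
# comes straight back out of the model.
```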

In other words, systems that obtain their knowledge from the real world can also have a bias-in, bias-out problem that needs to be addressed. Soon, you may begin to wonder whether your physician uses AI in his or her work. If so, that is a good thing, since the combination of human and machine tends to be better than either alone.

Source: WSJ


KC Cheung
KC Cheung has over 18 years' experience in the technology industry, including media, payments and software, and has a keen interest in artificial intelligence, machine learning, deep learning, neural networks and their applications in business. Over the years he has worked with some of the leading technology companies, building and growing dynamic teams in a fast-moving international environment.