
How Machine Learning is Shaping Precision Medicine

The past 15 years have seen an explosion of patient data in the form of electronic health records and genetic information. That data has sharpened doctors' view of individual patients and helped produce treatments tailored to their precise needs.

This kind of targeted care is known as precision medicine: treatments or drugs intended for small groups rather than large ones, based on factors such as genetic makeup, medical history, and data recorded by wearable devices.

Back in 2003, the completion of the Human Genome Project brought promises that such treatments would soon emerge, though the outcomes have so far underwhelmed.

Now, modern technologies are helping to revive that promise.

From big corporations to government-funded and university-led collectives, doctors are now using artificial intelligence to develop precision treatments for complex illnesses.

Their primary goal is to distill these massive existing data sets into insights that help keep patients healthy at the individual level.

Those insights could help guide the development of new drugs, uncover new uses for old ones, forecast disease risk, and recommend personalized treatment combinations.

READ MORE: GE Healthcare & Vanderbilt Collaborate on AI-Powered Precision Medicine

READ MORE: Top 10 Ways Artificial Intelligence is Impacting Healthcare

Nearly 83 percent of respondents to a recent survey by Oracle Health Sciences said they expected machine learning and artificial intelligence (AI) to improve treatment recommendations.

And in 2017, Dr. Bertalan Meskó, director of the Medical Futurist Institute, wrote a paper arguing that “there is no precision medicine without AI.”

Though forward-looking, his point reflects a reality: without AI to analyze patient data, that valuable resource would remain mostly untapped.

Thanks to its rapid growth, genetic data has dominated the conversation about personalizing treatment.

“The genome is not enough,” said Eric Topol, a cardiologist, geneticist, and director of the Scripps Research Translational Institute.

“Much more can be gleaned, much more strides can be made once we start working with this multi-modal data.”

By applying machine learning and AI to all these different data sources – sensor data, genetic data, EHRs, lifestyle, and environmental data – researchers are working to develop individualized treatments for illnesses ranging from cancer to depression.
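As a loose illustration of the multi-modal idea, the sketch below merges per-patient features from several sources into one flat feature record that a downstream model could consume. All field names and values here are invented for the example, not drawn from any real system:

```python
# Hypothetical sketch: fusing multi-modal patient data into one
# feature record per patient. All field names are invented.

def fuse_patient_features(genomic, ehr, wearable):
    """Merge per-patient feature dicts from three sources into one
    flat dict, prefixing keys so the sources stay distinguishable."""
    fused = {}
    for prefix, source in (("gen", genomic), ("ehr", ehr), ("wear", wearable)):
        for key, value in source.items():
            fused[f"{prefix}_{key}"] = value
    return fused

patient = fuse_patient_features(
    genomic={"brca1_variant": 1},
    ehr={"age": 62, "prior_chemo": 0},
    wearable={"resting_hr": 71},
)
# `patient` now holds one flat record combining all three modalities
```

The point of the prefixing is practical: when the fused record later feeds a model, a researcher can still trace any feature back to its source modality.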

Check out the biggest challenges and opportunities.

AI-Driven Precision Medicine Is Currently in Its Early Stages

Numerous organizations are beginning to explore artificial intelligence (AI)-powered approaches to precision medicine.

Toronto-based startup Deep Genomics uses AI to reduce the expensive trial and error typical of drug discovery by analyzing massive genomic databases, though its first clinical trial will not take place until 2020.

All of Us, a research program run by the National Institutes of Health (NIH), aims to gather data on one million patients to advance the study of precision medicine.

The program began enrolling participants in May 2018 with a mission of building a large patient database that research organizations can analyze with methods such as AI to develop precision treatments.

In 2016, a leading AI-based diagnostic platform was said to have successfully diagnosed a woman’s cancer through analyzing her genetic data.

However, the software was later found to have over-promised on its potential and to have suggested unsafe treatment methods.

In one of his research projects, Topol uses deep learning to study the microbiomic, cardiovascular, and genetic data of “special populations,” such as 90-year-olds, in a bid to discover the patterns that allow them to stay healthy.

In turn, researchers could use such patterns to develop drugs that neutralize harmful genes, and physicians could use them to predict who is at risk of illness.

“It’s not a clinical project at this juncture,” says Topol. “There are a lot of things sitting in the data, like treasures, that haven’t been discovered yet, because they haven’t had deep learning applied.”

According to Topol, AI-powered diagnostic tools, including an FDA-approved imaging tool for diagnosing diabetic eye disease, are already in hospitals, even though AI-driven treatments remain at an early stage of development.

READ MORE: FDA Grants Breakthrough Device Designation for Bayer & Merck’s AI Software

READ MORE: Artificial Intelligence in Medicine – Top Applications

At the Seattle-based Fred Hutchinson Cancer Research Center, Dr. Krakow studies and treats leukemia patients, especially those whose cancer relapses after a stem cell transplant.

“Past treatments, the current complexity of the disease, side effects—all that info needs to be integrated in order to intelligently choose the new treatment,” said Krakow.

The issue was that the medical record, which physicians depend on for decision-making, did not capture the sequential, personalized nature of treatment.

So Krakow and her colleagues assembled the medical data of 350 relapsed patients and used it to build a machine-learning algorithm that could forecast the ideal treatment sequence for each patient.

The algorithm works by simulating treatment options for the patients in the study and projecting the results, comparing each patient's profile against historical patient data.
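The article does not detail how the simulate-and-compare step is implemented, but the general idea can be sketched as a crude nearest-neighbour lookup: for each candidate treatment, project an outcome from the most similar historical patients who received it. Everything below, from the similarity measure to the data shapes, is a hypothetical simplification:

```python
# Hypothetical sketch of simulate-and-compare treatment selection.
# History records and the similarity measure are invented simplifications.

def similarity(a, b):
    """Crude similarity: count of matching feature values."""
    return sum(1 for k in a if k in b and a[k] == b[k])

def project_outcome(patient, history, treatment, k=3):
    """Average outcome among the k historical patients most similar
    to `patient` who received `treatment`."""
    candidates = [h for h in history if h["treatment"] == treatment]
    candidates.sort(key=lambda h: similarity(patient, h["features"]), reverse=True)
    top = candidates[:k]
    return sum(h["outcome"] for h in top) / len(top) if top else None

def best_treatment(patient, history, options):
    """Simulate each option and pick the one with the best projected outcome."""
    scored = {t: project_outcome(patient, history, t) for t in options}
    return max((t for t in scored if scored[t] is not None), key=lambda t: scored[t])
```

A real system would use far richer patient profiles and a learned similarity or outcome model, but the loop is the same: score every candidate treatment against history, then rank.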

Krakow is currently validating the results.

Although her research aims to produce a tool that can inform the treatment sequence a physician follows, it will also support her future work by establishing a gold-standard record, one that accounts for the chronological nature of cancer treatment that most clinical trials have failed to capture.

“I hope that it becomes a historical control that shows how effective new [cancer] therapies are,” she says, including ones she and her team at Hutchinson hope to pioneer.

Partnership Needed for Precision Medicine

Artificial intelligence (AI)-powered precision medicine integrates computing, statistics, biology, and medicine.

In this field, the most promising research features sustained partnership across institutions and disciplines.

“You do need to work across disciplines now,” says Dr. Liewei Wang of the Mayo Clinic. “It’s not so simple that any individual could do these studies themselves.”

As documented in a 2017 paper, Wang and colleagues at several institutions created a machine-learning (ML) algorithm that predicts whether a psychiatrist should prescribe a given drug to a patient with depression, based on that patient's medical record.

The model was trained on the medical records and genetic data of more than 800 Mayo Clinic patients collected over a 10-year period, and it could forecast with 85 to 90 percent accuracy whether a particular drug would help reduce depressive symptoms.

According to Wang, that compares with a psychiatrist's accuracy rate of 50 to 55 percent.
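The paper's actual model and features are not described here, but the general shape of such a predictor can be sketched as a binary classifier over patient features. The toy version below trains a tiny logistic regression from scratch on invented data; real work would use far richer records and an established ML library:

```python
# Hypothetical sketch: a minimal logistic regression (pure Python,
# gradient descent) predicting drug response from toy patient features.
# The data and features are invented for illustration only.
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit weights and bias by plain gradient descent on log loss."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))      # predicted probability
            err = p - yi                         # gradient of log loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Return 1 if the model predicts the drug will help, else 0."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Toy training data: [has_variant, prior_response] -> drug helped?
X = [[1, 1], [1, 0], [0, 1], [0, 0]]
y = [1, 1, 0, 0]   # in this toy set the genetic variant drives response
w, b = train_logistic(X, y)
```

The headline accuracy figures in studies like Wang's come from evaluating such a model on held-out patients, not on its own training data.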

Wang began the study 10 years ago with a grant from the National Institutes of Health and a team spanning the Duke Institute for Brain Sciences, the Mayo Clinic Center for Individualized Medicine, and the computer science department at the University of Illinois Urbana-Champaign.

Although the outcomes have been replicated across an array of data sets, Wang is still waiting to test them in a clinical setting.

In Seattle, Krakow is part of a recently formed research collective, the Brotman Baty Institute for Precision Medicine, which combines the resources of Seattle Children’s Hospital, Fred Hutchinson, and the University of Washington (UW) School of Medicine into “a platform for enabling labs to collaborate between institutions, within institutions, and also access technologies they might not have otherwise had access to.”

Established in 2017, the institute has completed research showing the effects of numerous variants in a critical breast cancer gene, and it is currently among the three research centers recruiting participants for the All of Us project.

READ MORE: Data Science – 8 Powerful Applications

Data both Hinders and Enables Research

There are numerous obstacles to artificial intelligence in precision medicine, one being that the technology is not yet advanced enough.

Peter Robinson, a computational biologist at the Jackson Laboratory in Farmington, Conn., said that even though artificial intelligence (AI) has mastered chess by observing the moves on the board, it cannot simply observe a doctor’s moves.

The other obstacle involves data, particularly electronic health records. “The deficiencies of EHRs are major and a major stumbling block to applying AI or machine learning to medicine,” says Robinson.

A typical hospital in the United States utilizes 16 different EHR platforms.

If a patient’s data from a non-life-threatening hospital admission lives on one platform and their cancer admission data on another – each with different permissions and formats – an artificial intelligence (AI) system might not have access to the information it needs to recommend individualized treatment.

Even a universal electronic health records system may not help in solving the problem.

Despite requests from researchers, EHR vendors have yet to incorporate genetic data into their records, and electronic health records are often insufficient, inaccurate, or incompletely detailed – and researchers are wary of training artificial intelligence (AI) on faulty data.

Artificial intelligence also has a hard time digesting the many formats that make up medical data.
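To make the format problem concrete, the sketch below shows the same patient fact arriving in two incompatible (and entirely made-up) export shapes; a pipeline must normalize both into one schema before any model can use them:

```python
# Hypothetical illustration of the EHR format problem. Both export
# shapes and all field names are invented; real systems would involve
# standards such as HL7/FHIR and many more fields.

def normalize(record):
    """Map two made-up EHR export shapes onto one common schema."""
    if "pt_dob" in record:                       # flat "system A" export
        return {"dob": record["pt_dob"], "dx": record["diag_code"]}
    if "patient" in record:                      # nested "system B" export
        return {"dob": record["patient"]["birthDate"],
                "dx": record["conditions"][0]["code"]}
    raise ValueError("unknown EHR export format")

a = normalize({"pt_dob": "1956-03-02", "diag_code": "C91.0"})
b = normalize({"patient": {"birthDate": "1956-03-02"},
               "conditions": [{"code": "C91.0"}]})
assert a == b  # only after normalization do the two records agree
```

Every new source format means another branch like these, which is part of why 16 EHR platforms in one hospital is such a drag on AI work.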

A few years back, Krakow was excited by the possibility of using IBM’s Watson to process both structured and unstructured patient data for her individualized treatments, but realized “it just wasn’t at that stage” when she learned of Watson’s failed implementation at New York’s Memorial Sloan Kettering Cancer Center.


KC Cheung
KC Cheung has over 18 years experience in the technology industry including media, payments, and software and has a keen interest in artificial intelligence, machine learning, deep learning, neural networks and its applications in business. Over the years he has worked with some of the leading technology companies, building and growing dynamic teams in a fast moving international environment.