Recently, two leading technology companies partnered to deploy deep-learning artificial intelligence (AI) in a bid to shorten the time between medical imaging, diagnosis, and the start of treatment.
The project, a partnership between GE Healthcare and Intel, promises to provide doctors with automated diagnostic alerts for certain conditions within seconds of completing medical imaging.
It uses the Intel Distribution of OpenVINO toolkit, running on Intel processor-based X-ray machines, to help streamline and prioritize patient care.
With this system, X-ray technicians, radiologists, and critical care teams will receive an instant notification to assess critical findings, potentially expediting patient diagnosis.
David Ryan, general manager of the Health and Life Sciences Sector in Intel's Internet of Things Group, said that the AI imaging models are optimized for deployment and inference using OpenVINO's Model Optimizer component.
The optimized models are then embedded in the GE application through OpenVINO's inference engine APIs.
As the machines acquire X-ray images, the inference engine runs the images through the models to support clinical diagnosis.
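The workflow Ryan describes, in which a trained model is optimized for the device and incoming X-rays are run through the inference engine, can be sketched roughly as follows. This is a hedged illustration, not GE's code: the function names, the `pneumothorax.xml` model file, and the 0.5 alert threshold are assumptions for illustration; only the `openvino.runtime` calls reflect the toolkit's public Python API.

```python
import numpy as np


def preprocess_xray(pixels: np.ndarray, size: int = 224) -> np.ndarray:
    """Resize a 2-D X-ray to size x size, scale to [0, 1], and add
    batch and channel axes (NCHW layout, as OpenVINO models expect)."""
    h, w = pixels.shape
    rows = np.arange(size) * h // size  # nearest-neighbour resize,
    cols = np.arange(size) * w // size  # no external dependencies
    resized = pixels[np.ix_(rows, cols)].astype(np.float32)
    span = float(resized.max() - resized.min())
    resized = (resized - resized.min()) / (span if span > 0 else 1.0)
    return resized[np.newaxis, np.newaxis, :, :]  # shape (1, 1, size, size)


def run_inference(tensor: np.ndarray) -> np.ndarray:
    """Run the tensor through an OpenVINO IR model (hypothetical file name).
    The import is deferred so the rest of this sketch runs without OpenVINO."""
    from openvino.runtime import Core
    core = Core()
    compiled = core.compile_model("pneumothorax.xml", "CPU")  # hypothetical IR
    return compiled([tensor])[compiled.output(0)]


def should_alert(probability: float, threshold: float = 0.5) -> bool:
    """Flag the study for immediate review; the threshold is illustrative."""
    return probability >= threshold
```

In practice the model file would be produced offline by the Model Optimizer from a trained network, and the compiled model would be loaded once at startup rather than per image.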
Keith Bigelow, senior vice president of Edison Portfolio Strategy at GE Healthcare, said that medical imaging is the largest and fastest-growing data source in the healthcare industry.
Even though medical imaging accounts for 90% of all healthcare data, more than 97% of it goes unused and unanalyzed.
“Before now, processing this massive volume of medical imaging data could lead to longer turnaround times from image acquisition to diagnosis to care. Meanwhile, patients’ health could decline while they wait for a diagnosis. Especially when it comes to critical conditions, rapid analysis and escalation are essential to accelerate treatment,” he said.
Bigelow believes the most crucial application of this technology lies in earlier detection of a potentially life-threatening condition: pneumothorax, also known as a collapsed lung.
He added that radiologists can now deploy optimized predictive algorithms capable of scanning for and detecting pneumothorax “within seconds at the point of care.”
“Deploying deep learning solutions on existing infrastructure delivers the potential to power more efficient and effective care, enhance decision-making, and drive greater value for patients and providers,” he said. “For the more than 12,000 Australians diagnosed with lung cancer each year, this means a higher chance of survival.”
Ryan asserted that deep learning was a promising technique for radiology, as its models can easily be trained to identify the desired features in a given image.
“Furthermore, training is done by giving numerous labeled example images to the models, without having to specify the exact features to look for. Deep learning can identify details that can be missed by the human eye,” he said.
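Ryan's point, that the model learns from labeled example images rather than from hand-specified features, is the essence of supervised learning. As a toy, self-contained sketch of that workflow, the following trains a logistic regression (a stand-in for a deep network, chosen here only so the example runs without a deep learning framework) on synthetic 8x8 "images" where one class has a brighter center patch; the model is never told to look at the center, it infers that from the labels.

```python
import numpy as np

rng = np.random.default_rng(0)


def make_example(label: int) -> np.ndarray:
    """Synthetic 8x8 image: class 1 has a bright center patch plus noise."""
    img = rng.normal(0.0, 0.3, (8, 8))
    if label:
        img[2:6, 2:6] += 1.0
    return img.ravel()


# Labeled training set: the only supervision is the 0/1 label per image.
X = np.stack([make_example(i % 2) for i in range(200)])
y = np.array([i % 2 for i in range(200)], dtype=float)

# Logistic regression trained by gradient descent; the weights come to
# emphasize the discriminative pixels without being told where they are.
w = np.zeros(64)
b = 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * float(np.mean(p - y))

preds = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
acc = float(np.mean(preds == (y == 1)))
```

A real radiology model would replace the linear layer with a deep convolutional network and the synthetic patches with many thousands of annotated X-rays, but the training loop has the same shape: examples in, labels compared, weights adjusted.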
Ryan added that in future applications, deep learning models could identify incidental findings, help radiologists manage their workload, and improve scan quality by minimizing ‘retakes’, which expose patients to unwanted additional radiation.
“Deep learning is also showing promising results in image reconstruction from the imaging modalities. Future applications of deep learning can extend beyond imaging data to include electronic health records, pathology, cellular microscopy data, etc. to help develop targeted drugs and achieve precision in medicine,” Ryan said.