Continuous technological advancement affects nearly every area of life, and medicine in particular has benefited greatly from technologies such as artificial intelligence.
Researchers at MIT and Massachusetts General Hospital recently developed a deep learning algorithm that can assess breast tissue density from mammograms.
The researchers report that the artificial intelligence (AI) technique assesses tissue density, an independent risk factor for breast cancer, as accurately as radiologists do.
A convolutional neural network (CNN) was trained on tens of thousands of high-quality digital mammograms to assess the four Breast Imaging Reporting and Data System (BI-RADS) density categories: fatty, scattered, heterogeneous, and dense.
The researchers say this marks the first time such a deep learning model has been successfully used in a clinic on real patients, where it performs on par with seasoned mammographers.
“A deep learning algorithm was used to reliably and accurately assess mammographic breast density in a large clinical practice,” conclude researchers in a study published earlier this month in the journal Radiology. “Given the high level of agreement between the deep learning algorithm and experienced mammographers, this algorithm has the potential to standardize and automate routine breast density assessment.”
The deep learning model was integrated into routine clinical practice at the breast imaging division of Massachusetts General Hospital, where it assessed more than 10,000 mammograms collected from January to May 2018.
In clinical use, the deep learning model reached 94% agreement with Massachusetts General Hospital radiologists on a binary test of whether breasts were non-dense (fatty or scattered) or dense (heterogeneous or dense).
Moreover, the model matched the radiologists' assessments 90% of the time across all four Breast Imaging Reporting and Data System (BI-RADS) categories.
In earlier testing on the original dataset, the model matched human experts' interpretations 77% of the time across all four BI-RADS categories, and 87% of the time on the binary test.
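To make the two sets of figures concrete, here is a minimal sketch of how the four BI-RADS density categories collapse into the binary dense/non-dense split and how percent agreement between the model and radiologists could be computed. The category strings and helper functions are illustrative assumptions, not code from the study.

```python
# Illustrative sketch (not from the study): collapsing four BI-RADS
# density categories into the binary dense / non-dense split, and
# computing percent agreement between two sets of readings.

BINARY_GROUP = {
    "fatty": "non-dense",
    "scattered": "non-dense",
    "heterogeneous": "dense",
    "dense": "dense",
}

def percent_agreement(model_labels, radiologist_labels):
    """Percentage of cases where the two readings match exactly."""
    matches = sum(m == r for m, r in zip(model_labels, radiologist_labels))
    return 100.0 * matches / len(model_labels)

def binary_agreement(model_labels, radiologist_labels):
    """Agreement after collapsing the four categories to dense/non-dense."""
    return percent_agreement(
        [BINARY_GROUP[m] for m in model_labels],
        [BINARY_GROUP[r] for r in radiologist_labels],
    )

# Toy example: the readers disagree on one of four cases, but the
# disagreement vanishes under the coarser binary grouping.
model = ["fatty", "scattered", "dense", "heterogeneous"]
reader = ["fatty", "scattered", "dense", "dense"]
print(percent_agreement(model, reader))  # 75.0
print(binary_agreement(model, reader))   # 100.0
```

This illustrates why binary agreement (94% in the clinic, 87% in testing) runs higher than four-category agreement (90% and 77%): disagreements between adjacent categories within the same binary group no longer count as mismatches.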
“MGH is a top breast imaging center with high inter-radiologist agreement, and this high-quality dataset enabled us to develop a strong model,” says Adam Yala, second author and a doctoral student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).
“Our motivation was to create an accurate and consistent tool that can be shared and used across healthcare systems,” Yala adds.
Moving forward, the researchers plan to scale the deep learning model to other hospitals.
“Building on this translational experience, we will explore how to transition machine learning algorithms developed at MIT into clinics benefiting millions of patients,” explains Regina Barzilay, a senior author and the Delta Electronics Professor at CSAIL and the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology.
“This is a charter of the new center at MIT—the Abdul Latif Jameel Clinic for Machine Learning in Health at MIT—that was recently launched, and we are excited about new opportunities opened up by this center,” adds Barzilay.