Japanese researchers recently identified the origin and type of various cancer cells with 96 percent accuracy by applying deep learning to phase-contrast microscopy images.
This technique could eventually lead to improved cancer treatments.
The team, based at Osaka University, analyzed the images with a convolutional neural network (CNN), one of the most widely used architectures in deep learning.
Convolutional neural networks operate by applying a series of connected filters, implemented as mathematical functions, that can be trained to extract particular features. In medical imaging, CNNs are loosely modeled on the human visual system: lower layers capture fine details such as edges, while higher layers capture more abstract features that reflect the image as a whole.
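The low-layer filters described above can be illustrated with a minimal sketch: a hand-crafted vertical-edge kernel applied to a tiny synthetic image via the same sliding-window operation a CNN layer performs. In a trained network the kernel weights are learned from data, not written by hand.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image with a vertical edge: dark left half, bright right half.
image = np.zeros((5, 6))
image[:, 3:] = 1.0

# A hand-crafted vertical-edge filter (a trained CNN would learn this).
edge_kernel = np.array([[-1.0, 0.0, 1.0],
                        [-1.0, 0.0, 1.0],
                        [-1.0, 0.0, 1.0]])

response = conv2d(image, edge_kernel)
print(response)  # strongest response in the columns straddling the edge
```

The filter's output is large only where pixel intensity changes from left to right, which is exactly the "edge detector" behavior attributed to a CNN's lower layers.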
Training AI How to Detect Tumors
The idea of applying artificial intelligence to medical imagery, particularly for clinical purposes, has been around for some time. However, many have questioned whether artificial intelligence (AI) can pick out the subtle features that distinguish tumors, especially tumors that have developed some form of resistance to standard therapies.
Identifying a tumor's origin and type is a significant step toward personalizing treatment for a given patient.
For instance, there is no point in exposing patients to radiotherapy if their tumors are radioresistant: doing so wastes valuable time and can harm their health.
Until recently, however, tumor classification was performed primarily by visual inspection, making the process time-consuming and prone to human error.
In the recent study, the researchers from Osaka University designed their convolutional neural network to sort cells into five categories: untreated (control) and X-ray-resistant human cervical tumor cells (the ME-180 line), and untreated (control), X-ray-resistant, and carbon-ion-beam-resistant mouse tumor cells (the NR-S1 line).
They then trained the convolutional neural network on a database of 8,000 phase-contrast microscopy images of these cell types before validating it on 2,000 more images.
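The study's exact data pipeline is not described here, but the 8,000/2,000 training/validation split across five labeled classes can be sketched as follows. The class labels and random assignments are stand-ins, not the study's actual data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the five cell categories described in the study.
classes = ["ME-180 untreated", "ME-180 X-ray-resistant",
           "NR-S1 untreated", "NR-S1 X-ray-resistant",
           "NR-S1 carbon-ion-resistant"]

# Each "image" is represented here by an index and a hypothetical label.
n_total = 10_000
labels = rng.integers(0, len(classes), size=n_total)

# Shuffle, then hold out 2,000 images for validation, mirroring the
# 8,000 / 2,000 split reported in the study.
order = rng.permutation(n_total)
train_idx, val_idx = order[:8000], order[8000:]
train_labels, val_labels = labels[train_idx], labels[val_idx]

print(len(train_idx), len(val_idx))  # 8000 2000
```

Holding the validation images out of training is what makes the 96 percent figure meaningful: it measures performance on images the network has never seen.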
Need for a Comprehensive Database
The convolutional neural network (CNN) correctly classified 96 percent of the cells in the validation dataset, though it had more difficulty with the human cell lines. This was especially true of the X-ray-resistant human cells, which were identified with only 91 percent accuracy, compared with 99 percent for the other tumor types tested.
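The gap between the overall and per-class figures is worth noting: an aggregate accuracy can mask a weaker class. A small sketch with made-up predictions (not the study's data) shows how a 91 percent class and a 99 percent class can sit behind a healthier-looking overall number.

```python
import numpy as np

# Hypothetical ground truth and predictions for two classes of 100 cells each.
y_true = np.array([0] * 100 + [1] * 100)
y_pred = y_true.copy()
y_pred[:9] = 1        # class 0: 9 errors -> 91% per-class accuracy
y_pred[100:101] = 0   # class 1: 1 error  -> 99% per-class accuracy

overall = np.mean(y_pred == y_true)
per_class = [np.mean(y_pred[y_true == c] == c) for c in (0, 1)]
print(overall, per_class)  # 0.95 [0.91, 0.99]
```

Reporting accuracy per class, as the researchers did, is what exposed the network's weaker performance on the X-ray-resistant human cells.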
The pattern was confirmed when the 4,096 features the convolutional neural network derived for each image in both datasets were projected onto a two-dimensional map.
On that map, the three groups of mouse cells were clearly separated from one another, while the two groups of human cells sat relatively close together.
The findings from the study are a significant stepping stone toward a larger goal.
In the future, the researchers plan to train the system on additional cancer cell types, with the ultimate objective of building a universal system that can detect and distinguish such cells automatically.