Overview:

Ophthalmic tumors are tumors inside the eye: collections of cells that grow and multiply abnormally and form masses. They can be benign or malignant.

Using the HSA KIT module "Ophthal Tumor Classifier" and its AI algorithms as a guided tool in hospitals and research centers is a valuable approach to improving healthcare and the treatment of diseases in ophthalmology, including the detection and analysis of ophthalmic tumors. AI has the potential to analyze large amounts of medical data quickly and accurately and to identify patterns and insights that may not be immediately apparent to human experts.

By utilizing AI algorithms to distinguish tumors from non-tumor tissue, specialists at HSA can potentially improve the accuracy and speed of their diagnoses, leading to earlier detection and treatment of ocular tumors. This can be especially important for malignant tumors, where early detection and intervention can be critical for improving patient outcomes.

HS Analysis integration with hardware and software devices:

The HSA KIT module works seamlessly with the following devices for ophthalmic solutions:

FF 450plus

Medical experts work with this device on retinal diagnostic procedures that produce a unique level of diagnostic accuracy and deliver images with the clearest details, thus providing a dependable basis for first-class treatment outcomes.
Combined with HSA KIT, it allows medical professionals to work more efficiently on the diagnostic procedure and to reach end results faster and more accurately.

CLARUS 700

This device from Zeiss was developed as a comprehensive ultra-widefield fundus camera for ophthalmologists. It records ultra-widefield images in true color and with first-class image quality, and it offers the full range of imaging modalities, including fluorescein angiography. Output from this device can be analyzed using HSA KIT to annotate and highlight tumor cells and abnormal tissue faster using deep learning algorithms.

Heidelberg Engineering HRA + OCT Spectralis

The SPECTRALIS® is an ophthalmic imaging platform with an upgradable, modular design. This platform allows clinicians to configure each SPECTRALIS to the specific diagnostic workflow in the practice or clinic.
Multimodal imaging options include OCT, multiple scanning laser fundus imaging modalities, widefield and ultra-widefield imaging, scanning laser angiography, and OCT angiography.
These images can then be used in the HSA KIT software to highlight tumor cells accurately and in a short time.

Ophthalmoscopy

Ophthalmoscopy is an examination of the back part of the eye (fundus), which includes the retina, optic disc, choroid, and blood vessels. The scanning laser ophthalmoscope used in the HSA KIT module "Ophthal Tumor Classifier" is the Optos Panoramic Ophthalmoscope P200Tx.

This confocal scanning laser ophthalmoscope is a widefield digital imaging device that can capture images of the retina from the central pole to the far periphery. The retinal images are captured automatically and in a patient-friendly manner, with no scleral depression or contact with the cornea. The images captured by this device are scanned and analyzed using the HSA KIT software to generate a highlighted and annotated output image of tumor cells.

Modalities used for diagnosis

The world of intraocular imaging is exploding with new technology. For years, fundus photography, fluorescein angiography, and ocular ultrasonography prevailed, but now there are even more choices, including microimaging modalities such as optical coherence tomography (OCT), OCT angiography (OCTA), fundus autofluorescence (FAF), and indocyanine green angiography (ICGA). There are also the macroimaging technologies computed tomography (CT) and magnetic resonance imaging (MRI). Yet to be explored fully for use with intraocular tumors are multispectral imaging, dark adaptation, and adaptive optics. With all of these refined tools, how is a clinician to choose the appropriate one?

The most effective tool for assessing intraocular tumors is indirect ophthalmoscopy in the hands of an experienced ocular oncologist. Numerous factors go into the equation of tumor diagnosis, such as lesion configuration, surface contour, associated tumor seeding, presence and extent of subretinal fluid, shades of tumor coloration, intrinsic vascularity, and others that hint at the diagnosis and direct our thoughts on management. For example, an orange-yellow mass deep to the retinal pigment epithelium (RPE) or in the choroid with surrounding hemorrhage and/or exudation would be suspicious for peripheral exudative hemorrhagic chorioretinopathy (versus choroidal metastasis from renal cell carcinoma or cutaneous melanoma). The combination of features leads to pattern recognition.

Detection and Analysis

HSA KIT software is a useful tool to aid in the early detection and analysis of eyelid tumors. With its assistance, primary care providers and ophthalmologists can potentially improve their ability to differentiate between tumors and non-tumor conditions, ultimately leading to earlier diagnoses and more effective treatment options.

The software uses various techniques, such as image recognition algorithms and deep learning models, to identify and classify tumors based on characteristics such as size, shape, and location. It also helps compare images of a patient's eyelids over time to detect any changes that may indicate tumor growth or progression.

Overall, the development of the HSA KIT module "Ophthal Tumor Classifier" represents a promising step towards improving the early detection and analysis of eyelid tumors, which ultimately leads to better patient outcomes and a reduction in morbidity and mortality.

Screenshot of HSA Software with Eye Tumor.

OptosAdvance

OptosAdvance is a comprehensive image management solution for eyecare. It enables clinicians to review, annotate, securely refer and archive images from many eyecare diagnostic devices in their practices using a single, industry-standard DICOM solution.

This, along with the AI technology incorporated within the HSA KIT, allows healthcare professionals to use both software products swiftly and obtain accurate results efficiently.

Creation of Ground Truth Data

To train a deep learning model, ground truth data (GTD) is necessary. Ground truth data is data that has been labeled or classified by human experts and serves as a reference point for the model to learn from. In the context of ophthalmic tumors, GTD could consist of images of the eye that have been manually classified by experts as either containing a tumor or not containing a tumor.

Generating GTD involves several steps. The first step is to load the image files in DICOM format into the HSA KIT module. The images are then manually sorted into the two categories, tumor and non-tumor, and subsequently used to train the deep learning model in HSA KIT, as in the sketch below.
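As an illustration only, the expert sorting step could be scripted as follows. This is a minimal sketch assuming the open-source pydicom library; the folder layout, file names, and labels are hypothetical, and HSA KIT's internal handling of DICOM files is not documented here.

```python
from pathlib import Path
import shutil

import pydicom  # assumed third-party DICOM reader, not part of HSA KIT

# Hypothetical layout: raw exports in data/raw, expert-sorted ground
# truth in data/gtd/tumor and data/gtd/non_tumor.
RAW_DIR = Path("data/raw")
GTD_DIR = Path("data/gtd")

def sort_into_classes(labels: dict) -> None:
    """Copy each DICOM file into the class folder chosen by the expert."""
    for name, label in labels.items():  # label is "tumor" or "non_tumor"
        src = RAW_DIR / name
        pydicom.dcmread(src)  # check that the file parses as valid DICOM
        dst = GTD_DIR / label
        dst.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst / name)

# Hypothetical expert labels for two files
sort_into_classes({"eye_001.dcm": "tumor", "eye_002.dcm": "non_tumor"})
```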

Automated Classification

The HSA KIT module uses AI to automatically classify images and detect tumor cells; the GTD is used to train the underlying model. This allows for more efficient and effective analysis of images.

The "Ophthal Tumor Classifier" module in HSA KIT includes the deep learning model HyperTumorEyenet, developed by HS Analysis GmbH. HyperTumorEyenet (Type1) is based on EfficientNet-B4, while HyperTumorEyenet (Type2) is based on ResNeSt-50D.
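HyperTumorEyenet itself is proprietary, but both named backbones are available in the open-source timm library, so a comparable two-class classifier could be sketched as follows (the timm model identifiers describe an assumed equivalent setup, not HSA code):

```python
import timm  # open-source PyTorch image-model library

# Two-class (tumor / non-tumor) classifiers on the same backbones
type1 = timm.create_model("efficientnet_b4", pretrained=True, num_classes=2)  # ~Type1
type2 = timm.create_model("resnest50d", pretrained=True, num_classes=2)       # ~Type2
```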

How the module functions:

• The software simply categorizes which images include tumors and which do not, as seen in the image below.

• The program then trains a deep learning model that can automatically recognize tumors in other images.

• Non-tumor and tumor eyes are then automatically distinguished using our DL model.

In the HSA KIT module, 810 GTD files were used across two stages of implementation. Initially, the model was trained on only the 405 files available at the time. Afterward, when more files had been acquired, an additional 405 files were added, bringing the total to 810 files.
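The exact training pipeline inside HSA KIT is not public; the following is a minimal sketch of the classify, train, automate cycle described above, using PyTorch, torchvision, and timm. The paths, input size, and hyperparameters are illustrative assumptions, and it presumes the DICOM pixel data has been exported to an ImageFolder-readable format such as PNG.

```python
import timm
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Assumed layout from the ground-truth step: data/gtd/tumor, data/gtd/non_tumor
tfm = transforms.Compose([
    transforms.Resize((380, 380)),  # EfficientNet-B4's usual input size
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("data/gtd", transform=tfm)
loader = DataLoader(dataset, batch_size=8, shuffle=True)

model = timm.create_model("efficientnet_b4", pretrained=True, num_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # epoch count chosen arbitrarily for the sketch
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```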

A screenshot of the HS Analysis software interface during an image classification process.

In the figure above, images highlighted in red indicate that the software classified the sample as a tumor, while images highlighted in green indicate that the sample was classified as non-tumor.

HS Analysis software also provides a summary report in tabular format that includes the names of the processed files, the number of files, and the class and class name of each file (tumor or non-tumor), as shown in the figure below.

Screenshot of HSA KIT table results

The use of proprietary HS Analysis deep learning software can provide healthcare professionals with more accurate and reliable results when analyzing medical images. Such results can be of great help to doctors in making informed decisions for their patients, especially in cases where early detection and treatment are critical to a patient's health outcome.

Deep Learning

Artificial intelligence (AI) is a general term encompassing computer algorithms capable of performing tasks requiring human intelligence. Figure below summarizes the relationship between different AI approaches.

Layers of artificial intelligence approaches applied to medical imaging.

Machine learning (ML) is a subset of AI in which algorithms are trained to solve tasks through feature learning instead of an explicit rules-based approach. When presented with a “training” cohort, the algorithm identifies salient features, which are subsequently used to make predictions. Hence, the “machine” “learns” from the data itself.
Deep learning is a subset of machine learning that focuses on neural networks and the algorithms used to train them. Deep learning comprises many types of neural networks, such as CNNs, RNNs, LSTMs, and GRUs. Although there is overlap in the types of tasks these architectures can solve, each one was created for a specific purpose. A deep learning model requires at least two hidden layers in a neural network (very deep learning involves neural networks with at least ten hidden layers).
This approach is particularly prevalent in the domain of image recognition and analysis, where it has demonstrated superior problem-solving capabilities. One of the most recent examples of AI's advantages is its use in medical imaging for disease diagnosis.
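To make the "at least two hidden layers" definition concrete, here is a minimal PyTorch network that already qualifies as deep under it; the layer sizes are arbitrary and purely illustrative:

```python
from torch import nn

# A minimal "deep" classifier: two hidden layers between input and output
deep_net = nn.Sequential(
    nn.Linear(256, 128),  # hidden layer 1
    nn.ReLU(),
    nn.Linear(128, 64),   # hidden layer 2
    nn.ReLU(),
    nn.Linear(64, 2),     # output: tumor vs. non-tumor
)
```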

Training process in HSA KIT

• Easy-to-learn software

• No tedious manual workflows

• Build your own custom model in 3 steps:

  1. Classify

  2. Train

  3. Automate

• Comparable results

• Ready-to-use APPs

State-of-the-art classification architectures:

HyperTumorEyenet (Type1)

Comparison of the HyperTumorEyenet (Type1) model trained with 50% of the GTD versus 100% of the data.

The two figures show the performance of the Type1 model trained with two separate amounts of GTD: 50% and 100%.

The model achieved an accuracy of 86% with half of the dataset, while it reached an accuracy of 96% when trained with the complete dataset. This indicates that the additional data provided by the complete dataset allowed the model to better capture the patterns and relationships between the features and the ground truth data.

Moreover, the model achieved a precision of 90.38% when trained with the complete dataset, compared to 83.93% when trained with half of the dataset; precision is an important metric when dealing with medical diagnosis. High precision means that the model correctly identified true positives while minimizing false positives.

The recall of the model improved from 94% to 100% when trained with the complete dataset; in both cases the model identified a high percentage of the true positives.

The F1 score of the model also improved, from 87.04% to 96.15%, when trained with the complete dataset. A higher F1 score means that the model's ability to balance precision and recall improved with more data.

Finally, the Cohen's Kappa score also improved, from 72% to 92%, when trained with the complete dataset. A higher Cohen's Kappa score indicates better agreement between the predicted and actual labels, which is important for applications where accuracy is critical.
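All of the metrics quoted here are standard and can be computed from a model's predictions with scikit-learn, as in this minimal sketch; the two label lists are placeholders, not the module's actual outputs:

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, cohen_kappa_score)

# Placeholder labels: 1 = tumor, 0 = non-tumor (not real module output)
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
print("kappa    :", cohen_kappa_score(y_true, y_pred))
```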

HyperTumorEyenet (Type2)

Comparison of the HyperTumorEyenet (Type2) model trained with 50% of the GTD versus 100% of the data.

As in the previous section, the Type2 model was trained on two different amounts of GTD, and the performance of the model was evaluated.

On the left side, the model was trained with half of the dataset, and it achieved an accuracy of 88%, a precision of 83%, a recall of 94%, an F1 score of 88%, and a Cohen's Kappa score of 76%.

On the right side, the model was trained with the complete dataset, and it achieved significantly higher results. Specifically, the model reached an accuracy of 92%, a precision of 90%, a recall of 94%, an F1 score of 92%, and a Cohen's Kappa score of 84%.

Accuracy Performance with Increasing GTD Training Data

Comparison of Model Accuracy Performance with Increasing GTD Training Data.

This figure demonstrates that the accuracy of both models improves when the AI is given more data to train on. Increasing the amount of data an AI system is trained on can lead to improved performance, particularly in supervised learning settings where the AI learns to make predictions from labeled examples.

When an AI system is exposed to more examples, it has more opportunities to learn and generalize patterns in the data, which can improve its accuracy on new, unseen examples. This indicates that the more information the AI system is exposed to, the better its performance becomes, though it is worth noting that the benefit of more data also depends on the quality of that data.
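A data-scaling comparison of this kind can be reproduced, under assumptions, by training the same architecture on a random half of the ground-truth set and then on the full set; this sketch uses torch.utils.data.random_split, and the path and seed are illustrative:

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

dataset = datasets.ImageFolder("data/gtd", transform=transforms.ToTensor())

# 50% subset with a fixed seed so the comparison is repeatable
n_half = len(dataset) // 2
half, _ = random_split(dataset, [n_half, len(dataset) - n_half],
                       generator=torch.Generator().manual_seed(42))

# Train one model on `half` and another on `dataset`, then evaluate
# both on the same held-out test set with the metrics shown earlier.
```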

Comparison of both models

Comparison when a smaller dataset was available.

Comparison of the performance of both models in detecting eye tumors when trained on a limited amount of data.

When less data was available, the Type2 model outperformed the Type1 model across all metrics except recall, which was similar for both models at 94%. This suggests that the Type2 model may be more robust when working with smaller datasets, as it achieved better performance across multiple metrics.

Comparison when a larger dataset was available.

Comparison of both models' performance in detecting eye tumors when trained on a larger dataset.

The module found that, when a larger dataset was provided, the Type1 model performed better than the Type2 model across all metrics evaluated: precision, recall, F1 score, and Cohen's Kappa score, which are commonly used measures of a model's accuracy and consistency.

These results suggest that the Type1 model may be more effective in accurately identifying and classifying cases than the Type2 model.

Differences in Research Findings: A Comparative Analysis

Accuracy results: previous research findings versus HSA KIT.

The HSA KIT module's observations are consistent with previous research: in both cases, the model based on EfficientNet-B4 outperformed the one based on ResNeSt-50D in terms of accuracy. In the previous research, EfficientNet-B4 obtained an accuracy of 83%, outperforming ResNeSt-50D, which achieved an accuracy of 81.13%. In the HSA KIT module, HyperTumorEyenet (Type1) outperformed HyperTumorEyenet (Type2) by 4 percentage points, achieving an accuracy of 96% versus 92%.

Interpretation of xAI

Heatmap comparison.

The xAI results of the HSA KIT module showed that the Grad-CAM method accurately identifies regions of high tissue activity, which are represented with warm colors, while regions of low activity are marked with cool colors or not marked at all if there is no tumor. The Saliency Map technique, in contrast, produced output with slightly lower localization accuracy for the salient regions.

The table above shows that, when comparing the outputs of Grad-CAM and Saliency Maps, the Grad-CAM results were more accurate and precise in terms of heatmap output and localization accuracy, and Grad-CAM images were more understandable and easier to explain than Saliency Maps. The reason is that saliency maps rely solely on the gradients of the output class score with respect to the input image pixels, which can be noisy and less informative than the gradients of the output class score with respect to the feature maps of the last convolutional layer, as used in Grad-CAM.

The module's results indicate that there is no substantial difference in the performance of the two models in detecting tumor areas, but HyperTumorEyenet (Type1) yielded more accurate and precise results, as indicated by the images above. The Grad-CAM heatmap output of HyperTumorEyenet (Type1) is more condensed and concentrated, making it a superior option for medical diagnosis. In conclusion, HSA KIT suggests that the use of Grad-CAM may enhance the interpretability of these models and provide medical utility.
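Both techniques follow standard definitions and can be sketched generically in PyTorch; the code below illustrates those definitions and is not HSA's implementation, with the backbone, target layer, and random input all being assumptions. Grad-CAM weights the last convolutional feature maps by their spatially pooled gradients, whereas a saliency map is just the gradient of the class score with respect to the input pixels:

```python
import timm
import torch

model = timm.create_model("efficientnet_b4", pretrained=True, num_classes=2)
model.eval()

def grad_cam(model, image, target_layer, class_idx):
    """Grad-CAM: gradient-weighted sum of the target layer's feature maps."""
    feats, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.append(go[0]))
    model.zero_grad()
    model(image)[0, class_idx].backward()
    h1.remove()
    h2.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True)  # pooled gradients
    cam = torch.relu((weights * feats[0]).sum(dim=1))  # weighted feature maps
    return cam / (cam.max() + 1e-8)                    # normalize to [0, 1]

def saliency_map(model, image, class_idx):
    """Saliency map: |d(class score) / d(input pixel)| per pixel."""
    image = image.clone().requires_grad_(True)
    model(image)[0, class_idx].backward()
    return image.grad.abs().max(dim=1).values          # max over RGB channels

image = torch.rand(1, 3, 380, 380)                     # placeholder input
cam = grad_cam(model, image, model.conv_head, class_idx=1)
sal = saliency_map(model, image, class_idx=1)
```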

Focus of the HSA KIT module "Ophthal Tumor Classifier"

The module investigates and compares the performance of the HyperTumorEyenet (Type1) and HyperTumorEyenet (Type2) neural networks in detecting eye tumors. Both limited and complete datasets were used to train the AI models, and their performance was evaluated using metrics such as accuracy, precision, and recall. In addition, the module explores the effectiveness of two xAI techniques, Grad-CAM and Saliency Map, for interpreting the neural networks and identifying the important regions of the input images that contribute to the final prediction. Overall, it suggests that both HyperTumorEyenet (Type1) and HyperTumorEyenet (Type2) are effective in detecting eye tumors, with HyperTumorEyenet (Type1) performing significantly better.

In conclusion, the HSA KIT module suggests that HyperTumorEyenet (Type1) is more efficient in detecting eye tumors than HyperTumorEyenet (Type2) when a larger dataset is available, whereas with less data HyperTumorEyenet (Type2) yielded better results. Additionally, it suggests that the use of Grad-CAM may enhance the interpretability of these models and provide medical utility, and it demonstrates that the Grad-CAM output was more accurate and precise when based on the HyperTumorEyenet (Type1) model. Furthermore, the results show that the Grad-CAM technique was more reliable and easier to understand than the output of the Saliency Map.

Note: This website will be updated in the future.