Eight million patients in Germany (800 million worldwide) suffer from chronic skin diseases. Approximately 2% of these patients suffer from psoriasis and 2% from neurodermatitis. At the dermatology clinics of the University Medical Centre Mannheim (UMM) and the University Hospital Würzburg (UKW), this leads to at least 10,000 patient contacts per year. Waiting times for a university treatment appointment average over six months, which makes access to the health care system and to optimal, fast therapy more difficult. Psoriasis is a chronic autoimmune disease that primarily affects the skin, causing it to develop red, scaly patches. The HSA KIT deep learning software can detect skin affected by psoriasis, and the specialists at HS Analysis use this advanced AI software to support the diagnosis of the disease in clinics, institutions, and other health care facilities.
Knowing your psoriasis type can help your healthcare provider create a treatment plan. Most people experience one type at a time, but it is possible to have more than one type of psoriasis.
HSA KIT is built on machine learning and deep learning methods. Artificial intelligence refers to the simulation of human intelligence in machines that are programmed to perform tasks which typically require human intelligence: AI systems are designed to perceive their environment, reason about it, and take appropriate actions to achieve specific goals. Deep learning can be used to identify skin diseases by leveraging its ability to learn intricate patterns and features from large datasets.
The HS Analysis touch:
One key technology for the automatic interpretation of tissue samples in the HS Analysis software is the latest artificial intelligence. We are developing deep learning in the cloud to analyze smartphone images in 2D, but also surface features (heatmaps, and thus 3D) with CNNs. We can create ground truth data and train models both for the detection of skin and for the detection of plaques.
How we develop our AI: First, we collect real image data and annotate it by color. For skin we use yellow and annotate all visible skin areas in the picture, in order to obtain the best possible AI model with a minimum of mistakes.
Examples of skin AI annotations:
In the next step, we annotate inflamed areas of the skin in red and then train the model to automatically detect the plaques.
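The color-coding described above can be sketched as a simple array operation. This is an illustrative reconstruction, not the HSA KIT implementation: it assumes annotations are stored as an RGB image with pure yellow marking skin and pure red marking plaques, and converts them into per-class binary masks for training.

```python
import numpy as np

# Hypothetical sketch: convert a color-coded annotation image into
# per-class binary masks. Assumes pure yellow (255, 255, 0) marks skin
# and pure red (255, 0, 0) marks inflamed plaques, as described above.
CLASS_COLORS = {
    "skin": (255, 255, 0),    # yellow annotations
    "plaque": (255, 0, 0),    # red annotations
}

def annotation_to_masks(rgb):
    """Return a dict of boolean masks, one per annotated class."""
    masks = {}
    for name, color in CLASS_COLORS.items():
        masks[name] = np.all(rgb == np.array(color), axis=-1)
    return masks

# Tiny 2x2 example image: one skin pixel, one plaque pixel.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (255, 255, 0)   # skin
img[1, 1] = (255, 0, 0)     # plaque
masks = annotation_to_masks(img)
```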
Examples of plaque AI annotations:
This is the result of the skin and plaque models after training, applied to another skin area and shown with different overlay opacities:
Here we see a segmentation of skin, shown in orange, detected by the first deep learning model, and plaques, shown in red, detected by the second deep learning model. Both trained models can detect all skin and inflamed areas.
Ground Truth Data (GTD)
Ground Truth Data (GTD) refers to data that is manually annotated or labeled and used to train, validate, or test machine learning models. For a 2D image, Ground Truth Data consists of precise annotations or labels that describe the objects, patterns, or features found in the image.
For instance, in object detection scenarios the Ground Truth Data for a 2D image would involve bounding boxes or segmentation masks around each object of interest, together with a corresponding class label for each object. In image classification tasks, the Ground Truth Data would consist of the class labels assigned to each image.
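The annotation types described above can be combined into one ground-truth record per image. The structure below is an illustrative sketch (field names and sizes are hypothetical, not the HSA KIT data format):

```python
import numpy as np

# Illustrative sketch: one ground-truth record for a 2D image,
# combining boxes, class labels, and a segmentation mask.
ground_truth = {
    "image_id": "patient_001.jpg",
    # One axis-aligned bounding box per object: (x_min, y_min, x_max, y_max)
    "boxes": [(10, 20, 60, 80), (70, 15, 120, 90)],
    # One class label per object, aligned with "boxes"
    "labels": ["skin", "plaque"],
    # A segmentation mask: 0 = background, 1 = skin, 2 = plaque
    "mask": np.zeros((128, 128), dtype=np.uint8),
}
ground_truth["mask"][20:80, 10:60] = 1   # skin region
ground_truth["mask"][15:90, 70:120] = 2  # plaque region

def validate_record(gt):
    """Basic consistency check: one label per box, only known mask classes."""
    assert len(gt["boxes"]) == len(gt["labels"])
    assert set(np.unique(gt["mask"])) <= {0, 1, 2}
    return True
```

A check like `validate_record` is useful because misaligned boxes and labels silently corrupt training.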
This table lists the annotation classes per number of files, as well as the total number of annotations:
| Files number | Annotated image / Base ROI | Number of skin annotations | Percentage of total annotations | Number of hyperpigmentation annotations | Percentage of total annotations | Number of inflamed annotations | Percentage of total annotations |
| --- | --- | --- | --- | --- | --- | --- | --- |
| The total number of all annotations | 8529 | | | | | | |
After the creation of the GTD, the settings in this table were used to train models with three different architectures.
| Model Type | Dataset approach | Epochs | Learning Rate | Batch Size |
| --- | --- | --- | --- | --- |
Model training:
We train each model individually, then test and optimize it. We use our testing dataset to evaluate how well the AI model distinguishes psoriasis from other skin conditions and from normal skin.
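One common way to score such a segmentation evaluation is an overlap metric like the Dice coefficient. The sketch below is illustrative; the source does not specify which metric the HSA KIT evaluation pipeline actually uses:

```python
import numpy as np

# Minimal sketch: compare a predicted mask against a ground-truth mask
# with the Dice coefficient, Dice = 2|A ∩ B| / (|A| + |B|).
def dice_score(pred, truth):
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

truth = np.zeros((4, 4), dtype=bool)
truth[:2, :] = True          # 8 ground-truth plaque pixels
pred = np.zeros((4, 4), dtype=bool)
pred[:2, :2] = True          # the model found 4 of them
score = dice_score(pred, truth)   # 2*4 / (4 + 8) = 0.666...
```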
This image shows the process of developing the AI. It is just one example: during development we applied the models to various images and skin areas.
For training we have datasets consisting of various images, and each image is processed 100 times. The model types available for training are:
- Classification: assign objects to different classes
- Object Detection: detect an object and draw a bounding box around it
- Segmentation: detect an object and draw an exact border around it
- Instance Segmentation: segmentation + differentiation between touching objects
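The step that distinguishes instance segmentation, separating touching or neighboring objects into individual instances, can be illustrated with simple connected-component labeling. Real instance segmentation models (e.g. Mask R-CNN) learn this end to end; this pure-array version is only a sketch:

```python
import numpy as np
from collections import deque

# Illustrative sketch: split a binary segmentation mask into separately
# labeled objects using 4-connected component labeling (BFS flood fill).
def label_instances(mask):
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for y in range(mask.shape[0]):
        for x in range(mask.shape[1]):
            if mask[y, x] and labels[y, x] == 0:
                current += 1                  # start a new instance
                queue = deque([(y, x)])
                labels[y, x] = current
                while queue:
                    cy, cx = queue.popleft()
                    # visit the 4 neighbors; same region -> same label
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            queue.append((ny, nx))
    return labels, current

mask = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 1]], dtype=bool)
labels, n = label_instances(mask)   # two separate plaque instances
```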
We select the Instance Segmentation model type, which detects objects, draws exact borders around them, and differentiates between touching objects. The structures depend on which AI we want to train, i.e. skin or inflamed areas. As augmentations we use horizontal flip, vertical flip, and rotation, since they do not change the meaning of the image, and we train the newest model version based on the existing versions.
- Horizontal Flip:
- Suitable for: Natural scenes, animals, objects without a specific orientation.
- Not suitable for: Text, scenes with a clear left-to-right or right-to-left context, images with directional signs, etc.
- Vertical Flip:
- Suitable for: Reflections in water, some abstract art.
- Not suitable for: Most real-world images, as a vertical flip can make them look unnatural. For example, flipping a person upside down.
- Rotation:
- Small rotations (e.g., ±10°) can be suitable for most images to simulate the effect of tilting a camera.
- Large rotations (e.g., 90°, 180°) can change the context and might not be suitable for all images. For instance, rotating a portrait of a person by 90° or 180° would look odd.
The Future of HSA KIT AI
A very important aspect of this project is improving the AI model in the future. There are many ideas we can integrate and develop to improve the quality and versatility of the model, so that it can detect skin and plaques in many different photos despite different types of noise and artefacts, and still deliver accurate results. Looking at these challenges, we find that artefacts come in several types, and the task for the AI is to detect the correct targets even when they are present in the image.
Here are some of the types of noise and artefacts that we want to improve on:
1- Image blurring: Blurring is one of the most common problems that hinder the AI from detecting correctly; solving it will make it much easier to distinguish skin and plaques from other background objects.
2- Lighting and shadows: Another very common artefact is the presence of light and shadow, and the resulting contrast differences in the image that derail the detection.
3- Out-of-focus images: Almost all of the images used to train the AI model come from smartphones, and sometimes patients send images that are out of focus.
4- Edge and border accuracy: For an excellent, highly accurate model, annotating edges and borders precisely is very important, so that no unwanted objects or pixels are included that would affect the model's accuracy.
5- Unwanted images: The ability to automatically exclude images that are not usable for annotation, whether because they contain no skin, show the patient's identity, or are inappropriate or private.
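As one hedged sketch of how blurred or out-of-focus uploads could be flagged automatically, the variance of the image Laplacian is a simple, classical sharpness indicator: sharp edges produce high variance, smooth out-of-focus images produce low variance. The arrays below are toy stand-ins, and this is not the HSA KIT implementation:

```python
import numpy as np

# Variance of the 4-neighbor Laplacian of a 2D grayscale array.
# Low variance suggests a blurred / out-of-focus image.
def laplacian_variance(gray):
    g = gray.astype(float)
    lap = (-4 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]     # vertical neighbors
           + g[1:-1, :-2] + g[1:-1, 2:])    # horizontal neighbors
    return lap.var()

sharp = np.zeros((8, 8))
sharp[:, 4:] = 255.0                          # hard vertical edge
blurred = np.tile(np.linspace(0, 255, 8), (8, 1))  # smooth gradient
sharp_score = laplacian_variance(sharp)
blurred_score = laplacian_variance(blurred)   # much lower
```

A threshold on this score (tuned on real data) could then route low-scoring uploads to a "please retake the photo" response instead of annotation.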
Overcoming these challenges in the future will make the AI model considerably more accurate, with state-of-the-art annotations.
Explainable Artificial Intelligence (xAI)
The primary goal of xAI is to achieve understandable AI decisions. This is realized through methods such as Feature Visualization, Feature Attribution, and the use of Surrogate Models. These techniques aim to visually show which parts of data the model deems important, assign scores to individual data features based on their impact on the output, and approximate complex model decisions using simpler, interpretable models. The importance of xAI cannot be overstated; it fosters trust, aids in model validation, and ensures compliance with regulations that mandate transparency in automated decisions.
Activation Maps: Activation Maps offer visual insights into which parts of input data, like images, activate certain layers or neurons in a neural network. This is typically executed through the use of heatmaps that overlay the input data, thereby highlighting the areas of significant activation. The primary value of Activation Maps lies in their capability to elucidate which parts of the input data a neural network, especially a Convolutional Neural Network (CNN), is emphasizing or focusing on.
Original image (left), heatmap output (right)
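A minimal sketch of how such a heatmap can be derived from a layer's activations, assuming the common approach of averaging over the channel axis and normalizing to [0, 1] for overlay. The feature maps here are random stand-ins, not outputs of a real network:

```python
import numpy as np

# Average a layer's feature maps over the channel axis, then rescale
# to [0, 1] so the result can be overlaid on the input image.
def activation_heatmap(feature_maps):
    """feature_maps: array of shape (channels, H, W) -> heatmap (H, W)."""
    hm = feature_maps.mean(axis=0)
    hm = hm - hm.min()
    if hm.max() > 0:
        hm = hm / hm.max()
    return hm

rng = np.random.default_rng(0)
features = rng.random((16, 4, 4))   # 16 channels of a 4x4 layer
heatmap = activation_heatmap(features)
```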
Activation Matrices: These are multidimensional arrays representing neuron output values in neural network layers, often seen in convolutional layers of a CNN. They provide insights into how input data is processed and transformed within the network. Visualizing these as heatmaps can aid in understanding the network’s feature detection and is useful for debugging and optimization.
Class Activation Mapping (CAM): CAM’s chief objective is to pinpoint which regions in an image play a pivotal role in determining its classification by a CNN. This is accomplished by leveraging the weights from the global average pooling layer in a CNN to generate a heatmap of the image, emphasizing the crucial regions. By identifying the regions in an image that significantly influence its classification, CAM serves as a powerful tool for the visual interpretation of CNN decisions, ensuring the model’s focus on the correct image features.
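The CAM computation described above can be sketched in a few lines: the heatmap for a class is the sum of the final convolutional layer's feature maps, weighted by that class's weights from the layer after global average pooling. Feature maps and weights below are random stand-ins for illustration:

```python
import numpy as np

# Class Activation Mapping sketch: weighted sum of feature maps,
# where the weights come from the classifier layer after global
# average pooling, for one target class.
def class_activation_map(feature_maps, class_weights):
    """feature_maps: (channels, H, W); class_weights: (channels,)."""
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # -> (H, W)
    cam = np.maximum(cam, 0)     # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()    # normalize for visualization
    return cam

rng = np.random.default_rng(1)
fmaps = rng.random((8, 7, 7))    # final conv-layer features
weights = rng.random(8)          # GAP-classifier weights for one class
cam = class_activation_map(fmaps, weights)
```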
Note: This website will be updated in the future.