
10160 - Prostate Cancer Detection, AI

DEVELOPED FOR TUMOR DETECTION IN H&E STAINED PROSTATE TISSUE

Prostate cancer is the second most common cancer in men, with an estimated 1.1 million diagnoses worldwide in 2012, accounting for 15% of all cancers diagnosed [1]. Research in prostate cancer is important to improve diagnosis and to choose the best treatment for individuals at all stages of the disease. Automated tumor detection can help identify regions of interest and provide numerical data in a scalable fashion.

This APP utilizes AI/deep learning and has been trained to detect tumors in images of prostate tissue stained with H&E. The deep learning architecture enables the APP to recognize complex structures and interpret the tissue context when analyzing an image, making it an efficient tool for detecting even small tumors that are not easily noticed. The APP does not grade the tumors.

A total of 136 slides were manually assessed and classified as positive or negative, resulting in 53 negative and 83 positive slides. The same slides were analyzed with the APP, where a Total Tumor Length above a cut-off of 0 mm classified a slide as positive. The agreement between manual and APP slide classification is shown below.

  Sensitivity    97.9 %
  Specificity    83.0 %
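These two figures follow directly from the confusion counts between manual and APP classification, with the APP call derived from Total Tumor Length using the cut-off rule above. A minimal sketch (the function names and toy data below are illustrative only, not the study's actual slides or confusion matrix):

```python
def confusion_counts(manual_labels, app_tumor_lengths, cutoff_mm=0.0):
    """Compare manual positive/negative calls with APP calls derived
    from Total Tumor Length using a > cutoff rule."""
    tp = fp = tn = fn = 0
    for manual_pos, length in zip(manual_labels, app_tumor_lengths):
        app_pos = length > cutoff_mm
        if manual_pos and app_pos:
            tp += 1
        elif manual_pos:
            fn += 1
        elif app_pos:
            fp += 1
        else:
            tn += 1
    return tp, fp, tn, fn

def sensitivity(tp, fn):
    return tp / (tp + fn)   # fraction of manual positives the APP finds

def specificity(tn, fp):
    return tn / (tn + fp)   # fraction of manual negatives the APP confirms

# Illustrative toy data, not the study's 136 slides.
manual = [True, True, True, False, False]
lengths = [0.258, 1.4, 0.0, 0.0, 0.31]   # Total Tumor Length [mm] from the APP
tp, fp, tn, fn = confusion_counts(manual, lengths)
print(sensitivity(tp, fn))   # 2/3
print(specificity(tn, fp))   # 1/2
```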

Figure 1  The plot shows the Total Tumor Length calculated by the APP for the 136 samples; negative in orange, positive in green. The first non-zero occurrence has a value of 0.258 mm; all negative samples to the left of this point and all positive samples to the right of it were correctly classified compared to manual assessment.

KEYWORDS
Prostate cancer, H&E, Hematoxylin, Eosin, Deep learning, AI, Image analysis, DeepLabv3+, Adenocarcinoma

METHODS
The APP was developed using the DeepLabv3+ neural network available with Author™ AI. The neural network uses a cascade of layers of nonlinear processing units for feature extraction and transformation, with each successive layer using the output of the previous layer as input. DeepLabv3+ uses an encoder-decoder structure with atrous spatial pyramid pooling (ASPP), which encodes multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields of view.

This means that instead of using stepwise upsampling blocks to incorporate features from different levels, the network needs only two upsampling steps, making it faster to train and run than, e.g., the U-Net. It also means that the decoder module can refine the segmentation results along object boundaries more precisely. For more information on the network architecture, see [2].
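To illustrate the atrous (dilated) filtering that ASPP builds on: a rate-r filter samples its input with gaps of r - 1 positions, enlarging the field of view without adding weights. A minimal 1-D sketch in plain Python (illustrative only, not the APP's actual implementation, which operates on 2-D feature maps):

```python
def atrous_conv1d(signal, kernel, rate):
    """1-D atrous (dilated) convolution with 'valid' padding.

    With rate=1 this is an ordinary convolution; a larger rate widens the
    effective field of view to (len(kernel) - 1) * rate + 1 samples while
    keeping the same number of weights.
    """
    span = (len(kernel) - 1) * rate
    return [
        sum(w * signal[i + j * rate] for j, w in enumerate(kernel))
        for i in range(len(signal) - span)
    ]

signal = [1, 2, 3, 4, 5, 6]
kernel = [1, 1, 1]                            # simple box filter
print(atrous_conv1d(signal, kernel, rate=1))  # [6, 9, 12, 15] – 3-sample view
print(atrous_conv1d(signal, kernel, rate=2))  # [9, 12]        – 5-sample view
```

ASPP applies several such rates in parallel to the same features and combines the results, which is how it captures context at multiple scales at once.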

QUANTITATIVE OUTPUT VARIABLES
The output variables obtained from this protocol are:

  • Total Tumor Area [mm²]
  • Total Tumor Length [mm]
  • Tumor Percentage [%]
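How the three variables aggregate from per-region measurements can be sketched as follows; note that the area-based definition of Tumor Percentage (tumor area over total tissue area) is an assumption inferred from the variable names, not a documented formula from the APP:

```python
def summarize_tumors(tumor_areas_mm2, tumor_lengths_mm, tissue_area_mm2):
    """Aggregate per-region tumor measurements into APP-style outputs.

    tumor_areas_mm2 / tumor_lengths_mm: one entry per detected tumor region.
    tissue_area_mm2: total detected tissue area on the slide.
    NOTE: the percentage definition here is an assumption (area-based).
    """
    total_area = sum(tumor_areas_mm2)             # Total Tumor Area [mm²]
    total_length = sum(tumor_lengths_mm)          # Total Tumor Length [mm]
    pct = 100.0 * total_area / tissue_area_mm2    # Tumor Percentage [%]
    return total_area, total_length, pct

# Two hypothetical tumor regions on 4.0 mm² of detected tissue.
area, length, pct = summarize_tumors([0.12, 0.08], [0.9, 0.6],
                                     tissue_area_mm2=4.0)
print(area, length, pct)
```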

WORKFLOW
The APP contains four protocols:

  1. Tissue Detect: Outlines tissue on the slide for further analysis.
  2. Tumor Detect: Identifies possible tumors using AI.
  3. Post-Processing: Post-processes the classification result, improving accuracy and visualization.
  4. Calculate Results: Calculates results based on tumor outlines.

The principle of procedure for the APP is as follows:

  1. Load digital images of prostate tissue stained with H&E.
  2. Select the Prostate Cancer Detection, AI APP and run the analysis.
  3. Review results.
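Conceptually, the four protocols chain into a simple pipeline. The sketch below mirrors that chain on a toy slide represented as a grid of labels ('.' background, 'S' stroma/benign tissue, 'T' tumor); every function here is a hypothetical stand-in for illustration, not part of the software's API:

```python
# Hypothetical stand-ins for the four protocols, on a toy label grid.
def detect_tissue(slide):
    # 1. Tissue Detect: keep coordinates that are not background ('.')
    return [(r, c) for r, row in enumerate(slide)
                   for c, px in enumerate(row) if px != '.']

def detect_tumors_ai(slide, tissue):
    # 2. Tumor Detect: stand-in for the deep-learning classifier
    return [(r, c) for (r, c) in tissue if slide[r][c] == 'T']

def neighbours(p):
    r, c = p
    return [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]

def post_process(tumor_pixels):
    # 3. Post-Processing: toy rule dropping isolated single-pixel hits
    keep = set(tumor_pixels)
    return [p for p in tumor_pixels
            if any(q in keep for q in neighbours(p))]

def calculate_results(tumor_pixels, tissue, mm2_per_pixel=0.0001):
    # 4. Calculate Results: areas from pixel counts (made-up pixel size)
    tumor_area = len(tumor_pixels) * mm2_per_pixel
    tissue_area = len(tissue) * mm2_per_pixel
    return {"Total Tumor Area [mm2]": tumor_area,
            "Tumor Percentage [%]": 100.0 * tumor_area / tissue_area}

slide = ["..SS",
         ".TTS",
         ".TSS"]
tissue = detect_tissue(slide)
tumors = post_process(detect_tumors_ai(slide, tissue))
print(calculate_results(tumors, tissue))
```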

STAINING PROTOCOL
There is no staining protocol available.

ADDITIONAL INFORMATION
To run the APP, an NVIDIA GPU with a minimum of 4 GB of RAM is required.
The APP utilizes the Visiopharm Engine™, Engine™ AI, and Viewer software modules, where Engine™ and Engine™ AI provide an execution platform that expands the processing capability and speed of image analysis. The Viewer allows fast review together with several types of image adjustment properties, e.g. outlining of regions, annotations, and direct measurements of distance, curve length, radius, etc.
By adding the Author™ and/or Author™ AI modules, the APP can be customized to fit other purposes. These modules offer a comprehensive and dedicated set of tools for creating new fit-for-purpose analysis APPs, with no programming experience required.

REFERENCES

LITERATURE

  1. European Association of Urology, Prostate Cancer, https://uroweb.org/guideline/prostate-cancer/, Accessed: February 19, 2020.
  2. Chen, L., et al., Encoder-decoder with atrous separable convolution for semantic image segmentation, Proceedings of the European Conference on Computer Vision (ECCV), 2018, 801-818, DOI

USERS

The APP was developed in collaboration with University Medical Center Groningen, The Netherlands.

RUO
FIGURE 1
Image of H&E stained needle biopsies from prostate tissue to be analyzed.
FIGURE 2
All relevant tissue is automatically outlined (in purple) for further analysis.
FIGURE 3
Any tumors are outlined in red.
FIGURE 4
Zooming in, the outlined tumors are reviewed in detail.