Electronic Thesis and Dissertation Repository

Thesis Format

Integrated Article

Degree

Doctor of Philosophy

Program

Medical Biophysics

Supervisor

Ward, Aaron D.

Abstract

Pathology reporting on radical prostatectomy (RP) specimens is essential to post-surgery patient care. However, current pathology interpretation of RP sections is typically qualitative and subject to intra- and inter-observer variability, which challenges quantitative and repeatable reporting of lesion grade, size, location, and spread. We therefore developed and validated a software platform that automatically detects and grades cancerous regions on whole slide images (WSIs) of whole-mount RP sections to support quantitative and visual reporting. Our study used hematoxylin and eosin (H&E)-stained WSIs of 299 whole-mount RP sections from 71 patients, comprising 1.2 million 480 μm × 480 μm regions of interest (ROIs) covering benign and cancerous tissue spanning all clinically relevant grade groups. Each cancerous region was annotated and graded by an expert genitourinary pathologist. We used a machine learning approach with seven classifiers (three non-deep-learning and four deep-learning) to classify (1) each ROI as cancerous vs. non-cancerous, and (2) each cancerous ROI as high- vs. low-grade. Because recent studies have found that some subtypes beyond Gleason grade have independent prognostic value, we also used one deep learning method to classify each cancerous ROI from 87 RP sections of 25 patients as each of eight subtypes, to support further clinical pathology research on this topic. We cross-validated each system against the expert annotations. To compensate for staining variability across WSIs from different patients, we computed a tissue component map (TCM) by labelling nucleus pixels with our proposed adaptive thresholding algorithm, labelling lumen pixels with global thresholding, and assigning the remaining pixels to stroma/other. Fine-tuning AlexNet with ROIs of the TCM yielded the best results for prostate cancer (PCa) detection and grading, with areas under the receiver operating characteristic curve (AUCs) of 0.98 and 0.93, respectively, followed by fine-tuned AlexNet with ROIs of the raw image. For subtype grading, fine-tuning AlexNet with ROIs of the raw image yielded AUCs ≥ 0.7 for seven of the eight subtypes. In conclusion, deep learning approaches outperformed non-deep-learning approaches for PCa detection and grading, the TCMs provided the primary cues for PCa detection and grading, and machine learning can be used for subtype grading beyond the Gleason grading system.
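
The TCM labelling described above can be illustrated with a minimal Python sketch. It assumes the three-label scheme stated in the abstract (nucleus pixels via adaptive thresholding, lumen pixels via global thresholding, stroma/other for the remainder); the function name, block size, and lumen threshold are illustrative placeholders rather than the thesis implementation.

    # Tissue component map sketch (assumptions noted in the lead-in above).
    import numpy as np
    from skimage.color import rgb2gray
    from skimage.filters import threshold_local

    NUCLEUS, LUMEN, STROMA = 0, 1, 2

    def tissue_component_map(roi_rgb, block_size=51, lumen_threshold=0.85):
        """Label each pixel of an H&E ROI as nucleus, lumen, or stroma/other."""
        gray = rgb2gray(roi_rgb)  # intensities scaled to [0, 1]
        # Adaptive (local) threshold: nuclei are darker than their local
        # neighbourhood, which reduces sensitivity to stain intensity variation.
        nucleus_mask = gray < threshold_local(gray, block_size)
        # Global threshold: lumen appears as near-white, unstained regions.
        lumen_mask = (gray > lumen_threshold) & ~nucleus_mask
        tcm = np.full(gray.shape, STROMA, dtype=np.uint8)
        tcm[nucleus_mask] = NUCLEUS
        tcm[lumen_mask] = LUMEN
        return tcm

Similarly, the fine-tuning step can be sketched in PyTorch, assuming a binary ROI task (cancerous vs. non-cancerous, or high- vs. low-grade); the pretrained weights, optimizer settings, and input resizing shown here are placeholders, not the configuration used in the thesis.

    # AlexNet fine-tuning sketch for ROI classification (assumptions noted above).
    import torch
    import torch.nn as nn
    from torchvision import models

    def build_finetuned_alexnet(num_classes=2):
        model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
        # Replace the final fully connected layer to match the ROI labels.
        model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)
        return model

    model = build_finetuned_alexnet()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)

    def train_step(rois, labels):
        # rois: a batch of ROIs (TCM or raw image) resized to a 224 x 224 input.
        optimizer.zero_grad()
        loss = criterion(model(rois), labels)
        loss.backward()
        optimizer.step()
        return loss.item()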

Summary for Lay Audience

Prostate cancer (PCa) is the most prevalent non-skin cancer among Canadian men. Radical prostatectomy (RP) is surgery that removes the prostate, and it is considered one of the most effective treatments for PCa. However, approximately 30% of patients suffer a recurrence after surgery. Post-surgery patient care, which is guided by pathology reporting on RP specimens, is essential and can be life-saving. Pathology reporting usually provides information such as the presence of tumours, tumour location, and Gleason grade (a numerical indicator of tumour aggressiveness). However, current pathology interpretation of RP sections is typically qualitative and subject to intra- and inter-observer variability, which challenges quantitative and repeatable reporting of lesion grade, size, location, and spread. Graphical and quantitative reporting, which annotates and grades each tumour and reports the associated quantitative tumour information, could resolve these challenges and better inform post-surgery patient care and pathological studies. However, manually annotating and grading every cancerous region is not feasible in the standard clinical workflow, because whole-mount tissue sections are very large when examined under the microscope. There is therefore an unmet need for an automatic system that can label and grade cancerous regions on whole slide images (WSIs) of RP specimens. Advances in scanning technology enable WSIs to be digitized at sufficient resolution for pathology evaluation. Machine learning is a technique in which a computer learns to identify objects from human-labelled examples. Previous research has demonstrated the feasibility of using machine learning to identify and grade regions of interest in prostate tissue. However, detecting and grading every tumour on whole-mount WSIs remains challenging because of the large size of high-resolution WSIs and the staining variability across them. We developed and validated a machine-learning-based system against expert annotations for PCa detection and grading on 299 whole-mount WSIs, and for PCa subtype grading on 87 whole-mount WSIs. The systems yielded areas under the receiver operating characteristic curve (AUCs) of 0.98 and 0.92 for PCa detection and grading, respectively, and AUCs ≥ 0.7 for seven of the eight subtypes. This demonstrates state-of-the-art performance and the potential for clinical translation of this tool.
