Deep learning-based auto-segmentation of targets and organs-at-risk for magnetic resonance imaging only planning of prostate radiotherapy
| Published in: | Physics and Imaging in Radiation Oncology, Vol. 12, pp. 80-86 |
|---|---|
| Main authors: | |
| Format: | Journal Article |
| Language: | English |
| Published: | Netherlands: Elsevier B.V., 01.10.2019 |
| Subjects: | |
| ISSN: | 2405-6316 |
| Online access: | Full text |
Abstract:

- Auto-segmentation using deep learning can be difficult with a small medical dataset.
- Transfer learning allows deep learning networks to be retrained easily on small datasets.
- We successfully apply this method to auto-segment targets and OARs in prostate radiation therapy.

Magnetic resonance (MR)-only radiation therapy for prostate treatment provides superior contrast for defining targets and organs-at-risk (OARs). This study aims to develop a deep learning model that leverages this advantage to automate the contouring process.
Six structures (bladder, rectum, urethra, penile bulb, rectal spacer, prostate and seminal vesicles) were contoured and reviewed by a radiation oncologist on axial T2-weighted MR image sets from 50 patients, constituting the expert delineations. The data were split into training and validation sets of 40 and 10 patients, respectively, to train a two-dimensional fully convolutional neural network, DeepLabV3+, using transfer learning. The T2-weighted image sets were pre-processed into 2D false-color images in order to reuse the weights of convolutional layers pre-trained on natural images. Independent testing was performed on an additional 50 patients' MR scans. Performance was compared against a U-Net deep learning method. Algorithms were evaluated using the volumetric Dice similarity coefficient (VDSC) and the surface Dice similarity coefficient (SDSC).
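The transfer-learning setup described above (a 2D network pre-trained on natural images, grayscale T2-weighted slices mapped to three-channel false-color inputs, and the output head retrained for the prostate structures) can be sketched roughly as follows. This is only an illustrative sketch, not the authors' pipeline: torchvision ships DeepLabV3 rather than the DeepLabV3+ architecture used in the study, and the class count, normalization, and `to_false_color` mapping below are assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 7  # assumed: background + the six delineated structures

# Load a segmentation network with weights pre-trained on natural images
# (torchvision >= 0.13 API) and replace its final 1x1 convolution so the
# head predicts the prostate structures instead of the original classes.
model = deeplabv3_resnet50(weights="DEFAULT")
model.classifier[4] = nn.Conv2d(256, NUM_CLASSES, kernel_size=1)

def to_false_color(slice_2d: np.ndarray) -> torch.Tensor:
    """Map a single-channel T2-weighted slice to a 3-channel tensor so the
    RGB-pretrained convolutional layers can be reused. Here the slice is
    simply percentile-windowed and replicated across channels; the study's
    exact false-color mapping may differ."""
    lo, hi = np.percentile(slice_2d, [1, 99])
    norm = np.clip((slice_2d - lo) / (hi - lo + 1e-8), 0.0, 1.0)
    rgb = np.stack([norm] * 3, axis=0).astype(np.float32)
    return torch.from_numpy(rgb).unsqueeze(0)  # shape (1, 3, H, W)

# Forward pass on a dummy 256 x 256 slice (replace with a real MR slice).
model.eval()
with torch.no_grad():
    logits = model(to_false_color(np.random.rand(256, 256)))["out"]
print(logits.shape)  # torch.Size([1, 7, 256, 256])
```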
When comparing VDSC, DeepLabV3+ significantly outperformed U-Net for all structures except urethra (P < 0.001). Average VDSC was 0.93 ± 0.04 (bladder), 0.83 ± 0.06 (prostate and seminal vesicles [CTV]), 0.74 ± 0.13 (penile bulb), 0.82 ± 0.05 (rectum), 0.69 ± 0.10 (urethra), and 0.81 ± 0.1 (rectal spacer). Average SDSC was 0.92 ± 0.1 (bladder), 0.85 ± 0.11 (prostate and seminal vesicles [CTV]), 0.80 ± 0.22 (penile bulb), 0.87 ± 0.07 (rectum), 0.85 ± 0.25 (urethra), and 0.83 ± 0.26 (rectal spacer).
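For reference, the volumetric Dice similarity coefficient reported above measures the overlap between a predicted and an expert binary mask; a minimal NumPy sketch is given below (the function name `volumetric_dice` is illustrative). The surface DSC additionally compares the contour surfaces within a distance tolerance and is not reproduced here.

```python
import numpy as np

def volumetric_dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Volumetric Dice similarity coefficient (VDSC) between two binary
    masks of the same shape, e.g. 3D volumes of stacked per-slice labels."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Example: VDSC between two random masks on a small grid.
rng = np.random.default_rng(0)
a = rng.random((16, 64, 64)) > 0.5
b = rng.random((16, 64, 64)) > 0.5
print(f"VDSC = {volumetric_dice(a, b):.3f}")
```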
A deep learning-based model produced contours that show promise for streamlining an MR-only planning workflow in treating prostate cancer.
| ISSN: | 2405-6316 |
|---|---|
| DOI: | 10.1016/j.phro.2019.11.006 |