TY - JOUR
T1 - A physics-guided modular deep-learning based automated framework for tumor segmentation in PET
AU - Leung, Kevin H.
AU - Marashdeh, Wael
AU - Wray, Rick
AU - Ashrafinia, Saeed
AU - Pomper, Martin G.
AU - Rahmim, Arman
AU - Jha, Abhinav Kumar
N1 - Publisher Copyright:
© 2020 Institute of Physics and Engineering in Medicine.
PY - 2020/12/21
Y1 - 2020/12/21
N2 - An important need exists for reliable positron emission tomography (PET) tumor-segmentation methods for tasks such as PET-based radiation-therapy planning and reliable quantification of volumetric and radiomic features. To address this need, we propose an automated physics-guided deep-learning-based three-module framework to segment PET images on a per-slice basis. The framework is designed to help address the challenges of limited spatial resolution and the lack of clinical training data with known ground-truth tumor boundaries in PET. The first module generates PET images containing highly realistic tumors with known ground truth using a new stochastic and physics-based approach, addressing the lack of training data. The second module trains a modified U-net using these images, helping it learn the tumor-segmentation task. The third module fine-tunes this network using a small clinical dataset with radiologist-defined delineations as surrogate ground truth, helping the framework learn features potentially missed in simulated tumors. The framework was evaluated in the context of segmenting primary tumors in 18F-fluorodeoxyglucose (FDG)-PET images of patients with lung cancer. The framework's accuracy, generalizability to different scanners, sensitivity to partial volume effects (PVEs), and efficacy in reducing the number of training images were quantitatively evaluated using the Dice similarity coefficient (DSC) and several other metrics. The framework yielded reliable performance in both simulated (DSC: 0.87 (95% confidence interval (CI): 0.86, 0.88)) and patient images (DSC: 0.73 (95% CI: 0.71, 0.76)), outperformed several widely used semi-automated approaches, accurately segmented relatively small tumors (smallest segmented cross-section: 1.83 cm²), generalized across five PET scanners (DSC: 0.74 (95% CI: 0.71, 0.76)), was relatively unaffected by PVEs, and required relatively little training data (training with data from as few as 30 patients yielded a DSC of 0.70 (95% CI: 0.68, 0.71)). In conclusion, the proposed automated physics-guided deep-learning-based PET-segmentation framework yielded reliable performance in delineating tumors in FDG-PET images of patients with lung cancer.
AB - An important need exists for reliable positron emission tomography (PET) tumor-segmentation methods for tasks such as PET-based radiation-therapy planning and reliable quantification of volumetric and radiomic features. To address this need, we propose an automated physics-guided deep-learning-based three-module framework to segment PET images on a per-slice basis. The framework is designed to help address the challenges of limited spatial resolution and the lack of clinical training data with known ground-truth tumor boundaries in PET. The first module generates PET images containing highly realistic tumors with known ground truth using a new stochastic and physics-based approach, addressing the lack of training data. The second module trains a modified U-net using these images, helping it learn the tumor-segmentation task. The third module fine-tunes this network using a small clinical dataset with radiologist-defined delineations as surrogate ground truth, helping the framework learn features potentially missed in simulated tumors. The framework was evaluated in the context of segmenting primary tumors in 18F-fluorodeoxyglucose (FDG)-PET images of patients with lung cancer. The framework's accuracy, generalizability to different scanners, sensitivity to partial volume effects (PVEs), and efficacy in reducing the number of training images were quantitatively evaluated using the Dice similarity coefficient (DSC) and several other metrics. The framework yielded reliable performance in both simulated (DSC: 0.87 (95% confidence interval (CI): 0.86, 0.88)) and patient images (DSC: 0.73 (95% CI: 0.71, 0.76)), outperformed several widely used semi-automated approaches, accurately segmented relatively small tumors (smallest segmented cross-section: 1.83 cm²), generalized across five PET scanners (DSC: 0.74 (95% CI: 0.71, 0.76)), was relatively unaffected by PVEs, and required relatively little training data (training with data from as few as 30 patients yielded a DSC of 0.70 (95% CI: 0.68, 0.71)). In conclusion, the proposed automated physics-guided deep-learning-based PET-segmentation framework yielded reliable performance in delineating tumors in FDG-PET images of patients with lung cancer.
KW - PET
KW - automated segmentation
KW - deep learning
KW - oncology
KW - partial volume effects
UR - http://www.scopus.com/inward/record.url?scp=85094617524&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85094617524&partnerID=8YFLogxK
U2 - 10.1088/1361-6560/ab8535
DO - 10.1088/1361-6560/ab8535
M3 - Article
C2 - 32235059
AN - SCOPUS:85094617524
SN - 0031-9155
VL - 65
JO - Physics in Medicine and Biology
JF - Physics in Medicine and Biology
IS - 24
M1 - 245032
ER -