Extending the relative seriality formalism for interpretable deep learning of normal tissue complication probability models

Research output: Contribution to journal › Article › peer-review

Abstract

We formally demonstrate that the relative seriality (RS) model of normal tissue complication probability (NTCP) can be recast as a simple neural network with one convolutional and one pooling layer. This approach enables us to systematically construct deep relative seriality networks (DRSNs), a new class of mechanistic generalizations of the RS model with radiobiologically interpretable parameters amenable to deep learning. To demonstrate the utility of this formulation, we analyze a simplified example of xerostomia due to irradiation of the parotid gland during alpha radiopharmaceutical therapy. Using a combination of analytical calculations and numerical simulations, we show for both the RS and DRSN cases that the ability of the neural network to generalize without overfitting is tied to 'stiff' and 'sloppy' directions in the parameter space of the mechanistic model. These results serve as proof-of-concept for radiobiologically interpretable deep learning of NTCP, while simultaneously yielding insight into how such techniques can robustly generalize beyond the training set despite uncertainty in individual parameters.
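As background for the abstract, the classical relative seriality (RS) model combines per-voxel response probabilities into an organ-level NTCP, and it is this composition that the paper identifies with a one-convolution, one-pooling network. The sketch below uses the standard Källman-style formulation with a Poisson cell-kill response; the parameter names (`D50`, `gamma`, `s`) and the specific response parameterization are conventional assumptions for illustration, not details taken from this article.

```python
import numpy as np

def poisson_response(dose, D50, gamma):
    """Per-voxel Poisson response P(D) = 2**(-exp(e*gamma*(1 - D/D50))).

    Acts element-wise on the dose array, analogous to a 1x1
    convolution followed by a nonlinear activation.
    """
    return 2.0 ** (-np.exp(np.e * gamma * (1.0 - dose / D50)))

def rs_ntcp(dose, vol_frac, D50, gamma, s):
    """Relative seriality NTCP:

        NTCP = [1 - prod_i (1 - P(D_i)**s)**v_i]**(1/s)

    where v_i are relative voxel volumes (summing to 1) and s is the
    seriality parameter. The product over voxels plays the role of a
    global pooling layer in the neural-network reading.
    """
    P = poisson_response(np.asarray(dose, dtype=float), D50, gamma)
    inner = np.prod((1.0 - P ** s) ** np.asarray(vol_frac, dtype=float))
    return (1.0 - inner) ** (1.0 / s)
```

For a uniform dose equal to `D50` with `s = 1`, the per-voxel response is 0.5 and the pooled NTCP reduces to 0.5 as well, which is a convenient sanity check on any implementation of this form.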

Original language: English (US)
Article number: 024001
Journal: Machine Learning: Science and Technology
Volume: 3
Issue number: 2
State: Published - Jun 1 2022

Keywords

  • NTCP
  • deep learning
  • interpretability
  • relative seriality

ASJC Scopus subject areas

  • Software
  • Human-Computer Interaction
  • Artificial Intelligence
