Simulation of Random Deformable Motion in Soft-Tissue Cone-Beam CT with Learned Models

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Cone-beam CT (CBCT) is widely used for guidance in interventional radiology, but it is susceptible to motion artifacts. Motion in interventional CBCT features a complex combination of diverse sources, including quasi-periodic, consistent motion patterns such as respiratory motion, and aperiodic, quasi-random motion such as peristalsis. Recent developments in image-based motion compensation include approaches that combine autofocus techniques with deep learning models for extraction of image features pertinent to CBCT motion. Training such deep autofocus models requires the generation of large amounts of realistic, motion-corrupted CBCT. Previous work on motion simulation focused mostly on quasi-periodic motion patterns, and reliable simulation of complex combined motion with quasi-random components remains an unaddressed challenge. This work presents a framework for the synthesis of realistic motion trajectories for simulation of deformable motion in soft-tissue CBCT. The approach leverages the capability of conditional generative adversarial network (GAN) models to learn the complex underlying motion present in unlabeled, motion-corrupted CBCT volumes, and is designed for unsupervised training with unpaired clinical CBCT. This work presents a first feasibility study, in which the model was trained with simulated data featuring known motion, providing a controlled scenario for validation of the proposed approach prior to extension to clinical data.
Our proof-of-concept study illustrated the potential of the model to generate realistic, variable simulations of CBCT deformable motion fields, consistent with three trends underlying the designed training data: i) the synthetic motion induced only diffeomorphic deformations, with Jacobian determinant greater than zero; ii) the synthetic motion showed a median displacement of 0.5 mm in regions that were predominantly static in the training data (e.g., the posterior aspect of a patient lying supine), compared to a median displacement of 3.8 mm in regions more prone to motion; and iii) the synthetic motion exhibited predominant directionality consistent with the training set, with larger motion in the superior-inferior direction (median and maximum amplitudes of 4.58 mm and 20 mm, more than 2x larger than the two remaining directions). Together, these results demonstrate the feasibility of the proposed framework for realistic motion simulation and synthesis of variable CBCT data.
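The diffeomorphism criterion in trend (i) — a deformation is locally invertible wherever the Jacobian determinant of the transform φ(x) = x + u(x) is strictly positive — can be checked numerically on any sampled displacement field. The sketch below is illustrative only (the function name, finite-difference scheme, and synthetic test field are assumptions, not taken from the paper); it builds the per-voxel 3x3 Jacobian with NumPy finite differences and evaluates its determinant:

```python
import numpy as np

def jacobian_determinant(disp):
    """Per-voxel Jacobian determinant of phi(x) = x + u(x).

    disp: displacement field of shape (3, D, H, W), in voxel units.
    Returns an array of shape (D, H, W); strictly positive values
    indicate a locally invertible (diffeomorphic) deformation.
    """
    # grads[i, j] = d u_i / d x_j via central finite differences
    grads = np.stack([np.stack(np.gradient(disp[i]), axis=0)
                      for i in range(3)], axis=0)
    # Jacobian of phi is identity plus the displacement gradient
    J = grads + np.eye(3)[:, :, None, None, None]
    # move the 3x3 matrix axes last so det broadcasts over voxels
    J = np.moveaxis(J, (0, 1), (-2, -1))
    return np.linalg.det(J)

# Smooth, small-amplitude synthetic field: expected to be diffeomorphic.
zz, yy, xx = np.meshgrid(np.arange(16), np.arange(16), np.arange(16),
                         indexing="ij")
u = np.stack([0.5 * np.sin(2 * np.pi * zz / 16),
              np.zeros(zz.shape),
              np.zeros(zz.shape)])
detJ = jacobian_determinant(u)
print(detJ.min() > 0)  # True for this smooth field
```

Large or sharply varying displacements would drive the determinant to zero or below, flagging folding; a check of this kind is a common sanity test on synthesized deformation fields.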

Original language: English (US)
Title of host publication: 7th International Conference on Image Formation in X-Ray Computed Tomography
Editors: Joseph Webster Stayman
ISBN (Electronic): 9781510656697
State: Published - 2022
Event: 7th International Conference on Image Formation in X-Ray Computed Tomography - Virtual, Online
Duration: Jun 12 2022 - Jun 16 2022

Publication series

Name: Proceedings of SPIE - The International Society for Optical Engineering
ISSN (Print): 0277-786X
ISSN (Electronic): 1996-756X


Conference: 7th International Conference on Image Formation in X-Ray Computed Tomography
City: Virtual, Online


Keywords

  • Deep Learning
  • Interventional CBCT
  • Motion Compensation
  • Motion Simulation

ASJC Scopus subject areas

  • Electronic, Optical and Magnetic Materials
  • Condensed Matter Physics
  • Applied Mathematics
  • Electrical and Electronic Engineering
  • Computer Science Applications


