TY - GEN
T1 - Intraoperative guidance of orthopaedic instruments using 3D correspondence of 2D object instance segmentations
AU - Bataeva, I.
AU - Shah, K.
AU - Vijayan, R.
AU - Han, R.
AU - Sheth, N. M.
AU - Kleinszig, G.
AU - Vogt, S.
AU - Osgood, G. M.
AU - Siewerdsen, J. H.
AU - Uneri, A.
N1 - Publisher Copyright:
© 2021 SPIE.
PY - 2021
Y1 - 2021
AB - Purpose. Surgical placement of pelvic instrumentation is challenged by complex anatomy and narrow bone corridors, and despite heavy reliance on intraoperative fluoroscopy, trauma surgery lacks a reliable solution for 3D surgical navigation that is compatible with steep workflow requirements. We report a method that uses routinely acquired fluoroscopic images in standard workflow to automatically detect and localize orthopaedic instruments for 3D guidance. Methods. The proposed method detects, establishes correspondence of, and localizes orthopaedic devices from a pair of radiographs. Instrument detection uses Mask R-CNN for segmentation and keypoint detection, trained on 4000 cadaveric pelvic radiographs with simulated guidewires. Keypoints on individual images are corresponded using prior calibration of the imaging system to backproject, identify, and rank-order ray intersections. Estimation of 3D instrument tip and direction was evaluated on a cadaveric specimen and patient images from an IRB-approved clinical study. Results. The detection network successfully generalized to cadaver and clinical images, achieving 87% recall and 98% precision. Mean geometric accuracy in estimating instrument tip and direction was (1.9 ± 1.6) mm and (1.8 ± 1.3)°, respectively. Simulation studies demonstrated 1.1 mm median error in 3D tip and 2.3° in 3D direction estimation. Preliminary tests in cadaver and clinical images show the basic feasibility of the overall approach. Conclusions. Experimental studies demonstrate the feasibility and highlight the potential of deep learning for 3D-2D registration of orthopaedic instruments as applied in fixation of pelvic fractures. The approach is compatible with routine orthopaedic workflow, does not require additional equipment (such as surgical trackers), uses common imaging equipment (mobile C-arm fluoroscopy), and does not require vendor-specific device models.
KW - 3D-2D image registration
KW - Deep learning
KW - Image-guided surgery
KW - Instance segmentation
KW - Intraoperative imaging
KW - Object detection
UR - http://www.scopus.com/inward/record.url?scp=85105556142&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85105556142&partnerID=8YFLogxK
U2 - 10.1117/12.2582239
DO - 10.1117/12.2582239
M3 - Conference contribution
AN - SCOPUS:85105556142
T3 - Progress in Biomedical Optics and Imaging - Proceedings of SPIE
BT - Medical Imaging 2021
A2 - Linte, Cristian A.
A2 - Siewerdsen, Jeffrey H.
PB - SPIE
T2 - Medical Imaging 2021: Image-Guided Procedures, Robotic Interventions, and Modeling
Y2 - 15 February 2021 through 19 February 2021
ER -