TY - GEN
T1 - Autonomously Navigating a Surgical Tool Inside the Eye by Learning from Demonstration
AU - Kim, Ji Woong
AU - He, Changyan
AU - Urias, Muller
AU - Gehlbach, Peter
AU - Hager, Gregory D.
AU - Iordachita, Iulian
AU - Kobilarov, Marin
N1 - Funding Information:
This work was supported by U.S. NIH grant 1R01EB023943-01.
Publisher Copyright:
© 2020 IEEE.
PY - 2020/5
Y1 - 2020/5
N2 - A fundamental challenge in retinal surgery is safely navigating a surgical tool to a desired goal position on the retinal surface while avoiding damage to surrounding tissues, a procedure that typically requires tens-of-microns accuracy. In practice, the surgeon relies on depth-estimation skills to localize the tool-tip with respect to the retina in order to perform the tool-navigation task, which can be prone to human error. To alleviate such uncertainty, prior work has introduced ways to assist the surgeon by estimating the tool-tip distance to the retina and providing haptic or auditory feedback. However, automating the tool-navigation task itself remains unsolved and largely unexplored. Such a capability, if reliably automated, could serve as a building block to streamline complex procedures and reduce the chance of tissue damage. Towards this end, we propose to automate the tool-navigation task by learning to mimic expert demonstrations. Specifically, a deep network is trained to imitate expert trajectories toward various locations on the retina, based on recorded demonstrations of visual servoing to a goal specified by the user. The proposed autonomous navigation system is evaluated in simulation and in physical experiments using a silicone eye phantom. We show that the network can reliably navigate a needle surgical tool to various desired locations with an average accuracy of 137 μm in physical experiments and 94 μm in simulation, and that it generalizes well to unseen situations such as the presence of auxiliary surgical tools, variable eye backgrounds, and varying brightness conditions.
AB - A fundamental challenge in retinal surgery is safely navigating a surgical tool to a desired goal position on the retinal surface while avoiding damage to surrounding tissues, a procedure that typically requires tens-of-microns accuracy. In practice, the surgeon relies on depth-estimation skills to localize the tool-tip with respect to the retina in order to perform the tool-navigation task, which can be prone to human error. To alleviate such uncertainty, prior work has introduced ways to assist the surgeon by estimating the tool-tip distance to the retina and providing haptic or auditory feedback. However, automating the tool-navigation task itself remains unsolved and largely unexplored. Such a capability, if reliably automated, could serve as a building block to streamline complex procedures and reduce the chance of tissue damage. Towards this end, we propose to automate the tool-navigation task by learning to mimic expert demonstrations. Specifically, a deep network is trained to imitate expert trajectories toward various locations on the retina, based on recorded demonstrations of visual servoing to a goal specified by the user. The proposed autonomous navigation system is evaluated in simulation and in physical experiments using a silicone eye phantom. We show that the network can reliably navigate a needle surgical tool to various desired locations with an average accuracy of 137 μm in physical experiments and 94 μm in simulation, and that it generalizes well to unseen situations such as the presence of auxiliary surgical tools, variable eye backgrounds, and varying brightness conditions.
UR - http://www.scopus.com/inward/record.url?scp=85092696744&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85092696744&partnerID=8YFLogxK
U2 - 10.1109/ICRA40945.2020.9196537
DO - 10.1109/ICRA40945.2020.9196537
M3 - Conference contribution
AN - SCOPUS:85092696744
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 7351
EP - 7357
BT - 2020 IEEE International Conference on Robotics and Automation, ICRA 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE International Conference on Robotics and Automation, ICRA 2020
Y2 - 31 May 2020 through 31 August 2020
ER -