TY - JOUR
T1 - 3D Point Cloud-Based Visual Prediction of ICU Mobility Care Activities
AU - Liu, Bingbin
AU - Guo, Michelle
AU - Chou, Edward
AU - Mehra, Rishab
AU - Yeung, Serena
AU - Downing, N. Lance
AU - Salipur, Francesca
AU - Jopling, Jeffrey
AU - Campbell, Brandi
AU - Deru, Kayla
AU - Beninati, William
AU - Milstein, Arnold
AU - Fei-Fei, Li
N1 - Funding Information:
We would like to acknowledge members of the Stanford Partnership in AI-Assisted Care (PAC) for helpful discussions and support in data collection. We would also like to thank the anonymous reviewers for their comments, which helped improve the paper.
Publisher Copyright:
© 2018 MLHC. All Rights Reserved.
PY - 2018
Y1 - 2018
AB - Intensive Care Units (ICUs) are some of the highest-intensity areas of patient care activities in hospitals, yet documentation and understanding of the occurrence of these activities remain sub-optimal, due in part to the already-demanding patient care workloads of nursing staff. Recently, computer vision-based methods operating over color and depth data collected from passively mounted sensors have been developed for automated activity recognition, but they have been limited to coarse or simple activities due to the complex environments in ICUs, where fast-changing activities and severe occlusion occur. In this work, we introduce an approach for tackling more challenging activities in ICUs by combining depth data from multiple sensors to form a single 3D point cloud representation, and using a neural network-based model to reason over this 3D representation. We demonstrate the effectiveness of this approach using a dataset of mobility-related patient care activities collected in a clinician-guided simulation setting.
UR - http://www.scopus.com/inward/record.url?scp=85088099470&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85088099470&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85088099470
SN - 2640-3498
VL - 85
SP - 17
EP - 29
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 3rd Machine Learning for Healthcare Conference, MLHC 2018
Y2 - 17 August 2018 through 18 August 2018
ER -