TY - GEN
T1 - Distillation-guided Representation Learning for Unconstrained Gait Recognition
AU - Guo, Yuxiang
AU - Huang, Siyuan
AU - Prabhakar, Ram
AU - Lau, Chun Pong
AU - Chellappa, Rama
AU - Peng, Cheng
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Gait recognition holds the promise of robustly identifying subjects based on walking patterns instead of appearance information. While previous approaches have performed well on curated indoor data, they tend to underperform in unconstrained situations, e.g., outdoor or long-distance scenes. We propose a framework, termed GAit DEtection and Recognition (GADER), for human authentication in challenging outdoor scenarios. Specifically, GADER leverages a Double Helical Signature to detect segments that contain human movement and builds discriminative features through a novel gait recognition method, where only frames containing gait information are used. To further enhance robustness, GADER encodes viewpoint information in its architecture and distills representations from an auxiliary RGB recognition model, which enables GADER to learn from both silhouette and RGB data at training time. At test time, GADER infers only from the silhouette modality. We evaluate our method against multiple state-of-the-art (SoTA) gait baselines and demonstrate consistent improvements on indoor and outdoor datasets, including a significant 25.2% improvement on unconstrained, remote gait data.
AB - Gait recognition holds the promise of robustly identifying subjects based on walking patterns instead of appearance information. While previous approaches have performed well on curated indoor data, they tend to underperform in unconstrained situations, e.g., outdoor or long-distance scenes. We propose a framework, termed GAit DEtection and Recognition (GADER), for human authentication in challenging outdoor scenarios. Specifically, GADER leverages a Double Helical Signature to detect segments that contain human movement and builds discriminative features through a novel gait recognition method, where only frames containing gait information are used. To further enhance robustness, GADER encodes viewpoint information in its architecture and distills representations from an auxiliary RGB recognition model, which enables GADER to learn from both silhouette and RGB data at training time. At test time, GADER infers only from the silhouette modality. We evaluate our method against multiple state-of-the-art (SoTA) gait baselines and demonstrate consistent improvements on indoor and outdoor datasets, including a significant 25.2% improvement on unconstrained, remote gait data.
UR - http://www.scopus.com/inward/record.url?scp=85211390793&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85211390793&partnerID=8YFLogxK
U2 - 10.1109/IJCB62174.2024.10744527
DO - 10.1109/IJCB62174.2024.10744527
M3 - Conference contribution
AN - SCOPUS:85211390793
T3 - Proceedings - 2024 IEEE International Joint Conference on Biometrics, IJCB 2024
BT - Proceedings - 2024 IEEE International Joint Conference on Biometrics, IJCB 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 18th IEEE International Joint Conference on Biometrics, IJCB 2024
Y2 - 15 September 2024 through 18 September 2024
ER -