TY - JOUR
T1 - Learning Geocentric Object Pose in Oblique Monocular Images
AU - Christie, Gordon
AU - Munoz Abujder, Rodrigo Rene Rai
AU - Foster, Kevin
AU - Hagstrom, Shea
AU - Hager, Gregory D.
AU - Brown, Myron Z.
N1 - Funding Information:
This work was supported by the Intelligence Advanced Research Projects Activity (IARPA) under contract no. 2017-17032700004. This work was further supported by the National Geospatial-Intelligence Agency (NGA) and approved for public release, 20-316, with Distribution Statement A: approved for public release; distribution is unlimited. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, NGA, or the U.S. Government.
Publisher Copyright:
© 2020 IEEE.
PY - 2020
Y1 - 2020
N2 - An object's geocentric pose, defined as the height above ground and orientation with respect to gravity, is a powerful representation of real-world structure for object detection, segmentation, and localization tasks using RGBD images. For close-range vision tasks, height and orientation have been derived directly from stereo-computed depth and more recently from monocular depth predicted by deep networks. For long-range vision tasks such as Earth observation, depth cannot be reliably estimated with monocular images. Inspired by recent work in monocular height above ground prediction and optical flow prediction from static images, we develop an encoding of geocentric pose to address this challenge and train a deep network to compute the representation densely, supervised by publicly available airborne lidar. We exploit these attributes to rectify oblique images and remove observed object parallax, dramatically improving the accuracy of localization and enabling accurate alignment of multiple images taken from very different oblique viewpoints. We demonstrate the value of our approach by extending two large-scale public datasets for semantic segmentation in oblique satellite images. All of our data and code are publicly available.
AB - An object's geocentric pose, defined as the height above ground and orientation with respect to gravity, is a powerful representation of real-world structure for object detection, segmentation, and localization tasks using RGBD images. For close-range vision tasks, height and orientation have been derived directly from stereo-computed depth and more recently from monocular depth predicted by deep networks. For long-range vision tasks such as Earth observation, depth cannot be reliably estimated with monocular images. Inspired by recent work in monocular height above ground prediction and optical flow prediction from static images, we develop an encoding of geocentric pose to address this challenge and train a deep network to compute the representation densely, supervised by publicly available airborne lidar. We exploit these attributes to rectify oblique images and remove observed object parallax, dramatically improving the accuracy of localization and enabling accurate alignment of multiple images taken from very different oblique viewpoints. We demonstrate the value of our approach by extending two large-scale public datasets for semantic segmentation in oblique satellite images. All of our data and code are publicly available.
UR - http://www.scopus.com/inward/record.url?scp=85094326332&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85094326332&partnerID=8YFLogxK
U2 - 10.1109/CVPR42600.2020.01452
DO - 10.1109/CVPR42600.2020.01452
M3 - Conference article
AN - SCOPUS:85094326332
SN - 1063-6919
SP - 14500
EP - 14508
JO - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
JF - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
M1 - 9157635
T2 - 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020
Y2 - 14 June 2020 through 19 June 2020
ER -