TY - GEN
T1 - A semi-automatic 2D solution for vehicle speed estimation from monocular videos
AU - Kumar, Amit
AU - Khorramshahi, Pirazh
AU - Lin, Wei-An
AU - Dhar, Prithviraj
AU - Chen, Jun-Cheng
AU - Chellappa, Rama
N1 - Funding Information:
This research is based upon work supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DOI/IBC) contract number D17PC00345. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DOI/IBC, or the U.S. Government.
Publisher Copyright:
© 2018 IEEE.
PY - 2018/12/13
Y1 - 2018/12/13
N2 - In this work, we present a novel approach for vehicle speed estimation from monocular videos. The pipeline consists of modules for multi-object detection, robust tracking, and speed estimation. The tracking algorithm jointly tracks individual vehicles and estimates their velocities in the image domain. However, since camera parameters are often unavailable and the scenes vary widely, transforming measurements from the image domain to the real world is challenging. We propose a simple two-stage algorithm to approximate this transformation: images are first rectified to restore affine properties, and the scaling factor is then compensated for each scene. We show the effectiveness of the proposed method with extensive experiments on the traffic speed analysis dataset in the NVIDIA AI City challenge. We achieve a detection rate of 1.0 in vehicle detection and tracking, and a Root Mean Square Error of 9.54 mph for vehicle speed estimation in unconstrained traffic videos.
UR - http://www.scopus.com/inward/record.url?scp=85060895053&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85060895053&partnerID=8YFLogxK
U2 - 10.1109/CVPRW.2018.00026
DO - 10.1109/CVPRW.2018.00026
M3 - Conference contribution
AN - SCOPUS:85060895053
T3 - IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
SP - 137
EP - 144
BT - Proceedings - 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2018
PB - IEEE Computer Society
T2 - 31st Meeting of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2018
Y2 - 18 June 2018 through 22 June 2018
ER -