April 23, 2024 · arXiv

FlowMap: High-Quality Camera Poses, Intrinsics, and Depth via Gradient Descent

Cameron Smith*, David Charatan*, Ayush Tewari, Vincent Sitzmann
* shared first author

This paper introduces FlowMap, an end-to-end differentiable method that solves for precise camera poses, camera intrinsics, and per-frame dense depth of a video sequence. Our method performs per-video gradient-descent minimization of a simple least-squares objective that compares the optical flow induced by depth, intrinsics, and poses against correspondences obtained via off-the-shelf optical flow and point tracking. Alongside the use of point tracks to encourage long-term geometric consistency, we introduce a differentiable re-parameterization of depth, intrinsics, and pose that is amenable to first-order optimization. We empirically show that the camera parameters and dense depth recovered by our method enable photo-realistic novel view synthesis on 360° trajectories using Gaussian Splatting. Our method not only far outperforms prior gradient-descent-based bundle adjustment methods, but, surprisingly, performs on par with COLMAP, the state-of-the-art SfM method, on the downstream task of 360° novel view synthesis, even though our method is purely gradient-descent-based, fully differentiable, and presents a complete departure from conventional SfM. Our result opens the door to the self-supervised training of neural networks that perform camera parameter estimation, 3D reconstruction, and novel view synthesis.
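To make the objective concrete, the sketch below illustrates one way such a flow-consistency loss can be written; it is not the authors' implementation, and the helper names, tensor shapes, and PyTorch usage are assumptions for illustration. Each pixel of frame i is unprojected with the current depth and intrinsics, mapped into frame j with the current relative pose, and reprojected; the squared residual against flow from an off-the-shelf estimator is what gradient descent minimizes.

```python
import torch

def induced_flow(depth_i, K, T_i_to_j):
    """Flow from frame i to frame j induced by depth, intrinsics, and relative pose.

    depth_i:   (H, W) per-pixel depth for frame i
    K:         (3, 3) camera intrinsics (shared across frames)
    T_i_to_j:  (4, 4) rigid transform from frame-i camera coords to frame j
    returns:   (H, W, 2) flow in pixels
    """
    H, W = depth_i.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=depth_i.dtype),
        torch.arange(W, dtype=depth_i.dtype),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1)            # (H, W, 3) homogeneous pixels
    rays = pix @ torch.linalg.inv(K).T                                  # back-project to camera rays
    pts_i = rays * depth_i[..., None]                                   # 3D points in frame i
    pts_h = torch.cat([pts_i, torch.ones_like(depth_i)[..., None]], -1)
    pts_j = (pts_h @ T_i_to_j.T)[..., :3]                               # transform into frame j
    proj = pts_j @ K.T
    uv_j = proj[..., :2] / proj[..., 2:3].clamp(min=1e-6)               # reproject to pixels
    return uv_j - pix[..., :2]                                          # induced flow

def flow_loss(depth_i, K, T_i_to_j, flow_est, weight=None):
    """Least-squares residual against flow from an off-the-shelf estimator."""
    resid = induced_flow(depth_i, K, T_i_to_j) - flow_est
    sq = resid.square().sum(dim=-1)
    return (sq * weight).mean() if weight is not None else sq.mean()
```

In FlowMap, a residual of this kind is minimized per video by gradient descent over all frames jointly, with point tracks supplying longer-range correspondences than consecutive-frame flow and the re-parameterization of depth, intrinsics, and pose keeping the problem well-conditioned for first-order optimization.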

Citation

@inproceedings{charatansmith2024flowmap,
    title     = {FlowMap: High-Quality Camera Poses, Intrinsics, and Depth via Gradient Descent},
    author    = {Smith, Cameron and Charatan, David and Tewari, Ayush and Sitzmann, Vincent},
    year      = {2024},
    booktitle = {arXiv},
}