Deep Vanishing Point Detection: Geometric priors make dataset variations vanish

CVPR - 2022
Download the publication: dvpd_preprint_compressed.pdf [4 MB]
Deep learning has improved vanishing point detection in images. Yet, deep networks require expensive annotated datasets and costly hardware for training, and they do not generalize to even slightly different domains or minor problem variants. Here, we address these issues by injecting deep vanishing point detection networks with prior knowledge. This prior knowledge no longer needs to be learned from data, saving valuable annotation efforts and compute, unlocking realistic few-sample scenarios, and reducing the impact of domain changes. Moreover, the interpretability of the priors allows adapting deep networks to minor problem variations such as switching between Manhattan and non-Manhattan worlds. We seamlessly incorporate two geometric priors: (i) the Hough Transform, mapping image pixels to straight lines, and (ii) the Gaussian sphere, mapping lines to great circles whose intersections denote vanishing points. Experimentally, we ablate our choices and show accuracy comparable to existing models in the large-data setting. We validate our model's improved data efficiency, its robustness to domain changes, and its adaptability to non-Manhattan settings.
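As a rough illustration of the two geometric priors (a minimal sketch, not the paper's implementation), the NumPy code below accumulates edge pixels into a (rho, theta) Hough histogram and maps an image line to its great circle on the unit (Gaussian) sphere; a vanishing direction then falls out as the cross product of two great-circle normals. The camera intrinsics `K`, the bin counts, and the example line endpoints are illustrative assumptions.

```python
# Illustrative NumPy sketch of the two priors described above:
# (i) a Hough transform mapping edge pixels to (rho, theta) line bins, and
# (ii) the Gaussian-sphere mapping in which each image line becomes a great
# circle and vanishing points appear where great circles intersect.
# The intrinsics K and bin counts below are assumptions, not the paper's values.
import numpy as np

def hough_transform(edge_map, num_rho=128, num_theta=128):
    """Accumulate edge pixels into a (rho, theta) Hough histogram."""
    h, w = edge_map.shape
    ys, xs = np.nonzero(edge_map)
    # Center coordinates so rho is measured from the image center.
    xs = xs - w / 2.0
    ys = ys - h / 2.0
    thetas = np.linspace(0.0, np.pi, num_theta, endpoint=False)
    rho_max = np.hypot(h, w) / 2.0
    accumulator = np.zeros((num_rho, num_theta))
    for theta_idx, theta in enumerate(thetas):
        rho = xs * np.cos(theta) + ys * np.sin(theta)
        bins = np.clip(((rho + rho_max) / (2 * rho_max) * num_rho).astype(int),
                       0, num_rho - 1)
        np.add.at(accumulator[:, theta_idx], bins, 1.0)
    return accumulator

def line_to_great_circle(p1, p2, K):
    """Normal of the great circle traced on the unit (Gaussian) sphere by the
    image line through pixels p1 and p2, given camera intrinsics K."""
    K_inv = np.linalg.inv(K)
    r1 = K_inv @ np.array([p1[0], p1[1], 1.0])   # back-projected rays
    r2 = K_inv @ np.array([p2[0], p2[1], 1.0])
    n = np.cross(r1, r2)                         # plane normal = circle normal
    return n / np.linalg.norm(n)

def vanishing_direction(n1, n2):
    """Two great circles intersect in +/-d; d is the vanishing direction."""
    d = np.cross(n1, n2)
    return d / np.linalg.norm(d)

if __name__ == "__main__":
    K = np.array([[800.0, 0.0, 320.0],           # assumed intrinsics
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    # Two image lines that should share a vanishing point.
    n_a = line_to_great_circle((100, 100), (300, 180), K)
    n_b = line_to_great_circle((100, 300), (300, 260), K)
    print("vanishing direction:", vanishing_direction(n_a, n_b))
```

In the paper these mappings serve as fixed, interpretable layers inside the network rather than standalone post-processing; the sketch only shows the underlying geometry.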

BibTeX references

@InProceedings{ LWPHEG22,
  author    = "Lin, Yancong and Wiersma, Ruben and Pintea, Silvia-Laura and Hildebrandt, Klaus and Eisemann, Elmar and van Gemert, Jan",
  title     = "Deep Vanishing Point Detection: Geometric priors make dataset variations vanish",
  booktitle = "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)",
  year      = "2022",
  note      = "https://arxiv.org/abs/2203.08586",
  url       = "http://graphics.tudelft.nl/Publications-new/2022/LWPHEG22"
}
