LightDepth: Single-View Depth Self-Supervision from Illumination Decline

¹Universidad de Zaragoza   ²École Polytechnique Fédérale de Lausanne   (*Equal contribution)

Published in the IEEE/CVF International Conference on Computer Vision (ICCV), 2023


Abstract

Single-view depth estimation can be remarkably effective if enough ground-truth depth data is available for supervised training. However, there are scenarios, especially in medicine in the case of endoscopies, where such data cannot be obtained. In such cases, multi-view self-supervision and synthetic-to-real transfer serve as alternatives, albeit with a considerable performance drop compared to the supervised case. Instead, we propose a single-view self-supervised method that achieves performance similar to the supervised case. In some medical devices, such as endoscopes, the camera and light sources are co-located at a small distance from the target surfaces. We exploit the fact that, for any given albedo and surface orientation, pixel brightness is inversely proportional to the square of the distance to the surface, which provides a strong single-view self-supervisory signal. In our experiments, our self-supervised models deliver accuracies comparable to those of fully supervised ones, while being applicable without ground-truth depth data.
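The self-supervision described above boils down to rendering an image from the predicted depth with an inverse-square light model and comparing it against the captured image. The following is a minimal NumPy sketch of that idea, not the authors' implementation: the names render_brightness and photometric_loss, the scalar light gain g, and the per-pixel albedo and cosine inputs are illustrative assumptions, and the paper's actual photometric model and training loss differ in detail.

# Minimal sketch of inverse-square photometric self-supervision (illustrative only).
# With camera and light co-located, brightness ~ 1/d^2, so a rendering from the
# predicted depth can be compared against the observed image without depth labels.
import numpy as np

def render_brightness(depth, albedo, cos_theta, g=1.0):
    # Brightness predicted by an inverse-square light model.
    # depth, albedo, cos_theta: HxW arrays; g: scalar light gain (assumed known here).
    return g * albedo * cos_theta / np.square(depth)

def photometric_loss(pred_depth, image, albedo, cos_theta):
    # L1 photometric error between the rendered and the observed image.
    rendered = render_brightness(pred_depth, albedo, cos_theta)
    return np.mean(np.abs(rendered - image))

# Toy usage: the observed brightness penalizes wrong depth predictions.
if __name__ == "__main__":
    H, W = 4, 4
    true_depth = np.full((H, W), 2.0)
    albedo = np.full((H, W), 0.8)
    cos_theta = np.ones((H, W))            # surfaces facing the camera/light
    image = render_brightness(true_depth, albedo, cos_theta)
    print(photometric_loss(true_depth, image, albedo, cos_theta))        # ~0
    print(photometric_loss(true_depth * 1.5, image, albedo, cos_theta))  # > 0

In the toy usage, the correct depth reproduces the observed brightness exactly, while an overestimated depth renders too dark an image and is penalized, which is the single-view self-supervisory signal the abstract refers to.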

Overview Video

BibTeX

@inproceedings{rodriguez2023lightdepth,
    title = {{LightDepth}: Single-View Depth Self-Supervision from Illumination Decline},
    author = {Rodr{\'i}guez-Puigvert, Javier and Batlle, V{\'i}ctor M and Montiel, Jos{\'e} MM and Martinez-Cantin, Ruben and Fua, Pascal and Tard{\'o}s, Juan D and Civera, Javier},
    booktitle = {IEEE/CVF International Conference on Computer Vision (ICCV)},
    pages = {21273--21283},
    year = {2023},
    doi = {10.1109/ICCV51070.2023.01945},
}

Related Work

  • 2023: Endomapper dataset of complete calibrated endoscopy procedures
    @article{azagra2023endomapper,
        title = {Endomapper dataset of complete calibrated endoscopy procedures},
        author = {Azagra, Pablo and Sostres, Carlos and Ferr{\'a}ndez, {\'A}ngel and Riazuelo, Luis and Tomasini, Clara and Barbed, O. Le{\'o}n and Morlana, Javier and Recasens, David and Batlle, V{\'i}ctor M and G{\'o}mez-Rodr{\'i}guez, Juan J. and Elvira, Richard and L{\'o}pez, Julia and Oriol, Cristina and Civera, Javier and Tard{\'o}s, Juan D and Murillo, Ana C. and Lanas, Angel and Montiel, Jos{\'e} MM},
        journal = {Scientific Data},
        volume = {10},
        number = {1},
        pages = {671},
        year = {2023},
        issn = {2052-4463},
        doi = {10.1038/s41597-023-02564-7},
        publisher = {Nature Publishing Group UK London},
    }
    
  • 2022: Photometric single-view dense 3D reconstruction in endoscopy
    @inproceedings{batlle2022photometric,
        title = {Photometric single-view dense 3D reconstruction in endoscopy},
        author = {Batlle, V{\'i}ctor M and Montiel, Jos{\'e} MM and Tard{\'o}s, Juan D},
        booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
        year = {2022},
        doi = {10.1109/IROS47612.2022.9981742},
    }