SpaRP: Fast 3D Object Reconstruction and Pose Estimation from Sparse Views

ECCV 2024

Chao Xu1,2†, Ang Li3, Linghao Chen2,4†, Yulin Liu5, Ruoxi Shi2,5†, Hao Su2,5‡, Minghua Liu2,5†‡
1UCLA, 2Hillbot Inc., 3Stanford University, 4Zhejiang University, 5UC San Diego
† Work done during internship at Hillbot Inc. ‡ Equal advising.
SpaRP handles open-world 3D reconstruction and pose estimation from unposed sparse-view images, delivering a textured mesh and camera poses in about 20 seconds.

Abstract

Open-world 3D generation has recently attracted considerable attention. While many single-image-to-3D methods have yielded visually appealing outcomes, they often lack sufficient controllability and tend to produce hallucinated regions that may not align with users' expectations. In this paper, we explore an important scenario in which the input consists of one or a few unposed 2D images of a single object, with little or no overlap. We propose a novel method, SpaRP, to reconstruct a 3D textured mesh and estimate the relative camera poses for these sparse-view images. SpaRP distills knowledge from 2D diffusion models and finetunes them to implicitly deduce the 3D spatial relationships between the sparse views. The diffusion model is trained to jointly predict surrogate representations for camera poses and multi-view images of the object under known poses, integrating all information from the input sparse views. These predictions are then leveraged to accomplish 3D reconstruction and pose estimation, and the reconstructed 3D model can be used to further refine the camera poses of input views. Through extensive experiments on three datasets, we demonstrate that our method not only significantly outperforms baseline methods in terms of 3D reconstruction quality and pose prediction accuracy but also exhibits strong efficiency. It requires only about 20 seconds to produce a textured mesh and camera poses for the input views.

Method

Given a sparse set of unposed images, we tile them into a single composite image, which serves as the conditioning input to a Stable Diffusion UNet. The 2D diffusion model is finetuned to jointly predict NOCS (normalized object coordinate space) maps for the input sparse views and multi-view images of the object under known camera poses. From the NOCS maps, we extract the camera poses of the input views; the multi-view images are then fed to a reconstruction module to produce a textured 3D mesh. Optionally, the generated mesh can be used to further refine the camera poses for improved accuracy.
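As a concrete illustration of the pose-extraction step, the sketch below fits a camera pose to a predicted NOCS map by treating each foreground pixel as a 2D-3D correspondence and solving a RANSAC-PnP problem with OpenCV. This is a minimal sketch under assumed inputs (nocs, mask, intrinsics K); the function name and interface are illustrative, not SpaRP's released code.

# A minimal sketch (not SpaRP's released code): recover a camera pose from a
# predicted NOCS map via RANSAC-PnP. Assumed inputs: `nocs` is an (H, W, 3)
# array of normalized object coordinates, `mask` marks foreground pixels, and
# `K` is a 3x3 pinhole intrinsic matrix.
import numpy as np
import cv2

def pose_from_nocs(nocs, mask, K):
    """Fit an object-to-camera pose from NOCS-map correspondences.

    Each foreground pixel (u, v) pairs the 3D point nocs[v, u] in the
    normalized object frame with its 2D image location, so the pose can be
    solved as a standard PnP problem.
    """
    vs, us = np.nonzero(mask)
    pts3d = nocs[vs, us].astype(np.float64)                 # (N, 3) object-space points
    pts2d = np.stack([us, vs], axis=-1).astype(np.float64)  # (N, 2) pixel coordinates

    # RANSAC makes the fit robust to noisy pixels in the predicted NOCS map.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d, pts2d, K.astype(np.float64), None, reprojectionError=3.0
    )
    if not ok:
        raise RuntimeError("PnP failed: too few reliable correspondences")

    R, _ = cv2.Rodrigues(rvec)          # rotation vector -> 3x3 rotation matrix
    return R, tvec.reshape(3)           # pose of the object frame in the camera frame

Since NOCS coordinates live in a normalized object space, the recovered translation is expressed in the same normalized units; relative poses between input views follow by composing the per-view estimates.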

Comparison


Overall Comparison

[Qualitative comparison on four examples: StableFast3D*, iFusion, EscherNet, and Ours; interactive meshes are available on the project page.]

* Only the first image is used as input for single-image-to-3D methods.

More Results (Real-world Examples)


Real-World Examples: The input images are either sourced from amazon.com or captured using an iPhone.

BibTeX

@inproceedings{xu2025sparp,
    title={{SpaRP}: Fast {3D} Object Reconstruction and Pose Estimation from Sparse Views},
    author={Xu, Chao and Li, Ang and Chen, Linghao and Liu, Yulin and Shi, Ruoxi and Su, Hao and Liu, Minghua},
    booktitle={European Conference on Computer Vision},
    pages={143--163},
    year={2025},
    organization={Springer}
}