Chao Xu

I am a Ph.D. candidate at UCLA, currently a visiting graduate student at UC San Diego, fortunate to be advised by Prof. Hao Su. Previously, I received dual bachelor's degrees in Computer Engineering from the University of Illinois at Urbana-Champaign and Zhejiang University.

My research interests lie in computer vision, robotics, and cognition. I am actively engaged in pushing the boundaries of generalizable 3D vision, including 1) object understanding and 2) 3D reconstruction and generation.

Email / CV / Google Scholar / Semantic Scholar / GitHub

SpaRP: Fast 3D Object Reconstruction and Pose Estimation from Sparse Views
Chao Xu, Ang Li, Linghao Chen, Yulin Liu, Ruoxi Shi, Hao Su, Minghua Liu
ECCV 2024
Paper / Project

Given sparse, unposed views, we train multi-view diffusion models to predict their camera poses and reconstruct the 3D shape.

One-2-3-45++: Fast Single Image to 3D Objects with Consistent Multi-View Generation and 3D Diffusion
Minghua Liu*, Ruoxi Shi*, Linghao Chen*, Zhuoyang Zhang*, Chao Xu*, Xinyue Wei, Hansheng Chen, Chong Zeng, Jiayuan Gu, Hao Su
CVPR 2024
Paper / Project / Code / Demo

We propose a new pipeline in the One-2-3-45 paradigm: more consistent multi-view generation (Zero123++) and better reconstruction (multi-view-conditioned 3D native diffusion models).

Zero123++: A Single Image to Consistent Multi-view Diffusion Base Model
Ruoxi Shi, Hansheng Chen, Zhuoyang Zhang, Minghua Liu, Chao Xu, Xinyue Wei, Linghao Chen, Chong Zeng, Hao Su
Technical Report
Paper / Code / Demo

We improve consistency and image conditioning in single-image multi-view generation.

One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization
Minghua Liu*, Chao Xu*, Haian Jin*, Linghao Chen*, Mukund V. T, Zexiang Xu, Hao Su
NeurIPS 2023
Paper / Project / Code / Demo

We rethink how to leverage 2D diffusion models for 3D AIGC and introduce a novel feed-forward paradigm that avoids time-consuming per-shape optimization.

GAPartNet: Cross-Category Domain-Generalizable Object Perception and Manipulation via Generalizable and Actionable Parts
Haoran Geng*, Helin Xu*, Chengyang Zhao*, Chao Xu, Li Yi, Siyuan Huang, He Wang
CVPR 2023 Highlight (top 10% of accepted papers; scores: 5, 5, 5)
Paper / Project / Code

We learn generalizable and actionable parts on articulated objects across categories for perception and manipulation.

PartAfford: Part-level Affordance Discovery from Cross-category 3D Objects
Chao Xu, Yixin Chen, He Wang, Song-Chun Zhu, Yixin Zhu, Siyuan Huang
ECCV 2022 Visual Object-oriented Learning meets Interaction Workshop
Paper / Video

We discover part affordances on 3D objects across categories under weak supervision.

Website template modified from Jon Barron's.