
Apple Unveils LiTo: Reconstruct Full 3D Objects from a Single Image, with View-Consistent Relighting

According to 1M AI News monitoring, Apple's AI research team presented a paper at ICLR 2026 introducing the 3D generation method LiTo (Surface Light Field Tagging). LiTo can generate a complete 3D object from a single image, maintaining consistency in specular highlights, Fresnel reflections, and other lighting effects when changing viewpoints.

Prior to this, most 3D reconstruction methods could handle only geometric shape or diffuse appearance, making it difficult to reproduce lighting details that vary with the viewpoint. LiTo encodes an object's geometry and view-dependent appearance into the same 3D latent space, then generates results conditioned on a single image through a latent flow matching model. The training data consist of thousands of 3D objects, each rendered from 150 viewpoints under 3 lighting conditions; the decoder learns to reconstruct full geometry and appearance by randomly sampling subsets of these rendered views. Experiments show that LiTo outperforms the existing method TRELLIS in both visual quality and fidelity to the input image. The paper, written by Jen-Hao Rick Chang, Xiaoming Zhao (co-first authors), Dorian Chan, and Oncel Tuzel, has been publicly released on arXiv.
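For readers unfamiliar with the generative component mentioned above, the following is a minimal sketch of the generic (rectified) flow matching objective that such a model optimizes: the network regresses the straight-line velocity between a noise sample and a data latent, conditioned on an image embedding. This is a toy illustration of the general technique, not Apple's implementation; the dimensions, the linear "network", and all variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- LiTo's actual latent dimensions are not stated in the article.
LATENT_DIM = 8   # dimensionality of the shared geometry + appearance latent
COND_DIM = 4     # dimensionality of the single-image conditioning embedding

# A toy linear stand-in for the velocity network: v_pred = W @ [z_t, cond, t]
W = rng.normal(scale=0.1, size=(LATENT_DIM, LATENT_DIM + COND_DIM + 1))

def velocity(z_t, cond, t):
    """Predict the flow velocity at state z_t and time t, given the image condition."""
    inp = np.concatenate([z_t, cond, [t]])
    return W @ inp

def flow_matching_loss(z1, cond):
    """One Monte Carlo sample of the flow matching objective:
    regress the predicted velocity onto the straight-line target z1 - z0."""
    z0 = rng.normal(size=LATENT_DIM)   # noise sample at t = 0
    t = rng.uniform()                  # random time in [0, 1]
    z_t = (1.0 - t) * z0 + t * z1      # linear interpolant between noise and data
    target_v = z1 - z0                 # constant velocity along the straight path
    pred_v = velocity(z_t, cond, t)
    return float(np.mean((pred_v - target_v) ** 2))

# Usage: a fake object latent (z1) and a fake image embedding (cond).
z1 = rng.normal(size=LATENT_DIM)
cond = rng.normal(size=COND_DIM)
loss = flow_matching_loss(z1, cond)
```

At inference time, such a model starts from pure noise and integrates the learned velocity field from t = 0 to t = 1, with the image embedding held fixed, to produce a latent that a decoder then turns into geometry and appearance.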
