Paper Title | Object-Aware 3D Scene Reconstruction from Single 2D Images of Indoor Scenes
Paper Type | SCI
Authors | Mingyun Wen and Kyungeun Cho
Impact Factor | 2.592
Journal | Mathematics
Publication Date | 2023.01
Abstract |
Recent studies have shown that deep learning achieves excellent performance in reconstructing 3D scenes from multiview images or videos. However, these reconstructions do not provide the identities of objects, and object identification is necessary for a scene to be functional in virtual reality or interactive applications. The objects in a scene reconstructed as one mesh are treated as a single object rather than as individual entities that can be interacted with or manipulated. Reconstructing an object-aware 3D scene from a single 2D image is challenging because the conversion from a 3D scene to a 2D image is irreversible: the projection from 3D to 2D eliminates a dimension. To alleviate the effects of this dimension reduction, we proposed a module that generates depth features to aid the 3D pose estimation of objects. Additionally, we developed a novel approach to mesh reconstruction that combines two decoders estimating 3D shapes with different shape representations. By leveraging the principles of multitask learning, our approach generated more complete meshes than methods relying solely on implicit representation-based mesh reconstruction networks (e.g., local deep implicit functions) and produced more accurate shapes than previous approaches for mesh reconstruction from single images (e.g., topology modification networks). The proposed method was evaluated on real-world datasets. The results showed that it effectively improves object-aware 3D scene reconstruction performance over existing methods.
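The abstract's central idea, combining two decoders with different shape representations under a single multitask objective, can be illustrated with a minimal sketch. This is not the authors' code: the layer sizes, the template-mesh vertex count, the query-point count, and the loss weights below are all illustrative assumptions, and the "decoders" are reduced to single linear maps over a shared feature.

```python
import numpy as np

# Hedged sketch (not the paper's implementation): a shared per-object image
# feature feeds two decoders with different shape representations, and their
# losses are summed into one multitask objective. All sizes and weights are
# illustrative assumptions.

rng = np.random.default_rng(0)

def linear(x, W):
    # Minimal stand-in for a decoder network.
    return x @ W

# Shared encoder feature for one detected object (assumed 128-D).
feat = rng.standard_normal(128)

# Decoder A: implicit representation -- predicts a signed distance value
# for each 3D query point, conditioned on the shared feature.
n_queries = 64
query_pts = rng.standard_normal((n_queries, 3))
W_implicit = rng.standard_normal((128 + 3, 1)) * 0.01
implicit_in = np.concatenate([np.tile(feat, (n_queries, 1)), query_pts], axis=1)
sdf_pred = linear(implicit_in, W_implicit).ravel()

# Decoder B: explicit representation -- predicts vertex positions that
# deform a fixed template mesh (assumed 32 vertices here).
n_verts = 32
W_explicit = rng.standard_normal((128, n_verts * 3)) * 0.01
verts_pred = linear(feat, W_explicit).reshape(n_verts, 3)

# Stand-in ground truth for both supervision signals.
sdf_gt = np.zeros(n_queries)
verts_gt = np.zeros((n_verts, 3))

# Multitask objective: a weighted sum lets both shape representations
# drive the shared feature (the weights 1.0 and 0.5 are assumptions).
loss_implicit = np.mean((sdf_pred - sdf_gt) ** 2)
loss_explicit = np.mean((verts_pred - verts_gt) ** 2)
loss = 1.0 * loss_implicit + 0.5 * loss_explicit
print(float(loss))
```

In a real system each linear map would be a deep network, the implicit branch would be meshed (e.g., via marching cubes) at inference time, and the two losses would backpropagate into a shared encoder, which is what makes the multitask coupling useful.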