NVIDIA AI creates 3D model from 2D picture

Posted on Tuesday, April 20, 2021 @ 10:04 CEST by Thomas De Maesschalck
NVIDIA is showing off GANverse3D, a deep learning-based tool that transforms 2D pictures into 3D objects. From a single photo, and without any manual 3D modeling work, the tool generates an animatable 3D model. NVIDIA explains how it works on its research blog.
Fasten your seatbelts. NVIDIA Research is revving up a new deep learning engine that creates 3D object models from standard 2D images — and can bring iconic cars like the Knight Rider’s AI-powered KITT to life — in NVIDIA Omniverse.

Developed by the NVIDIA AI Research Lab in Toronto, the GANverse3D application inflates flat images into realistic 3D models that can be visualized and controlled in virtual environments. This capability could help architects, creators, game developers and designers easily add new objects to their mockups without needing expertise in 3D modeling, or a large budget to spend on renderings.

A single photo of a car, for example, could be turned into a 3D model that can drive around a virtual scene, complete with realistic headlights, tail lights and blinkers.
This looks interesting for quick prototyping, but whether it has real-world applications beyond that remains to be seen. The example NVIDIA shows looks fairly low-resolution.



