NVIDIA AI turns 2D pictures into 3D models

Posted on Tuesday, December 10 2019 @ 12:42 CET by Thomas De Maesschalck
NVIDIA researchers have created a new AI model that can turn 2D images into 3D objects. On the NVIDIA blog, they explain that this capability helps machine learning models understand image data more accurately, which should be useful for applications that interact with the real world, such as robots. Once trained, the model can produce a 3D object from a 2D image in under 100 milliseconds.
In traditional computer graphics, a pipeline renders a 3D model to a 2D screen. But there’s information to be gained from doing the opposite — a model that could infer a 3D object from a 2D image would be able to perform better object tracking, for example.

NVIDIA researchers wanted to build an architecture that could do this while integrating seamlessly with machine learning techniques. The result, DIB-R, produces high-fidelity renderings using an encoder-decoder architecture, a type of neural network that transforms its input into a feature map or vector, which is then used to predict specific properties of an image such as shape, color, texture, and lighting.
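The encoder-decoder idea described above can be sketched in a toy, untrained NumPy form. Everything here is illustrative, not DIB-R's actual implementation: the class name, layer sizes, and output heads are assumptions made for the example, chosen only to show how one latent code can feed several prediction heads (shape, color, lighting).

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class Toy2Dto3D:
    """Hypothetical encoder-decoder sketch (not DIB-R itself):
    a 2D image is encoded into a latent feature vector, and separate
    decoder heads predict 3D attributes from that vector."""

    def __init__(self, img_pixels=64 * 64, latent=128, n_vertices=642):
        # Random, untrained weights -- a real model would learn these.
        self.W_enc = rng.standard_normal((img_pixels, latent)) * 0.01
        self.W_shape = rng.standard_normal((latent, n_vertices * 3)) * 0.01
        self.W_color = rng.standard_normal((latent, n_vertices * 3)) * 0.01
        self.W_light = rng.standard_normal((latent, 9)) * 0.01
        self.n_vertices = n_vertices

    def forward(self, image):
        # Encoder: flatten the image and map it to a latent feature vector.
        z = relu(image.reshape(-1) @ self.W_enc)
        # Decoder heads: each predicts one property from the shared code.
        shape = (z @ self.W_shape).reshape(self.n_vertices, 3)  # per-vertex 3D positions
        color = (z @ self.W_color).reshape(self.n_vertices, 3)  # per-vertex RGB
        light = z @ self.W_light                                # lighting coefficients
        return shape, color, light

model = Toy2Dto3D()
shape, color, light = model.forward(rng.standard_normal((64, 64)))
```

In the actual system, a differentiable renderer turns the predicted mesh back into a 2D image, so the whole pipeline can be trained end to end by comparing that rendering to the input photo.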
NVIDIA 2D to 3D AI model


About the Author

Thomas De Maesschalck

Thomas has been messing with computers since early childhood and firmly believes the Internet is the best thing since sliced bread. He enjoys playing with new tech, is fascinated by science, and is passionate about financial markets. When not behind a computer, he can be found in running shoes or lifting heavy weights in the weight room.


