AI can turn a collection of 2D images into an explorable 3D world
An artificial intelligence algorithm can transform still images into a high-resolution, explorable 3D world, with potential implications for film effects and virtual reality.
Fed a selection of images of a scene, along with a rough 3D model of the scene generated automatically by off-the-shelf software called COLMAP, the neural network can accurately visualise what the scene would look like from any viewpoint.
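That first step can be reproduced with COLMAP's automated reconstruction pipeline, which turns a folder of photographs into a rough 3D model; the sketch below assumes COLMAP is installed, and the paths are placeholders:

```shell
# Build a rough 3D reconstruction from a folder of still images.
# DATASET is a placeholder path containing an images/ subfolder.
DATASET=/path/to/scene

colmap automatic_reconstructor \
    --workspace_path "$DATASET" \
    --image_path "$DATASET/images"
```

The resulting sparse model (camera poses plus a point cloud) is the kind of rough geometry a neural renderer can then refine into photorealistic novel views.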
The neural network, developed by Darius Rückert and colleagues at the University of Erlangen-Nuremberg in Germany, differs from previous systems in that it can extract physical properties from still images.
“We can change the camera pose and therefore get a new view of the object,” he says.
The system could technically create an explorable 3D world from just two images, but it wouldn’t be very accurate. “The more images you have, the better the quality,” says Rückert. “The model cannot create stuff it hasn’t seen.”
Some of the smoothest examples of the generated environments use between 300 and 350 images captured from different angles. Rückert hopes to improve the system by having it simulate how light bounces off objects in the scene to reach the camera, which would mean fewer still images are needed for accurate 3D rendering.
“Until now, creating photorealistic images from 3D reconstructions wasn’t fully automated and always had perceptible flaws,” says Tim Field, founder of New York-based company Abound Labs, who works on 3D capture software.
While Field points out that the system still requires accurate 3D data as input and doesn't yet work for moving objects, "the rendering quality is unparalleled", he says. "It's proof that automated photorealism is possible."
Field believes the technology will be used for generating visual effects in films and virtual reality walkthroughs of locations. “It’s going to accelerate the already-hot research field of machine learning-based rendering for computer generated imagery,” he says.