The technology could speed development, research, and virtual environment rendering, potentially shortening timelines and reducing the cost of deploying such virtual environment tools.
DeepMind has gained a new ability: generating a video of up to 30 seconds from a single image input. The new model, known as Transframer, could offer developers a way to render generated video much faster than traditional methods and lower the obstacles they face when building 3D environments.
Essentially, Transframer analyzes key points in an image and predicts how the scene would move in a 3D video environment. Without explicit geometric information, it can build a coherent 30-second video by identifying the picture's framing and contextual cues, which serve as markers for how the image might look if a viewer changed the angle or moved through the space.
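To make the idea concrete, the sketch below shows how such frame-by-frame generation could be wired up: a single input image seeds the sequence, and each new frame is predicted from the frames produced so far. This is a minimal illustration under assumed interfaces; the predict_next_frame stub and the frame-as-array representation are hypothetical stand-ins, not DeepMind's actual API.

```python
import numpy as np

def predict_next_frame(context_frames: list[np.ndarray]) -> np.ndarray:
    """Stand-in for a learned frame predictor (hypothetical, not Transframer's interface).

    A real model would condition on the context frames and sample a plausible
    next frame; here we simply return a copy of the last frame so the sketch
    runs end to end.
    """
    return context_frames[-1].copy()

def generate_video(seed_image: np.ndarray, seconds: int = 30, fps: int = 25) -> list[np.ndarray]:
    """Autoregressively roll out a video from a single input image."""
    frames = [seed_image]
    for _ in range(seconds * fps - 1):
        # Each new frame is conditioned on everything generated so far,
        # which is what keeps the rolled-out sequence coherent.
        frames.append(predict_next_frame(frames))
    return frames

if __name__ == "__main__":
    seed = np.zeros((256, 256, 3), dtype=np.uint8)  # placeholder single input image
    video = generate_video(seed, seconds=30)
    print(f"Generated {len(video)} frames")
```

The key design point this loop illustrates is that no explicit 3D geometry is supplied: everything the model knows about the scene has to come from the single seed image and its own previous predictions.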
A potential immediate application is video game environments created through the predictive power of image analysis rather than the time-consuming rendering that game artists and developers rely on today. In other industries, the approach could speed research and virtual environment rendering, shortening development timelines and reducing the cost of deploying such tools.
Imagining an image from different perspectives
The new model shows early promise in benchmarking and has many developers excited about the possibilities. The interest should extend beyond gaming, though, because the technology could lower the barriers to building AR/VR experiences. DeepMind's researchers already envision advances in science and other industrial research, and the capability should grow as the team continues to improve and benchmark the model.
The proposed model also yielded promising results on eight other tasks, including semantic segmentation and image classification. Google will continue to push the boundaries of what its machines can accomplish. The recently published paper outlining the video generation work, along with additional commentary, is available to read.