Nvidia Presents Its Vision Of The Metaverse

At its annual GPU Technology Conference, the chip giant fleshed out the future of its Omniverse platform, which lets users develop 3d worlds and collaborate within them. The company is pitching the software as a fundamental building block in the metaverse, the concept of an interconnected virtual realm touted by the likes of Microsoft and Meta (formerly Facebook).

The underlying tech on which Omniverse builds its digital worlds is well established in the animation industry: it’s Universal Scene Description (USD), a framework created by Pixar to let teams collaborate on creating high-end cg-animated scenes. Nvidia has described it as “the HTML of 3d worlds.”
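To give a sense of what USD authoring looks like in practice, here is a minimal sketch using Pixar’s open-source Python bindings (the pxr module, which ships with the USD distribution); the scene contents and file name are illustrative, not taken from Nvidia’s demos.

```python
from pxr import Usd, UsdGeom

# Create a new USD stage backed by a human-readable .usda file.
stage = Usd.Stage.CreateNew("example_scene.usda")

# Define a transform prim to act as the scene root, and a sphere beneath it.
world = UsdGeom.Xform.Define(stage, "/World")
ball = UsdGeom.Sphere.Define(stage, "/World/Ball")
ball.GetRadiusAttr().Set(2.0)

# Mark the root prim as the default and write the layer to disk.
stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()
```

Because a scene like this is composed from layers that can be referenced and overridden, several artists or applications can contribute to the same world without overwriting each other’s work, which is the collaboration model Omniverse builds on.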

At the conference, Nvidia also unveiled Omniverse Avatar, which lets virtual characters interact with humans using speech recognition, speech synthesis, facial tracking, and real-time animation. CEO Jensen Huang demonstrated the tech by presenting an animated toy version of himself, which fielded questions on scientific topics.

Microsoft’s and Meta’s visions of the metaverse have so far emphasized social and workplace interactions, in environments that can be entirely fictional. Nvidia, by contrast, is interested in creating precise digital replicas of real objects and environments, ones that obey physical laws, for industrial purposes.

For example, Ericsson has worked with the company to model cities in order to work out where to place 5G antennas in real life. Omniverse can also be used to train the AI components of robots and self-driving vehicles, or to simulate and study forest fires.

Omniverse has been available in beta since December 2020. The platform’s Enterprise version can now be licensed for $9,000/year.
