Latest NVIDIA Graphics Research Advances Generative AI’s Next Frontier
NVIDIA will present around 20 research papers at SIGGRAPH, the year’s most important computer graphics conference.
NVIDIA today introduced a wave of cutting-edge AI research that will enable developers and artists to bring their ideas to life — whether still or moving, in 2D or 3D, hyperrealistic or fantastical.
Around 20 NVIDIA Research papers advancing generative AI and neural graphics — including collaborations with over a dozen universities in the U.S., Europe and Israel — are headed to SIGGRAPH 2023, the premier computer graphics conference, taking place Aug. 6-10 in Los Angeles.
The papers include generative AI models that turn text into personalized images; inverse rendering tools that transform still images into 3D objects; neural physics models that use AI to simulate complex 3D elements with stunning realism; and neural rendering models that unlock new capabilities for generating real-time, AI-powered visual details.
Innovations by NVIDIA researchers are regularly shared with developers on GitHub and incorporated into products, including the NVIDIA Omniverse platform for building and operating metaverse applications and NVIDIA Picasso, a recently announced foundry for custom generative AI models for visual design. Years of NVIDIA graphics research helped bring film-style rendering to games, like the recently released Cyberpunk 2077 Ray Tracing: Overdrive Mode, the world’s first path-traced AAA title.
The research advancements presented this year at SIGGRAPH will help developers and enterprises rapidly generate synthetic data to populate virtual worlds for robotics and autonomous vehicle training. They’ll also enable creators in art, architecture, graphic design, game development and film to more quickly produce high-quality visuals for storyboarding, previsualization and even production.
AI With a Personal Touch: Customized Text-to-Image Models
Generative AI models that transform text into images are powerful tools to create concept art or storyboards for films, video games and 3D virtual worlds. Text-to-image AI tools can turn a prompt like “children’s toys” into nearly infinite visuals a creator can use for inspiration — generating images of stuffed animals, blocks or puzzles.
However, artists may have a particular subject in mind. A creative director for a toy brand, for example, could be planning an ad campaign around a new teddy bear and want to visualize the toy in different situations, such as a teddy bear tea party. To enable this level of specificity in the output of a generative AI model, researchers from Tel Aviv University and NVIDIA have two SIGGRAPH papers that enable users to provide image examples that the model quickly learns from.
One paper describes a technique that needs a single example image to customize its output, accelerating the personalization process from minutes to roughly 11 seconds on a single NVIDIA A100 Tensor Core GPU, more than 60x faster than previous approaches.
A second paper introduces a highly compact model called Perfusion, which takes a handful of concept images to allow users to combine multiple personalized elements — such as a specific teddy bear and teapot — into a single AI-generated visual.
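For developers curious about the underlying pattern, the core of this kind of personalization can be sketched in a few lines: the generative model stays frozen, and only a small embedding for the new concept is optimized against the user's example images. The snippet below is a deliberately tiny stand-in (a frozen linear "generator" instead of a text-to-image diffusion model, with made-up dimensions and a hypothetical pseudo-token), meant to illustrate the optimization pattern rather than either paper's actual method.

```python
import torch, torch.nn as nn

# Minimal sketch of personalization in the spirit of textual inversion: keep the
# generative model frozen and optimize only a small new "concept" embedding so
# prompts containing it reproduce the user's example images. Everything here is
# a toy stand-in, not the SIGGRAPH papers' methods.

torch.manual_seed(0)
EMB, PIX = 64, 16 * 16 * 3

generator = nn.Linear(EMB, PIX)          # stands in for the frozen text-to-image model
for p in generator.parameters():
    p.requires_grad_(False)

# Pretend these are a handful of user-provided photos of one specific teddy bear.
true_concept = torch.randn(EMB)
examples = generator(true_concept).detach() + 0.01 * torch.randn(4, PIX)

# The only trainable parameter: an embedding for a new pseudo-token, e.g. "<my-bear>".
concept_embedding = torch.zeros(EMB, requires_grad=True)
opt = torch.optim.Adam([concept_embedding], lr=0.05)

for step in range(500):
    generated = generator(concept_embedding)           # "render" the concept
    loss = ((generated - examples) ** 2).mean()        # match the example images
    opt.zero_grad(); loss.backward(); opt.step()

print(f"reconstruction error: {loss.item():.5f}")
# Once learned, the embedding can be dropped into ordinary prompts ("<my-bear> at a
# tea party"), so a personalized concept can be reused or combined with others.
```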
Serving in 3D: Advances in Inverse Rendering and Character Creation
Once a creator comes up with concept art for a virtual world, the next step is to render the environment and populate it with 3D objects and characters. NVIDIA Research is inventing AI techniques to accelerate this time-consuming process by automatically transforming 2D images and videos into 3D representations that creators can import into graphics applications for further editing.
A third paper created with researchers at the University of California, San Diego, discusses tech that can generate and render a photorealistic 3D head-and-shoulders model based on a single 2D portrait — a major breakthrough that makes 3D avatar creation and 3D video conferencing accessible with AI. The method runs in real time on a consumer desktop, and can generate a photorealistic or stylized 3D telepresence using only conventional webcams or smartphone cameras.
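Unlike the toy below, the paper's method runs in real time from a single photo, but the inverse-rendering idea it builds on can be illustrated simply: make the renderer differentiable, then adjust 3D scene parameters until the rendered image matches the 2D input. The sketch fits a voxel grid to a synthetic target image; the grid, the fixed orthographic renderer and the target are all illustrative assumptions, not the paper's pipeline.

```python
import torch

# Toy inverse rendering: recover a 3D voxel volume from a 2D image by making the
# renderer differentiable and optimizing the volume with gradient descent. This is
# only a conceptual stand-in; the paper reconstructs photoreal 3D heads from a
# single portrait in real time, not via per-scene optimization like this.

N = 32  # voxel grid resolution

def render_front(density, color):
    """Alpha-composite the volume along its last axis (a fixed orthographic view)."""
    alpha = 1.0 - torch.exp(-torch.nn.functional.softplus(density))       # (N, N, N)
    ones = torch.ones_like(alpha[..., :1])
    trans = torch.cumprod(torch.cat([ones, 1.0 - alpha], dim=-1), dim=-1)[..., :-1]
    weights = alpha * trans                      # per-voxel contribution to the pixel
    return (weights[..., None] * color).sum(dim=2)                        # (N, N, 3)

# Synthetic "photo" to explain: a white sphere rendered from a known volume.
zz, yy, xx = torch.meshgrid(*[torch.linspace(-1, 1, N)] * 3, indexing="ij")
inside = (xx**2 + yy**2 + zz**2) < 0.4
gt_density = inside.float() * 15.0 - 10.0     # ~opaque inside the sphere, ~empty outside
target = render_front(gt_density, torch.ones(N, N, N, 3)).detach()

# Unknown scene parameters, optimized so the rendered image matches the target.
density = torch.zeros(N, N, N, requires_grad=True)
color = torch.rand(N, N, N, 3, requires_grad=True)
opt = torch.optim.Adam([density, color], lr=0.05)

for step in range(300):
    loss = torch.nn.functional.mse_loss(render_front(density, color), target)
    opt.zero_grad(); loss.backward(); opt.step()

print(f"final image reconstruction loss: {loss.item():.5f}")
```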
A fourth project, a collaboration with Stanford University, brings lifelike motion to 3D characters. The researchers created an AI system that can learn a range of tennis skills from 2D video recordings of real tennis matches and apply this motion to 3D characters. The simulated tennis players can accurately hit the ball to target positions on a virtual court, and even play extended rallies with other characters.
Beyond the test case of tennis, this SIGGRAPH paper addresses the difficult challenge of producing 3D characters that can perform diverse skills with realistic movement — without the use of expensive motion-capture data.
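For context on how such skills are usually learned, physics-based character controllers are commonly trained with an imitation reward that scores how closely the simulated pose tracks a reference motion (here, motion that would be estimated from match footage). The function below is a generic, simplified formulation in that spirit, comparable to DeepMimic-style rewards, with arbitrary weights; it is not the paper's actual objective.

```python
import numpy as np

# Generic pose-imitation reward for physics-based character control. This is an
# assumed, simplified formulation for illustration only; weights and scales are
# arbitrary and do not come from the tennis paper.

def imitation_reward(sim_joints, ref_joints, sim_root_vel, ref_root_vel,
                     w_pose=0.7, w_vel=0.3, k_pose=2.0, k_vel=0.1):
    """sim_joints, ref_joints: (J, 4) unit quaternions; root velocities: (3,) arrays."""
    # 1 - |q_sim . q_ref| is a cheap proxy for the rotation difference per joint.
    dots = np.abs(np.sum(sim_joints * ref_joints, axis=1))
    pose_err = np.sum(1.0 - np.clip(dots, 0.0, 1.0))
    vel_err = np.sum((sim_root_vel - ref_root_vel) ** 2)
    # Each exponentiated term lies in (0, 1]; the reward is high only when the
    # simulated pose and root velocity both track the reference from video.
    return w_pose * np.exp(-k_pose * pose_err) + w_vel * np.exp(-k_vel * vel_err)

# Example: a simulated pose slightly perturbed from the reference frame.
rng = np.random.default_rng(0)
J = 15
ref = np.tile([1.0, 0.0, 0.0, 0.0], (J, 1))                  # identity rotations
sim = ref + 0.05 * rng.standard_normal((J, 4))
sim /= np.linalg.norm(sim, axis=1, keepdims=True)            # renormalize quaternions
print(imitation_reward(sim, ref, np.array([1.0, 0.0, 0.0]), np.array([1.2, 0.0, 0.0])))
```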
Not a Hair Out of Place: Neural Physics Enables Realistic Simulations
Once a 3D character is generated, artists can layer in realistic details such as hair — a complex, computationally expensive challenge for animators.
Humans have an average of 100,000 hairs on their heads, with each reacting dynamically to an individual’s motion and the surrounding environment. Traditionally, creators have used physics formulas to calculate hair movement, simplifying or approximating its motion based on the resources available. That’s why virtual characters in a big-budget film sport much more detailed heads of hair than real-time video game avatars.
A fifth paper showcases a method that can simulate tens of thousands of hairs in high resolution and in real time using neural physics, an AI technique that teaches a neural network to predict how an object would move in the real world.
The team’s novel approach for accurate simulation of full-scale hair is specifically optimized for modern GPUs. It offers significant performance leaps compared to state-of-the-art, CPU-based solvers, reducing simulation times from multiple days to merely hours — while also boosting the quality of hair simulations possible in real time. This technique finally enables both accurate and interactive physically based hair grooming.
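The general neural-physics recipe can be illustrated with a toy: run a classical solver to generate training data, then train a network to predict the next simulation state so that, at runtime, a single network evaluation replaces the solver step. The sketch below uses a tiny 2D mass-spring strand and an arbitrary MLP as stand-ins; it shows the learn-to-step pattern, not the paper's full-scale hair model or GPU implementation.

```python
import torch, torch.nn as nn

# Conceptual sketch of "neural physics": train a small network to step a simulation
# forward, using a classical solver only to generate training data. The "hair" here
# is a toy damped mass-spring chain, not the paper's hair model.

DT, K, DAMP, N = 0.02, 40.0, 0.8, 8   # timestep, stiffness, damping, strand nodes

def solver_step(pos, vel):
    """Reference integrator: springs between neighboring nodes plus gravity."""
    force = torch.zeros_like(pos)
    force[1:] += -K * (pos[1:] - pos[:-1])      # pull toward previous node
    force[:-1] += -K * (pos[:-1] - pos[1:])
    force[:, 1] -= 9.8                          # gravity on y
    vel = (vel + DT * force) * DAMP
    vel[0] = 0.0                                # root node pinned to the scalp
    return pos + DT * vel, vel

net = nn.Sequential(nn.Linear(4 * N, 128), nn.ReLU(), nn.Linear(128, 4 * N))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):                        # train on random strand states
    pos = torch.randn(N, 2) * 0.1
    vel = torch.randn(N, 2) * 0.1
    next_pos, next_vel = solver_step(pos, vel)
    x = torch.cat([pos, vel], dim=1).reshape(1, -1)
    y = torch.cat([next_pos, next_vel], dim=1).reshape(1, -1)
    loss = nn.functional.mse_loss(net(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

print(f"one-step prediction error: {loss.item():.6f}")
# At runtime the trained network would replace the solver inside the animation loop,
# trading the solver's cost for a single fast forward pass per frame.
```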
Neural Rendering Brings Film-Quality Detail to Real-Time Graphics
After an environment is filled with animated 3D objects and characters, real-time rendering simulates the physics of light reflecting through the virtual scene. Recent NVIDIA research shows how AI models for textures, materials and volumes can deliver film-quality, photorealistic visuals in real time for video games and digital twins.
NVIDIA invented programmable shading over two decades ago, enabling developers to customize the graphics pipeline. In these latest neural rendering inventions, researchers extend programmable shading code with AI models that run deep inside NVIDIA’s real-time graphics pipelines.
In a sixth SIGGRAPH paper, NVIDIA will present neural texture compression that delivers up to 16x more texture detail without taking additional GPU memory. Neural texture compression can substantially increase the realism of 3D scenes, as seen in the image below, which demonstrates how neural-compressed textures (right) capture sharper detail than previous formats, where the text remains blurry (center).
Neural texture compression (right) provides up to 16x more texture detail than previous texture formats without using additional GPU memory.
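At a high level, the idea is to store a compact learned representation and decode texels from it on demand at shading time. The toy below fits a small coordinate MLP to a procedural texture so its weights act as the compressed representation; the network, Fourier encoding and texture are illustrative assumptions and bear no relation to the paper's actual format or GPU decompression path.

```python
import math
import torch, torch.nn as nn

# Toy illustration of the neural-texture idea: fit a small network to a texture so
# its weights act as a compressed representation decoded per texel at shading time.
# Only the high-level "learned compressed texture" concept is shown here.

H = W = 128

# Procedural "ground truth" texture: a checkerboard with a color gradient.
u, v = torch.meshgrid(torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij")
checker = ((u * 8).floor() + (v * 8).floor()) % 2
target = torch.stack([checker, u, v], dim=-1).reshape(-1, 3)      # (H*W, 3) texels
coords = torch.stack([u, v], dim=-1).reshape(-1, 2)               # (H*W, 2) UVs

# Fourier features let a tiny MLP reproduce sharp texture edges.
freqs = 2.0 ** torch.arange(6).float()
def encode(uv):
    ang = uv[:, None, :] * freqs[None, :, None] * math.pi         # (B, 6, 2)
    return torch.cat([torch.sin(ang), torch.cos(ang)], dim=1).reshape(uv.shape[0], -1)

decoder = nn.Sequential(nn.Linear(24, 64), nn.ReLU(),
                        nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 3))
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)

for step in range(1000):
    idx = torch.randint(0, H * W, (4096,))                        # random texel batch
    loss = nn.functional.mse_loss(decoder(encode(coords[idx])), target[idx])
    opt.zero_grad(); loss.backward(); opt.step()

n_params = sum(p.numel() for p in decoder.parameters())
print(f"reconstruction MSE {loss.item():.5f}; {n_params} weights vs {H * W * 3} raw texels")
```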
A related paper announced last year is now available in early access as NeuralVDB, an AI-enabled data compression technique that reduces the memory needed to represent volumetric data such as smoke, fire, clouds and water by 100x.
NVIDIA today also released more details about neural materials research that was shown in the most recent NVIDIA GTC keynote. The paper describes an AI system that learns how light reflects from photoreal, many-layered materials, reducing these assets to small neural networks that run in real time and enabling up to 10x faster shading.
The level of realism can be seen in this neural-rendered teapot, which accurately represents the ceramic, the imperfect clear-coat glaze, fingerprints, smudges and even dust.
The neural material model learns how light reflects from the many-layered, photoreal reference materials.
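The pattern behind this result (distilling an expensive, layered material evaluation into a small network that is evaluated per shading sample) can be sketched with a toy example. The reference material below is a simple analytic diffuse-plus-glossy stand-in and the network size is arbitrary; neither represents the paper's actual material models or renderer integration.

```python
import math
import torch, torch.nn as nn

# Sketch of the neural-materials idea: distill an expensive material evaluation
# into a small MLP that is cheap to run per shading sample. The "expensive"
# reference here is just an analytic diffuse-plus-glossy stand-in; the paper's
# models capture far richer multi-layered appearance than this toy.

def reference_material(n_dot_l, n_dot_h, roughness):
    """Stand-in ground truth: Lambertian diffuse plus a Blinn-Phong-style lobe."""
    shininess = 2.0 / (roughness ** 2 + 1e-4)
    spec = (n_dot_h ** shininess) * (shininess + 2.0) / (2.0 * math.pi)
    return (0.3 / math.pi + 0.7 * spec) * n_dot_l

mlp = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU(),
                    nn.Linear(32, 1))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)

for step in range(3000):
    # Random shading configurations: cosine terms in [0, 1], roughness in [0.1, 1].
    n_dot_l, n_dot_h = torch.rand(1024, 1), torch.rand(1024, 1)
    rough = torch.rand(1024, 1) * 0.9 + 0.1
    y = reference_material(n_dot_l, n_dot_h, rough)
    x = torch.cat([n_dot_l, n_dot_h, rough], dim=1)
    loss = nn.functional.mse_loss(mlp(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

print(f"distillation error (MSE): {loss.item():.6f}")
# At render time a shader would evaluate the tiny MLP (a few small matrix multiplies)
# in place of the full layered material stack for every shading sample.
```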
More Generative AI and Graphics Research
These are just the highlights — read more about all the NVIDIA papers at SIGGRAPH. NVIDIA will also present six courses, four talks and two Emerging Technology demos at the conference, with topics including path tracing, telepresence and diffusion models for generative AI.
NVIDIA Research has hundreds of scientists and engineers worldwide, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics.
Source: Aaron Lefohn/SIGGRAPH