SIGGRAPH 2019 to Debut Research Advances From 31 Countries
Technical, Art Papers Programs to Present a Combined 157 Projects
CHICAGO—Pushing the boundaries of computer science, SIGGRAPH 2019 announces its Technical Papers and Art Papers research programming. The conference will run 28 July–1 August in downtown Los Angeles. Known throughout its 46-year history for delivering cutting-edge global research, SIGGRAPH promises innovations sure to inspire the computer science community.
“Each year, the Technical Papers program sets the pace for what’s next in visual computing and the adjacent subfields of computer science. I am excited to be part of presenting the amazing work of researchers who drive the industry and look forward to how this work ignites memorable discussions,” said SIGGRAPH 2019 Technical Papers Chair Olga Sorkine-Hornung. “This is the kind of content you’ll reflect on, and refer to, throughout the year to come.”
Along with new research from various academic labs, Facebook Reality Labs, NVIDIA, and Disney Research, highlights from the 2019 Technical Papers program include:
Semantic Photo Manipulation With a Generative Image Prior
Authors: David Bau, Massachusetts Institute of Technology and MIT-IBM Watson AI Lab; Hendrik Strobelt, IBM Research and MIT-IBM Watson AI Lab; William Peebles, Jonas Wulff, Jun-Yan Zhu, and Antonio Torralba, Massachusetts Institute of Technology; and, Bolei Zhou, The Chinese University of Hong Kong
We use GANs to make semantic edits on a user’s image. Our method maintains fidelity to the original image while allowing the user to manipulate the semantics of the image.
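For the curious, the core recipe behind this style of editing can be illustrated in a short sketch: invert the photo into the GAN's latent space, move the latent code along a semantic direction, and composite the result back so untouched regions keep their photographic detail. The Python sketch below is purely illustrative; `generator`, `encoder`, and `edit_direction` are hypothetical stand-ins, not the authors' actual code.

```python
import numpy as np

def semantic_edit(image, generator, encoder, edit_direction, strength=1.0):
    """Illustrative GAN-prior edit (hypothetical helpers, not the paper's API).

    image:          (H, W, 3) float array, the user's photo.
    generator:      callable mapping a latent code to an (H, W, 3) image.
    encoder:        callable approximately inverting a photo into latent space.
    edit_direction: latent-space direction encoding a semantic attribute.
    """
    z = encoder(image)                                    # project photo into latent space
    baseline = generator(z)                               # reconstruction without the edit
    rendered = generator(z + strength * edit_direction)   # apply the semantic move
    # Composite: keep original pixels wherever the edit changed little,
    # so the result stays faithful to the source photograph.
    mask = np.abs(rendered - baseline).mean(axis=-1, keepdims=True) > 0.1
    return np.where(mask, rendered, image)
```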
MeshCNN: A Network With an Edge
Authors: Rana Hanocka, Amir Hertz, Noa Fish, Raja Giryes, and Daniel Cohen-Or, Tel Aviv University; and, Shachar Fleishman, Amazon
MeshCNN is a deep neural network for triangular meshes, which applies convolution and pooling layers directly on the mesh edges. MeshCNN learns to exploit the irregular and unique mesh properties.
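The edge convolution at the heart of MeshCNN can be sketched compactly. On a manifold triangle mesh, every edge borders exactly four other edges (two per incident triangle), and the filter mixes symmetric combinations of those neighbors so it is invariant to their ordering. The NumPy sketch below simplifies for intuition and is not the authors' implementation.

```python
import numpy as np

def edge_conv(edge_feats, neighbors, weights):
    """One edge convolution in the spirit of MeshCNN (simplified sketch).

    edge_feats: (E, C) feature vector per mesh edge.
    neighbors:  (E, 4) indices; on a manifold mesh every edge has
                exactly four adjacent edges, two per incident triangle.
    weights:    (5 * C, C_out) learned filter matrix.
    """
    a, b, c, d = (edge_feats[neighbors[:, i]] for i in range(4))
    # Symmetric combinations of opposite neighbors remove the
    # ordering ambiguity between the two incident triangles.
    stacked = np.concatenate(
        [edge_feats, np.abs(a - c), a + c, np.abs(b - d), b + d], axis=1)
    return stacked @ weights  # (E, C_out)
```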
Text-Based Editing of Talking-Head Video
Authors: Ohad Fried, Michael Zollhöfer, and Maneesh Agrawala, Stanford University; Ayush Tewari and Christian Theobalt, Max Planck Institute for Informatics; Adam Finkelstein and Kyle Genova, Princeton University; Eli Shechtman and Zeyu Jin, Adobe; and, Dan B. Goldman, Google
Text-based editing of talking-head video supports adding, removing, and modifying words in the transcript, and automatically produces video with lip synchronization that matches the modified script.
SurfaceBrush: From Virtual Reality Drawings to Manifold Surfaces
Authors: Enrique Rosales, University of British Columbia and Universidad Panamericana; Jafet Rodriguez, Universidad Panamericana; and, Alla Sheffer, University of British Columbia
VR tools enable users to depict 3D shapes using virtual brush strokes. SurfaceBrush converts such VR drawings into user-intended manifold 3D surfaces, providing a novel approach for modeling 3D shapes.
Puppet Master: Robotic Animation of Marionettes
Authors: Simon Zimmermann, James Bern, and Stelian Coros, ETH Zurich; and, Roi Poranne, ETH Zurich and University of Haifa
We present a computational framework for robotic animation of real-world string puppets, based on a predictive control model that accounts for both the puppet's dynamics and the kinematics of the robot puppeteer.
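As a rough intuition for predictive control in this setting, imagine nudging motor commands to reduce a predicted pose error. The sketch below takes a single gradient step on a one-step cost via finite differences; the `simulate` function is a hypothetical stand-in for a model coupling motor commands, strings, and puppet pose, not the paper's method.

```python
import numpy as np

def control_step(state, motors, simulate, target, eps=1e-4, lr=0.1):
    """One illustrative predictive-control update (hypothetical `simulate`).

    state:    current puppet/robot state.
    motors:   (M,) array of motor commands for the robot puppeteer.
    simulate: callable (state, motors) -> predicted puppet pose array.
    target:   desired puppet pose, same shape as the simulated pose.
    """
    def cost(m):
        return np.sum((simulate(state, m) - target) ** 2)

    grad = np.zeros_like(motors)
    for i in range(motors.size):          # finite-difference gradient
        step = np.zeros_like(motors)
        step[i] = eps
        grad[i] = (cost(motors + step) - cost(motors - step)) / (2 * eps)
    return motors - lr * grad             # nudge motors toward the target pose
```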
For even more highlights, check out the Technical Papers Preview on YouTube: https://youtu.be/EhDr3Rs5fTU.
In addition, the Art Papers program offers a platform to explore and interrogate research that focuses specifically on scientific and technological applications in art, design, and the humanities. Highlights for 2019 include:
CAVE: Making Collective Virtual Narrative
Authors: Kris Layng, Ken Perlin, and Sebastian Herscher, New York University / Courant and Parallux; Corrine Brenner, New York University; and, Thomas Meduri, New York University / Courant and VRNOVO
CAVE is a shared narrative virtual reality experience. Thirty participants at a time each saw and heard the same narrative from their own unique location in the room, as they would when attending live theater. CAVE set out to disruptively change how audiences collectively experience immersive art and entertainment.
Learning to See: You Are What You See
Authors: Memo Akten and Rebecca Fiebrink, Goldsmiths, University of London; and, Mick Grierson, University of the Arts, London
“Learning to See” utilizes a novel deep learning method for “performing” visual, animated content with an almost photographic style. It demonstrates both the collaborative potential of AI and the inherent biases reflected and amplified in artificial neural networks, and perhaps even in our own.
To discover more highlights, check out the Art Papers Preview on YouTube: https://youtu.be/6uhyhW58A2M.
Technical Papers programming is open to participants at the Full Conference Platinum and Full Conference registration levels only. Art Papers programming is open to the Experiences level and above. Learn more about SIGGRAPH 2019 and register here: s2019.SIGGRAPH.org/register.