AI Builds AI: Startup’s AI Generates Compact Neural Networks

DarwinAI’s platform, in pilot tests with Audi, streamlines models for edge computing.

University of Waterloo researcher Alexander Wong didn’t have enough processing power for his computer vision startup, so he developed a workaround. That workaround is now the company’s product.

DarwinAI, founded by a team from the University of Waterloo in Ontario, provides a platform for developers to generate slimmed-down models from their neural networks. This gives developers a quicker way to spin out multiple networks with smaller data footprints.

The company’s lean models are aimed at businesses developing AI-based edge computing networks to process mountains of sensor data from embedded systems and mobile devices.

Industries of all stripes — autonomous vehicles, manufacturing, aerospace, retail, healthcare and consumer electronics — are developing next-generation businesses with AI computing at the edge of their GPU-powered networks.

It’s estimated that by 2025 some 150 billion machine sensors and IoT devices will stream continuous data for processing.

Yet many companies find that the talent and computing resources needed to build these models come at a steep cost.

DarwinAI’s position is that companies can reduce development time and costs, as DarwinAI did for itself, by using its platform to spin out compact models from full-sized ones.

“We can enable AI at the edge for mobile devices and clients who need to put powerful neural networks into cars, watches, airplanes and other areas,” said Sheldon Fernandez, CEO and co-founder at DarwinAI.

Generative Synthesis: Hello, World

DarwinAI’s platform, dubbed GenSynth, is the result of pioneering research on what’s called generative synthesis. An easy way to think of generative synthesis: it’s AI that creates AI.

Late last year, the startup’s founders released a research paper on generative synthesis, then fused that work with their proprietary research to launch the company’s offering.

DarwinAI’s platform relies on machine learning to probe and understand the architecture of a customer’s neural network. Its AI then generates a new family of neural networks that are functionally equivalent to the original but smaller and faster, according to the company.
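DarwinAI hasn’t published GenSynth’s internals, but the general idea of producing a smaller network that behaves like a larger one can be illustrated with a standard technique such as knowledge distillation. The PyTorch sketch below is a hypothetical stand-in, not DarwinAI’s method; the model sizes, data and hyperparameters are invented for illustration.

# Illustrative sketch only: GenSynth's generative synthesis is proprietary.
# This shows a generic stand-in idea, knowledge distillation, where a small
# "student" network is trained to mimic a large "teacher," yielding a compact
# model that approximates the original's behavior. All model sizes and
# hyperparameters here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(  # stands in for a customer's full-sized network
    nn.Linear(128, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 10),
)
student = nn.Sequential(  # the smaller, faster model being generated
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 4.0  # softens teacher outputs so the student sees more signal

for step in range(100):                      # toy loop on random stand-in data
    x = torch.randn(32, 128)
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)
    # KL divergence between softened teacher and student output distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# In this toy setup the student has roughly 97 percent fewer parameters than
# the teacher, the kind of size reduction edge deployments are after.
print(sum(p.numel() for p in teacher.parameters()),
      sum(p.numel() for p in student.parameters()))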

The company is a member of the NVIDIA Inception program that helps startups move to market faster.

The startup’s research has attracted interest from consumer electronics companies, aerospace firms and automakers, including Audi.

Audi’s case study with DarwinAI used the GenSynth platform to accelerate design of custom, optimized deep neural networks for object detection in autonomous driving.

The GenSynth platform helped Audi developers train models 4x faster and slash GPU processing time by three-fourths.

“They worked with two terabytes of data, and we really reduced the testing time,” said Fernandez. “There’s real savings for their GPU training time and real benefits for the developers.”

DarwinAI developed GenSynth to reduce its own development time, tapping NVIDIA GPUs on AWS and Microsoft Azure, along with local on-premises instances, to shorten its coding cycles.

Many of DarwinAI’s early customers are now using the platform to speed their development. It also helps reduce the data processed on customers’ systems, which run NVIDIA Jetson modules on site and NVIDIA V100 Tensor Core GPUs in the cloud for training and inference.
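The article doesn’t detail customers’ deployment pipelines, but a common pattern for the cloud-training, edge-inference split it describes looks roughly like the sketch below, assuming a PyTorch model and an ONNX-to-TensorRT path onto a Jetson module; the model, shapes and file names are hypothetical.

# Illustrative sketch only, not DarwinAI's tooling: train a compact model on
# data-center GPUs (e.g., V100s), then export it to ONNX so an edge runtime
# such as TensorRT on a Jetson module can run inference on site.
import torch
import torch.nn as nn

model = nn.Sequential(          # placeholder for a compact, edge-ready model
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)    # example camera-frame shape
torch.onnx.export(
    model, dummy_input, "compact_model.onnx",
    input_names=["image"], output_names=["scores"],
    dynamic_axes={"image": {0: "batch"}},     # allow variable batch size
)
# On the Jetson, the ONNX file can then be converted to a TensorRT engine,
# e.g.: trtexec --onnx=compact_model.onnx --saveEngine=compact_model.plan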

“Deep learning is so complex that you need to collaborate with AI enabled by GPUs to do it properly — it will free up your time to do the creative work,” said Fernandez.

 

By: SCOTT MARTIN
