Arm and Nvidia partner to bring deep learning to IoT
March 28, 2018
Chip companies Nvidia and Arm announced at this week’s GPU Technology Conference in California that they are partnering to bring deep learning inferencing to the billions of mobile, consumer electronics and IoT devices that will enter the global marketplace.
Under this partnership, Nvidia and Arm will integrate the open-source Nvidia NVDLA deep learning accelerator into Arm’s Project Trillium platform for machine learning. The collaboration should make it easier for IoT chip companies to integrate AI into their designs and help put intelligent, affordable products into the hands of billions of consumers worldwide.
“Inferencing will become a core capability of every IoT device in the future,” said Deepu Talla, vice president and general manager of autonomous machines at Nvidia. “Our partnership with Arm will help drive this wave of adoption by making it easy for hundreds of chip companies to incorporate deep learning technology.”
Based on Nvidia’s Xavier autonomous machine system on a chip, NVDLA is a free, open architecture to promote a standard way to design deep learning inference accelerators. NVDLA’s modular architecture is scalable, configurable and designed to simplify integration and portability.
“Accelerating AI at the edge is critical in enabling Arm’s vision of connecting a trillion IoT devices,” said Rene Haas, executive vice president at Arm. “Today we are one step closer to that vision by incorporating NVDLA into the Arm Project Trillium platform, as our entire ecosystem will immediately benefit from the expertise and capabilities our two companies bring in AI and IoT.”
NVDLA brings a host of benefits that speed the adoption of deep learning inference. It is supported by Nvidia’s suite of developer tools, including upcoming versions of TensorRT, the company’s deep learning inference optimizer and runtime. The open-source design allows features to be added regularly, including contributions from the research community.
The integration of NVDLA with Project Trillium will give deep learning developers high performance while letting them leverage Arm’s flexibility and scalability across the wide range of IoT devices.
“This is a win-win for IoT, mobile and embedded chip companies looking to design accelerated AI inferencing,” said Karl Freund, lead analyst for deep learning at Moor Insights & Strategy. “Nvidia is the clear leader in ML training and Arm is the leader in IoT end points, so it makes a lot of sense for them to partner on IP.”