Intel is attempting to democratize deep learning development by unveiling its Movidius Neural Compute Stick. According to the company’s press release, this is the first USB-based deep learning inference kit with a self-contained artificial intelligence (AI) accelerator that “delivers dedicated deep neural network processing capabilities to a wide range of host devices at the edge.”
Intel designed its compute stick, which is capable of sifting through large amounts of patterned data, for product developers, researchers, and creators. The Movidius Neural Compute Stick aims to reduce traditional barriers to “developing, tuning and deploying AI applications by delivering dedicated high-performance deep-neural network processing in a small form factor.” In other words, Intel wants to make AI development available to small developers who are normally excluded from the industry.
AI Digital Economy
As more developers adopt advanced machine learning approaches to build innovative applications and solutions, Intel is committed to providing the most comprehensive set of development tools and resources to ensure developers are retooling for an AI-centric digital economy.
Whether it’s cancer screening, mapping the human genome, or creating visual media displays, the brainpower of neural networks has been reserved for those with deep pockets. Now it will be available to smaller developers.
The Next Generation of Developers
From training artificial neural networks on the Intel® Nervana™ cloud, to optimizing emerging workloads such as artificial intelligence, virtual and augmented reality, and automated driving with Intel® Xeon® Scalable processors, to taking AI to the edge with Movidius vision processing unit (VPU) technology, Intel offers a comprehensive portfolio of AI tools.
“The Myriad 2 VPU housed inside the Movidius Neural Compute Stick provides powerful, yet efficient performance – more than 100 gigaflops of performance within a 1W power envelope – to run real-time deep neural networks directly from the device,” said Remi El-Ouazzane, Vice President and GM of Movidius. “This enables a wide range of AI applications to be deployed offline.”
Machine intelligence development is fundamentally composed of two stages: (1) training an algorithm on large sets of sample data via modern machine learning techniques and (2) running the algorithm in an end-application that interprets real-world data. This second stage is referred to as “inference,” and performing inference at the edge, natively inside the device, brings numerous benefits in terms of latency, power consumption, and privacy. The Movidius Neural Compute Stick supports this edge-inference workflow in three ways:
Compile: Automatically convert a trained Caffe-based convolutional neural network (CNN) into an embedded neural network optimized to run on the onboard Movidius Myriad 2 VPU.
Tune: Layer-by-layer performance metrics for both industry-standard and custom-designed neural networks enable effective tuning for optimal real-world performance at ultra-low power. Validation scripts allow developers to compare the accuracy of the optimized model on the device to the original PC-based model.
Accelerate: Unique to the Movidius Neural Compute Stick, the device can act as a discrete neural network accelerator, adding dedicated deep learning inference capabilities to existing computing platforms for improved performance and power efficiency.
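The two-stage split described above can be illustrated with a toy sketch: “training” produces a set of learned weights, and “inference” at the edge is simply a forward pass over those fixed weights. The single-neuron model and its weights here are invented for illustration and have nothing to do with any real Movidius workload.

```python
import math

# Stage 1 ("training") normally runs on powerful hardware and
# produces learned weights; here they are simply hard-coded.
# These values are illustrative, not from any real model.
WEIGHTS = [0.8, -0.4]
BIAS = 0.1

def sigmoid(x):
    """Standard logistic activation function."""
    return 1.0 / (1.0 + math.exp(-x))

def infer(inputs):
    """Stage 2 ("inference"): a forward pass over fixed weights.

    This is the kind of computation an edge device runs locally,
    with no learning and no round trip to a server.
    """
    activation = sum(w * x for w, x in zip(WEIGHTS, inputs)) + BIAS
    return sigmoid(activation)

print(round(infer([1.0, 1.0]), 3))
```

Because inference needs only the frozen weights and a forward pass, it can run offline on a low-power device, which is exactly the latency and privacy argument the article makes.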
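As a concrete sketch of the Compile step, the Movidius Neural Compute SDK ships an `mvNCCompile` tool that converts a trained Caffe model into a graph file the Myriad 2 VPU can execute. The file names below are placeholders, and the exact flags may vary between SDK releases.

```shell
# Convert a trained Caffe network into a Myriad 2 graph file.
# deploy.prototxt / weights.caffemodel are illustrative file names.
#   -w : the trained weights file
#   -s : number of SHAVE vector cores to use on the VPU
#   -o : name of the compiled output graph
mvNCCompile deploy.prototxt -w weights.caffemodel -s 12 -o graph
```

The resulting `graph` file is what the host application loads onto the stick at runtime, which is the step the Accelerate point above refers to.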
The Movidius Neural Compute Stick is now available for purchase through select distributors at an MSRP of $79.