Intel’s new hardware puts AI computation on a USB stick

Intel Movidius Neural Compute Stick
Image Credit: Intel

Hoping to lower the barrier to entry for those building artificial intelligence apps, Intel launched the Movidius Neural Compute Stick, billed as the world’s first USB-based deep learning inference kit and self-contained AI accelerator.

The compute stick, similar in form to Intel’s other PC-on-a-stick products, can deliver deep learning neural network processing capabilities to a wide range of host devices at the edge of a network. It is designed for product developers, researchers, makers, and hardware hobbyists. Intel acquired vision processing startup Movidius for an undisclosed price in 2016.

The Movidius Neural Compute Stick aims to reduce barriers to developing, tuning, and deploying AI applications by delivering dedicated high-performance deep neural network processing in a small form factor.

Intel, which is investing heavily in all sorts of AI products, wants to ensure developers are “retooling for an AI-centric digital economy.”

“The Myriad 2 Vision Processing Unit (VPU) housed inside the Movidius Neural Compute Stick provides powerful, yet efficient performance — more than 100 gigaflops of performance within a 1W power envelope — to run real-time deep learning neural networks directly from the device,” said Remi El-Ouazanne, vice president and general manager of Movidius, in a statement. “This enables a wide range of AI applications to be deployed offline.”

Machine intelligence development is fundamentally composed of two stages: (1) training an algorithm on large sets of sample data via modern machine learning techniques and (2) running the algorithm in an end-application that needs to interpret real-world data. This second stage is referred to as “inference,” and performing inference at the edge — or natively inside the device — brings numerous benefits in terms of latency, power consumption, and privacy, Intel said.
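In practice, that second stage is what the stick handles. Below is a minimal sketch of how edge inference on the device looked with the first-generation NCSDK Python API (the “mvnc” module); the graph file path and input shape are assumptions for illustration, and exact call names can differ between SDK versions.

```python
# Minimal sketch: offline inference on the Movidius Neural Compute Stick
# using the first-generation NCSDK Python API ("mvnc"). The network is
# assumed to have been trained elsewhere and compiled into a Movidius
# graph file; "graph" is a hypothetical path, and the 224x224x3 float16
# input is a common image-classifier shape, not a fixed requirement.
import numpy as np
from mvnc import mvncapi as mvnc

# 1. Find and open the stick on the USB bus.
devices = mvnc.EnumerateDevices()
if not devices:
    raise RuntimeError("No Neural Compute Stick found")
device = mvnc.Device(devices[0])
device.OpenDevice()

# 2. Load a pre-compiled network graph onto the stick's Myriad 2 VPU.
with open("graph", mode="rb") as f:
    graph_buffer = f.read()
graph = device.AllocateGraph(graph_buffer)

# 3. Run inference locally: push an input tensor, pull the result.
#    No network connection is involved, which is the latency/privacy
#    benefit of inference at the edge.
input_tensor = np.random.rand(224, 224, 3).astype(np.float16)
graph.LoadTensor(input_tensor, "user object")
output, user_obj = graph.GetResult()
print("Top class index:", int(np.argmax(output)))

# 4. Clean up.
graph.DeallocateGraph()
device.CloseDevice()
```

Because the compiled graph and the computation both live on the stick, the host device only has to shuttle tensors over USB, which is what lets low-power hosts run real-time deep learning workloads offline.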
