Micron’s Automata Processor: An Order from Chaos in Big Data Applications
May 10, 2016
Micron introduced its Automata Processor back in 2013 – a technology that has been in the works for more than seven years now. This year, the company plans to ship a few thousand Automata Processors to key development entities, with the main aim of boosting the software ecosystem forming around the technology.
Purpose-built to address the processing challenges posed by vast, unstructured data sets, the Automata Processor (AP) can perform high-speed, comprehensive search and analysis of massive, complex data streams, dramatically improving throughput in many Big Data application domains, including data analysis, pattern matching, and graph analysis.
Unlike traditional processor and computing architectures, the AP has thousands, and potentially millions, of independent processing elements, allowing a vast number of instructions to run in parallel rather than in the serial fashion of a standard CPU.
What makes it different from existing CPU architectures?
Traditional computer processors are based on the “von Neumann architecture”, a serialized structure consisting of several sub-systems, including a controller, an arithmetic logic unit, and a memory unit. Because data moves serially between the CPU and memory in this architecture, the CPU runs into what is called the “von Neumann bottleneck”: the system’s ability to transfer data between memory and the processing unit significantly restricts effective CPU speed. As the industry’s first non-von Neumann computing architecture, Micron’s Automata Processor offers a solution to this challenge.
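To make the bottleneck concrete, here is a minimal sketch (function and pattern names are ours, purely illustrative) of the serial scan a conventional CPU performs when matching patterns against a data stream: one instruction stream pulls each byte across the memory bus, and every additional pattern means another pass over the data.

```python
# Serial, von Neumann-style pattern scan: a single instruction stream
# fetches each byte from memory, and each additional pattern requires
# another full pass, so work grows with (number of patterns) x (data size).
# Illustrative sketch only; not Micron's code.

def serial_scan(patterns, stream):
    """Return (pattern, end_index) for every literal match, one pass per pattern."""
    matches = []
    for pattern in patterns:                        # one pass over the data per pattern
        for i in range(len(stream) - len(pattern) + 1):
            if stream[i:i + len(pattern)] == pattern:
                matches.append((pattern, i + len(pattern) - 1))
    return matches

print(serial_scan(["ana", "nan"], "banana"))
# → [('ana', 3), ('ana', 5), ('nan', 4)]
```

With thousands of patterns, the same bytes are dragged through the CPU thousands of times, which is exactly the memory-traffic problem the bottleneck describes.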
The Automata Processor is currently offered as a PCIe card and builds on Micron’s innovations in exploiting the intrinsic parallelism of DRAM architectures to provide extremely fast, highly scalable throughput along with significant cost- and energy-efficiency.
As Micron describes, “Unlike a conventional CPU, the Automata processor is a scalable, two-dimensional fabric comprised of thousands to millions of interconnected processing elements, each programmed to perform a targeted task or operation. Whereas conventional parallelism consists of a single instruction applied to many chunks of data, the AP focuses a vast number of instructions at a targeted problem.” Most of these problems lie in search and large-scale pattern matching. This means applications that are constrained by available memory on traditional CPU-based systems, such as cyber security, genomics, and other large-scale analytics workloads, could find a solution in the Automata Processor.
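The automata model behind this fabric can be sketched in software. In the rough simulation below (our own illustrative code, not Micron’s; the real AP compiles regular expressions or ANML automata into hardware state transition elements), every pattern position behaves like an independent processing element, and all active elements advance in lockstep on each input symbol, so each byte of the stream is examined only once no matter how many patterns are loaded.

```python
# Software sketch of automata-style parallel pattern matching:
# each (pattern, position) pair plays the role of an independent
# "state transition element"; all active states advance together on
# every input symbol, mimicking the AP's one-symbol-per-cycle fabric.
# Illustrative only -- the real AP implements this in DRAM-based hardware.

def run_automata(patterns, stream):
    """Return (pattern, end_index) for every literal match in the stream."""
    active = set()                  # active states: (pattern_id, next_position)
    matches = []
    for i, symbol in enumerate(stream):
        # Start states stay enabled so a match may begin at any offset,
        # like an always-active start element in the AP.
        candidates = active | {(p, 0) for p in range(len(patterns))}
        active = set()
        for pid, pos in candidates:
            if patterns[pid][pos] == symbol:        # element recognizes this symbol
                if pos + 1 == len(patterns[pid]):   # final element fires a report
                    matches.append((patterns[pid], i))
                else:
                    active.add((pid, pos + 1))      # activate the next element
    return matches

print(run_automata(["ana", "nan"], "banana"))
# → [('ana', 3), ('nan', 4), ('ana', 5)]
```

Note that the outer loop touches each input symbol exactly once; adding more patterns widens the active set rather than forcing extra passes over the data, which is the essence of the throughput claim.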
According to Micron’s Senior Applications Engineer Matt Tanner, the AP offers “performance levels that far exceed the conventional approaches” and is “witnessing growing interest from several different sectors including computational finance, cyber, network protection, bio-informatics, Sigint/Crypto, big data.”
To sum up, the Automata Processor is a fitting alternative for targeted areas that require deep memory pockets and fast solutions to high-volume data problems. As ever more application areas demand the handling and comprehension of big data, the Automata Processor could become a strong alternative to traditional computing approaches.