Artificial intelligence (AI) use at the edge is growing dramatically as advances in AI and machine learning (ML) enable embedded system providers to develop brain-inspired chips. Microchip Technology, Inc. and Intelligent Hardware Korea (IHWK) are partnering to develop an analog compute platform to accelerate edge AI/ML inferencing for neurotechnology devices.
IHWK is developing a neuromorphic computing platform for neurotechnology devices and field-programmable neuromorphic devices. Silicon Storage Technology (SST), a Microchip Technology subsidiary, announced it will assist in the development of this platform by providing an evaluation system for its SuperFlash memBrain neuromorphic memory solution. The solution is based on Microchip’s nonvolatile memory (NVM), SuperFlash technology, optimized to perform vector matrix multiplication (VMM) for neural networks through an analog in-memory compute approach.
A Sheer Analytics & Insights report forecasts that the global market for neuromorphic computing will reach $780 million by 2028, growing at a compound annual growth rate (CAGR) of 50% over the 2020-2028 forecast period.
“Neuromorphic computing technology mimics brain processes utilizing hardware that operates key processes within the analog domain,” said Mark Reiten, vice president of SST, Microchip’s licensing business unit. “Operation within the analog domain leverages non-von Neumann architecture to deliver AI-powered features at minimal power consumption. This is a significant improvement over mainstream artificial neural networks that are based on digital hardware and traditional von Neumann architecture. The digital approach consumes multiple orders of magnitude more power than the human brain to achieve similar tasks.”
The memBrain technology evaluation kit enables IHWK to demonstrate the power efficiency of its neuromorphic computing platform for running inferencing algorithms at the edge. The goal is to create an ultra-low-power analog processing unit (APU) for applications such as generative AI models, autonomous cars, medical diagnosis, voice processing, security/surveillance, and commercial drones.
This is the first collaboration between the two companies, and it may run for several years. “IHWK intends to use the memBrain demo system that we have provided to experiment with design ideas in order to formulate a go-to-market strategy for the edge computing markets they are assessing,” Reiten said.
Current neural-network models for edge inference can require 50 million or more synapses (weights), which creates a bandwidth bottleneck for the off-chip DRAM that purely digital neural-network computing requires. The memBrain solution stores synaptic weights in on-chip floating-gate cells operating in ultra-low-power sub-threshold mode and uses the same memory cells to perform the computations, improving both power efficiency and system latency. Compared with traditional digital DSP and SRAM/DRAM-based approaches, it delivers 10× to 20× lower power per inference decision and significantly reduces the overall bill of materials (BOM).
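To see why re-fetching weights from off-chip DRAM becomes a bottleneck, consider a rough back-of-the-envelope sketch. The 50-million-synapse figure is from the article; the 8-bit precision and 30-inferences-per-second workload are assumptions chosen purely for illustration.

```python
# Illustrative arithmetic only: the 50M-synapse count comes from the
# article; the precision and inference rate below are assumed examples.
weights = 50_000_000            # synaptic weights (per the article)
bytes_per_weight = 1            # assumed INT8 quantization
bytes_per_inference = weights * bytes_per_weight  # weights streamed each pass
inferences_per_sec = 30         # assumed, e.g. a 30 fps vision workload

dram_traffic = bytes_per_inference * inferences_per_sec
print(f"{dram_traffic / 1e9:.1f} GB/s of DRAM traffic for weights alone")
# → 1.5 GB/s of DRAM traffic for weights alone
```

Sustaining gigabytes per second of DRAM traffic is costly in both power and latency, which is the bottleneck that keeping weights on-chip avoids.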
“The synaptic weights are stored in the floating gate memory cell as conductance, which means the cell is used as a programmable resistor,” Reiten explained. “When an input voltage is applied to the cell in the horizontal direction on the bitline and combined with the cell conductance, the output, which is measured as current, is the multiplication of the input voltage (value 1) with the conductance (value 2). We sum the output current of many cells on the wordline to form a ‘neuron’ in the vertical direction in the array.”
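The multiply-accumulate operation Reiten describes can be mirrored numerically. This is a generic sketch of analog in-memory vector-matrix multiplication, not SST's actual circuit: each cell's conductance multiplies the applied voltage (Ohm's law), and currents summed along an output line form one neuron (Kirchhoff's current law). The array dimensions and values are arbitrary.

```python
import numpy as np

# Illustrative model of an analog VMM array (not SST's actual design):
# G[i][j] is the programmed conductance of the cell at input i, neuron j.
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # conductances: 4 inputs x 3 neurons
V = np.array([0.2, 0.5, 0.1, 0.8])       # input voltages

# Ohm's law per cell: each current is voltage x conductance.
I_cells = V[:, None] * G
# Kirchhoff's current law per output line: summed currents = one "neuron".
I_neurons = I_cells.sum(axis=0)

# The array has computed a vector-matrix multiply in a single analog step.
assert np.allclose(I_neurons, V @ G)
```

In a digital processor the same result would take one multiply and one accumulate per cell; in the analog array the physics performs all of them simultaneously.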
IHWK is also working with the Korea Advanced Institute of Science & Technology (KAIST) in Daejeon on APU device development and with Yonsei University in Seoul on device design. The final APU should optimize system-level algorithms for inferencing and operate at 20 to 80 TOPS per watt, the best performance available for a computing-in-memory solution designed for use in battery-powered devices.
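The 20-80 TOPS-per-watt range from the article translates directly into energy per inference. The operation count below is an assumed example workload, not a figure from the article, so treat the results as orders of magnitude rather than product specifications.

```python
# Energy arithmetic using the article's 20-80 TOPS/W range.
# The per-inference op count is an assumed illustrative workload.
ops_per_inference = 100e6            # assumed: 100M MAC operations

for tops_per_watt in (20, 80):
    joules_per_op = 1 / (tops_per_watt * 1e12)   # 1 W / (TOPS/W) per op
    energy = ops_per_inference * joules_per_op   # joules per inference
    print(f"{tops_per_watt} TOPS/W -> {energy * 1e6:.2f} uJ per inference")
```

At single-digit microjoules per inference, such a workload sits comfortably within the energy budget of a small battery, which is the point of targeting this efficiency class.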
“As to the resulting life span of the battery using the technology, it really depends on the specific target market but the memBrain technology should be able to extend battery life of a product by at least 3× compared with comparably performing digital solutions,” Reiten said.
By using NVM rather than off-chip memory to perform neural network computation and to store weights, the memBrain technology can eliminate the massive data communications bottlenecks otherwise associated with performing AI processing at the edge. IHWK is leveraging the SuperFlash memory’s floating gate cells’ nonvolatility to achieve a new benchmark in low-power edge computing devices, supporting ML inference using advanced ML models.
Learn more about Microchip Technology nonvolatile memory.