By Richard Quinnell, editor-in-chief
Machine learning (ML) is a branch of artificial intelligence (AI) that has been working its way into electronic systems for some years. Until now, though, the processing capability needed to implement ML has mostly confined it to the cloud. That situation is about to change: ML is expanding to the edge as new generations of microcontrollers arrive with ML capability built into their cores.
Machine learning is, at its core, the use of an algorithm to formulate a system's desired output response to input data without the developer defining the processing in between. Instead of telling the system what to do with the input data by writing a procedural program, the developer has the system determine its own process based on the input and some success criteria. Currently, the dominant approach to ML is the artificial neural network (ANN), and there are many implementations, called frameworks, for creating an ML design, such as TensorFlow, Caffe, and Android NN.
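To make the idea concrete, here is a minimal sketch in plain Python (deliberately not using any of the frameworks named above): a single artificial neuron learns the logical-AND function using the classic perceptron rule. Note that the developer supplies only example data and a success criterion (zero classification errors); the weights that implement the decision are found by the training loop, not written by hand.

```python
# A single artificial neuron learns logical AND from examples alone.
# The developer provides training data and a success criterion, not
# the decision procedure itself.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn two input weights and a bias from (inputs, target) pairs."""
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in samples:
            # Step activation: fire (1) if the weighted sum exceeds 0
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out
            if err:
                errors += 1
                # Perceptron rule: nudge weights toward the target
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
        if errors == 0:  # success criterion met
            break
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# The AND truth table serves as the training data.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)
print([predict(w, b, x1, x2) for (x1, x2), _ in and_samples])  # [0, 0, 0, 1]
```

Production ANNs differ in scale, not in kind: many such neurons, arranged in layers and trained by more sophisticated rules, but still learning their behavior from data rather than from explicit programming.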
Many functional ML systems exist, although they are currently cloud-based. The Alexa voice-recognition service is an obvious example. It starts out generic but, over time, learns individual voice and speech patterns so that it can respond differently to different users. There are also ML systems that select which ads appear in your browser based on what they learn about your interests and shopping history from your internet interactions. Industrial ML systems are determining how to optimally control complex chemical manufacturing processes or predicting equipment maintenance requirements based on the system data they receive.
But the cloud will no longer be a requirement. Microcontroller vendors are beginning to develop chips that bring machine learning to the edge, handling ML tasks locally without a network connection to massive cloud processors.
The seeds of ML at the edge could be seen at CES 2018. CEVA was showing off its NeuPro low-power ML processor IP based on its CEVA Deep Neural Network software. Similarly, Lattice Semiconductor was offering a reference design for AI based on its FPGA technologies. ARM talked about its new technology group focused on product development specific to supporting ML in its processor products. It was clear at the show that ML is coming to microcontrollers near you soon.
Recently, ARM dropped the other shoe and announced that it is making available a new processor architecture dedicated specifically to ML applications. Code-named Project Trillium, the effort has produced two new processor types: an ML processor and an object-detection processor. The object-detection processor is a second-generation design that identifies people and other objects in a 60-frame/s HD video stream. The ML processor is an entirely new design optimized for the workflow activities common to a multitude of ML frameworks. The two can be used together to, for instance, perform facial recognition in a group photo: the object-detection processor would identify all of the faces and send face-only data to the ML processor for classification and individual recognition.
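The division of labor described above, a fast detector finding regions of interest and a classifier identifying each cropped region, can be sketched structurally in Python. All function names here are hypothetical stand-ins (not ARM APIs), with trivial logic substituted for the real detection and classification networks:

```python
# Structural sketch (hypothetical names) of a two-stage pipeline:
# a detection stage finds regions of interest in a frame, and a
# classification stage identifies each cropped region.

from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (row, col, height, width)

def detect_faces(frame: List[List[int]]) -> List[Box]:
    """Stand-in detector: reports each nonzero pixel as a 1x1 'face' box.
    A real object-detection processor would return bounding boxes for
    faces found in an HD video frame."""
    return [(r, c, 1, 1)
            for r, row in enumerate(frame)
            for c, v in enumerate(row) if v]

def crop(frame: List[List[int]], box: Box) -> List[List[int]]:
    r, c, h, w = box
    return [row[c:c + w] for row in frame[r:r + h]]

def classify_face(patch: List[List[int]]) -> str:
    """Stand-in classifier: maps a pixel value to an identity.
    A real ML processor would run a neural network on the crop."""
    return {1: "alice", 2: "bob"}.get(patch[0][0], "unknown")

def recognize(frame: List[List[int]]) -> List[str]:
    # The detector hands only face-sized crops to the classifier,
    # so the expensive classification stage never sees the full frame.
    return [classify_face(crop(frame, box)) for box in detect_faces(frame)]

frame = [[0, 1, 0],
         [0, 0, 2],
         [0, 0, 0]]
print(recognize(frame))  # ['alice', 'bob']
```

The design point is the data reduction between the stages: the detector runs over every frame, while the heavier classifier processes only the small crops the detector flags, which is what makes the combination efficient enough for edge silicon.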
The key impact of this announcement, though, lies with the ML processor. The IP for this design will be made available to ARM licensees by mid-year, which means that by 2020 (if not earlier), developers should have ML processors available for their system designs. These processors are capable of performance in the tera-OPS range, which will allow them to perform substantial ML tasks locally without the need for network support. This can greatly reduce the bandwidth needs and latency of such tasks relative to today’s cloud-based systems.
ARM is not the only one pursuing ML for the edge. The CEVA NeuPro mentioned above is IP that can also be used in edge-based ML. Furthermore, there are startups, still in stealth mode but soon to announce their technology, that are creating ML processors with performance and power attributes suitable for bringing ML to battery-operated IoT devices.
So machine learning and AI are on the verge of precipitating out of the cloud to flood into a host of new applications. Embedded system developers will be well served to start learning about what was once an esoteric corner of computer science and begin thinking about how such capabilities might be leveraged in their application areas. Just as the advent of the programmable microprocessor transformed digital electronic design, ML processors are poised to become an essential tool for embedded-system designers.