Sensor fusion has been around for many years and is ubiquitous in mobile device designs. It “fuses” data from different types of sensors to improve measurement accuracy, such as for motion tracking and orientation. Mobile devices integrate sensors such as accelerometers, gyroscopes, magnetometers and proximity, pressure and temperature sensors, as well as integrated inertial measurement units (IMUs), depending on the application. IMUs combine accelerometers and gyroscopes in the same package to measure motion and position and to support navigation. They are sometimes fused with pressure and ultrasonic sensors.
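As a concrete illustration, one of the simplest fusion schemes for orientation is a complementary filter that blends the gyroscope’s integrated rate with the accelerometer’s gravity-derived tilt. The Python sketch below is a minimal, generic example; the function name, axis convention and 0.98 blend factor are illustrative assumptions, not taken from any product discussed here.

```python
import math

def fuse_pitch(prev_pitch_deg, gyro_rate_dps, accel_x_g, accel_z_g, dt, alpha=0.98):
    """Complementary filter: the integrated gyro rate is smooth but drifts,
    while the accelerometer's gravity-based tilt is drift-free but noisy.
    Blending the two yields a stable pitch estimate in degrees."""
    gyro_pitch = prev_pitch_deg + gyro_rate_dps * dt              # integrate angular rate
    accel_pitch = math.degrees(math.atan2(accel_x_g, accel_z_g))  # tilt from the gravity vector
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch

# One 10 ms update: previous pitch 5 deg, gyro reads 2 deg/s, accel suggests ~8.5 deg of tilt
print(round(fuse_pitch(5.0, 2.0, accel_x_g=0.15, accel_z_g=1.0, dt=0.01), 2))  # ~5.09
```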
As sensors continue to shrink, more of them are being integrated into applications to increase functionality. Many sensor manufacturers are adding data processing and intelligence to these sensors to save power, reduce development time and increase functionality and accuracy.
However, for more advanced and complex algorithms and applications, sensor data processing still happens on a microprocessor (MPU) or microcontroller (MCU). Here, we start to see more machine-learning (ML) capabilities bringing processing to the edge.
Use cases
It is almost a given that smartphones leverage sensor fusion to improve functionality. It is used in many use cases, such as gaming, navigation and even augmented-reality applications.
“Today’s mobile devices have been fusing data of accelerometers, gyroscopes, pressure sensors, magnetometers, GPS in mobile and wearables for many years now,” said Ted Karlin, senior director of marketing and applications for sensing, compute and connectivity products at Infineon Technologies AG. “The greatest opportunities now are in optimizing the solution for better intelligence to the user with less false data [false positives] and doing so at significantly less power to conserve battery life.”
Sensor fusion enhances augmented reality and provides better navigation and spatial awareness, which improves location-based services and contextual awareness for various applications, Karlin said.
But sensor fusion is not a main driving factor in smartphone and mobile phone use cases, said Sahil Choudhary, director of product management at TDK InvenSense. While there is a lot of adoption, it is primarily used for general fusion to run gaming applications or rotation or orientation detection, he said.
For wearables, such as smartwatches and fitness bands, sensor fusion is used more for navigation. “If you’re running or doing a certain activity, you want to combine the data of the accelerometer and gyroscope and assist with your GPS, like a Garmin band, for example, to identify where you are,” Choudhary added.
He also explained that sometimes an accelerometer or gyroscope alone is not good enough for activity tracking in wearables. It takes full sensor fusion to understand what motion an activity involves, such as push-ups or pull-ups, or even gestures.
One new application area that is gaining a lot of traction is hearable devices, specifically true wireless stereo (TWS) earbuds. Apple started the category, Choudhary said, and the entire industry has since moved to this form factor.
Apple also introduced spatial audio that tracks head orientation, which translates into sensor fusion tracking of different movements of the head, he added. Spatial audio has been described by many as virtual surround sound.
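Under the hood, head-tracked spatial audio needs the head’s rotation relative to a reference frame (for example, the phone or the front of the sound stage), which is exactly what a fused orientation estimate provides. The quaternion sketch below shows that relative-rotation step in generic form; it makes no claim about Apple’s or any other vendor’s implementation.

```python
import math

def quat_mul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def relative_yaw_deg(q_head, q_ref):
    """Yaw of the head relative to a reference orientation, both given as unit
    quaternions from the fusion filter. The renderer rotates the sound field
    by the opposite angle so the scene stays fixed in space."""
    rw, rx, ry, rz = q_ref
    w, x, y, z = quat_mul((rw, -rx, -ry, -rz), q_head)  # conj(q_ref) * q_head
    return math.degrees(math.atan2(2.0 * (w*z + x*y), 1.0 - 2.0 * (y*y + z*z)))

# Example: head turned 30 degrees from the reference orientation about the vertical axis
q_ref = (1.0, 0.0, 0.0, 0.0)
q_head = (math.cos(math.radians(15)), 0.0, 0.0, math.sin(math.radians(15)))
print(round(relative_yaw_deg(q_head, q_ref), 1))  # ~30.0
```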
Another new use case is earbuds with heart rate monitoring. TDK recently started to work with a TWS company that has developed earbuds that can identify heart-rate patterns. “They can’t do that without knowing the motion and context of the person to be able to measure those heartbeats accurately and provide the metrics related to the heart rate,” Choudhary said.
For example, heart-rate variability is a very important metric and requires an accelerometer to understand what the person is doing, but an accelerometer alone may not be good enough because a body can have all kinds of complex movements, activities and postures, Choudhary said. This requires sensor fusion to identify the context of the body so that accurate measurements can be taken, he added.
In these applications, IMUs are used in combination with photoplethysmography (PPG) heart rate sensors and other medical-grade sensors for accurate measurement.
Karlin sees more sensors used in AR/VR and other electronics to monitor the user’s behavior and reactions: “Perhaps not traditional sensor fusion in the sense we have been thinking of it, but using non-ideal performing and low-power sensors for initial detection and then a more sophisticated but higher-power sensor for confirmation and additional determinations for further intelligence.”
The use of more sensors also brings its own challenges. “More sensors take more space,” Karlin said. “Many sensors also require access to the environment and therefore need an opening to the outside of the product enclosure, which creates ingress issues and placement of the sensors in spaces not ideal for product design.”
Value of software
Over the past few years, mobile device designers have also moved to processing sensor fusion data on the sensor chip itself, rather than on an MPU or MCU, to reduce power consumption.
“Customers can do sensor fusion on a processor, but they’d rather do it on a sensor to save some power,” Choudhary said.
A second reason is that many companies want to save the integration time and effort of creating a complex algorithm like sensor fusion, which requires a lot of expertise, he added, particularly customers for whom time to market is one of their main priorities.
Choudhary has seen more traction in sensor fusion on-chip over the last three years because of the TWS market and spatial audio: “It has become a huge feature, and customers want to save more power, so power consumption is driving sensor fusion on the sensor chip.”
But there are design constraints due to limited processing memory. If the sensor fusion application becomes more complex, such as a robotic vacuum cleaner that needs feedback from the robot as well as assistive navigation, it is difficult to do on the chip, he explained.
TDK offers sensor fusion on-chip in power-sensitive applications like mobile devices, wearables and TWS earbuds, where basic sensor-fusion applications like head or wrist movements can be done at very low power. When there is more complexity, TDK provides sensor fusion libraries that run on the host computer. “We compile it on different processors so that people can use it on their platforms easily,” Choudhary said.
For embedded programmers, the challenge is constrained memory: running complex sensor fusion algorithms while still maintaining accuracy and low power. “We want to get high accuracy, but we still want to do it in a few kilobytes of memory, and this is only going to get more complex because the power consumption demands are only going to increase,” he said.
Consumers want better battery life, and they want all these sensors and features running all the time, so there is more pressure to make sure these complex algorithms run at the lowest power inside limited memory, Choudhary said.
Sensor solutions
One of TDK’s latest solutions for sensor fusion is the VibeSense 360, a combination motion sensor and software solution for hearables that offers ML as well as voice vibration detection and head tracking. A key feature enabled by the solution is always-on spatial audio for an immersive 360° listening experience, thanks to a low-power six-axis IMU drawing 280 µA combined with TDK’s software suite for head-tracking accuracy.
The VibeSense 360 solution can be used in hearables, wearables and some smartphones. It runs on TDK’s ICM-45605-S IMU for TWS applications, as well as the ICM-45686-S, which offers a higher full-scale range and a slightly higher temperature range.
The VibeSense 360 is supplied as a customized hearable solution package with the hardware (IMU) and software. TDK programs the chip with software, such as for head tracking or a combination of head tracking and ML, based on the customer’s requirements.
Choudhary said the VibeSense 360’s head-tracking software (library) is a key piece of the solution, allowing TWS customers to do spatial audio effectively.
Other features enabled by VibeSense 360 include transparency for talk, which uses vocal vibration detection to determine when the user is speaking and can automatically shift from ANC to transparency mode and pause any audio playing; keyword assist, which differentiates between the user’s voice and ambient noise so the microphones can ignore surrounding audio, creating fewer false keyword triggers and providing secure keyword detection; and machine learning using TDK’s ML algorithms that run on the IMU.
Choudhary said ML has opened a lot of new use cases and opportunities that were previously restricted to very specific sensor fusion applications. Sensor fusion and ML work in parallel, with sensor fusion providing orientation and ML offering more specific motion classification, he said.
TDK also offers the SmartBug multi-sensor wireless module, originally developed to monitor any IoT application. “We wanted IoT developers to have something to collect data, because data is key, and once they get that data, they can keep building all kinds of models and solutions out of it,” Choudhary said.
With the second generation, TDK added ML to SmartBug along with the higher-full-scale-range ICM-45686-S chip. SmartBug 2.0 now allows users to do more sensor fusion and ML, Choudhary said. “You can literally stick SmartBug 2.0 onto a headphone and it will track your head movements accurately.”
SmartBug 2.0 adds TDK’s VibeSense 360 head-tracking solution, collecting raw sensor and head orientation data at very low power of 280 µA at 50 Hz. It also enhances TDK’s Air Motion solution with the addition of swipe gestures, remote orientation gestures and accurate cursor tracking for smart TV remotes, and it improves data collection with updated streaming and logging parameters for longer-duration streaming and logging of multi-sensor and algorithm data.
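As a rough picture of how gyroscope data becomes remote-control cursor motion, an “air mouse” simply maps angular rates to screen deltas with some gain and a deadband to reject hand tremor. The sketch below is a heavily simplified, hypothetical mapping, not TDK’s Air Motion algorithm; the gain, deadband and sign conventions are arbitrary.

```python
def gyro_to_cursor_delta(yaw_rate_dps, pitch_rate_dps, dt, gain=8.0, deadband_dps=1.0):
    """Map a remote's angular rates to cursor deltas: yaw moves the cursor
    horizontally, pitch moves it vertically. A small deadband keeps hand
    tremor from jittering the cursor. Signs depend on the axis convention."""
    def shaped(rate_dps):
        return 0.0 if abs(rate_dps) < deadband_dps else rate_dps
    dx = shaped(yaw_rate_dps) * dt * gain
    dy = shaped(pitch_rate_dps) * dt * gain
    return dx, dy

# One 20 ms sample: sweeping the remote at 25 deg/s of yaw and -5 deg/s of pitch
print(gyro_to_cursor_delta(25.0, -5.0, dt=0.02))  # (4.0, -0.8)
```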
Data collection is key in SmartBug 2.0 for both IoT developers and ODMs. Easing data-collection headaches, the sensor inference framework is TDK’s software ML framework that allows users to collect IMU sensor data, select custom features and then build, test, deploy and run those models on the ICM-45686-S IMU through the SmartBug 2.0 module. Algorithms include exercise classification (squats, jumping jacks, lateral raises and push-ups) and wrist-gesture classification (fight, turn, shake and still). Custom motion classification algorithms can run on the IMU at a current consumption as low as 30 µA.
“The goal of SmartBug was to enable the toughest thing in the sensor industry today, which is to collect data because there are so many boards, so many chips and so many protocols, so it is not easy,” Choudhary said. “If you make it into a simple thing like SmartBug, you solve a big problem.”
SmartBug can be connected to the ML framework, SmartEdgeML, available on TDK’s website, where users can build ML models for things like detecting right or left gestures, whether the wearer is standing or sitting, or anything else they’d like to detect, and test them wirelessly.
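The pattern behind these exercise and gesture classifiers is broadly similar across vendors: short windows of IMU samples are reduced to a handful of cheap statistical features and fed to a small model, which is what keeps the memory footprint small. The sketch below shows that pattern with a hand-written rule standing in for a trained model; it is a hypothetical illustration, not TDK’s sensor inference framework or SmartEdgeML, and the features and thresholds are assumptions.

```python
import math
import statistics

def window_features(accel_mag_g):
    """Reduce one window of accelerometer-magnitude samples (in g) to a few
    cheap statistics, the kind of features a tiny on-sensor classifier uses."""
    return {
        "mean": statistics.fmean(accel_mag_g),
        "std": statistics.pstdev(accel_mag_g),
        "peak_to_peak": max(accel_mag_g) - min(accel_mag_g),
    }

def classify(features):
    """Stand-in for a trained model: a hand-written rule that separates
    'still' from rhythmic exercise by how much the signal moves."""
    if features["std"] < 0.05:
        return "still"
    return "exercise" if features["peak_to_peak"] > 0.8 else "light motion"

# Example: a 2-second window at 50 Hz of a roughly periodic, squat-like signal
window = [1.0 + 0.6 * math.sin(2 * math.pi * 0.8 * (i / 50)) for i in range(100)]
print(classify(window_features(window)))  # "exercise"
```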
Bosch Sensortec also offers IMUs with sensor fusion software. One of its latest products is an ultra-low-power programmable smart sensor that combines an accelerometer, gyroscope and fusion software. The BHI360 is a six-axis IMU-based sensor system with a 32-bit programmable MCU that enables full customization. The sensor, housed in a 20-pin LGA package measuring 2.5 × 3 × 0.95 mm³, is 50% smaller than its previous generation.
The IMU also includes an ultra-low-power MCU, along with the sensor fusion software and algorithms in a single package. The integrated sensor fusion library offers 3D audio with head orientation for personalized sound experiences as well as simple gesture recognition. Applications include wearables, hearables, smartphones, tablets and smart devices.
The low-power custom processor can run simpler sensor processing algorithms, such as gesture detection or step counting, eliminating the need to wake the main device processor and keeping power consumption ultra-low. It also helps lower power consumption for high-end algorithms.
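Step counting illustrates the kind of lightweight algorithm that can stay on the sensor-side processor: a threshold-crossing detector over the acceleration magnitude needs only a few state variables, so the host can sleep until results are ready. The sketch below is a generic, naive illustration with arbitrary thresholds, not Bosch’s algorithm.

```python
import math

def count_steps(accel_mag_g, high=1.15, low=1.05, min_gap=15):
    """Naive step counter: count a step each time the acceleration magnitude
    rises above `high` g, then re-arm once it falls back below `low` g, with a
    minimum sample gap to suppress double counting. Thresholds are illustrative."""
    steps, armed, last_step = 0, True, -min_gap
    for i, a in enumerate(accel_mag_g):
        if armed and a > high and (i - last_step) >= min_gap:
            steps += 1
            last_step, armed = i, False
        elif a < low:
            armed = True  # signal has settled back near 1 g; ready for the next step
    return steps

# Example: 4 seconds at 50 Hz with one acceleration bump (a "step") per second
signal = [1.0 + max(0.0, 0.4 * math.sin(2 * math.pi * (i / 50))) for i in range(200)]
print(count_steps(signal))  # 4
```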
The sensor solution offers a total current consumption of less than 600 µA for 3D orientation. It also offers fast SPI (50 MHz) and I2C (3.4 MHz) host interfaces, as well as multiple SPI, I2C and GPIO interfaces for external sensors.
Bosch Sensortec said the new smart sensor system is simple to integrate because of its small size and ready-to-use algorithms.
A product variant, the BHI380, offers additional algorithms. It provides self-learning AI software for fitness tracking, such as swim tracking, as well as pedestrian dead-reckoning. Applications include pedestrian navigation, 3D audio, personalized fitness tracking and human-machine interaction.
Another space- and power-saving IMU for hearables is STMicroelectronics’ highly integrated LSM6DSV16BX sensor, housed in a 2.5 × 3.0 × 0.74-mm VFLGA package, for sports and general-purpose earbuds. The all-in-one motion and bone-conduction sensor combines a six-axis IMU for head tracking and activity detection with an audio accelerometer for detecting voice through bone conduction in a frequency range that exceeds 1 kHz. It delivers longer listening time and improved hearing.
The LSM6DSV16BX leverages ST’s Qvar charge-variation detection technology for user-interface controls, such as touching and swiping. This makes the IMU suited for applications like TWS headphones and augmented-, virtual- and mixed-reality headsets.
The sensor also uses ST’s Sensor Fusion Low Power (SFLP) technology, specifically designed for head tracking and 3D sound, and the edge-processing resources featured in ST’s third-generation MEMS sensors. These include the finite state machine for gesture recognition, the machine-learning core for activity recognition and voice detection, and adaptive self-configuration, which automatically optimizes performance and efficiency. These help to reduce system latency and save overall power, offloading the host processor, ST said.
The processing resources, in combination with the high integration, are said to reduce system power consumption by up to 70% and PCB area by 45%. The sensor also reduces the number of pin connections by 50%, saving external connections, and shrinks the package by 14% compared with previous ST MEMS inertial sensors.
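To picture how a finite state machine handles gesture recognition, the sketch below implements a generic double-tap detector as a tiny two-state machine over thresholded acceleration samples. It is only an illustration of the approach; it is not ST’s FSM configuration, and the thresholds and timing are arbitrary assumptions.

```python
def detect_double_tap(accel_mag_g, tap_thresh=2.0, max_gap=25):
    """Tiny gesture FSM: IDLE -> FIRST_TAP -> report. A 'tap' is a sample whose
    acceleration magnitude exceeds `tap_thresh` g; two taps within `max_gap`
    samples count as a double tap. Thresholds are illustrative only."""
    state, since_first = "IDLE", 0
    for a in accel_mag_g:
        if state == "IDLE":
            if a > tap_thresh:
                state, since_first = "FIRST_TAP", 0
        elif state == "FIRST_TAP":
            since_first += 1
            if since_first > max_gap:
                state = "IDLE"              # second tap never came; reset
            elif a > tap_thresh and since_first > 5:
                return True                 # two distinct taps close together
    return False

# Example at 100 Hz: two sharp spikes 150 ms apart
signal = [1.0] * 100
signal[30] = signal[45] = 2.5
print(detect_double_tap(signal))  # True
```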
AI processors
A good mix of sensor data processing capabilities ensures accuracy and low power consumption. Now, AI and ML algorithms are adding even greater capabilities, particularly in more complex sensing applications.
“All these sensors take power from a limited battery and then need a processor, which runs advanced algorithms to take the data and turn it into useful information for consumption by other software elements or the user,” Karlin said. “The next logical step is to now layer onto the fused data AI and ML to bring more real-time decision-making to these embedded systems.”
This brings a whole new class of AI processors that add ML capabilities to processing at the edge, Karlin said.
“Through ML, the sensor fusion algorithms can continue to become better by reducing false positives, increasing discrimination of motion to be ignored, and perhaps new use cases could be on the horizon,” he added. “With the desire for ML/AI to bring real-time decision-making to these embedded systems, you will need an NPU in such devices.”
An example is Infineon’s PSoC Edge MCU family, which uses an Arm Ethos-U55 NPU to bring a neural-network engine onto the MCU at the edge. It is a single monolithic MCU with multiple Cortex cores and a single NPU for multi-modal sensor data, Karlin said.
Offering a variety of ML libraries, the PSoC Edge devices are based on the Arm Cortex-M55, including Helium DSP support, paired with the Ethos-U55, along with a Cortex-M33 paired with Infineon’s ultra-low-power NNLite neural network accelerator. They support always-on sensing and response, making these devices suited for a range of IoT applications, such as wearables, smart home, security and robotics.
However, ML at the edge raises some issues around power and space constraints, especially for wearables. For wearables, Karlin thinks a smartwatch would be the smallest device possible at this time, but not a fitness band.
The PSoC Edge E83 would be the minimum solution, as it uses the Ethos-U55 NPU, and the E84 would be a step up, adding a MIPI interface for displays, he said.