
Synaptics: A natural progression to edge-AI processors

Once best known for interface products such as fingerprint sensors and touchpads, Synaptics has expanded its portfolio into edge-AI processors.

At one time Synaptics Inc. was best known for its interface products, including fingerprint sensors, touchpads, and display drivers for PCs and mobile phones. The company is now making a big push into the consumer IoT market as well as computer vision and artificial intelligence (AI) solutions at the edge, propelled by a string of acquisitions in recent years. The company sees big opportunities in computer vision across all markets and recently launched edge-AI processors that target real-time computer vision and multimedia applications.

The company’s recent AI roadmap spans from enhancing the image quality of high-resolution cameras with the high-end, multi-TOPS VS680 processor to handling lower-resolution vision on battery-powered devices with the Katana Edge AI system-on-chip (SoC).

Last year, Synaptics introduced its Smart Edge AI platform, consisting of the VideoSmart VS600 family of edge-computing video SoCs with a secure AI framework. The SoCs combine a CPU, a neural processing unit (NPU), and a GPU, and are designed specifically for smart displays, smart cameras, video soundbars, set-top boxes, voice-enabled devices, and computer-vision IoT products.

The platform uses the company’s Synaptics Neural Network Acceleration and Processing (SyNAP) technology, a full-stack solution for running deep-learning models on the device to enable advanced features. Keeping inference on the device addresses privacy, security, and latency concerns.

Other features of the VS600 SoCs include an integrated MIPI-CSI camera serial interface with an advanced image-signal-processing engine for edge-based computer-vision inference. The SoCs also incorporate the company’s far-field voice and customizable wake-word technology for edge-based voice processing, along with the SyKURE security framework.

An example of how Synaptics manages security on the VS680 edge SoC. (Image: Synaptics)

On the other end of the spectrum is an ultra-low-power platform for battery-operated devices. Built on a multi-core processor architecture optimized for ultra-low power consumption and low latency in voice, audio, and vision applications, the Katana Edge AI platform features the company’s proprietary neural-network and domain-specific processing cores, on-chip memory, and multiple architectural techniques for power savings. It also can be combined with the company’s wireless connectivity offerings for system-level modules and solutions.

“There is a ton of applications where plugging in is just not viable, so there is an interest in battery power whether it is in the field, or industrial, and particularly at home,” said Patrick Worfolk, Synaptics’ senior vice president and chief technology officer. “With this particular Katana platform we’re targeting very low power.”

Typical applications for the Katana SoC in battery-powered devices include people or object recognition and counting; visual, voice, or sound detection; asset or inventory tracking; and environmental sensing.

The Katana platform also requires software optimization techniques coupled with the silicon, which is where Synaptics’ recently announced partnership with Eta Compute comes into play. The Katana SoC will be co-optimized with Eta Compute’s Tensai Flow software, and the companies will work together to offer application-specific kits that include pre-trained machine-learning models and reference designs. The kits also will let users train the models with their own datasets using frameworks such as TensorFlow and Caffe and formats such as ONNX.
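To make that workflow concrete, here is a minimal sketch, not taken from Synaptics’ or Eta Compute’s actual kits, of training a small TensorFlow/Keras model on a user’s own data and exporting it to ONNX, the kind of artifact an edge-AI toolchain such as Tensai Flow would then compile and quantize for the target NPU. The architecture, dataset, and file names are illustrative assumptions.

```python
# Minimal sketch: train a small Keras model on your own data and export it to
# ONNX so an edge-AI toolchain can compile it for the target NPU.
# Architecture, input shape, dataset, and file names are illustrative only.
import numpy as np
import tensorflow as tf
import tf2onnx

# Stand-in for a user dataset: 96x96 grayscale images, two classes.
x_train = np.random.rand(256, 96, 96, 1).astype("float32")
y_train = np.random.randint(0, 2, size=(256,))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 1)),
    tf.keras.layers.Conv2D(8, 3, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=32)

# Export to ONNX; this file is what a compiler/quantizer for the edge device
# would consume.
spec = (tf.TensorSpec((1, 96, 96, 1), tf.float32, name="input"),)
tf2onnx.convert.from_keras(model, input_signature=spec, opset=13,
                           output_path="detector.onnx")
```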

Synaptics’ entry into consumer IoT was aided by two acquisitions in 2017: Conexant Systems LLC and Marvell Technology Group’s Multimedia Business Unit. Conexant gave the company access to advanced voice- and audio-processing solutions for the smart home, including far-field voice technology for trigger-word detection and keyword spotting, while Marvell’s Multimedia Business Unit delivered extensive IP for advanced video- and audio-processing technology, particularly for digital personal assistants as well as the smart home.

With the Conexant acquisition, Synaptics gained a portfolio of audio products with the right architecture for neural-network-based keyword spotting at the edge, said Worfolk. The multimedia team carved out of Marvell Technology was developing video processors, devices used in streaming media but also in products such as smart displays, he added.

“As these smart display products integrate cameras, you run into all the same challenges around performance and privacy, and there’s more and more drive to do those algorithms in the edge device. The natural structure for those type of algorithms today with the best performance are AI algorithms,” said Worfolk.

In 2020, Synaptics bolstered its IoT position with the acquisition of Broadcom’s wireless IoT business, adding Wi-Fi, Bluetooth, and GNSS/GPS technologies for a broad range of applications including home automation, smart displays and speakers, media streamers, IP cameras, and automotive. By pairing its edge SoCs with the wireless technology, it can open up opportunities beyond the consumer IoT market.

Synaptics also acquired DisplayLink Corp., adding universal docking solutions and video-compression technology to its portfolio. The company plans to combine the compression technology with its existing and upcoming video-interface products and new wireless solutions.




Built on edge processing

Processing at the edge is not new to Synaptics. All processing of the raw data in its embedded sensing chips, including fingerprint products and touch controllers, happens on-chip because of power, latency, and security concerns.

In fact, Synaptics was founded in 1986 to research neural networks, long before it shipped its first interface products. The company pivoted into other technologies before coming full circle to today’s edge-AI processors for computer vision and multimedia applications.

“We were founded to do neural network chips over 30 years ago. Back then all the chips were analog AI chips and it was challenging to scale well. In fact the company went off in a slightly different direction after the initial founding and started doing pattern recognition, which is a classic AI problem,” said Worfolk. “We’ve been doing AI for a long time, but we have recently migrated to deep learning and these deep neural networks have really taken over by storm.”

“With the breakthroughs in AI in the last decade more and more of these traditional algorithms have mapped over to AI algorithms enabling performance advantages when you do the processing at the edge,” he added.

Worfolk said the nature of the company’s products and the vertical markets that it participates in are driving the need for AI-based algorithms.

“We’ve entered the AI space vertically through our existing markets and then with those products we’re expanding into neighboring markets,” he said. “This is quite different from many of the startups you’ve seen in this space who have some sort of novel concept about some kind of AI processing and they are developing a chip that they want to go broadly across multiple markets.”

Next steps

In the early days, Synaptics developed its own algorithms for its chips. The keyword-spotting and trigger-word algorithms for digital voice assistants, for example, are the company’s core algorithms. But Synaptics wanted to open up its silicon so that third parties could run their own deep neural networks and other algorithms on its chips, which required a tool suite. That is not so easy to build.

The company entered into a partnership with Eta Compute to develop the software tools to train a deep neural network and compile it “so it can run on our silicon, and we could move a little faster and open up our chips to third parties,” said Worfolk.

There are other challenges in a market where innovation is happening at such a fast pace, which often can lead to performance tradeoffs.

“The field as a whole is very immature and in that sense it is moving very quickly. There are new announcements about new types of neural networks every single week and a lot of the work has been done through academic or big research groups that are trying to push the boundaries of performance,” said Worfolk.

“But there is a big gap between academic research and what can actually run on a small device. Although we are seeing algorithms, which are able to perform on new levels that we’ve never seen before on the vision or the audio side, they take more and more compute,” he said.

This often translates into tradeoffs between efficiency and flexibility. What typically happens is that the first piece of silicon for a particular target market isn’t very flexible; as the neural networks that run on that silicon mature and stabilize enough to produce the desired functionality, the company can look at a second-generation chip optimized for that performance, said Worfolk.

“It’s what makes it so exciting for us because there’s always something new and interesting but it is a challenge from a business perspective,” he said.

Synaptics’ strategy is to build the silicon and then create an example application or demo that makes it easier to discuss with customers. “It also generates ideas around what you can do with the chip and makes sure that we understand what it takes to bring this chip to product,” said Worfolk.

At the 2021 Embedded Vision Summit, May 25-28, he plans to demo the Katana chip in a battery-powered people counter used to track usage in office conference rooms.

“This system is not just the Synaptics Katana chip; it also includes cameras, motion sensors, and wireless communications, which is also part of our portfolio,” he said. “If you want to run on, for example, four AA batteries for two years, Katana is a platform that under appropriate use cases and conditions could operate on that kind of time frame.”

People counting using the Katana platform. (Image: Synaptics)
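As a rough sanity check on that kind of battery-life claim (the numbers below are generic assumptions, not Synaptics figures), a back-of-envelope budget shows what two years on four AA cells implies for average power draw.

```python
# Back-of-envelope battery budget (generic assumptions, not Synaptics data).
aa_capacity_mah = 2500        # typical alkaline AA capacity
cells = 4
cell_voltage = 1.5            # nominal volts per cell
converter_efficiency = 0.85   # assumed DC/DC efficiency

usable_energy_wh = cells * (aa_capacity_mah / 1000) * cell_voltage * converter_efficiency
hours = 2 * 365 * 24          # two years of continuous operation

avg_power_mw = usable_energy_wh / hours * 1000
print(f"Usable energy: {usable_energy_wh:.1f} Wh")     # ~12.8 Wh
print(f"Average power budget: {avg_power_mw:.2f} mW")  # ~0.73 mW
# A sub-milliwatt average means the SoC must sleep most of the time and wake
# only briefly to capture a frame, run inference, and report a count.
```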

Worfolk also will showcase the more powerful VS680 multimedia SoC in an AI-based scaling application. The demo will show how the chip can be used for super-resolution enhancement, upscaling from full HD to 4K with a neural-network-based upscaler, which produces crisper, sharper images than a traditional hardware scaling algorithm.

Super-resolution using the VS680 AI scaler. (Image: Synaptics)
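For readers unfamiliar with learned upscaling, the sketch below shows the general idea behind an ESPCN-style 2x super-resolution network using sub-pixel convolution; it is not the VS680’s actual scaler, and the layer sizes are illustrative.

```python
# Sketch of an ESPCN-style 2x super-resolution network: convolutions extract
# features, then a sub-pixel shuffle (depth_to_space) rearranges channels into
# an image twice the size. Layer sizes are illustrative, not the VS680 design.
import tensorflow as tf

SCALE = 2  # e.g., 1920x1080 -> 3840x2160

def build_upscaler(channels: int = 3) -> tf.keras.Model:
    lr = tf.keras.layers.Input(shape=(None, None, channels))  # low-res frame
    x = tf.keras.layers.Conv2D(32, 5, padding="same", activation="relu")(lr)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    # Produce SCALE^2 * channels feature maps, then shuffle them into pixels.
    x = tf.keras.layers.Conv2D(channels * SCALE * SCALE, 3, padding="same")(x)
    hr = tf.keras.layers.Lambda(lambda t: tf.nn.depth_to_space(t, SCALE))(x)
    return tf.keras.Model(lr, hr)

model = build_upscaler()
# Small crop for the demo; a full 1920x1080 frame would come out at 3840x2160.
crop = tf.random.uniform((1, 135, 240, 3))
print(model(crop).shape)  # (1, 270, 480, 3)
```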

“There is a lot of specmanship in silicon, but at the end of the day you want to know if your deep neural network runs efficiently or effectively on the device or not,” he said, “so the goal of the presentation is to discuss sample applications that can run on Synaptics’ devices.”

So how do you select the example application? “As we do our MRD [market requirements document] and PRD [product requirements document] for the chip, there are particular applications that we target, those that we view as flagship applications for the piece of silicon, and that drives the demo,” said Worfolk.

“We want something that is sufficiently challenging to show off the competitive advantages, but we also want a demo that has broad interest and is representative of what customers might want to do,” he added.

An example is the people counter for the conference room, which shows off the chip’s ability to run an object detection network at relatively low power, Worfolk said.
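The application logic of such a people counter is simple once a detection network is available. The sketch below is not Synaptics’ implementation: it assumes a hypothetical TFLite SSD-style detector file and counts detections labeled “person” above a confidence threshold; on Katana, the equivalent network would run on the NPU through the vendor toolchain rather than on a host CPU.

```python
# Sketch of the people-counting logic: run an object-detection model on a
# frame and count detections labeled "person" above a confidence threshold.
# Shown here with a generic TFLite SSD-style detector on a host CPU; the model
# file, output ordering, and class ID are assumptions, not Synaptics' design.
import numpy as np
import tensorflow as tf

PERSON_CLASS_ID = 0     # typical "person" index for COCO-trained TFLite SSDs
SCORE_THRESHOLD = 0.5

interp = tf.lite.Interpreter(model_path="person_detector.tflite")  # assumed file
interp.allocate_tensors()
inp = interp.get_input_details()[0]
outs = interp.get_output_details()

def count_people(frame: np.ndarray) -> int:
    """frame: HxWx3 uint8 image already resized to the model's input size."""
    interp.set_tensor(inp["index"], np.expand_dims(frame, 0))
    interp.invoke()
    # SSD post-processed models usually emit boxes, classes, scores, count;
    # verify the ordering for the specific model you export.
    classes = interp.get_tensor(outs[1]["index"])[0]
    scores = interp.get_tensor(outs[2]["index"])[0]
    return int(np.sum((classes == PERSON_CLASS_ID) & (scores > SCORE_THRESHOLD)))
```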

The Katana SoC is based on Synaptics’ custom NPU architecture and proprietary DSPs. It also includes an off-the-shelf CPU and DSP that make it easy for customers to run their own algorithms on the chip, he said, coupled with Eta Compute’s tool chain, which makes it easy for them to port their networks.

“We gain a great deal of efficiency for the particular networks we have in mind by architecting our own MPU solutions, so we have all the right features in our silicon, and then we add the NPU that is tailored for what we expect some of those verticals will need, and make it broad and flexible enough to go into adjacent markets,” said Worfolk.

“I suspect that many companies have fairly similar architectures, so it really is a matter of sizing the compute engines, the memory, and then the interfaces to the sensors,” he said. “When you know what neural network you want to run and the resources it requires, you can make sure that you don’t run into any bottlenecks,” he said. “So there is a bunch of co-engineering between the very bottom – the hardware design, the top – the neural network model architecture, and then the tools that map that neural network model architecture all the way down to running on the hardware. By considering all those together that is where you can have a real competitive advantage.”
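A concrete way to picture that co-engineering is the resource arithmetic below: for a hypothetical convolutional stack, estimate per-inference multiply-accumulate (MAC) operations, weight storage, and peak activation memory so the NPU throughput, on-chip memory, and sensor interfaces can be sized to avoid bottlenecks. The layer shapes are made up for illustration.

```python
# Back-of-envelope sizing for a hypothetical convolutional stack: per-inference
# MACs, weight storage, and peak activation memory. Numbers are illustrative.
layers = [
    # (in_channels, out_channels, kernel, out_height, out_width)
    (1,   8, 3, 48, 48),
    (8,  16, 3, 24, 24),
    (16, 32, 3, 12, 12),
]

total_macs = 0
total_weights = 0
peak_activation = 0
for in_ch, out_ch, k, h, w in layers:
    total_macs += in_ch * out_ch * k * k * h * w   # multiply-accumulates
    total_weights += in_ch * out_ch * k * k        # parameters (bias ignored)
    peak_activation = max(peak_activation, out_ch * h * w)

print(f"MACs per inference:      {total_macs:,}")
print(f"Weights (int8 bytes):    {total_weights:,}")
print(f"Peak activations (int8): {peak_activation:,} bytes")
# At, say, 10 inferences/s, MACs/s sizes the NPU, while weights plus peak
# activations bound the on-chip memory budget.
```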

A lot of companies don’t have AI teams, so they are learning about AI at the same time as they are trying to integrate it into their products; Synaptics therefore has partners who can train the models, optimize them, or support the tools, he added.

Worfolk views partnerships as a way to fill the company’s gaps, particularly in the IoT space where there is a broad range of applications and customers who do not have expertise in AI.




