Multi-dimensional sensing improves touchscreens

SigmaSense’s multi-dimensional sensing technology improves touchscreen performance, enabling advances for existing and new use cases.

Touchscreen technology has remained virtually unchanged for decades. SigmaSense has unveiled new multi-dimensional sensing technology that improves the performance of almost any application with a touchscreen. The company also announced that it is licensing its technology to NXP Semiconductors and that the two companies will collaborate on developing high-performance sensing products for applications that require faster and fully immersive software-defined experiences.

Target applications range from mobile, gaming, wearables, and IoT to automotive, industrial, and digital signage. The technology can even improve the performance of electric vehicle (EV) batteries.

“The planned co-development defines a move to new data-centric design options driven by software-defined sensing. The quality and speed of data extraction from the physical world is becoming as important as, if not more important than, processing performance,” according to Gary Baum, SigmaSense’s senior vice president of emerging technology.

“SigmaSense extracts deeper, high-quality data that makes new touch-sensing functionality viable in existing and new applications. This, along with a shift from analog sensing to software-defined sensing, offers greater programmability and design flexibility essential for developers to innovate new features and capabilities for HMI products,” he added.

Multi-dimensional sensing enables previously impossible designs. By measuring current direct-to-digital, SigmaSense delivers low-voltage, frequency-domain sensing, an industry first. Fast, continuous, high-fidelity data capture with intelligent digital signal processing moves analog challenges into the digital domain, where design flexibility can deliver orders-of-magnitude improvements.
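As a rough illustration of the frequency-domain idea, the sketch below drives several channels at distinct low-amplitude frequencies, transforms the digitized current into the frequency domain, and flags channels whose magnitude deviates from an untouched baseline. The sample rate, drive frequencies, and threshold are assumptions for illustration only; this is a generic frequency-domain sensing sketch, not SigmaSense’s proprietary implementation.

```python
# Conceptual sketch of frequency-domain touch detection (illustrative only,
# not SigmaSense's design). Each channel is driven with a low-voltage sine at
# its own frequency; a touch changes the current magnitude at that bin.
import numpy as np

FS = 200_000                              # assumed ADC sample rate, Hz
N = 2_000                                 # samples per frame
CHANNEL_FREQS = [5_000, 7_000, 9_000]     # assumed drive frequency per channel, Hz

def channel_magnitudes(current_samples: np.ndarray) -> list[float]:
    """Return the spectral magnitude at each channel's drive frequency."""
    spectrum = np.abs(np.fft.rfft(current_samples)) / N
    bin_hz = FS / N
    return [spectrum[int(round(f / bin_hz))] for f in CHANNEL_FREQS]

def detect_touches(current_samples, baseline, threshold=0.15):
    """Flag channels whose magnitude deviates from the untouched baseline."""
    mags = channel_magnitudes(current_samples)
    return [abs(m - b) / b > threshold for m, b in zip(mags, baseline)]

# Simulated frame: all channels driven at once; a touch attenuates channel 1.
t = np.arange(N) / FS
clean = sum(0.02 * np.sin(2 * np.pi * f * t) for f in CHANNEL_FREQS)
touched = clean - 0.006 * np.sin(2 * np.pi * CHANNEL_FREQS[1] * t)

baseline = channel_magnitudes(clean)
print(detect_touches(touched, baseline))   # -> [False, True, False]
```

Because every channel is driven and sensed concurrently at its own frequency, detection does not have to wait for a sequential scan, which is the property the following sections build on.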

Block diagram of SigmaSense’s SigmaDrive software-defined sensing approach for multi-dimensional sensing in touchscreens (Source: SigmaSense)

The technology captures more granular data from the physical world, enabling interactive advances such as high-speed touch interfaces of all sizes and shapes, new surface materials beyond glass, operation in rain and with gloves, foldable designs, and economical large-format interactive displays with the responsiveness of a mobile experience.

Expanded use cases include large displays with gaming-class, high-speed user experiences, as well as industrial and rugged applications. The technology also enables use cases that were previously impossible.

“With SigmaSense current-mode sensing, large displays can now simultaneously drive, sense and image the entire screen surface regardless of size. This eliminates previous speed limitations for large displays and the large latencies of the past,” explained Baum. “SigmaSense delivers 300-Hz report rates on 32-inch to 100-inch touch displays, driven by as little as 0.02 volts. Large displays in all kinds of environments, including gaming tables, automotive cockpits and outdoor kiosks with vandal-resistant cover glass more than 1 mm thick, are now possible.”
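To put the quoted 300-Hz figure in perspective, the short calculation below compares the resulting per-frame budget with a hypothetical sequential row-scan controller. The legacy row count and per-row timing are assumptions for illustration, not measurements of any specific product.

```python
# Back-of-the-envelope comparison using the 300 Hz report rate quoted above.
REPORT_RATE_HZ = 300
frame_budget_ms = 1_000 / REPORT_RATE_HZ
print(f"Full-surface report every {frame_budget_ms:.1f} ms")   # ~3.3 ms

# A sequential row-scan controller's frame time grows with panel size.
ROWS_LARGE_PANEL = 400       # assumed electrode row count for a large display
US_PER_ROW = 50              # assumed settle-and-measure time per row
legacy_frame_ms = ROWS_LARGE_PANEL * US_PER_ROW / 1_000
print(f"Sequential scan of the same panel: ~{legacy_frame_ms:.0f} ms "
      f"(~{1_000 / legacy_frame_ms:.0f} Hz)")
```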

Legacy touch-sensing solutions used in rugged laptops, notebooks, and kiosks are limited in their ability to operate in water or when gloves are worn. SigmaSense claims its performance is consistent, working automatically with or without water, gloves, or a passive stylus, and without any software mode change.

Not only can the technology enhance existing applications, it can also identify individual users via touch. Data can be accessed per individual or across devices, achieving touch with intent: who is touching and what they intend, in gaming applications, for example. In conference rooms, data accessed through touch can be used to log into conferencing systems and share documents with a single touch from the document owner.
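The sketch below illustrates the “touch with intent” idea in application code: if each event carries an identified user, an action such as sharing a document can be authorized per touch. The event fields, the file name, and the ownership policy are hypothetical examples, not a SigmaSense API.

```python
# Hypothetical sketch: touch events carry an identified user, so the
# application can authorize actions per touch. Illustrative only.
from dataclasses import dataclass

@dataclass
class TouchEvent:
    x: float
    y: float
    user_id: str          # identity inferred by the sensing layer (assumed)

DOCUMENT_OWNERS = {"q3_roadmap.pdf": "alice"}   # hypothetical policy

def handle_share_request(event: TouchEvent, document: str) -> bool:
    """Allow sharing only when the touching user owns the document."""
    return DOCUMENT_OWNERS.get(document) == event.user_id

print(handle_share_request(TouchEvent(120.0, 48.5, "alice"), "q3_roadmap.pdf"))  # True
print(handle_share_request(TouchEvent(120.0, 48.5, "bob"), "q3_roadmap.pdf"))    # False
```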

The technology also enables expanded functionality beyond the surface of the display for touchless 3D interaction. “Presence, hover, touch and pressure information is now provided where only an X- and Y-coordinate touch was available until now. Features such as gestures, presence detection for power management, and proximity for new user experiences streamline designs, reduce cost, and simplify functions,” said Baum. “Examples include presence and touchless interaction in laptops for wake-up, user ID login, screen brightness control, and volume, as well as pressure and haptics for new applications.”
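A richer event model like the one Baum describes might look something like the sketch below, where presence, hover, touch, and pressure are distinct states rather than bare X/Y coordinates. The field names, states, and policy function are assumptions for illustration.

```python
# Sketch of a richer interaction model: presence, hover, touch and pressure
# instead of bare X/Y coordinates. Names and fields are illustrative.
from dataclasses import dataclass
from enum import Enum

class Interaction(Enum):
    PRESENCE = "presence"   # user detected near the device
    HOVER = "hover"         # hand above the surface, no contact
    TOUCH = "touch"         # contact with the surface
    PRESS = "press"         # contact plus measurable force

@dataclass
class SensingEvent:
    state: Interaction
    x: float | None = None           # coordinates apply to hover/touch/press
    y: float | None = None
    height_mm: float | None = None   # hover height above the surface
    pressure: float | None = None    # normalized 0..1 contact force

def on_event(event: SensingEvent) -> str:
    """Example policy: wake on presence, adjust UI on hover, act on press."""
    if event.state is Interaction.PRESENCE:
        return "wake display"
    if event.state is Interaction.HOVER:
        return f"highlight control near ({event.x}, {event.y})"
    if event.state is Interaction.PRESS:
        return f"trigger haptic, force={event.pressure:.2f}"
    return "handle touch"

print(on_event(SensingEvent(Interaction.PRESENCE)))
print(on_event(SensingEvent(Interaction.PRESS, 10.0, 20.0, pressure=0.7)))
```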

A software-defined sensing approach means that independent channels can be allocated to multiple types of sensing via a single controller. The channels are shared among multiple sensors, even when the sensors are located apart from one another, and the channels do not require uniform impedance to operate.

“The functions are software defined, and the design flexibility enables possibilities such as sensing of temperature, humidity and any type of impedance transducer. The end results are lower cost, simplicity, and improved user experiences,” said Baum.
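One way to picture software-defined channel allocation is as a configuration table that assigns a single controller’s channels to different sensor types, so repurposing a channel is a software change rather than a hardware change. The channel counts and sensor names below are hypothetical.

```python
# Illustration of software-defined channel allocation on one controller.
# The channel map and sensor types are hypothetical examples.
CHANNEL_MAP = {
    range(0, 32): "touch_row",          # touchscreen rows
    range(32, 64): "touch_column",      # touchscreen columns
    range(64, 66): "temperature",       # thermistor channels
    range(66, 67): "humidity",          # capacitive humidity element
    range(67, 68): "impedance_probe",   # generic impedance transducer
}

def sensor_type(channel: int) -> str:
    """Look up what a given channel is configured to measure."""
    for channels, kind in CHANNEL_MAP.items():
        if channel in channels:
            return kind
    raise ValueError(f"channel {channel} is not allocated")

# Reassigning a function is a configuration edit, not a board respin.
print(sensor_type(65))    # -> temperature
```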
