With the increasing demand for automation in the 21st century, robots are seeing unprecedented growth across many industries, including logistics, warehousing, manufacturing and food delivery. Human-robot interaction, precise control and safe human-robot collaboration are the cornerstones of adopting automation.
In robotics, safety encompasses multiple tasks, with collision detection, obstacle avoidance, navigation and localization, force detection and proximity detection being a few examples. All of these tasks are enabled by a suite of sensors, including LiDAR, imaging/vision sensors (cameras), tactile sensors and ultrasonic sensors. With the advancement of machine-vision technology, cameras are becoming increasingly important in robotics.
Sensors in robotics
Charge-coupled device (CCD) and complementary metal-oxide-semiconductor (CMOS) sensors are the most common types of vision sensors. A CMOS sensor is a digital device that converts the charge of each pixel to a corresponding voltage at the pixel itself, and the sensor typically integrates amplifiers, noise-correction and digitization circuits on-chip. In comparison, a CCD sensor is an analog device in which the charge from an array of photosensitive elements is shifted across the chip and read out through a single output node.
Although each has its strengths, advances in CMOS technology mean that CMOS sensors are now widely considered the better fit for machine vision in robots, thanks to their smaller footprint, lower cost and lower power consumption compared with CCD sensors.
Vision sensors can be used for motion and distance estimation, object identification and localization. Their key benefit is that they collect significantly more information, at higher resolution, than sensors such as LiDAR and ultrasonic sensors.
Figure 1 compares different sensors against nine benchmarks. Vision sensors offer high resolution at low cost. However, they are inherently susceptible to adverse weather and poor lighting; therefore, other sensors are often needed to increase overall system robustness when robots operate in environments with unpredictable weather or difficult terrain. IDTechEx’s latest report, “Sensors for Robotics 2023-2043: Technologies, Markets, and Forecasts,” includes a more detailed analysis and comparison of these benchmarks.
Vision sensors for safety in mobile robots
Mobile robotics is one of the largest robotics applications in which cameras are used for object classification, safety and navigation. The term primarily refers to automated guided vehicles (AGVs) and autonomous mobile robots (AMRs). However, autonomous mobility also plays an important role in many other robots, ranging from food-delivery robots to autonomous agricultural robots (e.g., mowers). An inherently complicated task, autonomous mobility requires obstacle avoidance and collision detection.
Depth estimation is one of the key steps in obstacle avoidance. The task takes one or more red, green and blue (RGB) images collected from vision sensors as input; machine-vision algorithms use these images to reconstruct a 3D point cloud and thereby estimate the distance between an obstacle and the robot. As of 2023, the majority of mobile robots (e.g., AGVs, AMRs, food-delivery robots, robotic vacuums) are still used indoors, such as in warehouses, factories, shopping malls and restaurants, where the environment is well-controlled, with a stable internet connection and illumination.
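To make the depth-from-images step concrete, below is a minimal sketch of stereo depth estimation using OpenCV's block matcher. It assumes a calibrated, rectified stereo pair; the file names, focal length and baseline are illustrative assumptions, not parameters from any specific robot.

```python
# Minimal sketch: depth from a rectified stereo pair via OpenCV block matching.
# File names, focal length and baseline are illustrative assumptions.
import cv2
import numpy as np

FOCAL_LENGTH_PX = 700.0  # assumed focal length, in pixels
BASELINE_M = 0.12        # assumed distance between the two cameras, in meters

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching finds, for each pixel, the horizontal shift (disparity)
# between the two views; compute() returns fixed-point values scaled by 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Depth is inversely proportional to disparity: Z = f * B / d.
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity[valid]

if valid.any():
    print(f"Nearest point: {depth_m[valid].min():.2f} m")
```

Back-projecting each valid pixel through the camera intrinsics then yields the 3D point cloud used for obstacle avoidance.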
In such controlled settings, cameras can achieve their best performance, and machine-vision tasks can be offloaded to the cloud, significantly reducing the computational power required on the robot itself and thereby lowering cost. For grid-based AGVs, for example, cameras are needed only to monitor the magnetic tape or QR codes on the floor, as sketched below. While this approach is widely used today, it does not work well for outdoor sidewalk robots or inspection robots that operate in areas with limited Wi-Fi coverage (e.g., under tree canopies).
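As a simple illustration of the grid-based approach, this sketch reads one frame from a downward-facing camera and decodes a floor marker with OpenCV's built-in QR detector. The camera index and the idea of encoding a grid-cell ID in the payload are hypothetical, for illustration only.

```python
# Minimal sketch: detecting a floor QR marker for a grid-based AGV.
# Camera index and payload format are illustrative assumptions.
import cv2

cap = cv2.VideoCapture(0)  # downward-facing camera (assumed device index 0)
detector = cv2.QRCodeDetector()

ok, frame = cap.read()
if ok:
    payload, points, _ = detector.detectAndDecode(frame)
    if payload:
        # e.g., payload == "cell:12,7" tells the AGV which grid cell it is
        # over; `points` gives the marker corners for estimating heading.
        print("Marker payload:", payload)
cap.release()
```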
To solve this problem, in-camera computer vision is emerging. As the name indicates, all image processing is performed within the camera itself. Given the increasing demand for outdoor robots, IDTechEx believes that in-camera computer vision will be increasingly needed in the long term, especially for robots designed to work in difficult terrain and harsh environments (e.g., exploration robots). In the short term, however, IDTechEx believes that the high power consumption of on-board computer vision, along with the high cost of the required chips, will likely hold back adoption.
IDTechEx believes that many robot OEMs would prefer to incorporate other sensors (e.g., ultrasonic sensors, LiDAR) as a first step toward enhancing the safety and robustness of their products’ environment perception.
Improved detection range, mobility and efficiency
Conventional CMOS detectors for visible light are prevalent within robotics and industrial image inspection; however, there is extensive opportunity for more complex image sensors that offer capabilities beyond simply acquiring RGB intensity values. Extensive effort is currently being devoted to developing low-cost image sensor technologies that can detect light beyond the visible range, into the short-wave infrared (SWIR) range of 1,000 to 2,000 nm.
Extending wavelength detection into the SWIR range presents many benefits for robotics, as it enables the differentiation of materials that appear similar or identical in the visible range but have distinct SWIR reflectance. This substantially improves recognition accuracy and paves the way for more capable robotic sensing.
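As a hypothetical illustration of this idea, the sketch below separates materials using two co-registered SWIR band images. The file names, the choice of bands (water absorbs strongly near 1,450 nm) and the threshold are all illustrative assumptions.

```python
# Minimal sketch: separating visually similar materials with two SWIR bands.
# Assumes two co-registered grayscale images captured through band-pass
# filters near 1,200 nm and 1,450 nm; file names and threshold are assumed.
import cv2
import numpy as np

band_1200 = cv2.imread("swir_1200nm.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
band_1450 = cv2.imread("swir_1450nm.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Water absorbs strongly near 1,450 nm, so water-containing materials show a
# high normalized difference even when they look identical in visible RGB.
ndi = (band_1200 - band_1450) / (band_1200 + band_1450 + 1e-6)
mask = ndi > 0.2  # assumed threshold separating the two material classes

print(f"Pixels classified as high-water-content material: {int(mask.sum())}")
```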
In addition to imaging over a broader spectral range, further innovations include imaging over a larger area, acquiring spectral data at each pixel, and simultaneously increasing temporal resolution and dynamic range. On this front, a promising technology is event-based vision.
With conventional frame-based imaging, high temporal resolution produces vast amounts of data that require computationally intensive processing. Event-based vision resolves this challenge with a fundamentally different way of obtaining optical information: each sensor pixel independently reports a timestamped event whenever it detects a sufficiently large change in intensity. As such, event-based vision combines high temporal resolution in rapidly changing image regions with much-reduced data transfer and processing requirements, because static regions generate no output.
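The frame-based simulation below illustrates the event-generation principle. The contrast threshold is an assumed value, and a real event sensor performs this comparison asynchronously in per-pixel analog circuitry rather than on frames.

```python
# Minimal sketch of the event-generation principle: a pixel emits an event
# (timestamp, x, y, polarity) when its log intensity changes by more than a
# contrast threshold. Simulated from frames for illustration only.
import numpy as np

THRESHOLD = 0.15  # assumed contrast threshold, in log-intensity units

def events_from_frames(prev_frame, frame, t):
    """Return simulated events between two grayscale frames taken at time t."""
    log_prev = np.log(prev_frame.astype(np.float32) + 1.0)
    log_curr = np.log(frame.astype(np.float32) + 1.0)
    diff = log_curr - log_prev
    ys, xs = np.nonzero(np.abs(diff) > THRESHOLD)
    # Only changing pixels produce output; static regions generate no data.
    return [(t, x, y, 1 if diff[y, x] > 0 else -1) for x, y in zip(xs, ys)]
```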
Another promising innovation is the continued miniaturization of image sensor technology, making it easier than ever to integrate a sensor into a robotic arm or component without impeding movement. This is an application that the burgeoning market for miniaturized spectrometers is targeting. Driven by growth in smart electronics and IoT devices, low-cost miniaturized spectrometers are becoming increasingly relevant across different sectors. The functionality of standard visible-light sensors can be significantly extended by integrating miniaturized spectrometers that detect from the visible to the SWIR region of the spectrum.
The future being imagined by researchers at Fraunhofer is a spectrometer weighing just 1 gram and costing a single dollar. Miniaturized spectrometers are expected to deliver inexpensive solutions to improve autonomous efficiency, particularly within robotic imaging and industrial inspection, as well as consumer electronics.