By Jean-Jacques DeLisle, contributing writer
Only a few years ago, it was painfully common for a car or smartphone GPS navigation system to claim that you were traveling in the wrong direction, tell the driver to turn into a lake, or even place your car in the middle of the woods. Unfortunately, these errors still happen on occasion. For critical applications, such as search-and-rescue, public safety, autonomous cars, drones, and even military navigation, GPS failures could lead to far more than frustration and a missed meeting.
GPS receivers are notoriously susceptible to intentional electromagnetic interference (IEMI), as well as unintentional interference and signal blockage in canyons (urban or otherwise), rugged terrain, underwater, underground, and in densely forested areas. For an autonomous drone on a rescue or surveillance mission with limited fuel, getting lost could be a costly, even life-threatening, setback. To address this limitation, Nvidia researchers have been exploring deep-learning and computer-vision techniques that let drones, or unmanned aerial vehicles/unmanned aerial systems (UAVs/UASes), navigate remote areas without GPS, leveraging the company's new Jetson TX1 embedded AI supercomputer.
“Our whole idea is to use cameras to understand and navigate the environment,” shared Nikolai Smolyanskiy, Nvidia researcher and the team’s technical lead. “All you need is a path the drone can recognize visually.”
Using an off-the-shelf drone, Nvidia researchers are developing machine-learning techniques to help drones navigate complex terrain where GPS technologies fail. (Image Source: Nvidia)
To minimize costs, the team used an off-the-shelf drone and two cameras. With some computer-vision and deep-learning expertise, the team has already taught the drone prototype to traverse difficult wooded environments while avoiding common obstacles. To demonstrate and further develop this technology, the Nvidia team deliberately “lost” a drone in the woods. A wooded environment’s lack of uniformity, compared with an urban one, makes this demonstration particularly challenging but also more realistic.
“We chose forests as a proving ground because they’re possibly the most difficult places to navigate,” said Smolyanskiy. “We figured if we could use deep learning to navigate in that environment, we could navigate anywhere.”
The drone was trained on video that Smolyanskiy shot with three wide-angle GoPro cameras mounted on a mini Segway during an eight-mile trail trek in the Pacific Northwest. The team also used footage of trails in the Swiss Alps, recorded by AI researchers at the Istituto Dalle Molle di Studi sull’Intelligenza Artificiale (IDSIA) in Lugano, to further train the neural network, dubbed TrailNet. The result is what the team believes to be the longest and most stable flight of its kind: a kilometer-long flight maintaining a steady position along the trail.
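The article does not detail TrailNet's internals, but trail-following networks of this kind are often described as classifying each camera frame into coarse categories (for example, whether the view faces left, center, or right of the trail) and turning those class probabilities into a steering correction. The sketch below is a hypothetical illustration of that last step only: the `softmax` conversion and the gain constants `k_view` and `k_offset` are assumptions for illustration, not Nvidia's published method.

```python
import math

def softmax(logits):
    """Convert raw network scores into probabilities that sum to 1."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def steering_correction(view_logits, offset_logits, k_view=10.0, k_offset=10.0):
    """Hypothetical mapping from two 3-way classifier heads to a yaw
    correction in degrees.

    view_logits:   scores for (facing left, facing straight, facing right)
    offset_logits: scores for (left of trail, centered, right of trail)
    k_view, k_offset: assumed gain constants, chosen for illustration.

    A positive result means "turn right"; probability mass on the
    left-facing / left-of-trail classes pulls the correction negative.
    """
    v_left, _, v_right = softmax(view_logits)
    o_left, _, o_right = softmax(offset_logits)
    return k_view * (v_right - v_left) + k_offset * (o_right - o_left)

# A perfectly ambiguous frame yields no correction:
print(steering_correction([0.0, 0.0, 0.0], [0.0, 0.0, 0.0]))  # 0.0
```

In a real system, the logits would come from a convolutional network running on the onboard Jetson hardware, and the correction would feed the flight controller's yaw setpoint each frame.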
Though the project is still in its early phases, the team plans to release software that will help others build robotic navigation systems based on computer vision. Further development may also enable future drones to fly unsupervised between two points on a map.
This may sound like a high-tech way of reproducing how a human learns to navigate a trail, and to a large extent, it is. Future techniques may enable autonomous systems to navigate by landmarks, the stars, or even memory, much the way we learn basic survival navigation.
Sources: Nvidia, New Atlas, Cornell University, YouTube 1, YouTube 2
Learn more about Electronic Products Magazine