
New algorithm will give robots navigational abilities

This algorithm will orient robots in unfamiliar settings


This diagram shows the steps the algorithm takes to estimate the orientation of points in a scene. It finds sets of axes that fit the point clusters (red, blue, green) it identifies in the scene. (Image via Phys.org.)

When you’re driving in an unfamiliar city and realize you’re lost, you use a building or other location as an identifiable landmark. Recognizing the landmark lets you find your way and keeps you from heading in the wrong direction. Humans process information and re-identify objects this way naturally. For computers and robots, the process is not so easy: they have to be programmed to carry out specific tasks and to exercise a fixed set of skills. Until now, these navigational shortcuts were not available to robots or computers. A new algorithm gives machines a comparable sense of direction.

MIT researchers will deliver a presentation at the IEEE Conference on Computer Vision and Pattern Recognition in June explaining an algorithm they have developed to give computers orientation skills. The algorithm lets computers interpret 3D scenes and keep track of where they are. Scene understanding is a tremendous mountain for computer-vision researchers to climb, and this algorithm chips away at it by simplifying the problem.

The algorithm helps robots find their way around buildings they have never seen before, much as a person finds his or her way around an unfamiliar city. It identifies the dominant orientations in a scene at any given time and groups them into sets of perpendicular axes called “Manhattan frames,” which are embedded in a sphere. As the robot moves through unfamiliar territory, it sees the sphere rotate, and it can calculate its angular distance from the axes. When the robot needs to reorient itself, it can judge which of these landmark axes to turn toward, since they are easy to identify.
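To make the idea concrete, here is a minimal sketch in Python of how a robot could read off its heading as the angle between an observed direction and the nearest Manhattan-frame axis. It assumes the frame’s axes are available as a 3×3 matrix, and the function name is hypothetical; this illustrates the concept rather than the researchers’ implementation.

import numpy as np

def heading_relative_to_frame(observed_axis, frame_axes):
    """Angle (in radians) between an observed direction and the closest frame axis."""
    cosines = np.abs(frame_axes.T @ observed_axis)   # |cosine| with each of the three axes
    return float(np.arccos(np.clip(cosines.max(), -1.0, 1.0)))

# Example: the robot's forward direction has drifted 10 degrees off one frame axis.
frame = np.eye(3)                                    # a world-aligned Manhattan frame
observed = np.array([np.cos(np.radians(10)), np.sin(np.radians(10)), 0.0])
print(np.degrees(heading_relative_to_frame(observed, frame)))   # prints roughly 10.0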

The algorithm also lessens the difficulty of plane segmentation, the problem of determining which elements of a scene lie in which planes. Plane segmentation, in turn, makes it possible to build 3D models of the objects in the scene, which can then be matched against stored 3D models of familiar items.
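As a rough sketch of how that grouping could work in code, assuming per-point surface normals from the depth data and a Manhattan frame expressed as a 3×3 matrix of axes (the threshold and function name below are illustrative, not part of the published method), each point can simply be labeled with the axis its normal aligns with most strongly.

import numpy as np

def segment_by_axis(normals, frame_axes, min_alignment=0.8):
    """Label each point by the Manhattan-frame axis its surface normal best matches.

    normals    -- (N, 3) array of unit surface normals from the depth data
    frame_axes -- (3, 3) matrix whose columns are the frame's perpendicular axes
    Returns an array of labels 0, 1, or 2, with -1 where nothing aligns well enough.
    """
    alignment = np.abs(normals @ frame_axes)             # |cosine| with each axis
    labels = alignment.argmax(axis=1)
    labels[alignment.max(axis=1) < min_alignment] = -1   # leave ambiguous points unlabeled
    return labels

Points that share a label lie on surfaces parallel to the same plane, such as a floor or a wall, which is exactly the kind of grouping a later model-matching step needs.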

The researchers working on this project include both students and advisors: Julian Straub, a graduate student in electrical engineering and computer science, and John Fisher, a senior research scientist. John Leonard, Oren Freifeld, and Guy Rosman also contributed to the project.

The team from MIT developed the algorithm to work with the kind of 3D data produced by the Microsoft Kinect or by laser rangefinders. The algorithm estimates the orientations of the many objects in the surrounding scene and plots them as points on the surface of a sphere, where each point corresponds to a direction relative to the sphere’s center. The algorithm then fits Manhattan frames to the points on the sphere.
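A compact sketch of that last step might look like the following. It assumes the surface normals have already been extracted from the depth data, and it fits a single Manhattan frame by a brute-force search over random rotations; this is only a stand-in for the researchers’ actual inference procedure, which is probabilistic and far more refined.

import numpy as np
from scipy.spatial.transform import Rotation

def fit_manhattan_frame(normals, n_candidates=2000, seed=0):
    """Return the 3x3 set of perpendicular axes that best aligns with the given unit normals."""
    candidates = Rotation.random(n_candidates, random_state=seed).as_matrix()
    best_axes, best_score = None, -np.inf
    for axes in candidates:                               # columns of `axes` form one candidate frame
        # Score each normal by its strongest (absolute) alignment with any of the three axes.
        score = np.abs(normals @ axes).max(axis=1).sum()
        if score > best_score:
            best_axes, best_score = axes, score
    return best_axes

# Toy scene: noisy normals drawn from the faces of an axis-aligned room.
walls = np.repeat(np.eye(3), 50, axis=0)
normals = walls + 0.05 * np.random.default_rng(1).normal(size=walls.shape)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
print(np.round(fit_manhattan_frame(normals), 2))          # axes aligned with the room, up to ordering and sign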

Fisher stated, “Think about how you navigate a room. You're not building a precise model of your environment. You're sort of capturing loose statistics that allow you to complete your task in a way that you don't stumble over a chair or something like that.”

Story via Phys.org
