
Giving sight to robots

By independently identifying and analyzing target objects, vision systems can improve robotic success in many applications

BY BRENT EVANGER
Banner Engineering
Minneapolis, MN
http://www.bannerengineering.com

Machine vision systems perform complex visual inspections for robotic applications, reporting precise, multi-dimensional feedback on the target part in a language robots can understand and use. Vision systems also provide information on what action robotic components should take to interact with the target object. Knowing how best to apply vision in robotic applications requires an understanding of how vision systems work and a knowledge of the tools designed specifically for robotic application needs.

Vision sensors allow a machine to “see.” While traditional sensors analyze and interpret data from a single point, vision sensors capture an entire image. These sensors consist of a camera that snaps a picture of the part. The picture is then transferred to memory, processed, analyzed, and compared against predetermined parameters. By comparing features of the part to user-defined tolerances for each parameter, the vision sensor decides whether the part passes or fails the inspection and outputs the results for robot control.

The camera and controller comprise the hardware elements of a vision system. The software elements include the control system, graphical user interface and image algorithms.

Applications

A vision system’s feature set includes its vision tools and its method of communicating data. Robot applications that can benefit from a machine vision system fall into several classes. The most common application involves randomly oriented parts moving along a conveyor belt in a variety of different positions. The robot needs to adjust itself according to the orientation of the parts, grasp the items, and palletize them (see Fig. 1). In this case, the vision sensor provides the link between the randomly oriented part and the robot. For instance, a machine vision system can control the robots at a pick-and-place machine used to assemble electronic circuit boards.

Fig. 1. Vision sensors can work with robots, providing information needed when palletizing parts, for instance.

Another common class of applications consists of robots that move parts from one station to the next in a process. The vision system provides the information that allows the robot to grab a target object and move it to the next station in a manufacturing or inspection system.

Coordinate transformation

When a machine vision camera detects an object in its field of view, the camera can locate it and determine the object’s x and y coordinates with respect to the upper left-hand corner of the image — the 0, 0 point. However, the robot operates with its own system of coordinates, centered on its own 0, 0 point, which usually does not correspond to the origin the vision system uses. To simplify communication between the vision sensor and the robot and allow the robot to easily perform the appropriate action, vision systems make use of robot coordinate transformation. This capability allows vision systems to convert information about the location of the point of interest in the camera’s frame of reference into the robot’s coordinate system.

Along with the x and y position coordinates, machine vision systems must often tell the robots the theta coordinate, θ, or the angle of rotation of a target object. The addition of the θ coordinate allows robots to not only know where a part is located, but also be able to pick it up. Vision tools can report the position of the object and how it is rotated, so the robot can adjust itself accordingly before picking the object up and completing a task.
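
A minimal sketch of such a transform, assuming the camera and robot frames differ by a fixed rotation, translation, and uniform scale, might look like the following. The function name and calibration parameters (tx, ty, phi, scale) are illustrative; in practice their values come from a camera-to-robot calibration routine.

```python
import math

def camera_to_robot(x_cam, y_cam, theta_cam, tx, ty, phi, scale):
    """Map a part's pose from camera pixel coordinates to robot
    coordinates with a 2-D rigid transform plus uniform scale.

    tx, ty : robot-frame position of the camera's 0, 0 point
    phi    : rotation of the camera frame relative to the robot frame
    scale  : mm per pixel, from calibration
    """
    # Rotate and scale the pixel coordinates, then translate.
    x_rob = tx + scale * (x_cam * math.cos(phi) - y_cam * math.sin(phi))
    y_rob = ty + scale * (x_cam * math.sin(phi) + y_cam * math.cos(phi))
    # The part's rotation shifts by the same fixed camera-to-robot angle.
    theta_rob = theta_cam + phi
    return x_rob, y_rob, theta_rob

# Example: camera origin 200 mm right and 50 mm up from the robot origin,
# rotated 90 degrees, with a calibration of 0.25 mm per pixel.
print(camera_to_robot(120, 80, 0.1, tx=200, ty=50,
                      phi=math.pi / 2, scale=0.25))
```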

The x, y, and θ coordinates of a given part can be determined using a variety of vision tools, which are part of the software components of a vision system. The tools vary in the precision they offer and in the amount of time they require to analyze the point of interest. For instance, edge-based tools report the x and y coordinates of wherever an edge is found on the part. When multiple edge-detecting tools are combined with an analysis tool, the angle, or θ coordinate, can be found.
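
For example, combining the points reported by two edge tools along the same edge gives the part's angle directly. This tiny helper is a hypothetical illustration of the idea, not a vendor tool:

```python
import math

# Two edge tools report where an edge of the part crosses their search
# lines; the line through the two points gives the rotation angle.
def angle_from_edges(x1, y1, x2, y2):
    return math.atan2(y2 - y1, x2 - x1)   # radians, in the image frame

print(math.degrees(angle_from_edges(100, 40, 180, 65)))  # about 17.4
```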

A more sophisticated blob tool can usually determine the x, y, and θ coordinates of the center of mass of an object — according to the two-dimensional average of all the pixel x and y positions and information about the overall shape of the part — allowing the robot to grab and pick up the object in a balanced way. Even more precise (and more time-consuming) are pattern-matching tools, which provide information on the center of an object, as well as how it is rotated, so the robot knows how it has to adjust to pick the object up.
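
The sketch below shows the arithmetic behind such a pose estimate, assuming the image has already been binarized so that nonzero pixels belong to the blob: the centroid comes from the first-order image moments, and the orientation comes from the standard second-central-moment formula. It is a minimal illustration, not a production blob tool.

```python
import numpy as np

def blob_pose(mask):
    """Estimate the centroid (x, y) and orientation theta of a blob.

    mask : 2-D array where nonzero pixels belong to the blob.
    """
    ys, xs = np.nonzero(mask)
    x_bar, y_bar = xs.mean(), ys.mean()   # centroid: m10/m00, m01/m00
    dx, dy = xs - x_bar, ys - y_bar
    # Normalized central second moments of the pixel positions.
    mu20, mu02, mu11 = (dx * dx).mean(), (dy * dy).mean(), (dx * dy).mean()
    # Angle of the blob's major axis.
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return x_bar, y_bar, theta

# Example: an axis-aligned rectangle of "on" pixels.
img = np.zeros((50, 50), dtype=np.uint8)
img[20:30, 10:40] = 1
x, y, theta = blob_pose(img)
print(f"center=({x:.1f}, {y:.1f})  theta={np.degrees(theta):.1f} deg")
```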

Transferring data

Vision sensors transfer information to the robots in a number of different ways. The simplest and most cost-effective method uses an ASCII string. In this method, the camera detects the x, y, and θ positions and sends them to the robot via an RS-232 serial connection or a TCP/IP Ethernet connection. The robot controller does not request the information — it just receives whatever information the camera sends, whenever the camera sends it.
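
A robot-side listener for this method might look like the following minimal sketch, which assumes the camera connects to the robot and pushes one comma-separated "x,y,theta" line per inspection. The port number and message layout are placeholders; real sensors let you configure both.

```python
import socket

def listen_for_poses(host="0.0.0.0", port=32000):
    # Wait for the camera to connect, then parse each line it pushes.
    with socket.create_server((host, port)) as server:
        conn, _ = server.accept()
        with conn, conn.makefile("r") as stream:
            for line in stream:   # one pose per line, whenever it is sent
                x, y, theta = map(float, line.strip().split(","))
                print(f"pose: x={x} y={y} theta={theta}")

# listen_for_poses()  # blocks until the camera connects and sends data
```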

A remote command channel (RCC) feature allows a robot controller to instruct the camera to perform a task — such as taking a picture or reporting information about an image. With RCC, data is also sent over a serial or Ethernet connection, but in a more controlled fashion. The camera only sends information to the robot when it is requested, streamlining the flow of data and increasing overall efficiency.
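
A request/response exchange might look like this sketch, in which the controller triggers an acquisition and then asks for the result. The command strings ("TRIGGER", "RESULT?") and port are stand-ins for the sensor's actual RCC command set, which varies by vendor.

```python
import socket

def request_pose(camera_ip, port=32100):
    # Open a connection to the camera, issue commands, read one reply.
    with socket.create_connection((camera_ip, port), timeout=2.0) as s:
        s.sendall(b"TRIGGER\r\n")            # take a picture
        s.sendall(b"RESULT?\r\n")            # ask for the pose data
        reply = s.makefile("r").readline()   # camera answers only when asked
        return tuple(map(float, reply.strip().split(",")))

# x, y, theta = request_pose("192.168.0.10")
```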

In the third and most involved method, an Industrial Ethernet connection links the camera to a PLC or advanced robot controller. Like RCC, this method provides more regulated control over transferring information from the camera. It additionally provides a better storage system for the information. The camera determines the x, y, and θ coordinates of the target objects, and each piece of data is uniquely mapped to a specific area in memory. When the robot controller then requests information, it receives the data in a much more orderly fashion.
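
The memory-mapped idea can be illustrated with the following sketch, which assumes an Industrial Ethernet library (for EtherNet/IP, Modbus TCP, or similar) has already fetched the camera's result block as raw bytes. The register layout (three 32-bit floats plus a status word) is hypothetical.

```python
import struct

# Little-endian block: x, y, theta as 32-bit floats, then a 16-bit status.
POSE_LAYOUT = "<fffH"

def decode_pose_block(raw_bytes):
    # Each field sits at a fixed, known offset in the mapped block.
    x, y, theta, status = struct.unpack(POSE_LAYOUT, raw_bytes)
    return {"x": x, "y": y, "theta": theta, "pass": bool(status & 0x1)}

# Example with a fabricated register block:
block = struct.pack(POSE_LAYOUT, 123.4, 56.7, 0.79, 1)
print(decode_pose_block(block))
```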

In the simplest method of data transfer, operators must set up the robot controller to listen to whatever data the camera provides — and make sense of it. This creates a challenge for the robotic components. With RCC — a more sophisticated method of data transfer — the robot can request specific information at specific times, but it still has to interpret the data in whatever format the camera sends it. Through a PLC or advanced robot controller, robots can ask for whatever information the camera knows, whenever they want it, and the data arrives ordered and grouped. However, some less sophisticated robots aren’t capable of this arrangement, and some operators may not want to use an expensive PLC as the controller.

Interconnectivity

In early robotic applications, robot controllers and software were often proprietary to a specific company. To integrate other technology, such as a vision system, manufacturers had to design a custom solution. Since then, improvements on both sides have opened up otherwise closed systems, allowing components to work with more solutions. Along with more sophisticated capabilities — such as regulated control — and greater processing power, robots and vision sensors can now be integrated more easily than ever before, making them ideal teammates for many industrial applications. ■
