Vision is becoming a must-have technology for robotic manufacturing. Sensing technology has advanced considerably over the last decade, and plug-and-play systems for collaborative robots have made robot vision easier than ever to implement.
When collaborative robots are introduced into a manufacturing environment, workers sometimes get a little irritated when seemingly simple tasks turn out to be complex for the robot. The mantra "Easy for humans; hard for robots" is applicable to many robotic tasks, including dexterous manipulation and verbal programming.
Three aspects determine whether a vision system is simple or complex:
- Image capture.
- Image analysis.
- Data use.
In general, you can improve image capture and analysis by following the simple principle of enhancing the features you want to detect and dampening everything else.
Try to ensure a sharp visual contrast between the background and the object, with parts clearly separated in the image. If this is possible, you can use simple image analysis tools to recognize the shape of the object. If not, more complex algorithms will be needed, which are often less reliable.
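As a sketch of the simple case, the OpenCV snippet below thresholds a high-contrast image and extracts one contour per well-separated part. The file name and threshold value are placeholders you would tune for your own setup.

```python
import cv2

# Load a grayscale image of parts on a high-contrast background.
# "parts.png" and the threshold value are placeholders; tune for your setup.
image = cv2.imread("parts.png", cv2.IMREAD_GRAYSCALE)

# With good contrast, a fixed threshold cleanly separates parts from background.
_, binary = cv2.threshold(image, 127, 255, cv2.THRESH_BINARY)

# Each well-separated part shows up as one external contour.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for contour in contours:
    area = cv2.contourArea(contour)
    x, y, w, h = cv2.boundingRect(contour)
    print(f"Part candidate: area={area:.0f} px, bounding box=({x}, {y}, {w}, {h})")
```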
The intended use of the extracted data will also influence the overall complexity of the setup. If you are using the data as a simple binary check (pass/fail, yes/no), the task may be straightforward. However, if you need to send position data to the robot so that it can pick up a part, the task becomes considerably more involved.
For one thing, you have to align three separate coordinate frames in the programming: robot frame, camera frame and world frame.
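To illustrate, here is a minimal NumPy sketch of that alignment: a point detected in the camera frame is chained through the world frame into the robot frame. The transform matrices below are illustrative placeholders; in a real cell they come from camera and hand-eye calibration.

```python
import numpy as np

# Homogeneous 4x4 transforms, normally obtained from calibration.
# The values below are illustrative placeholders.

# Camera pose in the world frame: 0.5 m above the table, looking straight
# down (camera Z axis pointing along the world's -Z axis).
T_world_camera = np.array([
    [1.0,  0.0,  0.0, 0.2],
    [0.0, -1.0,  0.0, 0.3],
    [0.0,  0.0, -1.0, 0.5],
    [0.0,  0.0,  0.0, 1.0],
])

# Robot base pose in the world frame: translated along X, no rotation.
T_world_robot = np.array([
    [1.0, 0.0, 0.0, -0.4],
    [0.0, 1.0, 0.0,  0.0],
    [0.0, 0.0, 1.0,  0.0],
    [0.0, 0.0, 0.0,  1.0],
])

# A part position reported by the vision system, in the camera frame.
p_camera = np.array([0.05, -0.02, 0.45, 1.0])  # homogeneous coordinates, metres

# Chain the frames: camera -> world -> robot.
p_world = T_world_camera @ p_camera
p_robot = np.linalg.inv(T_world_robot) @ p_world

print("Pick point in robot frame:", p_robot[:3])
```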
The Three Types of Robot Vision System
Various sensing technologies are used in robot vision. They can loosely be grouped by how many dimensions they detect: 1D sensors, 2D sensors, and 3D sensors. System complexity tends to increase with higher-dimension sensors.
1D sensors are sufficient if you want to measure the height of a part that moves on a conveyor, or simply detect that a part is present. They're suited to tasks like item counting and basic pick-and-place. One example is a laser distance sensor, which shines a single laser point onto a surface and uses triangulation to get the distance value.
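As a rough sketch of that geometry, the snippet below computes distance by similar triangles for a laser mounted a fixed baseline from the lens and firing parallel to the optical axis. The focal length and baseline are made-up values; real sensors do this calculation internally and simply output a distance.

```python
# Simplified triangulation for a 1D laser distance sensor.
# The laser spot's image shifts across the sensor as the surface moves
# nearer or farther. All values below are illustrative placeholders.

FOCAL_LENGTH_MM = 16.0   # lens focal length
BASELINE_MM = 50.0       # distance between laser emitter and lens axis

def distance_from_spot(spot_offset_mm: float) -> float:
    """Distance to the surface, from the laser spot's offset on the imager."""
    # By similar triangles: offset / focal_length = baseline / distance
    return FOCAL_LENGTH_MM * BASELINE_MM / spot_offset_mm

# A spot imaged 2 mm from the optical centre implies a 400 mm standoff.
print(distance_from_spot(2.0))  # -> 400.0
```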
2D vision sensors are very common in manufacturing industries, usually in a fixed camera setup or attached to the wrist of the robot arm. These sensors return a full image, with an intensity value at each X and Y pixel coordinate, which enables more complex vision techniques like part recognition and part inspection.
There are many choices on the market, including cameras with various resolutions and grayscale or color sensors. Some products also provide complete integration with certain robots.
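When the camera is fixed and parts sit on a flat table at a known distance, a detected pixel can be back-projected to metric coordinates with the standard pinhole model. The intrinsic parameters below are placeholder values that would normally come from camera calibration.

```python
# Converting a detected pixel to metric camera-frame coordinates with the
# pinhole model. Intrinsics (fx, fy, cx, cy) come from camera calibration;
# the values here are placeholders. Z is known because the parts sit on a
# flat table at a fixed distance from a fixed 2D camera.

FX, FY = 1400.0, 1400.0   # focal lengths, in pixels
CX, CY = 960.0, 540.0     # principal point (image centre), in pixels
TABLE_DISTANCE_M = 0.8    # camera-to-table distance, in metres

def pixel_to_camera_frame(u: float, v: float, z: float = TABLE_DISTANCE_M):
    """Back-project an image pixel onto the table plane (camera frame, metres)."""
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return x, y, z

print(pixel_to_camera_frame(1100.0, 400.0))  # -> (0.08, -0.08, 0.8)
```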
3D vision sensors are the most advanced category, and they are used for more demanding manufacturing tasks. The most common are laser scanners, which rapidly move a laser beam over the environment to build up a 3D point cloud. These are used for tasks like robotic metrology, reverse engineering, bin picking, and exact measurement of defect depth.
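As a sketch of what such a sensor produces, the snippet below converts a depth image into a 3D point cloud using the same pinhole model; the intrinsics are again placeholders.

```python
import numpy as np

# Converting a depth image (e.g. from a laser scanner or depth camera) into
# a 3D point cloud. Intrinsic values are illustrative placeholders.
FX, FY, CX, CY = 600.0, 600.0, 320.0, 240.0

def depth_to_point_cloud(depth: np.ndarray) -> np.ndarray:
    """Return an (N, 3) array of points in the camera frame, in metres."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Example: a synthetic 480x640 depth map of a flat surface 1 m away.
cloud = depth_to_point_cloud(np.full((480, 640), 1.0))
print(cloud.shape)  # (307200, 3)
```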
More dimensions are not always better when it comes to robot vision. For many applications, a simple 2D vision sensor may outperform a more expensive 3D laser scanner, simply because of the complexity of integrating the sensor data with the robot. Always be realistic about the functionality you actually need, and choose a sensor that will remain flexible as your manufacturing processes change.
Although algorithms are continually improving, there are a few common challenges for robot vision.
Template matching, a common algorithm, is affected by the following issues, which apply to many vision algorithms (a minimal example follows the list):
- Overlapping Parts: Non-overlapping parts are much easier to recognize than parts that overlap or touch, as algorithms struggle when the whole contour is not present. This doesn't mean it's impossible to detect overlapping parts, but it can reduce the reliability of the system, as the algorithm might return multiple possibilities for the position or orientation of each part.
- Insufficient Contrast: To detect edges easily, good contrast is vital. The surface finish of the parts will influence the contrast, as will the lighting. If you have relatively dark, grey, metallic parts, you could opt for a white background. If the parts are shiny, there is a good chance that they’ll reflect light back into the camera, which will show up in the image as a white spot and confuse the algorithm.
- External Lighting Sources: Outside lighting can also interfere with vision algorithms. Possible solutions include "hiding" your system underneath a hood, using tinted windows, or adding an integrated or external light source. However, don’t go through all this trouble if it's not needed. Test first and judge afterward.
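As a minimal illustration of template matching, and of how the issues above show up as low match scores, here is an OpenCV sketch. The file names and the acceptance threshold are placeholders; in practice the threshold is tuned on sample images from your own cell.

```python
import cv2

# Minimal template matching with OpenCV. "scene.png", "part_template.png",
# and the 0.8 acceptance threshold are placeholders. Overlapping parts,
# poor contrast, and glare all tend to show up as a low peak score here.
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("part_template.png", cv2.IMREAD_GRAYSCALE)

# Normalised cross-correlation: a score of 1.0 is a perfect match.
scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(scores)

if best_score > 0.8:  # placeholder acceptance threshold
    print(f"Part found at {best_loc} with score {best_score:.2f}")
else:
    print("No confident match: check contrast, lighting, and overlap")
```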
It's a good idea to start small when implementing a robot vision system. Use parts that vary little within the same model but differ clearly between models, such as parts with distinct contours and recognizable features (e.g. drilled holes or consistent markings).
Take control of the lighting, to avoid influence from external sources, and change the background if necessary to improve the contrast in the image. Finally, and most importantly, test your system thoroughly to see how it behaves.