Cost-effective vision-enhanced industrial automation creates more value
Vision technology is increasingly used in industrial automation across a host of sectors, enabling more intelligent, more responsive products that are also more valuable to users.
The growing use of vision technology in embedded systems is known as “embedded vision”. One showcase application, and the subject of this blog, is vision-enhanced industrial automation, where vision technology is embedded in manufacturing plants that make anything from automotive safety systems to consumer electronics. Embedded vision can add valuable capabilities to existing products and even open up new markets.
To move safely through their environment and interact meaningfully with the items they make, manufacturing robots (and other industrial automation systems) have to see and understand their surroundings. By using image sensors to gauge position and depth, cost-effective vision processors are now making self-directed, adaptive industrial automation a reality.
Automated lines can work faster and more accurately than humans, but until now they have relied on incoming parts arriving in fixed positions, which complicates the manufacturing process: parts coming along the line in different positions or orientations can cause assembly failures.
Just like humans, robots can use their “brains” to make sense of and navigate their surroundings. They do this via cameras, vision processors and software algorithms, which let them adapt to their environment, such as a production line. The benefits of vision processing also extend further along the supply chain, in areas such as finished-goods inventory tracking.
Until recently, such vision-augmented technology was limited to a small number of complicated, pricey systems. Now, though, improvements in performance and cost are bringing it to mainstream manufacturing platforms, although some implementation challenges remain (and these are becoming easier to solve).
Next-generation inventory tracking
Embedded vision can help improve product tracking through production lines. One example is a pharmaceutical packaging line that uses vision-guided robots to pick syringes from the conveyor belt and pack them at very high speed.
Product identification technologies such as RFID tags and barcodes are useful for tracking materials, but they can’t detect whether the syringes (or any goods) are damaged. Because embedded vision offers manufacturers “intelligent” raw-material and product tracking and handling, it represents the next generation of inventory-management systems, particularly as vision-processing components become more and more integrated. Quality checks with embedded vision on moving products and raw items can detect damage, then automatically update the inventory database with any quality issues. What’s more, all of this can increasingly be done cost effectively.
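To make that last step concrete, here is a minimal Python sketch of a damage check feeding straight into an inventory record. All the names are hypothetical, and `detect_damage` simply thresholds a per-item “defect score” as a stand-in for a real vision algorithm.

```python
def detect_damage(defect_score, threshold=0.5):
    """Stand-in for a vision-based damage check on one imaged item."""
    return defect_score > threshold

def record_inspection(inventory, item_id, defect_score):
    """Write the quality result for this item into the inventory record."""
    inventory[item_id] = "damaged" if detect_damage(defect_score) else "ok"

inventory = {}
record_inspection(inventory, "syringe-001", 0.1)  # clean item
record_inspection(inventory, "syringe-002", 0.8)  # flagged as damaged
print(inventory)
```

In a real line, the score would come from the vision processor and the dictionary would be a database write, but the correlation step is this simple.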
As noted earlier, embedded vision is very useful on the factory floor for handling and assembling raw materials. A typical sequence: first, cameras acquire images of the parts; then vision processing extracts their positions and sends that data to a robot, allowing it to perform tasks such as picking up and placing a component.
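The camera-to-robot hand-off can be sketched in a few lines of Python. This assumes a pre-calibrated camera whose pixels map onto the robot’s work surface by a simple scale and offset; a real cell would use a proper camera calibration, and all the values below are illustrative.

```python
def pixel_to_robot(px, py, scale_mm_per_px=0.25, origin_mm=(100.0, 50.0)):
    """Convert an image detection (pixels) to robot XY (millimetres)."""
    return (origin_mm[0] + px * scale_mm_per_px,
            origin_mm[1] + py * scale_mm_per_px)

# A part detected at pixel (400, 200) becomes a pick target in millimetres:
print(pixel_to_robot(400, 200))  # (200.0, 100.0)
```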
Industrial robots already offer manufacturers scalable and repeatable automation benefits, but they become far more flexible once vision processing is integrated into the line. For example: one robot can handle different parts because embedded vision lets it “see” the particular part it’s handling and so adapt. In the same vein, picking parts from bins is also easier because a camera can be used to find a certain item — within a pile — that’s oriented in a way the robot’s arm can handle.
Embedded vision is also very useful in highly precise assembly applications. Here, cameras image a component after it has been picked up, allowing the robot to correct its position to account for mechanical imperfections or variations in how the item was grasped.
3D vision, where depth can be judged as well, is a growing area that helps robots work out even more about their surroundings. Cost-effective 3D vision is being used in a whole host of applications, from vision-guided robotic bin picking to highly precise manufacturing. The latest generation of vision processors can handle the huge data sets and complex algorithms needed to extract depth information and then make decisions very quickly. In this way, 3D imaging is making vision-processing tasks possible that 2D imaging could not achieve, such as guiding a robot to select parts from a disorganised bin.
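As a rough illustration of where that depth information comes from, here is the standard stereo relation (depth = focal length × baseline ÷ disparity) in Python. The focal length and baseline figures are invented for the example, not taken from any particular camera.

```python
def depth_from_disparity(disparity_px, focal_px=800.0, baseline_m=0.0625):
    """Depth (metres) of a point from its stereo disparity (pixels)."""
    if disparity_px <= 0:
        return float("inf")  # no stereo match: treat as infinitely far
    return focal_px * baseline_m / disparity_px

# A 50-pixel disparity puts the point 1 m from the camera:
print(depth_from_disparity(50.0))  # 800 * 0.0625 / 50 = 1.0
```

Notice that depth resolution falls off with distance (disparity shrinks), which is one reason 3D bin picking demands the processing muscle described above.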
Automated inspection has many benefits over human inspection in speed and accuracy. It also means 100% quality assurance on the goods leaving your factory, rather than checks on just a random sample off the production line. (You may find this short whitepaper on objective QA interesting; it’s free to download.)
Embedded vision is cost effective because the same images used to guide the robot can also be used for automated in-line inspection of the items being handled, making robots more flexible as well as producing better results. This holds even when the robot needs highly accurate movement, achieved through “visual servo control”: a camera on or near the robot gives continuous visual feedback, rather than a single image at the start of the task, so the robot can correct its movement as it goes.
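A minimal sketch of the visual-servo idea, assuming a single axis and a camera that reports the offset between part and gripper on every frame: the robot closes a fraction of the visual error each control cycle (simple proportional control; the gain and distances are illustrative).

```python
def servo_step(position, target, gain=0.5):
    """One control cycle: move the robot a fraction of the visual error."""
    return position + gain * (target - position)

position, target = 0.0, 10.0  # millimetres along one axis, for illustration
for _ in range(10):
    # In a real system, target would be re-measured from a fresh camera frame
    position = servo_step(position, target)

print(abs(target - position))  # residual error: 10 * 0.5**10 = 0.009765625
```

Because the error is re-measured from live images every cycle, the loop also absorbs disturbances such as a part shifting in the gripper, which a single snapshot at the start of the task cannot do.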
Sometimes the vision system is just one piece of the jigsaw and has to be synchronised with other equipment and protocols, for example a sorting system. Frequently, inspection is used to separate out faulty parts as they move along the production line. As they move, each part must be individually located, tracked and correlated with the image-analysis results to make sure the ejector is removing failures correctly. (There are many ways to synchronise sorting and vision; talk to a provider for advice.)
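One common way to do that correlation, sketched here in Python with assumed names and distances: log each part with the conveyor encoder count at which it was imaged, then fire the ejector once the belt has advanced by the camera-to-ejector distance.

```python
from collections import deque

CAMERA_TO_EJECTOR = 500  # encoder counts between camera and ejector (assumed)

tracked = deque()  # FIFO of (part_id, encoder count at inspection, passed)

def inspect(part_id, encoder_count, passed):
    """Record a part's inspection result and where on the belt it was seen."""
    tracked.append((part_id, encoder_count, passed))

def parts_to_eject(encoder_count):
    """Return the IDs of failed parts that have now reached the ejector."""
    ejected = []
    while tracked and encoder_count - tracked[0][1] >= CAMERA_TO_EJECTOR:
        part_id, _, passed = tracked.popleft()
        if not passed:
            ejected.append(part_id)
    return ejected

inspect("A", 100, True)
inspect("B", 160, False)
print(parts_to_eject(620))  # []    ("A" passed; "B" hasn't reached the ejector)
print(parts_to_eject(700))  # ['B'] (the failed part is now at the ejector)
```

Because parts cannot overtake each other on a belt, a FIFO keyed on encoder counts is enough; more elaborate lines use trigger sensors or per-part tracking instead.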
Despite the increasing use of automation, humans remain an important element of a modern automated manufacturing plant, so the two “co-workers” need to co-operate. Since modern collaborative robots, or “cobots”, aren’t confined in a protective cage, the shared work space must be safe; logically, a robot (or any automated system) sharing a work space needs highly reliable perception of its surroundings.
3D cameras are used in this scenario to create a reliable map of the robot’s environment, so it can adapt its movements or speed accordingly. We already have vision-based driver-assistance safety systems in cars, so it makes sense to expect vision-based industrial-automation safety products that will create smart, flexible robots and no doubt re-shape factory automation.
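A toy sketch of one such adaptation, speed scaling: the nearest-obstacle distance from the depth map sets the robot’s speed limit, ramping down to a protective stop. The zone boundaries below are invented for illustration and are not drawn from any safety standard.

```python
def safe_speed(nearest_m, stop_m=0.5, full_m=1.5, max_speed=1.0):
    """Robot speed limit as a function of nearest-obstacle distance (metres)."""
    if nearest_m <= stop_m:
        return 0.0                  # inside the stop zone: halt
    if nearest_m >= full_m:
        return max_speed            # workspace clear: full speed
    # Linear ramp between the stop zone and the full-speed zone
    return max_speed * (nearest_m - stop_m) / (full_m - stop_m)

print(safe_speed(0.3))  # 0.0  (co-worker too close: protective stop)
print(safe_speed(1.0))  # 0.5  (slow down)
print(safe_speed(2.0))  # 1.0  (clear workspace)
```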
iQVision has years of experience customising applications from the simple to the complex. For more information or guidance on choosing the right vision solutions and technology for your application, please contact us via email or call 1300 IQVISION (1300 478 474).
You will find some excellent information on cobots here. iQVision also has a host of information in our resource library, including case studies, whitepapers, videos, FAQs, our blog and brochures. They’re all free to download.