Machine vision is one of the founding technologies of industrial automation
Published on: Saturday, 06-05-2023
Dick Slansky, Senior Analyst, PLM & Engineering Design Tools, ARC Advisory Group, Boston.
Is smart sensing just an application of smart sensors or is it a software strategy?
Machine Vision (MV) is fundamentally an industrial automation application. It is a tight integration of vision sensors and cameras (hardware) and software that enables the overall system to use the captured images to make a variety of decisions, including defect inspection, object placement and positioning, measuring, identifying, sorting, robotic control, and a complete range of other automation tasks.
Machine vision is one of the founding technologies of industrial automation. It has improved product quality, speed of production, and has optimised manufacturing and logistics for decades. More recently, this established technology is merging with artificial intelligence (AI) and remains a significant contributor to Industry 4.0.
Machines could ‘see’ before AI and machine learning (ML). As far back as the 1970s, computers began using specific algorithms to spatially process images and recognise basic features of an object. Early MV systems could detect object edges for positioning a part, detect color differences that indicate a defect, and discern blobs of connected pixels that indicate a hole or a feature.
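The blob detection described above needs no AI at all: it is just connected-component labelling on a thresholded image. A minimal pure-Python sketch, with an illustrative image and threshold:

```python
# Classic, pre-AI machine vision: group "blobs" of connected foreground
# pixels in a thresholded image, as 1970s-era systems did to find holes
# or features. The frame and threshold below are illustrative values.

def find_blobs(image, threshold=128):
    """Return a list of blobs; each blob is a set of (row, col) pixels."""
    rows, cols = len(image), len(image[0])
    visited = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and not visited[r][c]:
                # Flood-fill one 4-connected component.
                blob, stack = set(), [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and image[y][x] >= threshold and not visited[y][x]:
                        visited[y][x] = True
                        blob.add((y, x))
                        stack += [(y+1, x), (y-1, x), (y, x+1), (y, x-1)]
                blobs.append(blob)
    return blobs

# Two bright regions on a dark background -> two blobs.
frame = [
    [0, 200, 200, 0, 0],
    [0, 200,   0, 0, 0],
    [0,   0,   0, 0, 255],
    [0,   0,   0, 0, 255],
]
print(len(find_blobs(frame)))  # 2
```

Real systems add filtering by blob area or shape to reject noise, but the core operation is this simple.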
This type of early machine vision was based on simple operations that did not require AI. Text or bar codes had to be sharp for optical readers to function well. Shapes had to be predictable and fit an exact predefined pattern. A classic MV system could not read handwriting, decipher a damaged label, or tell an apple from a lemon. Today, with AI/ML driven software, advanced pattern-matching algorithms are used to identify a very wide range of shapes, colors, and images to determine very accurate information from high degrees of randomness in object shapes and positions.
However, machine vision systems have had a profound effect on manufacturing and on the food and beverage and packaging industries. Machines can spot defects faster and more reliably than human inspectors. Specialised MV cameras can use thermal imaging to detect heat anomalies and X-rays to spot microscopic flaws and metal fatigue.
Which industries in India use the technology of Machine Vision in manufacturing?
Some of the largest industries in India include Food Processing, Textiles, Pharmaceuticals, Steel, and Automotive. One of the largest conglomerates in India, TATA, is the manufacturing leader in vehicles, steel, food & beverage, and industrials. All these industries share a common thread: they all use vision systems in some capacity.
The automotive industry uses machine vision for inspection and quality assurance across various areas in manufacturing like body-in-white, paint, electric and electronics, and final assembly. MV is common in drug manufacturing and packaging. In food processing MV is used across many areas to control recipes, batch processing, packaging, and filling.
How complex is it to ‘tune’ machine vision for a particular application? For example, the specific lighting, scanning speeds, etc? Even choosing a suitable camera system would be tricky?
Selection of the proper MV equipment (camera, lighting, sensors, etc.) can be a complex exercise depending on the application. There are a number of factors to consider when implementing a MV system.
• Camera Resolution. The required camera resolution is determined by the field of view (the area the camera must view to see the part features, the mechanical positioning system used to locate the part, or the part's identification label) and by the precision with which the robot must be positioned to pick up the part or object. Larger fields of view and more precise positioning require higher camera resolution. It is always better to have higher camera resolution than you think is needed.
• System Lighting. Lighting for the vision system is particularly important. Most MV systems require a dedicated lighting source, which should minimise variations in images from part to part and, preferably, accentuate the product features, such as edges or specific markings, that the camera is attempting to identify. However, the low-light capabilities of today's high-pixel-count digital camera sensors have offset some of the lighting requirements of lower resolution cameras.
• System Configuration. It is necessary to configure and integrate all the various components of the MV system. This includes the processing software, robotic control systems and programming, and communication between the camera, the host computer, and the robot. In lieu of a robot, this could involve identification systems or other mechanical positioning equipment.
• System Calibration. After the MV system has been configured, it must be calibrated to meet the operational requirements of the production system. Calibrating the camera is the first step; its purpose is to remove any perspective and radial distortion introduced into the image by the orientation of the camera and the properties of the lens. Calibration also yields measurements in real-world units, such as millimeters, rather than pixels.
• Camera-to-robot coordinate calibration. For the robot to use coordinates determined by the camera, the relationship between the camera and robot frames must be established. This can be accomplished by placing a target, such as a crosshair, on the conveyor or platform, then using the camera calibration data together with a pattern locator or line-finding tool in the image processing algorithm to extract the coordinates of its center point. Today's ML algorithms add robust pattern matching, enabling the MV system to determine the right part or position from a very large array of possible options, which makes the system far more adaptable.
• Operator Training and Usability. Setting up a reliable MV system does not end when the hardware installation, software configuration, and system testing are complete. Operator training is required, and the system may occasionally need to handle new products. A UI that is easy to use and understand will enable faster, more efficient product changes, operator training, and system re-calibration.
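The resolution rule of thumb in the first bullet above can be made concrete with a back-of-the-envelope estimate. The field of view, precision, and pixels-per-feature figures below are illustrative assumptions, not values from the text:

```python
# Rough camera-resolution estimate: the sensor must put enough pixels
# across the field of view (FOV) to resolve the required positioning
# precision. A >= 2 px-per-smallest-feature (Nyquist-style) rule of
# thumb is assumed here.

def required_pixels(fov_mm, precision_mm, pixels_per_feature=2):
    """Minimum pixel count along one axis of the sensor."""
    return fov_mm / precision_mm * pixels_per_feature

# Hypothetical case: a 400 mm wide FOV, parts located to within 0.5 mm.
px = required_pixels(400, 0.5)
print(px)  # 1600.0 -> a sensor with at least 1600 px across that axis
```

The same calculation along the other axis sizes the full sensor, which is why large fields of view with tight precision drive resolution up quickly.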
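The calibration bullet notes that a calibrated system reports measurements in millimeters rather than pixels. A minimal sketch of that unit conversion, ignoring lens distortion and assuming a distortion-free, fronto-parallel view; the target size and pixel counts are made-up illustrative values:

```python
# Derive a mm-per-pixel scale from a target of known physical size,
# then convert measured pixel distances into real-world units.

def mm_per_pixel(target_size_mm, target_size_px):
    """Scale factor from a calibration target of known size."""
    return target_size_mm / target_size_px

def to_mm(distance_px, scale):
    """Convert a measured pixel distance into millimeters."""
    return distance_px * scale

scale = mm_per_pixel(50.0, 625.0)   # a 50 mm target spans 625 px
print(to_mm(250.0, scale))          # 20.0 (mm)
```

In practice a library such as OpenCV (`cv2.calibrateCamera`) would be used to model the perspective and radial distortion the text mentions; the single scale factor here is the simplest possible case.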
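The camera-to-robot calibration step establishes the rigid transform that maps camera-frame coordinates into the robot's frame. A minimal 2D sketch; the rotation angle and offsets are illustrative, as if recovered from the crosshair target described above:

```python
# Map a point seen in the camera frame into robot-base coordinates
# with a 2D rigid transform: rotate by theta, then translate.
import math

def camera_to_robot(x_cam, y_cam, theta, tx, ty):
    """Transform camera-frame (x, y) into the robot frame."""
    xr = x_cam * math.cos(theta) - y_cam * math.sin(theta) + tx
    yr = x_cam * math.sin(theta) + y_cam * math.cos(theta) + ty
    return xr, yr

# Hypothetical setup: the camera sees the part 100 mm along its x axis;
# the camera frame is rotated 90 degrees and offset (500, 200) mm from
# the robot base.
print(camera_to_robot(100.0, 0.0, math.pi / 2, 500.0, 200.0))
# -> approximately (500.0, 300.0)
```

A full 3D system would use a 4x4 homogeneous transform instead, but the principle, rotation plus translation between the two frames, is the same.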
Machine vision is a broad term. How is machine vision different from camera applications? What are the differences in configuration when the camera is moving and when the product is moving?
Machine vision is a very specific set of integrated components driven by software for a very specific set of applications (see question 3). Today's industrial cameras are found in a variety of areas that are not necessarily a component in a configured MV system. These cameras can be found in production monitoring and a range of complicated measurement tasks. Quality control is another field that relies strongly on industrial image processing. Digital industrial cameras are generally more robust than the standard digital cameras used for vacation snapshots. For starters, they must be capable of handling an entirely different set of external influences, such as operation in areas with high ambient temperatures.
An industrial camera can be used for a broad range of different applications. Line scan cameras are often used to inspect endless webbed materials, for example in print inspection. Area scan cameras are used in a variety of factory automation applications, especially for identifying, sorting, and inspecting parts. Industrial cameras are also used in intelligent traffic systems as well as for retail and microscopy applications. These cameras are generally stationary.
Motion capture (MoCap) sensors, such as visual cameras and inertial measurement units (IMUs), are frequently adopted in industrial settings to support solutions in robotics, additive manufacturing, teleworking, and human safety. Motion capture is the process of digitally tracking and recording the movements of objects or living beings, such as workers, in space. Different technologies and techniques have been developed to capture motion. Camera-based systems with infrared (IR) cameras, for example, can be used to triangulate the location of retroreflective rigid bodies attached to the targeted subject. Depth-sensitive cameras, projecting light towards an object, can estimate depth based on the time delay from light emission to backscattered light detection.
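The time-of-flight principle behind depth-sensitive cameras reduces to a one-line formula, depth = c * delay / 2, since the light travels to the object and back. The round-trip delay below is an illustrative value:

```python
# Time-of-flight depth estimate: distance is half the round trip of
# light from emission to backscattered detection.

C = 299_792_458.0  # speed of light, m/s

def tof_depth_m(round_trip_s):
    """Depth in metres from a measured round-trip delay in seconds."""
    return C * round_trip_s / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m of depth.
print(round(tof_depth_m(10e-9), 3))  # 1.499
```

The tiny delays involved, nanoseconds per metre, are why these sensors need very fast, specialised timing electronics.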
What are applications of robotic arms equipped with vision cameras in use cases of product quality and inspection?
Camera-based visual inspection has been adopted in factories for all stages of production, from raw product analysis to finished goods monitoring. While visual inspection using fixed cameras is efficient, it needs multiple sophisticated cameras mounted on appropriate frames, each configured individually to detect feature points on a product and synchronised with the assembly line. A camera positioned as an end-effector on a robotic arm provides much better flexibility and is more adaptable to changing production situations. Visual inspection with robots is used throughout industry in automotive, A&D, Pharma, and many other industrial production lines.
Robotic visual inspection refers to a camera and lighting gear mounted on the end-effector of a robot, which moves to inspect multiple points on the same object or test piece for features. The robot can be programmed to automatically visit a sequence of locations on the object.
Robotic visual inspection ideally needs small, agile robots with low payload capacity, just enough to carry a camera while satisfying the kinematic requirements. Also, most production lines with such inspection mechanisms have humans working nearby; thus, collaborative robot arms are the most suitable for these applications.
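The inspection sequencing described above, visiting a programmed list of poses and running a check at each, can be sketched in a few lines. The poses, the capture stub, and the pass criterion below are all hypothetical placeholders, not a real robot API:

```python
# Sketch of a robotic visual-inspection cycle: the arm visits a
# programmed sequence of poses and evaluates one check per pose.

def inspect(waypoints, capture, check):
    """Move through waypoints; return a list of (pose, passed) results."""
    results = []
    for pose in waypoints:
        image = capture(pose)            # move the arm, grab a frame
        results.append((pose, check(image)))
    return results

# Stub capture/check for illustration: pose 2's "image" shows a defect.
fake_frames = {1: "ok", 2: "scratch", 3: "ok"}
report = inspect([1, 2, 3],
                 capture=lambda p: fake_frames[p],
                 check=lambda img: img == "ok")
print(report)  # [(1, True), (2, False), (3, True)]
```

In a real cell, `capture` would command the arm controller and trigger the camera, and `check` would run the MV defect-detection pipeline.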
How is machine vision used in improving cobots and other control elements of robotics?
Cobots have become very useful because they can function in areas of work previously occupied only by their human counterparts. They are designed with inherent safety features like force feedback and collision detection, making them safe to work right next to human operators.
One example is a cobot used for pick and place applications. Manual pick and place is one of the most repetitive tasks performed by human workers today. The tedious nature of the task can often lead to mistakes, while the repeated physical motions can lead to strain or injury. Pick and place applications are well suited to cobots. A pick and place task is one where a workpiece is picked up and placed in a different location. This could mean a packaging function where the cobot picks a piece from a conveyor or tray and places it in a package; the latter often requires advanced vision systems. Pick and place functions typically require an end-effector that can grasp the object.
Another example is finishing tasks performed by human operators that require a manual tool and large amounts of force. The vibration from the tool over extended periods can cause injury to the operator. A cobot can provide the necessary force, repetition, and accuracy required for finishing jobs, which can include polishing, grinding, and deburring. The robot can be taught manually or via computer programming methods, allowing it to handle parts of different dimensions. This is achieved through force sensing, either via the end-effector or internally.
Dick Slansky's responsibilities at ARC include directing the research and consulting in the areas of PLM (CAD/CAM/CAE), engineering design tools for both discrete and process industries, Industrial IoT, Advanced Analytics for Production Systems, Digital Twin, Virtual Simulation for Product and Production. Dick brings over 30 years of direct experience in the areas of manufacturing engineering, engineering design tools (CAD/CAM/CAE), N/C programming, controls systems integration, automated assembly systems, embedded systems, software development, and technical project management. Dick provides technical consulting services for discrete manufacturing end users in the aerospace, automotive and other industrial verticals. Additionally, he focuses on engineering design tools for process, energy, and infrastructure.