Computer Vision – Accelerating Business in Manufacturing
Published on: Thursday, 02-09-2021
Every industry needs to have a plan and working model for enhancing business with computer vision, says Sundar Rajan Ganesan.
A picture is worth a thousand words – computer vision makes this literal by analysing images and extracting meaningful insights from them. We can classify, detect and segment objects in images, and we can also analyse video. Computer vision uses machine learning and deep learning techniques to process images at the pixel level and assist humans in various vision-based tasks. In recent years, computer vision has spread across every industry sector thanks to advances in machine learning and deep learning techniques and improved computational power. It has paved the way for solutions such as quality control, anomaly detection, predictive maintenance, assembly automation, packaging and inventory management. In this article we will see how computer vision is leveraged in manufacturing industries.
We are living in the world of Industry 4.0, with digital technologies driving innovation and artificial intelligence being democratised. Every industry must rethink and re-imagine how to improve the efficiency and quality of its production plants. With the advancement of mobile technologies and the ubiquitous use of cameras and smart cameras, computer vision has become an integral part of our work and life. With Industry 4.0, manufacturers use computer vision to evolve from smart machines to smart factories powered by machine learning and deep learning. The Covid-19 pandemic has accelerated the use of computer vision for remote monitoring and inventory management.
Smart manufacturing industries are using computer vision to enable the Internet of Things, which in turn produces tonnes of data: big data from the production plant. With deep learning, we can derive advanced insights from this big data, improving productivity and supporting decision making. Deep learning applications in manufacturing include predictive maintenance, quality inspection, process monitoring and anomaly detection. Convolutional neural networks (CNNs) can be used for detecting defects in welded parts and assemblies; estimating the remaining life of a part in an assembly based on wear and tear, rust or warpage; and monitoring the performance of equipment based on vibration, pressure and temperature. There are also advanced CNNs published in the literature, such as multi-scale cascade convolutional neural networks (MC-CNN) developed for identifying faults in bearings from vibration signals, and CNNs for detecting faults in custom machining processes.
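As a minimal, self-contained sketch of the pixel-level filtering that CNNs build on, the following applies a hand-written Laplacian kernel to a tiny synthetic image and locates an anomalous pixel. Everything here (image, kernel, "defect") is an illustrative stand-in: real defect detectors learn many such kernels from data rather than using a fixed one.

```python
# Convolve a synthetic grayscale "surface" with a Laplacian kernel to
# highlight local anomalies, the basic operation inside a CNN layer.

def convolve2d(image, kernel):
    """Valid-mode 2-D convolution (no padding) over nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# Uniform surface (intensity 1.0) with one anomalous bright pixel at (4, 4).
image = [[1.0] * 8 for _ in range(8)]
image[4][4] = 5.0

# Laplacian kernel: responds strongly where intensity deviates from neighbours.
laplacian = [[0, 1, 0],
             [1, -4, 1],
             [0, 1, 0]]

response = convolve2d(image, laplacian)
# Find the location of the strongest response in the output map.
flat = [(abs(v), i, j) for i, r in enumerate(response) for j, v in enumerate(r)]
score, i, j = max(flat)
# The kernel's one-pixel offset maps output (i, j) back to image (i+1, j+1).
print((i + 1, j + 1))
```

The filter response peaks exactly at the defect location, which is why stacks of learned filters are effective at localising surface flaws.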
Applications of computer vision in manufacturing
According to Gartner, by 2023, 65% of videos and images captured by enterprises will be analysed by machines rather than humans. Different use cases for computer vision can be derived, as per Forrester's matrix of intelligences and technologies (illustration), for new business outcomes.
Computer vision applications generally combine digital cameras with image-processing tools. Advanced cameras can capture usable images even in poor lighting and harsh environments.
The production plant can be equipped with smart cameras for step-by-step quality checks across the entire production cycle, separating defective items for rework or scrap. Any complex part can be checked against every design criterion with deep learning techniques. Computer vision guides robots to select only defect-free parts and place them on the next assembly line. More importantly, even moving objects or parts can be picked up by robots and analysed for defects. Computer vision is particularly helpful for quality inspection when large numbers of parts are being produced.
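One simple screening scheme (an illustrative sketch, not any specific vendor's method) is to compare each captured part image against a "golden" reference and route parts whose pixel deviation exceeds a tolerance to rework. The images, tolerance and part names below are made up; production systems would use learned models rather than raw pixel differences.

```python
# Template-based quality screening: compare each part image to a golden
# reference and route deviating parts to rework. Images are flattened
# grayscale lists; the tolerance is illustrative.

GOLDEN = [10, 10, 10, 10, 10, 10, 10, 10, 10]  # reference part (3x3 flattened)
TOLERANCE = 2.0  # max allowed mean absolute pixel deviation

def mean_abs_deviation(img, ref):
    return sum(abs(a - b) for a, b in zip(img, ref)) / len(ref)

def route(part_images):
    """Split parts into (passed, rework) lists by deviation score."""
    passed, rework = [], []
    for name, img in part_images:
        if mean_abs_deviation(img, GOLDEN) <= TOLERANCE:
            passed.append(name)
        else:
            rework.append(name)
    return passed, rework

parts = [
    ("part-001", [10, 10, 11, 10, 9, 10, 10, 10, 10]),   # minor noise: pass
    ("part-002", [10, 10, 10, 40, 40, 40, 10, 10, 10]),  # scratch band: rework
]
passed, rework = route(parts)
print(passed, rework)
```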
One of the major challenges faced is the quality of materials and parts from suppliers. With computer vision and machine learning, inspection of supplied materials and parts can be automated, saving a lot of time for the receiving department. Any defective parts can be identified and returned to the supplier. Better still, where possible, suppliers can integrate computer vision into their own quality checks, avoiding unnecessary transport and return costs, and more particularly the time wasted on return and replacement.
Flutura Decision Sciences & Analytics uses Cerebra Augmented Vision for detecting flaws in printed circuit boards such as short circuits, missing components, scratches, mouse bites, missing holes, insufficient solder and lifted pads. Stemmer Imaging has developed a custom-made machine for automated finishing of marble, which uses machine vision for reliable removal of undesired structures in the stone. For pharmaceutical packaging, 100% inspection is achieved using multi-camera high-speed vision systems, developed in conjunction with Stemmer Imaging, combined with metal detection and automatic rejection of out-of-specification products.
With cameras installed at the required points in manufacturing sites and production plants, workers can be cautioned to move to a safe place immediately whenever a deviation is identified in the movement of machines or conveyor belts. Any anomalies on the conveyor line can also be identified and automated steps initiated to remove them. In warehouses, intrusion by unidentified personnel or animals can be detected using computer vision and appropriate action taken.
Using computer vision, anomaly detection applies to faults, intrusions, ecosystem disturbances, defects and system health monitoring.
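As a minimal illustration of anomaly detection on plant signals, the sketch below flags readings that deviate strongly from the rest of the series. The vibration values and threshold are made up, and note the caveat in the comments: because the outlier itself inflates the mean and standard deviation, robust (median-based) scores are often preferred in practice.

```python
# Flag sensor readings that deviate more than k standard deviations from
# the series mean. Values are illustrative, not from any real plant.
import statistics

def find_anomalies(readings, k=2.0):
    # Caveat: a large outlier inflates mean and stdev; robust median-based
    # scores are usually preferred on real plant data.
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    return [i for i, v in enumerate(readings)
            if stdev > 0 and abs(v - mean) > k * stdev]

vibration = [0.50, 0.52, 0.49, 0.51, 0.50, 3.90, 0.50, 0.48]  # one spike
print(find_anomalies(vibration))
```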
Another major challenge for the manufacturing industry is forecasting machine failures and avoiding downtime. One minute of downtime in the automotive industry can cost nearly $20,000. With computer vision enabled on the production line, machines can be monitored continuously for anomalies, triggering alerts for maintenance or replacement of parts as necessary. Predictive maintenance assists production planning and avoids sudden shutdowns of production lines. We can also gauge the productivity of workers in the plant by analysing their time spent at machines, their usage of tools and the methods they follow, for better collaboration and efficiency. Assembly line operations can be optimised to improve productivity and to discover hazardous movements before they cause harm.
According to a report by Deloitte Insights, predictive maintenance promises:
1. 20 to 50% reduction in time required to plan maintenance
2. 10 to 20% increase in equipment uptime and availability, and
3. 5 to 10% reduction in overall maintenance cost.
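A hedged sketch of the condition-based alerting behind predictive maintenance: smooth a degrading signal with a rolling mean and raise a maintenance flag when the trend crosses a threshold. The window size, threshold and vibration numbers below are all illustrative; real systems learn thresholds per machine.

```python
# Rolling-mean trend alerting: flag indices where the smoothed vibration
# signal exceeds a maintenance threshold, well before outright failure.
from collections import deque

def maintenance_alerts(signal, window=3, threshold=1.0):
    """Return indices where the rolling mean exceeds the threshold."""
    buf = deque(maxlen=window)
    alerts = []
    for i, v in enumerate(signal):
        buf.append(v)
        if len(buf) == window and sum(buf) / window > threshold:
            alerts.append(i)
    return alerts

# Gradually worsening bearing vibration (made-up mm/s RMS values).
vibration = [0.4, 0.5, 0.4, 0.6, 0.9, 1.2, 1.5, 1.8]
alerts = maintenance_alerts(vibration, window=3, threshold=1.0)
print(alerts)
```

The alert fires on the trend, not on a single noisy reading, which is the point of smoothing before thresholding.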
Assembly automation is another area where computer vision can bring benefits. Tesla has automated almost 70% of its manufacturing process. Model-Based Engineering (MBE) is an approach that uses only the models and assemblies created in CAD as a single source of truth throughout the product life cycle. With MBE, assembly processes in a manufacturing plant can be guided by computer vision. Computer vision can also be used for monitoring and guiding workers. Cobots (robots that work in collaboration with workers) use computer vision to help workers with manufacturing tasks.
Acquire Automation uses machine vision to inspect bottles in a complete 360-degree view for correct packaging – cap closure, position, label, etc.
Product packages should carry proper and clear details on the label; for example, printing the expiry date is important for any product. A missing or unclear detail can create a serious impact for the company. Any misalignment or blurred text on the label can be identified using computer vision and the package rejected.
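One common sharpness heuristic for spotting blurred print is the variance of the Laplacian: crisp text produces strong edge responses, smeared text weak ones. The patches and threshold below are tiny synthetic stand-ins for real label crops.

```python
# Legibility check for printed label regions via variance of the Laplacian.
# Patches are tiny synthetic grayscale crops; the threshold is illustrative
# and would be tuned per production line in practice.

def laplacian_variance(img):
    """Variance of a 4-neighbour Laplacian over the interior pixels."""
    vals = []
    for i in range(1, len(img) - 1):
        for j in range(1, len(img[0]) - 1):
            lap = (img[i - 1][j] + img[i + 1][j] + img[i][j - 1]
                   + img[i][j + 1] - 4 * img[i][j])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

sharp_patch = [[0, 255, 0, 255],
               [255, 0, 255, 0],
               [0, 255, 0, 255],
               [255, 0, 255, 0]]          # crisp print: strong edges
blurred_patch = [[125, 125, 126, 126],
                 [125, 126, 126, 126],
                 [126, 126, 126, 127],
                 [126, 126, 127, 127]]    # smeared print: almost flat

SHARPNESS_THRESHOLD = 100.0
print(laplacian_variance(sharp_patch) > SHARPNESS_THRESHOLD)    # legible
print(laplacian_variance(blurred_patch) > SHARPNESS_THRESHOLD)  # reject
```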
The product package is also checked for damage, since a damaged package may have damaged the product itself. Such products are identified and separated for correction or rejection, depending on the nature of the damage.
Suntory PepsiCo uses a Matrox Imaging-based solution for checking code label images for attachment, correctness and legibility. If the label is missing or not readable, the product is removed from the line automatically by an ejector, without stopping production.
Taking stock in the warehouse is a tedious and boring task for workers. Computer vision can be used to count stock and update the status of items in the warehouse. Managers can be alerted to demand for items and place orders for high-demand items on time. In a very large warehouse, simply finding the location of an item is a marathon task; barcoded items combined with computer vision can help workers locate required items easily.
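A minimal sketch of camera-driven stock keeping, under the assumption that the vision system delivers per-SKU counts: the latest scan updates a stock table, and items falling below a reorder point are surfaced to the manager. The SKUs, counts and reorder points are all illustrative.

```python
# Camera-driven stock keeping: apply scan counts and flag low stock.
# SKU names, quantities and reorder points are made up for illustration.

REORDER_POINT = {"bolt-M8": 500, "bearing-6204": 50, "gasket-A1": 100}

def update_stock(stock, counted):
    """Apply the latest camera counts and return SKUs needing reorder."""
    stock.update(counted)
    return sorted(sku for sku, qty in stock.items()
                  if qty < REORDER_POINT.get(sku, 0))

stock = {"bolt-M8": 900, "bearing-6204": 60, "gasket-A1": 400}
# Counts produced by the latest warehouse scan (e.g., drone or shelf camera).
scan = {"bolt-M8": 450, "gasket-A1": 380}
reorder = update_stock(stock, scan)
print(reorder)  # SKUs below their reorder point
```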
Prophesee’s Event-Based approach to machine vision allows for significant reductions of power, latency and data processing requirements as compared to traditional frame-based vision systems. It improves productivity by counting objects moving across the field of view at a throughput of over 1,000 parts per second, in real time and with a compact, cost-efficient system.
Nothing is more important than human life, and hence safety measures must be adhered to in any industry. With AI-powered cameras installed in the required areas, workers can be monitored for wearing personal protective equipment. Unsafe practices or movements by workers in the plant can trigger alerts, preventing accidents. If any deviation is identified in the movement of machines or conveyor belts, workers can be cautioned to move to a safe place immediately. Material handling equipment can be signalled to stop if its path is blocked by other machinery or by workers, thereby avoiding accidents. In case of an accident, the computer vision system can alert stakeholders such as the production manager, safety officer and peer workers in the plant to take immediate action. With the Covid-19 impact and the new normal of working, computer vision lends a helping hand in implementing wellness practices in production plants, such as social distancing, mask wearing and temperature checks, and provides alerts accordingly.
For example, forklift and pedestrian accidents can be prevented using computer vision: algorithms can alert workers and managers to forklift movement and avoid accidents in plant areas, warehouses, etc.
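A hedged sketch of such a proximity alert: given bounding boxes from an object detector (hypothetical floor-plane coordinates in metres), alert whenever a person's box centre comes within a safety radius of a forklift's. Real systems would also account for heading and speed.

```python
# Forklift-pedestrian proximity alert from detector bounding boxes.
# Boxes are (x1, y1, x2, y2) in metres on the floor plane (illustrative).
import math

SAFETY_RADIUS_M = 3.0

def centre(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def proximity_alerts(forklifts, people):
    """Return (forklift_index, person_index) pairs that are too close."""
    alerts = []
    for fi, fbox in enumerate(forklifts):
        fx, fy = centre(fbox)
        for pi, pbox in enumerate(people):
            px, py = centre(pbox)
            if math.hypot(fx - px, fy - py) < SAFETY_RADIUS_M:
                alerts.append((fi, pi))
    return alerts

forklifts = [(0.0, 0.0, 2.0, 4.0)]                          # near the origin
people = [(2.5, 1.0, 3.0, 2.5), (20.0, 20.0, 20.5, 21.5)]   # near, far
print(proximity_alerts(forklifts, people))
```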
Challenges and solutions
Data handling: Industries generate tonnes of data, and handling it to derive actionable insights is a major challenge.
a. Data cleaning – This is the phase where most of the time is spent creating error-free data sets. Typical steps include: analysing the data, filling missing values, replacing outliers with a representative value or removing them, and reducing the data size.
b. Classification – Labelling or categorising the given data into specific classes, for both structured and unstructured data.
c. Data mart – A large stream of data from every IoT sensor produces tonnes of data that are not easily manageable. Hence, we need to handle the data by building a data mart, which paves the way for providing inferences to workers and managers through dashboards. Managing the data efficiently and identifying failure patterns early can avoid downtime and help industries become smarter.
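The cleaning steps in (a) above can be sketched on a single sensor column: fill gaps and replace outliers with the median (the median is used here rather than the mean precisely so the outlier cannot distort the fill value). The readings and the outlier limit are illustrative.

```python
# Data cleaning sketch: fill missing values and replace outliers with the
# column median. The sensor readings and the limit are illustrative.
import statistics

def clean(values, limit=10.0):
    present = [v for v in values if v is not None]
    median = statistics.median(present)
    cleaned = []
    for v in values:
        if v is None:                  # fill missing values
            cleaned.append(median)
        elif abs(v - median) > limit:  # replace outliers
            cleaned.append(median)
        else:
            cleaned.append(v)
    return cleaned

raw = [20.1, 19.8, None, 20.3, 95.0, 20.0]  # one gap, one outlier
print(clean(raw))
```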
Model handling: Models are continuously retrained, and new versions are created periodically for different use cases. Managing which model serves which use case is very important; handled poorly, it results in many failures. This is solved with MLOps, which should be treated as mandatory for any ML project, not optional. There should also be an acceptable model efficiency target to adhere to.
Explainability: With most computer vision algorithms, it is not easy to find the reason behind a given result; they lack explainability. Use cases should therefore start early with algorithms that provide transparency about how results are arrived at.
Privacy: With cameras in industries, workers are constantly observed, and hence their consent must be obtained; otherwise it becomes an ethical issue. In some countries, face recognition and detection are prohibited, and hence there is a need to check the law of the land.
Security: Leakage of data (images and videos) can lead to fake content creation, which is a major threat. The developed model is also subject to cyber-attacks, and the development plan should include this risk mitigation, just as in a normal software development programme. ML models can also be fooled by adversarial inputs crafted to produce the attacker's desired output. Security is not an afterthought and has to be considered from the initial phase.
Standards: Computer vision being a part of AI, we have limited standards. With AI standards being worked on at ISO and IEEE, we need to focus more on responsible AI and safety. Work on standards is just beginning, and many milestones are still to be covered with real scenarios and applications. The National Institute of Standards and Technology (NIST) has drafted AI standards covering trustworthiness, robustness, innovation and human-centred design. We need to develop computer vision applications on a case-by-case basis, aligning with available standards and human ethics.
Deployment: Many use cases perform the required job only at proof-of-concept level. Actual deployment to production is challenged by unstable models, dependencies, a weak understanding of the business purpose, and so on. It is necessary to create a pipeline for the ML/DL project to move smoothly to production.
Application integration: Having a use case and proving the concept is a fine start, but integrating with the applications already available in the enterprise is more challenging. The integration of hardware and software should be well thought through for any computer vision use case, so that it does not become a blocking point for deployment.
Generative Adversarial Networks (GANs) are used to create images (data) where collecting real data is not feasible. With the limited data sets available for surface-defect images, a GAN can create more images than general image-translation methods, and these can be used as training data for surface-defect detection. High-fidelity models can be created with partially labelled and limited data.
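To make the adversarial idea concrete, here is a deliberately tiny scalar "GAN" sketch: a one-parameter generator learns to output values near the real data while a logistic discriminator learns to tell real from generated. Real image-synthesis GANs use deep networks and stochastic training; every number and update rule here is a toy illustration of the loop structure only.

```python
# Toy scalar GAN: generator G() outputs theta; discriminator D(x) is a
# logistic unit. Alternating gradient updates push theta toward the real
# data value. All values are illustrative.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

real = 4.0       # stand-in for "real data" (e.g., a feature of a real image)
theta = 0.0      # generator parameter: G() simply outputs theta
w, c = 0.0, 0.0  # discriminator: D(x) = sigmoid(w * x + c)
lr = 0.05

for _ in range(2000):
    fake = theta
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    # Discriminator ascent: push D(real) up, D(fake) down.
    w += lr * ((1 - d_real) * real - d_fake * fake)
    c += lr * ((1 - d_real) - d_fake)
    # Generator ascent on log D(fake): move output to where D scores "real".
    d_fake = sigmoid(w * fake + c)
    theta += lr * (1 - d_fake) * w

print(round(theta, 2))  # generator output has drifted toward the real data
```

The same two-player structure, scaled up to convolutional networks and image batches, is what lets GANs synthesise extra defect images from limited data.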
Edge computing is another technology that industries should leverage to reduce latency and use the network effectively. The smart cameras installed to check the manufacturing line should be connected to edge computers that extract only the required information and send alerts accordingly, rather than simply transferring all the data to the cloud. At the same time, the cloud helps to update the model on the different edges based on the learning. The updated model can be tested on one edge and checked for operational improvement; if improved, the model can then be pushed to all the edges in a factory. In inventory management, drones with cameras can work with the edge to identify defects in assemblies, machines or machine parts. Only if a defect is found in a machine part is the next-level deep check initiated by the edge itself, and a decision can be taken on scrapping or repairing the particular part or machine. This avoids transferring videos or images to the cloud and overloading it.
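The edge-first pattern can be sketched as follows: the edge device runs inference locally and forwards only compact alert messages to the cloud, never raw frames. The detector, camera ID and message format below are hypothetical stand-ins.

```python
# Edge-side filtering: run a (stand-in) detector on each frame and uplink
# only small JSON alerts; raw frames stay local. All names are illustrative.
import json

def detect_defect(frame):
    """Stand-in for an on-edge model: flags frames whose mean level is off."""
    mean = sum(frame) / len(frame)
    return mean > 100  # illustrative threshold

def process_at_edge(camera_id, frames):
    """Return only the compact alert messages that would be uplinked."""
    alerts = []
    for idx, frame in enumerate(frames):
        if detect_defect(frame):
            alerts.append(json.dumps(
                {"camera": camera_id, "frame": idx, "event": "defect"}))
        # Non-alert frames are dropped here rather than sent to the cloud.
    return alerts

frames = [[10] * 16, [10] * 16, [180] * 16, [12] * 16]  # one anomalous frame
uplink = process_at_edge("cam-07", frames)
print(len(uplink), "alert(s) sent;", len(frames) - len(uplink), "frames kept local")
```

Bandwidth scales with the number of events rather than with the video rate, which is what makes this viable on constrained plant networks.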
More problematic are industries in remote locations with no internet connectivity. With AIoT evolving and scaling up, 'Things' need to work offline (without internet access), which has triggered new ways of computing in addition to cloud computing. Dew computing can act as an intermediate layer between cloud computers and local computers: even without internet connectivity, stored data and services can still be accessed. The main limitation of cloud and fog computing is their need for permanent internet connectivity. Complementing fog and edge computing, which have a major internet dependency, an additional layer such as dew computing is required to keep applications or browsers active and running. Dew computing may not be completely online, but it utilises cloud computing and collaborates with it for data and performance.
Every industry is going digital, and transforming industries into smart industries with the available data is the need of the hour for Industry 4.0. Start with a simple use case that is implementable and creates a good impact for the business. Run the pilot and then scale it up phase by phase. Many computer vision projects fail because scalability and cost requirements are not considered. Hence a detailed plan must be laid out, from pilot to scale-up, with clear knowledge of the value the project will provide. Leveraging DevOps and AIOps can keep the project live and updated as the operating environment and technology change. Deriving business metrics from the use case is essential to attract sponsors from management, without which the initiative itself may not be encouraged. There are also additional insights and values attached to AI/ML/DL projects that should be thought through for business benefits. With smart selection of use cases and careful calibration of proofs of concept, the results obtained with computer vision can boost enterprise profits, and every industry needs to have a plan and working model for enhancing business with computer vision.
Sundar Rajan Ganesan is a Mechanical Engineer, a graduate of Government College of Engineering, Tirunelveli, with 21 years of experience. He is a consultant who has worked for the aerospace and subsea oil & gas industries in countries including Germany, France and Norway. He has worked on digital technology solutions for conversational experience and intelligence. He is interested in chatbots, computer vision, extended reality, AIoT and digital twins. His current focus is smart solutions for Industry 4.0 leveraging AIoT.