How Unsupervised Learning Can Help in Defect Detection & Quality Control in Manufacturing

by MobiDev, July 19th, 2022

Too Long; Didn't Read

AI defect detection relies on computer vision to automate the whole quality inspection process. In one reported case, the accuracy of quality control rose from 90% to 99%, while approximately $49,000 in labor costs were cut per production line. Traditional machine learning methods impose several limitations on how we can train and use models for defect detection. We'll discuss the advantages of unsupervised learning and elaborate on the approaches MobiDev uses in practice. Most machine learning applications rely on supervised learning, which requires manually labeled data.



As the American Society for Quality reports, many organizations have quality-related costs of up to 40% of their total production revenue. A large part of this cost comes from the inefficiency of manual inspection, which is still the most common way to perform quality control in manufacturing.


The application of artificial intelligence to quality control automation offers a more productive and accurate way of visually inspecting production lines. However, traditional machine learning methods impose several limitations on how we can train and use models for defect detection.


So in this article, we'll discuss the advantages of unsupervised learning for defect detection and elaborate on the approaches MobiDev uses in practice.


What is AI defect detection and where is it used?

AI defect detection is based on computer vision, which makes it possible to automate the entire quality inspection process with machine learning algorithms. Defect detection models are trained to visually examine items that pass along the production line, recognize anomalies on their surfaces, and spot inconsistencies in dimensions, shape, or color.


The output depends on what the model is trained to do, but in the case of defect detection, the flow typically looks like this:

https://www.youtube.com/watch?v=UY6xbrcViVw


Applied to quality control processes, defect detection AI is efficient at inspecting large production lines and spotting faults even on the smallest parts of a final product. This applies to a broad spectrum of manufactured products that may contain surface defects of various kinds.


Defect detection in different branches of manufacturing

Intel describes a case of implementing computer vision to automate tire quality inspection. As stated in the report, the accuracy of quality control rose from 90% to 99%, while approximately $49,000 in labor costs were cut per production line.


But such systems are not bound to stationary hardware in the factory. For instance, drones with cameras can be used for inspecting pavement defects or other outdoor surfaces, which significantly decreases the time required for covering large city areas.


The pharmaceutical industry also benefits from automated inspection of its manufacturing lines. For instance, Orobix applies defect detection to drug manufacturing with a specific type of camera that can be used by an untrained human operator. The same principle is applied to inspecting pharmaceutical glass for defects, like cracks and air blobs caught in the glass.


Similar examples can be found in the food, textile, electronics, and heavy manufacturing industries, among others. But there are specific problems with approaching defect detection using traditional machine learning.


As manufacturers inspect thousands of products daily, it becomes difficult to collect sample data for training, and also to label it. This is where unsupervised learning comes into play.


What is unsupervised learning?

The majority of machine learning applications rely on supervised learning methods. Supervised learning means that we provide ground truth information to the model by manually labeling the collected data.


On a production line, collecting and labeling such data can be practically impossible, since there is no way to gather every variation of crack or dent on a product to ensure accurate detection by the model.


Here we face four problems:

  • the difficulty of obtaining a large amount of anomalous data
  • the possibly very small difference between a normal and an anomalous sample
  • the potentially significant difference between two anomalous samples
  • the inability to know in advance the types and number of anomalies


Supervised vs unsupervised defect detection

Unsupervised machine learning algorithms find patterns in a dataset without pre-labeled results and discover the underlying structure of the data in cases where it is impossible to train a model the usual way.


Compared with supervised learning, training becomes less labor-intensive, as we expect the model to discover patterns in the data on its own and tolerate a higher degree of variation.


Anomaly detection reveals previously unseen rare objects or events without any prior knowledge about them. The only available information is that the percentage of anomalies in the dataset is small.


For defect detection, this helps resolve the problem of labeling data and collecting huge numbers of samples. So let's see how unsupervised learning methods can be used to train a defect detection model.


How does unsupervised learning apply to defect detection?

Defect detection relates to the problem of anomaly detection in machine learning. While we don't rely on labeling, there are several unsupervised learning approaches that aim at grouping data and providing hints to the model:

  • Clustering groups unlabeled examples according to similarity. It is widely used for recommender engines, market or customer segmentation, social network analysis, and search result clustering.
  • Association mining aims to uncover frequently occurring patterns, correlations, or associations in datasets.
  • Latent variable models are designed to model the probability distribution with latent variables. They are mainly used for data preprocessing, reducing the number of features in a dataset, or decomposing the dataset into multiple components based on its features.


Patterns uncovered with unsupervised learning can then be used to implement traditional machine learning models. For instance, we might apply clustering to the available data and then use those clusters as a training dataset for supervised learning models, as the sketch below illustrates.
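
As a minimal illustration of that idea, assuming image features have already been extracted into a NumPy array (the file names below are hypothetical), the cluster assignments can serve as pseudo-labels for a standard classifier:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Hypothetical pre-extracted image features, shape (n_samples, n_features)
features = np.load("features.npy")

# Step 1: discover structure in the unlabeled data
kmeans = KMeans(n_clusters=2, n_init=10, random_state=42)
pseudo_labels = kmeans.fit_predict(features)

# Step 2: reuse the cluster assignments as pseudo-labels
# for a conventional supervised classifier
clf = LogisticRegression(max_iter=1000)
clf.fit(features, pseudo_labels)

# The trained classifier can now score new samples directly
new_features = np.load("new_features.npy")  # hypothetical new batch
predictions = clf.predict(new_features)
```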


Concrete Crack Detection with Unsupervised ML

Drawing on our experience in machine learning, we conducted an experiment using the Concrete Crack dataset. The goal was to create a model capable of distinguishing defective images from normal ones using unsupervised learning.


Additionally, the study checked how the number of defect images affects the algorithms used in this project.


Concrete cracks dataset examples

In the use case we selected, we assume that image labels are not available during training. Only the test dataset is labeled, in order to verify the quality of the model's predictions, since training follows an unsupervised approach. We used five different approaches to obtain classification results from unsupervised learning models.


CLUSTERING

Since we don’t have any labeled ground truth data, grouping unlabeled examples is done with clustering. In our case, there are two clusters of images we need to single out from the dataset.


This was performed with a pre-trained VGG16 convolutional neural network for feature extraction and K-means for clustering. What clustering does here is group images with and without cracks based on their visual similarity.


In a nutshell, clustering looks something like this:


K-means clustering

Clustering methods are easy to implement and are usually considered a baseline approach for further deep learning modeling.
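
For illustration, a condensed sketch of the VGG16 + K-means pipeline could look like the following; the image paths, sizes, and parameters are placeholders, and the exact configuration used in the experiment may differ:

```python
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image
from sklearn.cluster import KMeans

# Pre-trained VGG16 without its classification head; global average pooling
# turns each image into a single feature vector
backbone = VGG16(weights="imagenet", include_top=False, pooling="avg")

def extract_features(paths):
    feats = []
    for p in paths:
        img = image.load_img(p, target_size=(224, 224))
        x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        feats.append(backbone.predict(x, verbose=0).squeeze())
    return np.array(feats)

image_paths = ["concrete/img_0001.jpg", "concrete/img_0002.jpg"]  # placeholder list
features = extract_features(image_paths)

# Two clusters: images with cracks vs. crack-free images
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
```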


BIRCH CLUSTERING

In this approach, images were clustered based on visual similarity, with a pre-trained ResNet50 neural network for feature extraction and BIRCH for clustering. This algorithm constructs a tree data structure, with the cluster centroids being read off the leaves.


It is a memory-efficient, online-learning algorithm. The results of clustering were visualized with Principal Component Analysis:


BIRCH clustering results


As we can see, BIRCH clustering shows a fairly good separation of the classes, even at points where a sample is quite far from its centroid.
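
A condensed sketch of this variant is shown below, assuming the ResNet50 features have already been extracted in the same way as in the VGG16 snippet above (the .npy file name is hypothetical):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import Birch
from sklearn.decomposition import PCA

# Hypothetical features pre-computed with a ResNet50 backbone
features = np.load("resnet50_features.npy")

# BIRCH builds a CF-tree and reads cluster centroids off the leaves;
# it is memory-efficient and supports incremental (online) fitting
clusters = Birch(n_clusters=2).fit_predict(features)

# Project the features onto two principal components to visualize the clusters
points = PCA(n_components=2).fit_transform(features)
plt.scatter(points[:, 0], points[:, 1], c=clusters, s=5, cmap="coolwarm")
plt.title("BIRCH clusters in PCA space")
plt.show()
```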


CUSTOM CONVOLUTIONAL AUTOENCODER

A custom convolutional autoencoder contains two blocks: an encoder and a decoder. The encoder extracts features from an image, and the decoder reconstructs the image from them.

Encoder-decoder visualization


As we don’t have labels for network training, we need another way to obtain class assignments, for example, an adaptively selectable threshold. The purpose of an adaptively selectable threshold is to divide the two distributions (crack-free images and images with cracks) as precisely as possible:


Autoencoder distribution results
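
A simplified sketch of this approach is shown below; the architecture and the threshold rule are illustrative rather than the exact configuration used in the experiment, and the training data file is hypothetical:

```python
import numpy as np
from tensorflow.keras import layers, models

def build_autoencoder(input_shape=(128, 128, 3)):
    inp = layers.Input(shape=input_shape)
    # Encoder: compress the image into a small feature map
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    # Decoder: reconstruct the image from the compressed representation
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

autoencoder = build_autoencoder()
train_images = np.load("train_images.npy") / 255.0  # hypothetical, mostly crack-free images
autoencoder.fit(train_images, train_images, epochs=20, batch_size=32)

# Reconstruction error per image; cracked surfaces tend to reconstruct poorly
errors = np.mean((train_images - autoencoder.predict(train_images)) ** 2, axis=(1, 2, 3))

# One simple adaptive rule: place the threshold a few deviations above the
# mean error so that the two error distributions are split
threshold = errors.mean() + 3 * errors.std()
is_defect = errors > threshold
```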

DCGAN

DCGAN generates images from z-space with the help of an adversarial loss (binary cross-entropy, BCELoss). In total, we have three losses: generator loss, discriminator loss, and MSE loss (to compare generated images with the ground truth).


We can build our classification on the same approach as with the custom autoencoder: by comparing losses on images with and without cracks using an adaptively selectable threshold.


For the threshold, it is appropriate to use either the discriminator loss or the MSE loss, depending on their distributions.
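
To make this concrete, here is one possible (hypothetical) way to score images with an already trained DCGAN generator `G` and discriminator `D`; the training loop is omitted, and the exact scoring used in the experiment may differ:

```python
import torch

@torch.no_grad()
def dcgan_scores(images, G, D, z_dim=100, n_fakes=256):
    """Return two per-image anomaly scores for a batch of images."""
    # Discriminator-based score: assuming D outputs the probability of an
    # image being "real" (i.e., normal), a low probability suggests an anomaly
    d_score = 1.0 - D(images).view(-1)

    # MSE-style score: distance from each image to the closest of a batch of
    # generated samples, used as a rough reconstruction proxy
    z = torch.randn(n_fakes, z_dim, 1, 1, device=images.device)
    fakes = G(z)
    dists = torch.cdist(images.flatten(1), fakes.flatten(1))
    mse_score = dists.min(dim=1).values

    return d_score, mse_score

# Whichever score separates the two distributions better is then cut with an
# adaptively selected threshold, e.g. score.mean() + 3 * score.std()
```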


GANOMALY

GANomaly uses a conditional GAN approach to train a generator to reproduce images of normal data. During inference, when an anomalous image is passed in, the generator is not able to capture its data correctly.


This leads to poor reconstruction for defective images and good reconstruction for normal ones, which gives us the anomaly score.


GANomaly architecture
Image source: arxiv.org
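
A hypothetical sketch of the GANomaly-style anomaly score, assuming trained sub-networks that follow the paper's encoder-decoder-encoder generator (`enc1`, `dec`, and `enc2` are placeholder names):

```python
import torch

@torch.no_grad()
def ganomaly_score(images, enc1, dec, enc2):
    z = enc1(images)      # latent code of the original image
    x_hat = dec(z)        # reconstruction produced from the latent code
    z_hat = enc2(x_hat)   # latent code of the reconstruction
    # Normal images give a small latent-space distance; anomalies a large one
    return torch.abs(z - z_hat).flatten(1).mean(dim=1)

# Images whose score exceeds an adaptively chosen threshold are flagged as defects
```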

How to approach unsupervised anomaly detection

Perhaps the most beneficial aspect of unsupervised learning techniques is that we can avoid gathering and labeling tremendous amounts of sample data for training.


By applying unsupervised learning techniques to derive data patterns, we are not limited in which model can be used for the actual classification and defect detection.


However, unsupervised learning models are better suited to segmenting existing data into classes, since it is quite difficult to check a model's prediction accuracy without a labeled dataset.


Written by Viktoriia Akhremenko, AI/ML Team Leader at MobiDev.

The full article was originally published here and is based on MobiDev technology research.