With the spread of COVID-19, wearing face masks became obligatory, at least for most of the population. This created a problem for current identification systems: Apple's Face ID, for example, struggled to recognize faces with masks.

In 2014, Facebook launched the DeepFace program. DeepFace's accuracy rate was 97.25%, just 0.28% below that of a human. In 2015, Google went one better with FaceNet: on the widely used Labeled Faces in the Wild (LFW) dataset, FaceNet achieved a new record accuracy of 99.63%. In 2017, Apple launched the iPhone X with Face ID. With the release of MegaFace, researchers started to use new benchmarks: the MegaFace metric tests models based on their ability to recognize faces in the presence of many "distractors", that is, when there are many potential faces to choose from.

Current Challenge

So Face ID, like other identification systems, struggled to recognize faces with masks. https://faceidmasks.com offered one solution: they created face masks that mimic our facial features. But does this solve the problem on a global scale?

New Solution

We created a system that can recognize faces with medical masks, and now you have the tools to build your own solution, as the code is publicly available. How do we do that? We created data augmentations that transform the original training dataset into one of faces with medical masks.

Donald Trump without and with an augmented medical mask

What does Data Augmentation do?

It increases the amount of training data using the information already available in that data, so the model can capture as much data variation as possible. It creates more data to get better generalization in your model. Common data augmentation techniques include cropping, padding, and horizontal flipping; they are routinely used to train large neural networks. It looks like this:

Example of augmentation

Three-step solution

But in our case, augmentation has to solve a texture-warping task, which we do in three simple steps:

1. Find facial landmarks
2. Process triangulation
3. Complete matching

Find facial landmarks

We use the face-alignment Python library to extract the keypoints of the initial face.

Process of facial keypoints extraction

From a handcrafted dataset with medical masks, using cropping and triangulation, we extracted ~250 masks, which will be matched to other people's faces. The facial masks database can be found in the solution repository via this link.

Triangulation process

The last piece of our augmentation is to extract keypoints, triangulate the ones we need, and then match a randomly picked mask from the previous step to our destination face.

Medical mask matching

Another advantage of this solution is that it can handle faces in various positions, including rotated faces. The database of medical masks is stored in JSON and includes calculated rotation parameters, which allows us to match a rotated face only with masks that fall within a concrete rotation interval for that face.

VGGFace dataset samples augmented with medical masks

You can replicate the whole process with this colab notebook and even prepare your own dataset with the provided pipeline. Below are minimal code sketches of the three augmentation steps.
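Step 1, finding facial landmarks, looks roughly like this with the face-alignment library. The image path is just a placeholder, and the LandmarksType member may be named TWO_D instead of _2D depending on the library version.

```python
# Step 1 sketch: extract 68 2D facial landmarks with the face-alignment library.
# The image path is a placeholder; newer releases name the enum member TWO_D.
import face_alignment
from skimage import io

fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, device='cpu')

image = io.imread('face.jpg')        # placeholder input image
faces = fa.get_landmarks(image)      # list with one (68, 2) array per detected face

if faces:
    keypoints = faces[0]             # (x, y) keypoints of the first detected face
    print(keypoints.shape)           # -> (68, 2)
```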
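Steps 2 and 3, triangulation and matching, boil down to a piecewise-affine warp: the matching keypoints are Delaunay-triangulated and each triangle of the stored mask is affinely mapped onto the corresponding triangle of the destination face. The sketch below illustrates the idea with scikit-image's PiecewiseAffineTransform; it is not the repository's implementation, and the function and variable names are made up.

```python
# Steps 2-3 sketch: triangulate matching keypoints and warp the stored mask image
# onto the destination face. PiecewiseAffineTransform Delaunay-triangulates the
# points internally and applies one affine map per triangle. Illustrative only.
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def overlay_mask(face_img, mask_img, face_pts, mask_pts):
    """Blend mask_img onto face_img given matching (x, y) keypoints of both."""
    tform = PiecewiseAffineTransform()
    # warp() needs the map from output (face) coordinates to input (mask) coordinates
    tform.estimate(face_pts, mask_pts)

    warped = warp(mask_img, tform, output_shape=face_img.shape[:2])  # float in [0, 1]
    alpha = (warped.sum(axis=-1, keepdims=True) > 0).astype(float)   # where the mask landed

    face = face_img.astype(float) / 255.0
    blended = face * (1.0 - alpha) + warped * alpha
    return (blended * 255).astype(np.uint8)
```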
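The rotation-aware matching described above can be as simple as filtering the JSON mask database by a rotation interval before picking a random mask. The field name "yaw" below is a hypothetical placeholder for the stored rotation parameters; check the actual schema in the repository.

```python
# Rotation-aware mask selection sketch. The JSON structure and the "yaw" field
# are hypothetical stand-ins for the rotation parameters stored in the database.
import json
import random

def pick_mask(db_path, face_yaw, tolerance=10.0):
    """Return a random mask entry whose yaw is within `tolerance` degrees of the face's yaw."""
    with open(db_path) as f:
        masks = json.load(f)
    candidates = [m for m in masks if abs(m["yaw"] - face_yaw) <= tolerance]
    return random.choice(candidates) if candidates else None
```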
Pipeline for face recognition

Deployment of a neural network solution in Data Science starts from, guess what? Data!

Data

In order to bring this solution to reality, we will train an ArcFace model on the VGGFace2 dataset:

VGGFace2 source

We have already preprocessed part of this dataset with medical masks, and it is available for download from this Google Drive link.

At the very beginning of this article we mentioned the MegaFace dataset, which is now the reference benchmark for face recognition solutions. From its leaderboard we will pick one of the best models so far: ArcFace.

ArcFace: Additive Angular Margin Loss for Deep Face Recognition

One of the main challenges in feature learning using Deep Convolutional Neural Networks (DCNNs) for large-scale face recognition is the design of appropriate loss functions that enhance discriminative power. This is how the authors of the ArcFace paper address this problem:

"In this paper, we propose an Additive Angular Margin Loss (ArcFace) to obtain highly discriminative features for face recognition. The proposed ArcFace has a clear geometric interpretation due to the exact correspondence to the geodesic distance on the hypersphere. We present arguably the most extensive experimental evaluation of all the recent state-of-the-art face recognition methods on over 10 face recognition benchmarks including a new large-scale image database with trillion level of pairs and a large-scale video dataset. We show that ArcFace consistently outperforms the state-of-the-art and can be easily implemented with negligible computational overhead."

This is the process of training a DCNN for face recognition supervised by the ArcFace loss:

From the original paper

If you want to dive deeper into the theoretical part of this solution, we recommend checking out the original research and this medium article. A minimal sketch of the margin loss itself is included at the end of this post.

Implementation

The whole pipeline code with a detailed description is provided in the google colab notebook. We use the datasets and medical masks mentioned above, and you are welcome to use them by default or bring your own dataset - the pipeline is scalable and user-friendly - so don't forget to check the evaluated results.

Results

We achieved 58 percent accuracy with our pipeline on a custom test dataset, and it will be higher on the LFW metric. The ability to show impressive results in such limited training time proves that the pipeline is able to solve the task of face recognition with medical masks.
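As promised above, here is a minimal PyTorch sketch of the additive angular margin (ArcFace) head, with the paper's default scale s=64 and margin m=0.5. It illustrates the loss itself and is not the exact code used in the colab notebook.

```python
# Minimal ArcFace head sketch (additive angular margin), trained with
# nn.CrossEntropyLoss on top of any embedding backbone. Not the notebook's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcMarginHead(nn.Module):
    def __init__(self, embedding_size, num_classes, s=64.0, m=0.50):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, embedding_size))
        nn.init.xavier_uniform_(self.weight)
        self.s, self.m = s, m

    def forward(self, embeddings, labels):
        # cos(theta) between L2-normalized embeddings and class centers
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
        # add the angular margin m only to the logit of the ground-truth class
        target = F.one_hot(labels, num_classes=cosine.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.m), cosine)
        return logits * self.s

# usage sketch:
# head = ArcMarginHead(embedding_size=512, num_classes=n_identities)
# loss = nn.CrossEntropyLoss()(head(backbone(images), labels), labels)
```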