A Quick Introduction to AWS Rekognition

by HackerNoon Archives, November 22nd, 2017

Amazon Rekognition is a service that makes it easy to add image analysis to your applications.

Using AWS Rekognition, you can build applications that detect objects, scenes, text, and faces, recognize celebrities, and identify inappropriate content in images, such as nudity.

Rekognition also allows the detection of faces and searching for them.

It’s a managed cloud service with SDKs for many programming languages, including Python (Boto3), which is what we’ll use here.

You can also compare faces. Rekognition’s API enables you to quickly add sophisticated deep-learning-based visual search and image classification to your applications.
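
For example, detecting objects and scenes in a photo comes down to a single detect_labels call. Here is a minimal sketch, assuming boto3 is installed and credentials are configured as described below; photo.jpg is a hypothetical local image used only for illustration:

import boto3

# Minimal sketch of label (object/scene) detection.
# Assumes the 'default' AWS profile is configured, as in the examples below.
session = boto3.Session(profile_name='default')
rekognition = session.client('rekognition')

# photo.jpg is a hypothetical local image, used only for this illustration
with open('photo.jpg', 'rb') as image_file:
    image_bytes = image_file.read()

response = rekognition.detect_labels(Image={'Bytes': image_bytes}, MaxLabels=10)
for label in response['Labels']:
    print(label['Name'], label['Confidence'])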

Let’s create a virtual environment for our Python example and install the Boto3 library, as well as requests, since we’re going to read online images:

mkdir rekognition_example
cd rekognition_example/
virtualenv -p python3 venv
. venv/bin/activate
mkdir app
cd app
touch app.py
pip install boto3 requests
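
Note that the examples below rely on the default AWS profile (profile_name='default'), so your credentials must already be configured, for example with aws configure. As a quick sanity check, here is a minimal sketch using the STS get_caller_identity call:

import boto3

# Sanity check: prints your AWS account ID if the 'default' profile
# is configured correctly (e.g. via `aws configure`).
session = boto3.Session(profile_name='default')
print(session.client('sts').get_caller_identity()['Account'])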

Download our mini ebook 8 Great Tips to Learn AWS.

Who’s There!?

This is a simple piece of code to detect faces in an image.

This is the image we’re going to use:

Stephen Hawking, David Fleming, Martin Curley. source: wikimedia

import boto3, requests

# Use the default AWS CLI profile and create a Rekognition client
session = boto3.Session(profile_name='default')
rekognition = session.client('rekognition')

# Download the image and keep its raw bytes
response = requests.get('https://upload.wikimedia.org/wikipedia/commons/thumb/8/88/Stephen_Hawking_David_Fleming_Martin_Curley.png/640px-Stephen_Hawking_David_Fleming_Martin_Curley.png')
response_content = response.content

# Ask Rekognition for all face attributes
rekognition_response = rekognition.detect_faces(Image={'Bytes': response_content}, Attributes=['ALL'])

print(rekognition_response)

Executing this code will give you a dict as output. I am not going to copy-paste all of it here.

I am just going to examine the output for one element (one person) from the list:

{
    'AgeRange': {'High': 53, 'Low': 35},
    'Beard': {'Confidence': 84.3370361328125, 'Value': False},
    'Pose': {'Yaw': 17.422698974609375, 'Pitch': -13.293052673339844,
             'Roll': -24.165315628051758},
    'Sunglasses': {'Confidence': 99.1271743774414, 'Value': False},
    'MouthOpen': {'Confidence': 99.53144836425781, 'Value': False},
    'BoundingBox': {
        'Top': 0.2964743673801422,
        'Left': 0.12259615212678909,
        'Width': 0.2548076808452606,
        'Height': 0.33974358439445496,
        },
    'Mustache': {'Confidence': 86.38748168945312, 'Value': False},
    'Landmarks': [
        {'Type': 'eyeLeft', 'X': 0.19534938037395477,
         'Y': 0.4522131681442261},
        {'Type': 'eyeRight', 'X': 0.27564379572868347,
         'Y': 0.4086972177028656},
        {'Type': 'nose', 'X': 0.2732294201850891,
         'Y': 0.5034009218215942},
        {'Type': 'mouthLeft', 'X': 0.23319387435913086,
         'Y': 0.567730724811554},
        {'Type': 'mouthRight', 'X': 0.2995980381965637,
         'Y': 0.5337028503417969},
        {'Type': 'leftPupil', 'X': 0.18825505673885345,
         'Y': 0.4541756510734558},
        {'Type': 'rightPupil', 'X': 0.26830026507377625,
         'Y': 0.4112653434276581},
        {'Type': 'leftEyeBrowLeft', 'X': 0.16133913397789001,
         'Y': 0.44483911991119385},
        {'Type': 'leftEyeBrowUp', 'X': 0.18029549717903137,
         'Y': 0.41238218545913696},
        {'Type': 'leftEyeBrowRight', 'X': 0.2113354504108429,
         'Y': 0.40193164348602295},
        {'Type': 'rightEyeBrowLeft', 'X': 0.25551289319992065,
         'Y': 0.3774777352809906},
        {'Type': 'rightEyeBrowUp', 'X': 0.27510780096054077,
         'Y': 0.36523035168647766},
        {'Type': 'rightEyeBrowRight', 'X': 0.29447996616363525,
         'Y': 0.37106069922447205},
        {'Type': 'leftEyeLeft', 'X': 0.1800023466348648,
         'Y': 0.4642027020454407},
        {'Type': 'leftEyeRight', 'X': 0.2112579345703125,
         'Y': 0.44617632031440735},
        {'Type': 'leftEyeUp', 'X': 0.19339144229888916,
         'Y': 0.44287851452827454},
        {'Type': 'leftEyeDown', 'X': 0.19702661037445068,
         'Y': 0.4585713744163513},
        {'Type': 'rightEyeLeft', 'X': 0.2617918848991394,
         'Y': 0.4201596677303314},
        {'Type': 'rightEyeRight', 'X': 0.29025232791900635,
         'Y': 0.4023495018482208},
        {'Type': 'rightEyeUp', 'X': 0.2727033197879791,
         'Y': 0.39943841099739075},
        {'Type': 'rightEyeDown', 'X': 0.2782060503959656,
         'Y': 0.41539862751960754},
        {'Type': 'noseLeft', 'X': 0.24730081856250763,
         'Y': 0.5213189721107483},
        {'Type': 'noseRight', 'X': 0.2855394780635834,
         'Y': 0.5008159875869751},
        {'Type': 'mouthUp', 'X': 0.27306443452835083,
         'Y': 0.5380690097808838},
        {'Type': 'mouthDown', 'X': 0.27977147698402405,
         'Y': 0.5668703317642212},
    ],
    ...
}

In the output above, you can see that there is a lot of useful information: age range, beard, pose, sunglasses, mouth, moustache, etc.
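
If you only need a few of these attributes, you can loop over the FaceDetails list in the response instead of printing the whole thing. A minimal sketch, reusing the rekognition_response from the code above and only the fields shown in the excerpt:

# Summarise each detected face using a few of the attributes above.
for face in rekognition_response['FaceDetails']:
    age = face['AgeRange']
    box = face['BoundingBox']
    print('Face at left=%.2f, top=%.2f, estimated age %d-%d'
          % (box['Left'], box['Top'], age['Low'], age['High']))
    print('  Beard: %s (%.1f%% confidence)'
          % (face['Beard']['Value'], face['Beard']['Confidence']))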

Where Is Hawking?

In order to detect the celebrities in our photo, this is the code we’re going to use:

import boto3, requests

# Same setup as before
session = boto3.Session(profile_name='default')
rekognition = session.client('rekognition')

# Same Wikimedia image as in the previous example
url = 'https://upload.wikimedia.org/wikipedia/commons/thumb/8/88/Stephen_Hawking_David_Fleming_Martin_Curley.png/640px-Stephen_Hawking_David_Fleming_Martin_Curley.png'
response = requests.get(url)
response_content = response.content

rekognition_response = rekognition.recognize_celebrities(Image={'Bytes': response_content})

print(rekognition_response)

Stephen Hawking, David Fleming, Martin Curley. source: wikimedia

If you execute this code, you’ll notice that it detects at least Hawking’s face, with a confidence of 99.9%:

{
    'CelebrityFaces': [{
        'Id': '3IT9O9a',
        'Face': {
            'Landmarks': [{'Y': 0.44734710454940796,
                          'X': 0.20180866122245789, 'Type': 'eyeLeft'},
                          {'Y': 0.4118923246860504,
                          'X': 0.282216340303421, 'Type': 'eyeRight'},
                          {'Y': 0.5019076466560364,
                          'X': 0.2700166702270508, 'Type': 'nose'},
                          {'Y': 0.5661360621452332,
                          'X': 0.23448686301708221, 'Type': 'mouthLeft'
                          }, {'Y': 0.530217707157135,
                          'X': 0.3082653880119324, 'Type': 'mouthRight'
                          }],
            'Pose': {'Roll': -19.350360870361328,
                     'Yaw': 9.268149375915527,
                     'Pitch': -9.697746276855469},
            'BoundingBox': {
                'Width': 0.25,
                'Top': 0.2958333194255829,
                'Left': 0.12812499701976776,
                'Height': 0.3333333432674408,
                },
            'Confidence': 99.92058563232422,
            'Quality': {'Sharpness': 99.9980239868164,
                        'Brightness': 39.42979049682617},
            },
        'Name': 'Stephen Hawking',
        'Urls': ['www.imdb.com/name/nm0370071'],
        'MatchConfidence': 100.0,
        }],
    ...
}
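
If you only care about who was recognized, a short loop over CelebrityFaces is enough; faces that Rekognition detects but cannot identify are returned in the UnrecognizedFaces list. A minimal sketch, reusing the rekognition_response from the recognize_celebrities call above:

# Print each recognized celebrity with its match confidence.
for celebrity in rekognition_response['CelebrityFaces']:
    print('%s (match confidence: %.1f%%)'
          % (celebrity['Name'], celebrity['MatchConfidence']))

# Detected but unidentified faces end up in 'UnrecognizedFaces'.
print('%d unrecognized face(s)' % len(rekognition_response['UnrecognizedFaces']))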

Is That Hawking?

In order to use the face comparison feature, I used these two photos, and you’ll notice that I used a similarity threshold of 70 out of 100:

import boto3, requests
session = boto3.Session(profile_name='default')
rekognition = session.client('rekognition')

source_response = requests.get('https://cdn.thinglink.me/api/image/515911285833990144/1240/10/scaletowidth')
source_response_content = source_response.content

target_response = requests.get('http://i.telegraph.co.uk/multimedia/archive/02648/Hawking_2648775k.jpg')
target_response_content = target_response.content

rekognition_response = rekognition.compare_faces(
    SourceImage={'Bytes': source_response_content},
    TargetImage={'Bytes': target_response_content},
    SimilarityThreshold=70)

for faceMatch in rekognition_response['FaceMatches']:
    position = faceMatch['Face']['BoundingBox']
    confidence = str(faceMatch['Face']['Confidence'])
    print('The face at ' +
          str(position['Left']) + ' ' +
          str(position['Top']) +
          ' matches with ' + confidence + '% confidence')

The output was:

The face at 0.5144444704055786 0.136952742934227 matches with 99.94391632080078% confidence
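
One detail worth knowing: faceMatch['Face']['Confidence'] is the confidence that the bounding box contains a face at all; the score for the match itself is in faceMatch['Similarity'], and faces in the target image that did not match the source face are listed under UnmatchedFaces. A minimal sketch, reusing the rekognition_response above:

# Report the similarity score of each match and count unmatched faces.
for faceMatch in rekognition_response['FaceMatches']:
    print('Match similarity: %.2f%%' % faceMatch['Similarity'])

print('%d face(s) in the target image did not match the source face'
      % len(rekognition_response['UnmatchedFaces']))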

Join Us!

We are creating an AWS online course for everyone, at every level: newbies, intermediate, and skilled people.

Our goal is to give people the opportunity to learn DevOps technologies through quality courses and practical learning paths.

If you are interested in Practical AWS training, you can preorder now and get the training at almost half its price.

You can also download our mini ebook 8 Great Tips to Learn AWS.