In this blog, I am going to show you how to use AWS Rekognition for image analysis from a Lambda function. We will perform label detection and object detection on an image, so essentially we are doing image analysis end to end.
What is AWS Rekognition?
Rekognition is an AWS service for performing image and video analysis. All we need to do is provide an image or video to the Rekognition service, and it helps us identify objects, people, text, activities, and scenes.
Amazon Rekognition offers several benefits, and its common use cases include building searchable image libraries, face-based user verification, and detecting unsafe content.
For image analysis, we are using four AWS services: IAM, S3, Lambda, and Rekognition.
So the flow for image analysis will be: upload an image to an S3 bucket, then have a Lambda function read it and pass it to Rekognition for label detection.
Step 1: Create an IAM role:
Create a role for the Lambda function and attach the following two managed policies to it:
a. AmazonRekognitionFullAccess
b. AWSLambdaExecute
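When creating the role, choose Lambda as the trusted service so the function can assume it. The standard trust policy for a Lambda execution role looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```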
Step 2: Create an S3 bucket to store images:
For the image analysis, I uploaded the following image.
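If you prefer to upload the image programmatically rather than through the console, a minimal boto3 sketch could look like the following. Note that the bucket name and file path here are placeholders, not values from this blog, and your AWS credentials must already be configured:

```python
import os


def s3_key_for(path):
    """Use the file name (without directories) as the S3 object key."""
    return os.path.basename(path)


def upload_image(bucket, path):
    """Upload a local image file to the given S3 bucket."""
    import boto3  # AWS SDK for Python

    s3 = boto3.client("s3")
    s3.upload_file(path, bucket, s3_key_for(path))


# Example call (placeholder names):
# upload_image("bucket_name", "photos/person-with-car.jpg")
```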
Step 3: Create a Lambda Function:
Note: In place of bucket_name and image_name, please mention your own S3 bucket name and the name of the uploaded image.
The first option is to pass Rekognition a reference to the object in S3:

import json
import boto3

def lambda_handler(event, context):
    client = boto3.client("rekognition")
    # Pass a reference to the image stored in the S3 bucket
    response = client.detect_labels(
        Image={"S3Object": {"Bucket": "bucket_name", "Name": "image_name"}},
        MaxLabels=3,
        MinConfidence=70,
    )
    print(response)
    return "Thanks"
Alternatively, you can read the file from S3 yourself and pass the raw bytes to Rekognition:

import json
import boto3

def lambda_handler(event, context):
    client = boto3.client("rekognition")
    s3 = boto3.client("s3")
    # Read the file from the S3 bucket
    fileObj = s3.get_object(Bucket="bucket_name", Key="image_name")
    file_content = fileObj["Body"].read()
    # Pass the image data as bytes
    response = client.detect_labels(
        Image={"Bytes": file_content},
        MaxLabels=3,
        MinConfidence=70,
    )
    print(response)
    return "Thanks"

After running the function with a test event, you will see output like the following in the CloudWatch logs:
START RequestId: 3c1284c2-8611-4888-9030-7942d144a180 Version: $LATEST
{'Labels': [{'Name': 'Person', 'Confidence': 99.60957336425781, 'Instances': [{'BoundingBox': {'Width': 0.15077881515026093, 'Height': 0.8669684529304504, 'Left': 0.6409724354743958, 'Top': 0.07304109632968903}, 'Confidence': 99.60957336425781}], 'Parents': []}, {'Name': 'Human', 'Confidence': 99.60957336425781, 'Instances': [], 'Parents': []}, {'Name': 'Transportation', 'Confidence': 94.79045104980469, 'Instances': [], 'Parents': []}, {'Name': 'Automobile', 'Confidence': 94.79045104980469, 'Instances': [], 'Parents': [{'Name': 'Transportation'}, {'Name': 'Vehicle'}]}, {'Name': 'Vehicle', 'Confidence': 94.79045104980469, 'Instances': [], 'Parents': [{'Name': 'Transportation'}]}, {'Name': 'Car', 'Confidence': 94.79045104980469, 'Instances': [{'BoundingBox': {'Width': 0.9211472868919373, 'Height': 0.803646445274353, 'Left': 0.03170054033398628, 'Top': 0.15337026119232178}, 'Confidence': 82.67008209228516}], 'Parents': [{'Name': 'Transportation'}, {'Name': 'Vehicle'}]}, {'Name': 'Apparel', 'Confidence': 94.33142852783203, 'Instances': [], 'Parents': []}, {'Name': 'Clothing', 'Confidence': 94.33142852783203, 'Instances': [], 'Parents': []}, {'Name': 'Pants', 'Confidence': 94.33142852783203, 'Instances': [], 'Parents': [{'Name': 'Clothing'}]}, {'Name': 'Jeans', 'Confidence': 81.03520202636719, 'Instances': [], 'Parents': [{'Name': 'Pants'}, {'Name': 'Clothing'}]}, {'Name': 'Denim', 'Confidence': 81.03520202636719, 'Instances': [], 'Parents': [{'Name': 'Pants'}, {'Name': 'Clothing'}]}], 'LabelModelVersion': '2.0', 'ResponseMetadata': {'RequestId': 'aea9947a-f69d-47fc-a559-ea36dc06c822', 'HTTPStatusCode': 200, 'HTTPHeaders': {'content-type': 'application/x-amz-json-1.1', 'date': 'Sat, 23 May 2020 05:28:12 GMT', 'x-amzn-requestid': 'aea9947a-f69d-47fc-a559-ea36dc06c822', 'content-length': '1409', 'connection': 'keep-alive'}, 'RetryAttempts': 0}}
END RequestId: 3c1284c2-8611-4888-9030-7942d144a180
REPORT RequestId: 3c1284c2-8611-4888-9030-7942d144a180 Duration: 2403.35 ms Billed Duration: 2500 ms Memory Size: 128 MB Max Memory Used: 71 MB Init Duration: 168.60 ms
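The raw response printed in the logs is fairly verbose. If you only care about the label names and confidence scores, a small pure-Python helper can trim it down; the sample below is a shortened copy of the logged response, with the same shape Rekognition returns:

```python
def summarize_labels(response):
    """Reduce a detect_labels response to (name, confidence) pairs."""
    return [(label["Name"], round(label["Confidence"], 1))
            for label in response["Labels"]]


# A trimmed sample with the same shape as the logged response:
sample = {"Labels": [
    {"Name": "Person", "Confidence": 99.60957336425781},
    {"Name": "Human", "Confidence": 99.60957336425781},
    {"Name": "Car", "Confidence": 94.79045104980469},
]}
print(summarize_labels(sample))  # [('Person', 99.6), ('Human', 99.6), ('Car', 94.8)]
```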
What exactly you will get in the response:
The detect_labels API takes a single image as input and returns a list of labels, each with a numeric confidence score. As you can see, I uploaded an image of a person with a car, so the response contains labels such as Person, Human, Vehicle, and Car. It also returns bounding box coordinates (width, height, left, and top) for the items detected in the image, such as the person and the car.
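Note that the bounding box values (Width, Height, Left, Top) are ratios of the overall image dimensions, not pixels. To draw a box on the image, you can convert them like this; the 1280x720 image size below is a made-up example:

```python
def box_to_pixels(box, image_width, image_height):
    """Convert Rekognition's ratio-based BoundingBox to pixel coordinates."""
    return {
        "left": int(box["Left"] * image_width),
        "top": int(box["Top"] * image_height),
        "width": int(box["Width"] * image_width),
        "height": int(box["Height"] * image_height),
    }


# The 'Person' bounding box from the logged response, on a hypothetical 1280x720 image:
person_box = {"Width": 0.15077881515026093, "Height": 0.8669684529304504,
              "Left": 0.6409724354743958, "Top": 0.07304109632968903}
print(box_to_pixels(person_box, 1280, 720))
```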
The response you get depends on the image you upload to the S3 bucket. Try uploading different images and checking the analysis results.
I hope you like the blog.
Happy Coding 😃