Twitter sentiment analysis with Amazon Rekognition

During the AWS re:Invent event, a new service for image analysis was presented: Amazon Rekognition. Amazon Rekognition is a service that makes it easy to add image analysis to your applications. With Rekognition, you can detect objects, scenes, and faces in images. You can also search and compare faces. Rekognition’s API enables you to quickly add sophisticated deep-learning-based visual search and image classification to your applications.
You can read more on the Amazon Rekognition product page and on the AWS Blog.

In this post we are going to see how to use the AWS Rekognition service to analyze Tweets (in particular, the media attached to a Tweet via the Twitter Photo Upload feature) and extract sentiment information. This task is also called emotion detection and sentiment analysis of images.
The idea is to track the sentiments (emotions, feelings) of people who are posting Tweets (with media) about a given topic (defined by a set of keywords or hashtags).

Given a set of filters (keywords or hashtags), we will use the Twitter Streaming API to get real-time Tweets and analyze the media (images) attached to them.
AWS Rekognition gives us information about the faces in a picture, such as the emotions identified, the gender of the subjects, the presence of a smile, and so on.

For this example we are going to use Python 3.4 and the official AWS Python SDK, Boto3.

First, define a new client connection to the AWS Rekognition service.
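A minimal sketch of the client setup, assuming Boto3 is installed and AWS credentials are already configured (via environment variables, ~/.aws/credentials, or an IAM role); the region name below is an assumption, adjust it to your setup:

```python
import boto3

# Create a Rekognition client. Credentials are resolved by Boto3;
# the region is an assumption (Rekognition launched in a limited
# set of regions, e.g. us-east-1).
rekognition = boto3.client("rekognition", region_name="us-east-1")
```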

I assume you know how to work with the Twitter Streaming API; you can find the full code of this project in this GitHub repository: tweet-sentiment-aws-reko.
Here is an example of Tweet download using the Streaming API and the Python library Tweepy:
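This is a sketch assuming Tweepy 3.x; the credentials are placeholders, the tracked keywords are examples, and process_tweet is a hypothetical helper standing in for the analysis step described below:

```python
import tweepy

# Placeholder credentials: replace with your own Twitter app keys.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

class MediaListener(tweepy.StreamListener):
    def on_status(self, status):
        # Keep only Tweets that carry at least one photo.
        media = status.entities.get("media", [])
        photo_urls = [m["media_url"] for m in media if m["type"] == "photo"]
        if photo_urls:
            process_tweet(status, photo_urls)  # hypothetical analysis step

    def on_error(self, status_code):
        # Returning False disconnects the stream (e.g. on rate limiting).
        return False

# Filter the stream by a set of keywords/hashtags (example values).
stream = tweepy.Stream(auth, MediaListener())
stream.filter(track=["#aws", "#reinvent"])
```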

For each Tweet downloaded from the stream (Tweets without media are discarded), we analyze the image and extract the sentiment information.
To detect faces we use the detect_faces method. It takes the image blob (or a reference to an image stored in S3) and returns details about each face found in the image.
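A possible implementation, downloading the photo and passing the raw bytes directly to Rekognition (the helper name is mine, not from the original code):

```python
import urllib.request

def detect_faces_from_url(image_url):
    # Download the photo and send the raw bytes to Rekognition.
    image_bytes = urllib.request.urlopen(image_url).read()
    return rekognition.detect_faces(
        Image={"Bytes": image_bytes},
        Attributes=["ALL"],  # also return emotions, gender, smile, ...
    )
```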

To check whether at least one face has been found in the image, check that the key 'FaceDetails' exists in the response dictionary and that its list is not empty.
We can now loop through the identified details and store the results:
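A sketch of this aggregation step; the module-level counters and their names are my own assumptions, kept simple for illustration:

```python
from collections import Counter, defaultdict

smiling_faces = 0
genders = Counter()
emotion_confidences = defaultdict(list)  # emotion type -> list of confidences

def store_results(response):
    global smiling_faces
    # 'FaceDetails' is a (possibly empty) list with one entry per face.
    for face in response.get("FaceDetails", []):
        if face["Smile"]["Value"]:
            smiling_faces += 1
        genders[face["Gender"]["Value"]] += 1
        for emotion in face["Emotions"]:
            emotion_confidences[emotion["Type"]].append(emotion["Confidence"])
```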

We can now print the results to understand the sentiment within the analyzed images (see the sketch after this list), in particular:

  • How many smiling faces were there in the pictures?
  • How many men and how many women?
  • Which emotions were detected, and with what average confidence?
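A minimal reporting function along these lines, building on the counters sketched above:

```python
from statistics import mean

def print_summary():
    total_faces = sum(genders.values())
    print("Faces detected: {}".format(total_faces))
    print("Smiling faces:  {}".format(smiling_faces))
    for gender, count in genders.items():
        print("{}: {}".format(gender, count))
    for emotion, confidences in emotion_confidences.items():
        print("{}: {} faces, average confidence {:.1f}%".format(
            emotion, len(confidences), mean(confidences)))
```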

This is an example of output (analysis performed on a set of 20 images):

[result: screenshot of the analysis output]

We can use this simple system to track and analyze the sentiment within the photos posted on Twitter. Given a set of posted pictures, we can understand whether the people in them are happy, whether they are male or female, and which kinds of emotions they are feeling (this can be very useful to understand the reputation of a brand or product).

Note that AWS Rekognition also gives information about the presence of beards, mustaches, and sunglasses, and can compute the similarity between two faces (this can be useful to understand whether different pictures represent the same people).
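For example, the face comparison could be sketched as follows; the helper name, the local-file inputs, and the 80% similarity threshold are my own choices, not from the original project:

```python
def same_person(source_path, target_path, threshold=80):
    # Compare the largest face in the source image against every
    # face found in the target image.
    with open(source_path, "rb") as src, open(target_path, "rb") as tgt:
        response = rekognition.compare_faces(
            SourceImage={"Bytes": src.read()},
            TargetImage={"Bytes": tgt.read()},
            SimilarityThreshold=threshold,
        )
    # 'FaceMatches' lists the target faces whose similarity to the
    # source face is above the threshold.
    return len(response["FaceMatches"]) > 0
```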

I would like to improve the system: it would be interesting to also analyze the text of each Tweet, to see whether there is a correlation (and how strong it is) between the sentiment of the text and the sentiment of the image. If you want to contribute, do not hesitate to contact me (it would also be cool to build a front-end to browse the downloaded media and the results in a nicer way).

Boto3 AWS Rekognition official documentation: Rekognition
GitHub repository with the full code: tweet-sentiment-aws-reko.