AWS AppSync – HTTP Resolver

AWS AppSync is an enterprise-level, fully managed GraphQL service with real-time data synchronization and offline programming features.
AWS AppSync automatically updates the data in web and mobile applications in real time, and updates data for offline users as soon as they reconnect. This makes it easy to build collaborative mobile and web applications that deliver responsive user experiences.

AWS AppSync enables you to use supported data sources (AWS Lambda, Amazon DynamoDB, or Amazon Elasticsearch Service) to perform various operations, as well as arbitrary HTTP endpoints, to resolve GraphQL fields.

Here you can read more about AWS AppSync:

AppSync now supports HTTP endpoints as data sources: HTTP resolvers.
This lets customers connect their existing backend services that expose REST APIs to AppSync and leverage the power of GraphQL interfaces.

If you are not familiar with GraphQL, I suggest these resources:

In this post we are going to see how to create a new AppSync API, a new HTTP data source, and a new HTTP resolver, and how to run a simple GraphQL query against our backend REST API service.

To run the example I used Python 3.6.3. You need the Python AWS SDK, Boto3.

If you have an older version of Boto3, please update it (you need at least boto3 1.7.59).

Given our REST API endpoint, we are going to build an API that leverages the power of GraphQL interfaces.
We are going to use this dummy REST API – JSONPlaceholder:



These GraphQL Types describe our data:
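The original schema isn't shown here, but a minimal sketch for the JSONPlaceholder /posts resource (fields taken from the API's sample data) could look like this:

```graphql
type Post {
  userId: Int
  id: Int
  title: String
  body: String
}

type Query {
  posts: [Post]
}

schema {
  query: Query
}
```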

Let’s see how to create a new AppSync API.
First of all, create a new AWS AppSync client and then create a new API:

When you create a new API you need to specify the API name and the authentication type. In the example I used the API_KEY authentication type. Here you can read more about authentication types: AWS AppSync Security.

Create a new API key:

Create new GraphQL types:

Data sources and resolvers are how AWS AppSync translates GraphQL requests and fetches information from your AWS resources.
AWS AppSync supports automatic provisioning and connections for certain data source types, so you can use a GraphQL API with your existing AWS resources or build new data sources and resolvers. Let's walk through the process step by step to see how the details and tuning options work.

Create a new HTTP data source based on our API:

The next step is to create the HTTP Resolver.
A resolver uses a request mapping template to convert a GraphQL expression into a format that a data source can understand. Mapping templates are written in Apache Velocity Template Language (VTL).

This is what our request mapping template looks like:
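The original template isn't shown; a minimal sketch that issues a GET against the JSONPlaceholder /posts resource could look like this:

```vtl
{
    "version": "2018-05-29",
    "method": "GET",
    "resourcePath": "/posts",
    "params": {
        "headers": {
            "Content-Type": "application/json"
        }
    }
}
```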

This is what our response mapping template looks like:
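And a sketch of the response side, which passes the HTTP body through on success and surfaces errors otherwise:

```vtl
#if($ctx.result.statusCode == 200)
    $ctx.result.body
#else
    $utils.appendError($ctx.result.body, $ctx.result.statusCode)
#end
```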

Create the new resolver:

And we are done! We created an API that uses our existing REST APIs with AWS AppSync to leverage the power of GraphQL interfaces.

cURL request:
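The original command isn't shown; a sketch, where the endpoint URL and the API key are placeholders for the values returned by the API and key creation calls:

```shell
curl -X POST \
  -H "Content-Type: application/json" \
  -H "x-api-key: <YOUR-API-KEY>" \
  -d '{"query": "query { posts { id title body } }"}' \
  https://<YOUR-API-ID>.appsync-api.us-east-1.amazonaws.com/graphql
```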


Combine Amazon Translate with Elasticsearch and Skedler to build a cost-efficient multi-lingual omnichannel customer care – Part 1 and 2 – Skedler Blog

I’ve just published a new blog post on the Skedler Blog.

In this two-part blog post, we are going to present a system architecture to translate customer inquiries in different languages with Amazon Translate, index this information in Elasticsearch 6.2.3 for fast search, visualize the data with Kibana 6.2.3, and automate reporting and alerting using Skedler.

The components that we are going to use are the following:

  • AWS API Gateway
  • AWS Lambda
  • AWS Translate
  • Elasticsearch 6.2.3
  • Kibana 6.2.3
  • Skedler Reports and Alerts

System architecture:

You can read the full post – Part 1 – here: Combine Amazon Translate with Elasticsearch and Skedler to build a cost-efficient multi-lingual omnichannel customer care – Part 1.

Part 2 – here: Combine Amazon Translate with Elasticsearch and Skedler to build a cost-efficient multi-lingual omnichannel customer care – Part 2 of 2.

Please share the post and let me know your feedback.

ASP.NET Core + Azure Text Analysis + AWS Text Analysis – Twitch Live Stream

I will be live on Twitch at 20:00 on Wednesday the 28th of March with Emanuele Bartolesi. We will be talking about ASP.NET Core and text analysis on AWS and Azure.

Here is the link to the Twitch stream: Twitch Stream.
After the live streaming we will upload the video to YouTube.

Hope to see you there!

PyCon Nove – Speaker

PyCon Nove is the ninth edition of the Italian Python Conference.
The event will take place in Florence, from the 19th to the 22nd of April 2018.

During the event I will be speaking about “Monitoring Python Flask Application Performances with Elasticsearch and Kibana”.

Here you can find the complete schedule: Pycon Nove Schedule and the abstract of my talk.

You can reserve your spot here: Pycon Registration.

PS: after the conference on Friday evening don’t miss the Elastic Meetup!

Hope to see you there 🙂

AWS Transcribe – Use cases

At re:Invent 2017, AWS presented AWS Transcribe (read more about the new AWS announcements here: AWS Comprehend, Translate and Transcribe).

Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capabilities to their applications. Using the Amazon Transcribe API, you can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech.

Use cases

Amazon Transcribe can be used for lots of common applications, for example:

  • transcription of customer service calls
  • generation of subtitles for audio and video content
  • conversion of audio files (for example, podcasts) to text
  • search for keywords or not-safe-for-work words within an audio file

The service is still in preview, watch the launch video here: AWS re:Invent 2017: Introducing Amazon Transcribe
You can read more about it here: Amazon Transcribe – Accurate Speech To Text At Scale.

Here is how to use AWS Transcribe with the Python SDK. Please note that the service is still in preview.
Here you can find the documentation: Boto3 AWS Transcribe.

Initialize the client:

Run the transcribe job:

Check the job status and print the result:

This is what the output looks like:
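The original sample output isn't shown; an abridged example of the JSON document that Transcribe produces (the transcript text and timings here are invented for illustration):

```json
{
  "jobName": "my-first-transcription",
  "accountId": "123456789012",
  "results": {
    "transcripts": [
      {"transcript": "hello this is a test"}
    ],
    "items": [
      {
        "start_time": "0.00",
        "end_time": "0.46",
        "alternatives": [{"confidence": "1.0", "content": "hello"}],
        "type": "pronunciation"
      }
    ]
  },
  "status": "COMPLETED"
}
```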

Soon I am going to use AWS Transcribe to build a speech-to-text recognition system that will process customer care recordings in order to:

  • convert speech to text (using AWS Transcribe)
  • extract entities, sentiment, and key phrases from the text (using AWS Comprehend)
  • index the results into Elasticsearch for fast search