AWS AppSync – HTTP Resolver

AWS AppSync is an enterprise-level, fully managed GraphQL service with real-time data synchronization and offline programming capabilities.
AWS AppSync automatically updates data in web and mobile applications in real time, and updates data for offline users as soon as they reconnect. This makes it easy to build collaborative mobile and web applications that deliver responsive user experiences.

AWS AppSync enables you to use supported data sources (AWS Lambda, Amazon DynamoDB, or Amazon Elasticsearch Service), as well as arbitrary HTTP endpoints, to resolve GraphQL fields.

Here you can read more about AWS AppSync:

AppSync now supports HTTP endpoints as data sources: the HTTP resolver.
It enables customers to plug their existing REST-based backend services into AppSync and leverage the power of GraphQL interfaces.

If you are not familiar with GraphQL, I suggest these resources:

In this post we are going to see how to create a new AppSync API, a new HTTP data source and a new HTTP resolver, and how to run a simple GraphQL query against our backend REST API service.

To run the example I used Python 3.6.3. You also need Boto3, the AWS SDK for Python.

If you have an older version of Boto3, please update it (you need at least version boto3-1.7.59).

Given our REST API endpoint, we are going to build an AppSync API that leverages the power of GraphQL interfaces.
We are going to use this dummy REST API – JSONPlaceholder:

Request:
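The original snippet is not reproduced here; as a minimal sketch, assuming we query the /posts resource of JSONPlaceholder, the request is a plain HTTP GET:

GET https://jsonplaceholder.typicode.com/posts/1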

Response:
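For the /posts/1 request sketched above, the JSON response looks roughly like this:

{
  "userId": 1,
  "id": 1,
  "title": "sunt aut facere repellat provident occaecati excepturi optio reprehenderit",
  "body": "quia et suscipit ..."
}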

These GraphQL Types describe our data:
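The original type definitions are not preserved here; a minimal sketch, assuming a Post type that mirrors the JSONPlaceholder /posts resource, kept in a Python string so we can pass it to the schema-creation call later:

schema = """
type Post {
    userId: Int!
    id: Int!
    title: String
    body: String
}

type Query {
    posts: [Post]
}

schema {
    query: Query
}
"""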

Let’s see how to create a new AppSync API.
First of all, create a new AWS AppSync client and then create a new API:
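A minimal sketch with Boto3 (the API name posts-api is just an example value):

import boto3

# Create an AppSync client; region and credentials come from your AWS configuration
client = boto3.client('appsync')

# Create a new GraphQL API authenticated with an API key
api = client.create_graphql_api(
    name='posts-api',
    authenticationType='API_KEY'
)
api_id = api['graphqlApi']['apiId']
print(api['graphqlApi']['uris']['GRAPHQL'])  # GraphQL endpoint of the new API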

When you create a new API you need to specify the API name and the API authentication type. In the example I used the API_KEY authentication type. Here you can read more about authentication types: AWS AppSync Security.

Create a new API key:
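A sketch of the key creation, reusing the client and api_id from above (a custom expiry can be passed via the expires parameter):

api_key = client.create_api_key(apiId=api_id)
print(api_key['apiKey']['id'])  # the key to send in the x-api-key header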

Create new GraphQL types:
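Schema creation is asynchronous: you upload the definition and then poll its status. A sketch that uses the schema string defined earlier:

import time

# Upload the schema definition (the API expects bytes)
client.start_schema_creation(apiId=api_id, definition=schema.encode('utf-8'))

# Poll until the schema has been processed
while True:
    status = client.get_schema_creation_status(apiId=api_id)['status']
    if status != 'PROCESSING':
        print(status)  # SUCCESS once the types have been created
        break
    time.sleep(1)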

Data sources and resolvers are how AWS AppSync translates GraphQL requests and fetches information from your AWS resources.
AWS AppSync supports automatic provisioning and connection for certain data source types, so you can use a GraphQL API with your existing AWS resources or build your own data sources and resolvers.

Create a new HTTP data source based on our API:
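A sketch of the data source creation; the name JsonPlaceholder and the endpoint are our example values (only the base endpoint goes here, the resource path is set later in the request mapping template):

client.create_data_source(
    apiId=api_id,
    name='JsonPlaceholder',
    type='HTTP',
    httpConfig={'endpoint': 'https://jsonplaceholder.typicode.com'}
)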

The next step is to create the HTTP Resolver.
A resolver uses a request mapping template to convert a GraphQL expression into a format that a data source can understand. Mapping templates are written in Apache Velocity Template Language (VTL).

This is what our request mapping template looks like:
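The original template is not preserved; a minimal sketch for an HTTP resolver that issues a GET to the /posts resource, kept in a Python string so it can be passed to create_resolver below:

request_template = """
{
    "version": "2018-05-29",
    "method": "GET",
    "resourcePath": "/posts",
    "params": {
        "headers": {
            "Content-Type": "application/json"
        }
    }
}
"""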

This is what our response mapping template looks like:
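And a matching sketch of the response mapping template, which simply returns the HTTP body when the call succeeds:

response_template = """
#if($ctx.result.statusCode == 200)
    $ctx.result.body
#else
    $utils.appendError($ctx.result.body, "$ctx.result.statusCode")
#end
"""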

Create the new resolver:
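A sketch of the resolver creation, attaching the two templates above to the posts field of the Query type:

client.create_resolver(
    apiId=api_id,
    typeName='Query',
    fieldName='posts',
    dataSourceName='JsonPlaceholder',
    requestMappingTemplate=request_template,
    responseMappingTemplate=response_template
)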

And we are done! We created an API that uses an existing REST API with AWS AppSync to leverage the power of GraphQL interfaces.


cURL request:
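The original command is not preserved; a sketch, with placeholder endpoint and API key taken from the create_graphql_api and create_api_key responses above:

curl -X POST \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{"query": "query { posts { id title } }"}' \
  https://<your-api-id>.appsync-api.<region>.amazonaws.com/graphql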

Response:
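The GraphQL response wraps the REST payload in a data object, roughly like this:

{
  "data": {
    "posts": [
      { "id": 1, "title": "sunt aut facere repellat provident occaecati excepturi optio reprehenderit" },
      { "id": 2, "title": "qui est esse" }
    ]
  }
}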

PyCon Nove – Speaker

PyCon Nove is the ninth edition of the Italian Python Conference.
The event will take place in Florence, the 19th – 22nd April 2018.

During the event I will be speaking about “Monitoring Python Flask Application Performances with Elasticsearch and Kibana”.

Here you can find the complete schedule: Pycon Nove Schedule and the abstract of my talk.

You can reserve your spot here: Pycon Registration.

PS: after the conference on Friday evening don’t miss the Elastic Meetup!

Hope to see you there 🙂

AWS Transcribe – Use cases

At re:Invent 2017 AWS presented AWS Transcribe (read more about the new AWS announcements here: AWS Comprehend, Translate and Transcribe).

Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech to text capability to their applications. Using the Amazon Transcribe API, you can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech.

Use cases

Amazon Transcribe can be used for lots of common applications, for example:

  • transcription of customer service calls
  • generation of subtitles on audio and video content
  • conversion of audio files (for example podcasts) to text
  • search for keywords or not-safe-for-work words within an audio file

The service is still in preview; watch the launch video here: AWS re:Invent 2017: Introducing Amazon Transcribe
You can read more about it here: Amazon Transcribe – Accurate Speech To Text At Scale.

Here is how to use AWS Transcribe with the Python SDK. Please note that the service is still in preview.
Here you can find the documentation: Boto3 AWS Transcribe.

Initialize the client:
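A minimal sketch; the region and credentials come from your AWS configuration (while the service is in preview, make sure you pick a region where it is available):

import boto3

# Create an Amazon Transcribe client
transcribe_client = boto3.client('transcribe')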

Run the transcribe job:
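A sketch of the job submission; the job name and the S3 URL of the audio file are example values:

# Start an asynchronous transcription job for an MP3 file stored in S3
transcribe_client.start_transcription_job(
    TranscriptionJobName='my-transcription-job',
    LanguageCode='en-US',
    MediaFormat='mp3',
    Media={'MediaFileUri': 'https://s3.amazonaws.com/my-bucket/my-audio-file.mp3'}
)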

Check the job status and print the result:
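A sketch that polls the job and, once it has completed, downloads the transcript JSON from the URI returned by the service (urllib is used here just to fetch that file):

import json
import time
import urllib.request

# Wait for the job to finish
while True:
    job = transcribe_client.get_transcription_job(TranscriptionJobName='my-transcription-job')
    status = job['TranscriptionJob']['TranscriptionJobStatus']
    if status in ('COMPLETED', 'FAILED'):
        break
    time.sleep(5)

# Download the transcript file and print the transcribed text
if status == 'COMPLETED':
    transcript_uri = job['TranscriptionJob']['Transcript']['TranscriptFileUri']
    with urllib.request.urlopen(transcript_uri) as response:
        transcript = json.loads(response.read())
    print(transcript['results']['transcripts'][0]['transcript'])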

Here is what the output looks like:
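The transcript file follows this structure (the values are illustrative, not real output):

{
  "jobName": "my-transcription-job",
  "results": {
    "transcripts": [
      { "transcript": "hello this is a test recording" }
    ],
    "items": [
      {
        "start_time": "0.000",
        "end_time": "0.370",
        "alternatives": [ { "confidence": "0.98", "content": "hello" } ],
        "type": "pronunciation"
      }
    ]
  },
  "status": "COMPLETED"
}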

Soon I am going to use AWS Transcribe to build a speech-to-text recognition system that will process customer care recordings in order to:

  • convert speech to text (using AWS Transcribe)
  • extract entities, sentiment and key phrases from the text (using AWS Comprehend)
  • index the result to Elasticsearch for fast search

Amazon SQS FIFO

Amazon SQS is a distributed queue system that enables web service applications to quickly and reliably queue messages that one component in the application generates to be consumed by another component. Since the end of 2016 you can also create FIFO queues.
FIFO queues are designed to ensure that the order in which messages are sent and received is strictly preserved and that each message is processed exactly once.
Here you can read more about the release of SQS FIFO:

These pictures, found online, clearly explain the differences between a standard SQS queue and a FIFO SQS queue:

sqs_standard
sqs_fifo

In this post we are going to see how to create a SQS FIFO queue, how to send some messages and how to consume them using Python. I am using Python 3.4 and the official AWS SDK: Boto3.

Connect to the sqs resource and create a new FIFO queue:
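A sketch with Boto3; the queue name is an example value (note the mandatory .fifo suffix and the region, as explained below):

import boto3

# Connect to the SQS resource in a region where FIFO queues are available
sqs = boto3.resource('sqs', region_name='us-west-2')

# Create the FIFO queue; the name must end with the .fifo suffix
queue = sqs.create_queue(
    QueueName='my-queue.fifo',
    Attributes={
        'FifoQueue': 'true',
        'ContentBasedDeduplication': 'true'
    }
)
print(queue.url)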

Please be aware that the name of a FIFO queue must end with the suffix .fifo and that FIFO queues are currently available only in the Oregon (us-west-2) and Ohio (us-east-2) regions. This feature will be available in more regions in the coming months.

The 'ContentBasedDeduplication': 'true' attribute can be used when the messages are unique (usually a single producer and a single consumer). Here you can read more about content-based deduplication in FIFO queues: Recommendations for FIFO Queues

To send a message, use the send_message method:
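A sketch that sends a few numbered messages; the message group id group1 is an example value:

# With ContentBasedDeduplication enabled, no explicit MessageDeduplicationId is needed
# as long as the message bodies are unique
for i in range(10):
    queue.send_message(
        MessageBody='message number {}'.format(i),
        MessageGroupId='group1'
    )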

In FIFO queues, messages are ordered based on message group ID. FIFO queue logic applies only per message group ID. Each message group ID represents a distinct ordered message group within an Amazon SQS queue.

To consume the messages in queue use the receive_message method:
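A sketch that uses the low-level client's receive_message call and deletes each message after processing it (deleting the message is what marks it as consumed):

sqs_client = boto3.client('sqs', region_name='us-west-2')

response = sqs_client.receive_message(
    QueueUrl=queue.url,
    MaxNumberOfMessages=10
)
for message in response.get('Messages', []):
    print(message['Body'])
    # Delete the message so it is not delivered again
    sqs_client.delete_message(
        QueueUrl=queue.url,
        ReceiptHandle=message['ReceiptHandle']
    )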

Use the following lines to test that the messages are processed first-in, first-out:
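A sketch of a small end-to-end test: send numbered messages, then drain the queue and print the bodies; on a FIFO queue the output preserves the send order:

# Produce numbered messages
for i in range(5):
    queue.send_message(MessageBody='message {}'.format(i), MessageGroupId='group1')

# Consume all messages and print their bodies
while True:
    response = sqs_client.receive_message(QueueUrl=queue.url, MaxNumberOfMessages=10)
    messages = response.get('Messages', [])
    if not messages:
        break
    for message in messages:
        print(message['Body'])
        sqs_client.delete_message(QueueUrl=queue.url, ReceiptHandle=message['ReceiptHandle'])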

You can run the code against a standard SQS queue and you will see that the messages are not consumed in order.
If your applications require messages to be processed in a strict sequence and exactly once, you can use SQS FIFO.
Consider using SQS (either standard or FIFO) when:

when_use_sqs

Here you can find the code shown in this post: mz1991/AWS SQS FIFO

Here are some useful resources:

Create Amazon Lightsail instance using Python

At the end of November (during the re:Invent 2016 event) AWS launched a new service: Amazon Lightsail.
Amazon Lightsail is the easiest way to launch and manage a virtual private server with AWS. With a couple of clicks you can launch a virtual machine pre-configured with SSD storage, DNS management, and a static IP address.
Today you can launch the following operating systems:

  • Amazon Linux AMI
  • Ubuntu

or the following developer stacks:

  • LAMP
  • LEMP
  • MEAN
  • Node.js

or the following applications:

  • Drupal
  • Joomla
  • Redmine
  • GitLab

The following instance plans are available:

lightsail_instance_type

In this post we are going to see how to launch a new Lightsail (MEAN stack) instance using the Python SDK.
If you want to read more about Amazon Lightsail, take a look at the following resource:

We are going to use Python 3.5 and the official Amazon SDK: Boto3 (Boto3 Documentation).
Define a new client for the Lightsail service:
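A minimal sketch; as noted below, at the time of writing the service is only available in N. Virginia (us-east-1):

import boto3

# Create a Lightsail client in the only region where the service is currently available
lightsail_client = boto3.client('lightsail', region_name='us-east-1')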

Notice that Lightsail is available only in N. Virginia: Regions and Endpoints – Lightsail

We can now list and print all the available blueprints:
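A sketch that prints the id, name and type of each available blueprint:

# Retrieve the available blueprints (operating systems, developer stacks and applications)
blueprints = lightsail_client.get_blueprints()
for blueprint in blueprints['blueprints']:
    print(blueprint['blueprintId'], blueprint['name'], blueprint['type'])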

The output will look like this (we need the blueprint id to launch a new instance):

blueprints
Besides the blueprint, to launch a new instance we also have to specify the instance type. We can get the available bundles using the get_bundles method:
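A sketch that prints, for each bundle, the id together with its CPU count, RAM and monthly price:

# Retrieve the available bundles (instance plans)
bundles = lightsail_client.get_bundles()
for bundle in bundles['bundles']:
    print(bundle['bundleId'], bundle['cpuCount'], bundle['ramSizeInGb'], bundle['price'])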

For each bundle, this is the available information (we need the bundleId):

We can now launch a new Lightsail instance (in the example, a MEAN developer stack on a nano instance):
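A sketch of the launch call; the instance name, availability zone, blueprint id and bundle id are example values; check your own get_blueprints and get_bundles output for the exact ids:

# Launch a nano instance running the MEAN developer stack
lightsail_client.create_instances(
    instanceNames=['my-mean-instance'],
    availabilityZone='us-east-1a',
    blueprintId='mean_4_6_0',  # assumed MEAN blueprint id, take it from get_blueprints
    bundleId='nano_1_0'        # assumed nano plan id, take it from get_bundles
)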

From the Lightsail dashboard we can follow the launch process:

lightsail_generating
Once the instance is up and running we can connect using SSH (if not specified, the default account key-pair will be used) and deploy our MEAN application.

lightsail_running
Check out the public IP of your instance to see the default Bitnami MEAN Stack page.

lightsail_homepage
In this post we saw how to use the AWS API to deploy a MEAN developer stack running in a Lightsail virtual private server.
Lightsail is a new service and is evolving quickly. I suggest you take a look at it because it is a super-fast way to launch a development stack for easy deployment (when you do not need high performance and scalability).