ASP.NET Core + Azure Text Analysis + AWS Text Analysis – Twitch Live Stream

I will be live on Twitch at 20:00 on Wednesday, the 28th of March, with Emanuele Bartolesi. We will be talking about ASP.NET Core and text analysis on AWS and Azure.

Here is the link to the Twitch stream: Twitch Stream.
After the live stream we will upload the video to YouTube.

Hope to see you there!

Extract business insights from audio using AWS Transcribe, AWS Comprehend and Elasticsearch – Part 1 and 2 – Skedler Blog

I’ve just published a new blog post on the Skedler Blog.
In this two-part blog post, we present a system architecture that converts audio and voice into written text with AWS Transcribe, extracts useful information for a quick understanding of the content with AWS Comprehend, indexes this information in Elasticsearch 6.2 for fast search, and visualizes the data with Kibana 6.2. In Part 1, you can learn about the key components, architecture, and common use cases. In Part 2, you can learn how to implement this architecture.

The components that we are going to use are the following:

  • AWS S3 bucket
  • AWS Transcribe
  • AWS Comprehend
  • Elasticsearch 6.2
  • Kibana 6.2
  • Skedler Reports and Alerts

System architecture: see the diagram in the full post.

You can read the full post – Part 1 – here: Extract business insights from audio using AWS Transcribe, AWS Comprehend and Elasticsearch – Part 1.

Part 2 – here: Extract business insights from audio using AWS Transcribe, AWS Comprehend and Elasticsearch – Part 2.

Please share the post and let me know your feedback.

PyCon Nove – Speaker

PyCon Nove is the ninth edition of the Italian Python Conference.
The event will take place in Florence from the 19th to the 22nd of April 2018.

During the event I will be speaking about “Monitoring Python Flask Application Performance with Elasticsearch and Kibana”.

Here you can find the complete schedule and the abstract of my talk: PyCon Nove Schedule.

You can reserve your spot here: PyCon Registration.

PS: after the conference, on Friday evening, don’t miss the Elastic Meetup!

Hope to see you there 🙂

AWS Transcribe – Use cases

At re:Invent 2017, AWS presented AWS Transcribe (read more about the new AWS announcements here: AWS Comprehend, Translate and Transcribe).

Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to their applications. Using the Amazon Transcribe API, you can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech.

Use cases

Amazon Transcribe can be used for lots of common applications, for example:

  • transcription of customer service calls
  • generation of subtitles for audio and video content
  • conversion of audio files (for example, podcasts) to text
  • search for keywords or not-safe-for-work words within an audio file

The service is still in preview; watch the launch video here: AWS re:Invent 2017: Introducing Amazon Transcribe.
You can read more about it here: Amazon Transcribe – Accurate Speech To Text At Scale.

Here is how to use AWS Transcribe with the Python SDK (boto3). Please note that the service is still in preview.
You can find the documentation here: Boto3 AWS Transcribe.

Initialize the client:
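A minimal sketch, assuming boto3 is installed and your AWS credentials are configured (the region name below is an assumption):

    import boto3

    # Create the Transcribe client in a region where the preview is available
    transcribe = boto3.client('transcribe', region_name='us-east-1')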

Run the transcribe job:
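Assuming an MP3 file already uploaded to S3 (the job, bucket and file names are placeholders):

    # Start an asynchronous transcription job for an audio file stored in S3
    transcribe.start_transcription_job(
        TranscriptionJobName='my-transcribe-job',
        LanguageCode='en-US',
        MediaFormat='mp3',
        Media={'MediaFileUri': 'https://s3.amazonaws.com/my-bucket/my-audio.mp3'}
    )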

Check the job status and print the result:
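A simple polling loop (in a real application you would add a timeout); the transcript itself is a JSON file that the service exposes through a URI:

    import json
    import time
    import urllib.request

    # Poll until the job either completes or fails
    while True:
        status = transcribe.get_transcription_job(
            TranscriptionJobName='my-transcribe-job')
        if status['TranscriptionJob']['TranscriptionJobStatus'] in ('COMPLETED', 'FAILED'):
            break
        time.sleep(5)

    # Download the transcript JSON and print the transcribed text
    uri = status['TranscriptionJob']['Transcript']['TranscriptFileUri']
    with urllib.request.urlopen(uri) as response:
        result = json.loads(response.read().decode('utf-8'))
    print(result['results']['transcripts'][0]['transcript'])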

Here is how the output looks:
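An abridged example of the returned JSON (the values are illustrative):

    {
      "jobName": "my-transcribe-job",
      "accountId": "123456789012",
      "status": "COMPLETED",
      "results": {
        "transcripts": [
          {"transcript": "hello and welcome to the show"}
        ],
        "items": [
          {
            "start_time": "0.000",
            "end_time": "0.370",
            "alternatives": [{"confidence": "0.98", "content": "hello"}],
            "type": "pronunciation"
          }
        ]
      }
    }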

Soon I am going to use AWS Transcribe to build a speech-to-text recognition system that will process customer care recordings (see the sketch after the list) in order to:

  • convert speech to text (using AWS Transcribe)
  • extract entities, sentiment and key phrases from the text (using AWS Comprehend)
  • index the results in Elasticsearch for fast search
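Here is a rough sketch of that pipeline, assuming the transcript text obtained above, boto3 credentials for Comprehend, and a local Elasticsearch 6.2 node (the index name is a placeholder):

    import boto3
    from elasticsearch import Elasticsearch

    comprehend = boto3.client('comprehend', region_name='us-east-1')
    es = Elasticsearch(['http://localhost:9200'])

    # The transcript produced by AWS Transcribe (placeholder text)
    text = 'hello and welcome to the show'

    # Enrich the transcript with entities, sentiment and key phrases
    doc = {
        'text': text,
        'entities': comprehend.detect_entities(Text=text, LanguageCode='en')['Entities'],
        'sentiment': comprehend.detect_sentiment(Text=text, LanguageCode='en')['Sentiment'],
        'key_phrases': comprehend.detect_key_phrases(Text=text, LanguageCode='en')['KeyPhrases'],
    }

    # Index the enriched document in Elasticsearch for fast search
    es.index(index='transcripts', doc_type='_doc', body=doc)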

Application Performance Monitoring (APM) with Elasticsearch 6.1.1

In June 2017, Elastic joined forces with Opbeat, an application performance monitoring (APM) company. Read the official blog post here: Welcome Opbeat to the Elastic Family.

Adding APM (Application Performance Monitoring) to the Elastic Stack is a natural next step in providing our users with end-to-end monitoring, from logging, to server-level metrics, to application-level metrics, all the way to the end-user experience in the browser or client.

Elastic APM consists of three components:

  • Agents: libraries that run inside your application process and automatically measure the duration of requests to your service, as well as things like database queries, cache calls, external HTTP requests and errors
  • APM Server: a Go application that processes data from the agents and stores it in Elasticsearch
  • Kibana UI: dashboards that give you an instant overview of application response times, requests per minute, error occurrences and more

The APM Server and the agents (right now available only for Python and Node.js) are open source.

Read more about it here: Starting Down the Path of APM for the Elastic Stack.

In this post we are not going to cover how to install the APM Server; you can find the instructions here: Open Source Application Performance Monitoring.

Once the APM Server is installed and started, we can monitor the performance of our application. In this example we will use a Python Flask application.

Install the Python APM library:
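Assuming the elastic-apm package with its Flask extras:

    pip install "elastic-apm[flask]"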

Initialize the client:
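A minimal sketch, assuming a recent version of the elastic-apm agent (the service name and the APM Server URL are placeholders; older agent releases used slightly different configuration keys):

    from flask import Flask
    from elasticapm.contrib.flask import ElasticAPM

    app = Flask(__name__)

    # Point the agent at your APM Server
    app.config['ELASTIC_APM'] = {
        'SERVICE_NAME': 'flask-demo-app',
        'SERVER_URL': 'http://localhost:8200',
    }

    apm = ElasticAPM(app)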

Within the Flask route you can log some messages:
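For example (the route is a placeholder):

    @app.route('/hello')
    def hello():
        # Send a custom message to the APM Server along with the request data
        apm.capture_message('hello endpoint called')
        return 'Hello!'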

or exceptions:
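A sketch with a handled exception; note that unhandled exceptions in Flask views are reported by the agent automatically:

    @app.route('/boom')
    def boom():
        try:
            1 / 0
        except ZeroDivisionError:
            # Report the handled exception, with its traceback, to the APM Server
            apm.capture_exception()
        return 'Something went wrong', 500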

Here is how the monitoring looks in Kibana:

You can see the details of each request by clicking on it:


I really like the APM feature, fully integrated with the Elastic Stack, and I am going to add it to my Flask/Django applications.
If you want to read more about this topic: Application Performance Monitoring with Elasticsearch 6.1, Kibana and Skedler Alerts.