Amazon Web Services ML Engineering Tutorial

Achieve a machine learning solution with Amazon Web Services.
AWS Training Partner
Private session

This training is available in a private or personalized format. It can be provided in one of our training centres or at your offices. Call one of our consultants or submit a request online.

Call now at 1 877 624.2344

  • Duration: 3 days
  • Regular price: On request

Course outline


In each section, we go through the cloud provider’s services needed to achieve a successful ML solution, with thoughtful consideration of the data and the application, since data is at the core of every ML project:

  1. Introduction to ML;
  2. Analyze the requirements;
  3. Implement (2 use case scenarios);
  4. Deploy.

We further analyze the different services offered by the cloud provider and present a way to use them together to achieve a viable ML solution.

Audience

Developers with an interest in machine learning and Amazon Web Services

Prerequisites

  • Python programming
  • Hands-on experience with the Amazon Web Services platform (nice to have)

Contents

Introduction to ML
We define what constitutes an ML project, as well as its requirements in terms of data and applications.
To this end, we’ll go through a series of concrete use cases that illustrate the need for ML as a solution.
Analyze the requirements
This section focuses on the computational infrastructure needed to get an ML project up to speed, as well as the data requirements. We first present data storage and annotation requirements, then follow up with compute power and cost estimation. These requirements inevitably change depending on the ML problem and the nature of the data, e.g. sensors, text, images, videos.
Data Storage
Every ML project relies on data, and that data needs to be stored somewhere. In this section, we list the types of data involved in an ML project:
1. The actual data we need to act on: images if we are doing object detection, text if we are classifying documents.
2. Models: the trained models need to be stored and versioned in order to query them.
3. Predictions: monitoring models’ predictions is too often left undone. The predictions should be stored and monitored in order to analyze the performance of a deployed model.
Services
Amazon Web Services offers several storage services at different granularities, each with its own use cases:
● EFS
● S3
● Glacier
● Relational and NoSQL databases;
○ RDS
○ ElastiCache
○ Timestream
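As a minimal sketch of how the three kinds of data listed above could be laid out, assuming a hypothetical S3 bucket and boto3 credentials already configured (bucket name, keys, and file names are illustrative):

```python
import json
import boto3  # AWS SDK for Python; assumed to be configured with valid credentials

s3 = boto3.client("s3")
BUCKET = "my-ml-project-bucket"  # hypothetical bucket name

# 1. Raw data: upload an image we want to run object detection on.
s3.upload_file("local/images/cat_001.jpg", BUCKET, "raw/images/cat_001.jpg")

# 2. Models: store a trained, versioned model artifact so it can be queried later.
s3.upload_file("model.tar.gz", BUCKET, "models/object-detector/v3/model.tar.gz")

# 3. Predictions: log each prediction so the deployed model can be monitored.
prediction = {"image": "raw/images/cat_001.jpg", "label": "cat", "score": 0.97}
s3.put_object(
    Bucket=BUCKET,
    Key="predictions/object-detector/v3/cat_001.json",
    Body=json.dumps(prediction),
)
```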
Data Annotation
Having data for an ML solution is not enough; it has to be good data. To achieve this, it is important to consider annotation and verification tools. Again, these differ depending on the nature of the data, and we will walk through a series of examples (text, images, etc.).
Services
● SageMaker Ground Truth, with human-in-the-loop annotation
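As an illustration of the human-in-the-loop workflow, a hedged sketch of preparing the input manifest that a SageMaker Ground Truth labeling job consumes; the bucket and prefixes are hypothetical, and each manifest line points at one object for the human workforce to annotate:

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-ml-project-bucket"  # hypothetical bucket

# List the raw images that still need labels.
objects = s3.list_objects_v2(Bucket=BUCKET, Prefix="raw/images/")

# Ground Truth expects a JSON Lines manifest where each line references one data object.
manifest_lines = [
    json.dumps({"source-ref": f"s3://{BUCKET}/{obj['Key']}"})
    for obj in objects.get("Contents", [])
]

# Upload the manifest; it becomes the input to a human-in-the-loop labeling job.
s3.put_object(
    Bucket=BUCKET,
    Key="annotation/input.manifest",
    Body="\n".join(manifest_lines),
)
```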
Computational Requirements
Training ML models is very different from deploying them. We look into the different services that suit the needs of an ML solution’s lifecycle.
Services
● EC2
● Batch
● Lambda
● Cost Explorer & Budgets
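Before committing to instance types, a back-of-the-envelope cost estimate helps scope the budget that Cost Explorer and Budgets will later track; the hourly rates below are illustrative placeholders, not current AWS pricing:

```python
# Back-of-the-envelope training cost estimate; the hourly rates below are
# illustrative placeholders, not current AWS pricing.
HOURLY_RATE = {
    "p3.2xlarge (GPU training)": 3.06,
    "m5.xlarge (CPU preprocessing)": 0.192,
}

def estimate_cost(instance: str, hours: float, count: int = 1) -> float:
    """Cost of running `count` instances of `instance` for `hours` hours."""
    return HOURLY_RATE[instance] * hours * count

training = estimate_cost("p3.2xlarge (GPU training)", hours=8)
preprocessing = estimate_cost("m5.xlarge (CPU preprocessing)", hours=2, count=4)
print(f"Estimated cost for one experiment: ${training + preprocessing:.2f}")
```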
Implement the solution
Given a scoped ML project, we explore which of the provider’s services are available to implement the solution, i.e. use their ready-made models or annotated data, or train new models to meet the requirements.
We’ll walk through and analyze the tools for two different projects:
1. Document Retrieval
2. Image Classification
Ready-Made Solutions
Services
● AWS Natural Language Processing
● AWS Image Classification/Object Detection
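The outline names these capabilities generically; as an assumption, they map to managed services such as Amazon Comprehend (natural language processing) and Amazon Rekognition (image analysis and object detection). A minimal sketch of calling both through boto3, with hypothetical text and S3 locations:

```python
import boto3

comprehend = boto3.client("comprehend")
rekognition = boto3.client("rekognition")

# Document retrieval / NLP use case: extract sentiment and key phrases from a text.
text = "The new invoice workflow saves our team hours every week."
sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
phrases = comprehend.detect_key_phrases(Text=text, LanguageCode="en")
print(sentiment["Sentiment"], [p["Text"] for p in phrases["KeyPhrases"]])

# Image classification / object detection use case: label an image stored in S3.
labels = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-ml-project-bucket", "Name": "raw/images/cat_001.jpg"}},
    MaxLabels=5,
)
print([label["Name"] for label in labels["Labels"]])
```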
Custom Solutions
Custom solutions require custom tools, and we present how to use them together to achieve a viable ML solution, again in two different ML applications.
Services
● SageMaker
● EC2
● S3
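As a hedged sketch of how these three services fit together on the custom path (IAM role, container image, and S3 paths are all hypothetical): annotated data sits in S3, a SageMaker Estimator runs the training container on managed instances, and the resulting model artifact lands back in S3.

```python
from sagemaker.estimator import Estimator

# Hypothetical IAM role, container image, and S3 locations.
estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training-image:latest",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    instance_count=1,
    instance_type="ml.p3.2xlarge",                     # GPU-backed instance managed by SageMaker
    output_path="s3://my-ml-project-bucket/models/",   # trained artifacts land back in S3
    hyperparameters={"epochs": "10", "batch-size": "32"},
)

# Launch the training job against annotated data already stored in S3.
estimator.fit({"train": "s3://my-ml-project-bucket/annotation/output/"})
```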
Deploy the solution
Deploying an ML solution is very different from training it. It does, however, require some thought: what is the optimal way of serving the model, how often should it be re-trained, how are predictions monitored, and what compute environment is needed?
Most available models are deployable "as-is". However, deploying custom models is more challenging.
Things to take into consideration:
● Serving the application with elastic load balancing;
● Monitoring a model’s predictions;
● Deploying a new model and falling back to a previous version.
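A minimal sketch of these three concerns, under the assumption that the custom model is served from a SageMaker endpoint (endpoint and configuration names are hypothetical): the endpoint distributes traffic across its instances, each prediction is logged for monitoring, and a rollback is an update back to the previous endpoint configuration.

```python
import json
import boto3

sm = boto3.client("sagemaker")
runtime = boto3.client("sagemaker-runtime")
s3 = boto3.client("s3")

ENDPOINT = "object-detector-prod"  # hypothetical endpoint name

# Serve: invoke the live endpoint (SageMaker balances requests across its instances).
response = runtime.invoke_endpoint(
    EndpointName=ENDPOINT,
    ContentType="application/json",
    Body=json.dumps({"s3_uri": "s3://my-ml-project-bucket/raw/images/cat_001.jpg"}),
)
prediction = json.loads(response["Body"].read())

# Monitor: persist every prediction so model performance can be analyzed over time.
s3.put_object(
    Bucket="my-ml-project-bucket",
    Key="predictions/object-detector/prod/cat_001.json",
    Body=json.dumps(prediction),
)

# Fall back: point the endpoint at the previous endpoint configuration (previous model version).
sm.update_endpoint(EndpointName=ENDPOINT, EndpointConfigName="object-detector-config-v2")
```

Keeping one endpoint configuration per model version is what makes the fallback a single call.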

Surround yourself with the best

Pierre-Edouard Brondel
Trainer and Desktop Application Consultant
Renowned as an educational expert in the IT and office technology field who has accumulated more than 25 years of experience, Pierre-Édouard is first and foremost passionate about human capital.
Marc Maisonneuve
Trainer and Professional Efficiency Consultant
Frédéric Paradis
Certified Trainer and Cloud Architect
As a certified Microsoft trainer, Frédéric describes himself as a Cloud magician who easily navigates the mythical space between technology and reality.
Virginie Louis
Efficiency Trainer, Facilitator and Spatial Intelligence Consultant
Virginie sees herself first and foremost as a facilitator: she strays from the standard training to provide solutions that are adapted to her clients’ realities and objectives.