Duration: 3 days
© AFI Expertise inc.
In each section, we go through the cloud provider’s services needed to build a successful ML solution, with careful consideration of the data and the application, since data is at the core of every ML project.
We then analyze the individual services offered by the provider and show how to use them together to achieve a viable ML solution.
Developers with an interest in machine learning and Amazon Web Services
Introduction to ML
We define what constitutes an ML project, as well as the requirements in terms of data and applications.
To this end, we go through a series of concrete use cases that illustrate the need for ML as a solution.
Analyze the requirements
This section covers the computational infrastructure needed to get an ML project up and running, as well as the data requirements. We first present data storage and annotation requirements, then cover compute power and cost estimation. These requirements inevitably vary with the ML problem and the nature of the data, e.g., sensor readings, text, images, or videos.
Every ML project relies on data, and that data needs to be stored somewhere. In this section, we list the types of data involved in an ML project:
1. The actual data we need to act on: images if we are doing object detection, text if we are classifying documents.
2. Models: trained models need to be stored and versioned so they can be queried.
3. Predictions: monitoring a model’s predictions is too often neglected; predictions should be stored and monitored in order to analyze the performance of a deployed model.
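As a minimal sketch of point 3, each prediction can be serialized as a structured record before being written to a store such as S3 or DynamoDB; the field names below are assumptions, not a prescribed schema:

```python
import json
from datetime import datetime, timezone

def prediction_record(model_version, input_id, prediction, confidence):
    """Build a structured record for a single prediction, ready to be
    written to storage for later monitoring. Field names are hypothetical."""
    return {
        "model_version": model_version,
        "input_id": input_id,
        "prediction": prediction,
        "confidence": confidence,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

record = prediction_record("v1.2.0", "img-0042", "cat", 0.93)
print(json.dumps(record))
```

Storing the model version alongside each prediction is what later makes it possible to compare the performance of successive deployments.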
Amazon Web Services offers different storage services, each suited to specific use cases:
● Relational and NoSQL databases;
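As an illustration of storing and versioning model artifacts, the sketch below assumes a hypothetical S3 bucket (`my-ml-bucket`) and key convention; neither is prescribed by AWS:

```python
def model_artifact_key(project: str, model_name: str, version: str) -> str:
    """A hypothetical S3 key convention for versioned model artifacts."""
    return f"{project}/models/{model_name}/{version}/model.tar.gz"

key = model_artifact_key("doc-retrieval", "ranker", "v3")
# With AWS credentials configured, uploading the artifact could look like:
# import boto3
# boto3.client("s3").upload_file("model.tar.gz", "my-ml-bucket", key)
```

Encoding the version in the key keeps every trained model addressable, which is what makes querying (and later rolling back to) a specific version possible.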
Having data for an ML solution is not enough; it has to be good data. To achieve this, it is important to consider annotation and verification tools. Again, this differs depending on the nature of the data, and we will walk through a series of examples (text, images, etc.).
● SageMaker, with human-in-the-loop
Training ML models is very different from deploying them. We look into the different services that suit the needs of an ML solution’s lifecycle.
● Cost Explorer & Budgets
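Cost estimation can be automated against the Cost Explorer API. The sketch below builds the parameters of a `GetCostAndUsage` query grouped by service; the dates are placeholders:

```python
def monthly_cost_request(start: str, end: str) -> dict:
    """Request parameters for Cost Explorer's GetCostAndUsage API,
    returning monthly unblended cost grouped by AWS service."""
    return {
        "TimePeriod": {"Start": start, "End": end},
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
        "GroupBy": [{"Type": "DIMENSION", "Key": "SERVICE"}],
    }

params = monthly_cost_request("2024-01-01", "2024-02-01")
# With credentials configured, the actual call would be:
# import boto3
# response = boto3.client("ce").get_cost_and_usage(**params)
```

Grouping by service makes it easy to see how much of a project’s budget goes to training (e.g. SageMaker) versus storage (e.g. S3).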
Implement the solution
Given a scoped ML project, we explore which of the provider’s services can be used to implement the solution, i.e., using their available models or annotated data, or training new models to meet the requirements.
We walk through and analyze the tools for two different projects:
1. Document Retrieval
2. Image Classification
● AWS Natural Language Processing
● AWS Image Classification/Object Detection
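As a taste of the image classification side, the sketch below filters a Rekognition `detect_labels`-style response by confidence; the sample response and the bucket name in the comment are made up:

```python
def confident_labels(response: dict, threshold: float = 90.0) -> list:
    """Keep only label names above a confidence threshold from a
    Rekognition detect_labels-style response."""
    return [label["Name"]
            for label in response.get("Labels", [])
            if label["Confidence"] >= threshold]

# A made-up response in the shape Rekognition returns:
sample = {"Labels": [{"Name": "Dog", "Confidence": 98.1},
                     {"Name": "Pet", "Confidence": 97.4},
                     {"Name": "Sofa", "Confidence": 61.0}]}
print(confident_labels(sample))  # ['Dog', 'Pet']

# With credentials configured, the real call would look like:
# import boto3
# response = boto3.client("rekognition").detect_labels(
#     Image={"S3Object": {"Bucket": "my-bucket", "Name": "photo.jpg"}})
```

Thresholding on confidence is a common first post-processing step before a managed model’s output is used downstream.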
Custom solutions require custom tools, and we present how to use them together to achieve a viable ML solution, again across two different ML applications.
Deploy the solution
Deploying an ML solution is very different from training it, and it requires careful thought: what is the optimal way of serving the model, how frequently should it be retrained, how are predictions monitored, and what compute environment is needed?
Most available models are deployable “as-is”. However, deploying custom models is more challenging.
Things to take into consideration:
● Serving the application with elastic load balancing;
● Monitoring a model’s predictions;
● Deploying a new model and falling back to a previous version.
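The fallback scenario above can be sketched with SageMaker’s `UpdateEndpoint` call, which points a live endpoint back at a previously known-good endpoint configuration; the endpoint and configuration names below are hypothetical:

```python
def rollback_params(endpoint_name: str, previous_config_name: str) -> dict:
    """Parameters for SageMaker's UpdateEndpoint call, rolling a live
    endpoint back to an earlier endpoint configuration."""
    return {
        "EndpointName": endpoint_name,
        "EndpointConfigName": previous_config_name,
    }

plan = rollback_params("doc-retrieval-prod", "doc-retrieval-config-v1")
# With credentials configured, the rollback itself would be:
# import boto3
# boto3.client("sagemaker").update_endpoint(**plan)
```

Because each endpoint configuration is an immutable, named resource, keeping the previous configuration around is what makes this kind of rollback a one-call operation.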