Operationalize machine learning and generative AI solutions (AI-300T00) - Training Courses | Afi U.

Operationalize machine learning and generative AI solutions (AI-300T00)

Azure MLOps and GenAIOps training to design, deploy, and operate production-ready AI and machine learning solutions using Azure ML and Microsoft Foundry.


  • Duration: 4 days
  • Regular price: $3,495
  • Preferential price: $2,970

Course outline

© AFI Expertise inc.

This course prepares learners to design, implement, and operate Machine Learning Operations (MLOps) and Generative AI Operations (GenAIOps) solutions on Azure. It covers building secure and scalable AI infrastructure, managing the full lifecycle of traditional machine learning models with Azure Machine Learning, and deploying, evaluating, monitoring, and optimizing generative AI applications and agents using Microsoft Foundry. Learners will gain hands-on knowledge of automation, continuous integration and delivery, infrastructure as code, and observability by using tools such as GitHub Actions, Azure CLI, and Bicep. The course emphasizes collaboration with data science and DevOps teams to deliver reliable, production-ready AI systems aligned with modern MLOps and GenAIOps best practices.

Audience

This course is intended for data scientists, machine learning engineers, and DevOps professionals who want to design and operate production-grade AI solutions on Azure. It is suited for learners with experience in Python, a foundational understanding of machine learning concepts, and basic familiarity with DevOps practices such as source control, CI/CD, and command-line tools, who are preparing to implement MLOps and GenAIOps workflows using Azure-native services.

Prerequisites

Data scientists, machine learning engineers, and technical professionals responsible for deploying, automating, and operating machine learning and generative AI solutions on Azure.

Objectives

  • Design and run machine learning experiments using Azure Machine Learning, including AutoML
  • Preprocess data, configure featurization, and evaluate, compare, and track models using MLflow and the Responsible AI dashboard
  • Optimize models through hyperparameter tuning and the use of sweep jobs
  • Design, run, and automate machine learning pipelines in Azure Machine Learning
  • Apply MLOps best practices by automating training, evaluation, and deployment with GitHub Actions
  • Deploy machine learning models in controlled, reproducible environments
  • Plan and implement a GenAIOps approach for generative AI applications
  • Select and compare language models based on real-world use cases
  • Manage, version, and safely deploy prompts and AI agents using Microsoft Foundry and GitHub
  • Evaluate, compare, and optimize generative AI agents using structured and automated experiments
  • Implement automated AI evaluations aligned with human-defined criteria
  • Monitor, analyze, and improve generative AI applications using monitoring and advanced tracing
  • Diagnose and debug complex generative AI workflows using trace data

Teaching method

Training delivered by a Microsoft Certified Trainer (MCT)

Contents

Experiment with Azure Machine Learning

  • Preprocess data and configure featurization
  • Run an automated machine learning (AutoML) experiment
  • Evaluate and compare models
  • Configure MLflow for model tracking in notebooks
  • Train and track models in notebooks
  • Evaluate models using the Responsible AI dashboard
  • Exercise – Find the best classification model with Azure Machine Learning
  • Module assessment
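
The tracking-and-comparison idea in this module can be sketched without any Azure dependency: record each run's parameters and metric, then select the best. This is a stdlib-only illustration of the pattern; the module itself uses MLflow inside Azure Machine Learning, and the model names and scores below are fabricated examples.

```python
# Illustrative experiment-tracking sketch (the real course uses MLflow;
# models, params, and accuracies here are made up for demonstration).

runs = []

def log_run(model_name, params, accuracy):
    """Record one training run, analogous to logging params and metrics."""
    runs.append({"model": model_name, "params": params, "accuracy": accuracy})

log_run("logistic_regression", {"C": 1.0}, 0.86)
log_run("random_forest", {"n_estimators": 200}, 0.91)
log_run("gradient_boosting", {"lr": 0.1}, 0.89)

# Compare tracked runs and pick the best by the primary metric.
best = max(runs, key=lambda r: r["accuracy"])
print(best["model"])
```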

Perform hyperparameter tuning with Azure Machine Learning

  • Define a search space
  • Configure a sampling method
  • Configure early termination
  • Use a sweep job for hyperparameter tuning
  • Exercise – Run a sweep job
  • Module assessment
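
The three sweep concepts above (a search space, a sampling method, early termination) can be sketched in plain Python. This is not the Azure ML sweep-job API; `train` and the metric are stand-ins, and the bandit check only mimics the effect of early termination.

```python
import random

# Hypothetical search space, analogous to Choice/Uniform in a sweep job.
search_space = {
    "learning_rate": lambda: 10 ** random.uniform(-4, -1),
    "batch_size": lambda: random.choice([16, 32, 64]),
}

def train(params):
    """Stand-in for a training trial; returns a mock primary metric."""
    return 1.0 - abs(params["learning_rate"] - 0.01) - 0.001 * params["batch_size"]

def random_sweep(max_trials=20, slack=0.1):
    """Random sampling with a bandit-style early-termination analogue."""
    best_params, best_metric = None, float("-inf")
    for _ in range(max_trials):
        params = {name: sample() for name, sample in search_space.items()}
        metric = train(params)
        # Bandit-policy analogue: discard trials far below the best so far.
        if best_metric > float("-inf") and metric < best_metric - slack:
            continue
        if metric > best_metric:
            best_params, best_metric = params, metric
    return best_params, best_metric

random.seed(0)
params, metric = random_sweep()
print(params, metric)
```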

Run pipelines in Azure Machine Learning

  • Create components
  • Create a pipeline
  • Run a pipeline job
  • Exercise – Run a pipeline job
  • Module assessment
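
The component/pipeline relationship this module covers can be pictured as functions wired output-to-input. The sketch below is not the Azure ML SDK; each "component" is just a function, and the "pipeline" runs them in dependency order, which is the shape a pipeline job gives you at scale.

```python
# Illustrative component/pipeline sketch (not the Azure ML SDK): each
# component is a function, and the pipeline chains outputs to inputs.

def prep_data(raw):
    """Component 1: normalize raw values to [0, 1]."""
    lo, hi = min(raw), max(raw)
    return [(x - lo) / (hi - lo) for x in raw]

def train_model(data):
    """Component 2: 'train' a trivial model (here, the mean)."""
    return sum(data) / len(data)

def evaluate(model, data):
    """Component 3: mean absolute error against the 'model'."""
    return sum(abs(x - model) for x in data) / len(data)

def pipeline(raw):
    """Pipeline: run components in order, passing outputs downstream."""
    data = prep_data(raw)
    model = train_model(data)
    return evaluate(model, data)

score = pipeline([3, 7, 5, 9, 1])
print(score)
```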

Trigger Azure Machine Learning jobs with GitHub Actions

  • Understand the business problem
  • Explore the solution architecture
  • Use GitHub Actions for model training
  • Exercise
  • Module assessment
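
A workflow of the kind this module builds might look like the following sketch. The file paths, resource placeholders, and the `AZURE_CREDENTIALS` secret name are illustrative assumptions, not the course's exact materials.

```yaml
# .github/workflows/train.yml — illustrative sketch only.
name: train-model
on:
  push:
    branches: [main]
jobs:
  train:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v2
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Submit Azure ML job
        run: |
          az extension add --name ml
          az ml job create --file src/job.yml \
            --resource-group <resource-group> --workspace-name <workspace>
```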

Trigger GitHub Actions with feature-based development

  • Understand the business problem
  • Explore the solution architecture
  • Trigger a workflow
  • Exercise
  • Module assessment

Work with environments in GitHub Actions

  • Understand the business problem
  • Explore the solution architecture
  • Set up environments
  • Exercise
  • Module assessment

Deploy a model with GitHub Actions

  • Understand the business problem
  • Explore the solution architecture
  • Model deployment
  • Exercise
  • Module assessment

Plan and prepare a GenAIOps solution

  • Explore GenAIOps use cases
  • Select the right generative AI model
  • Understand the development lifecycle of a language model application
  • Explore available tools and frameworks to implement GenAIOps
  • Exercise – Compare language models from the model catalog
  • Module assessment

Manage prompts for agents in Microsoft Foundry with GitHub

  • Apply version control to prompts
  • Understand Microsoft Foundry agents and prompt versioning
  • Organize prompts in GitHub repositories
  • Develop safe prompt deployment workflows
  • Exercise – Develop prompt and agent versions
  • Knowledge check
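
The versioning idea in this module — prompts as immutable, numbered artifacts that deployments can pin — can be sketched as a tiny registry. This is an illustration only; Microsoft Foundry and GitHub provide the real storage, review, and deployment workflow, and the prompt name below is hypothetical.

```python
# Minimal prompt-version registry sketch (illustrative; not a Foundry API).

class PromptRegistry:
    def __init__(self):
        self._store = {}

    def register(self, name, text):
        """Append a new immutable version and return its version number."""
        versions = self._store.setdefault(name, [])
        versions.append(text)
        return len(versions)  # versions are 1-indexed

    def get(self, name, version=None):
        """Fetch a pinned version, or the latest when none is given."""
        versions = self._store[name]
        return versions[-1] if version is None else versions[version - 1]

reg = PromptRegistry()
v1 = reg.register("support-agent", "You are a helpful support agent.")
v2 = reg.register("support-agent", "You are a concise, polite support agent.")
print(v2, reg.get("support-agent", version=1))
```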

Evaluate and optimize AI agents through structured experiments

  • Design evaluation experiments
  • Apply Git-based workflows to optimization experiments
  • Apply evaluation rubrics for consistent scoring
  • Exercise – Evaluate and compare AI agent versions
  • Knowledge check
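
A rubric applied "for consistent scoring" means the same weighted checks run on every response. The criteria, weights, and sample responses below are fabricated for illustration; real evaluations in this module use configurable evaluators rather than keyword checks.

```python
# Sketch of rubric-based scoring for agent responses (illustrative only).

RUBRIC = {
    "mentions_refund_policy": (0.5, lambda r: "refund" in r.lower()),
    "polite_tone": (0.3, lambda r: "please" in r.lower() or "thank" in r.lower()),
    "under_length_limit": (0.2, lambda r: len(r) <= 200),
}

def score(response):
    """Weighted rubric score in [0, 1], applied identically to every response."""
    return sum(weight for weight, check in RUBRIC.values() if check(response))

a = score("Thank you! Our refund policy allows returns within 30 days.")
b = score("No.")
print(a, b)
```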

Automate AI evaluations with Microsoft Foundry and GitHub Actions

  • Understand why automated evaluations matter
  • Align evaluators with human criteria
  • Create evaluation datasets
  • Implement batch evaluations with Python
  • Integrate evaluations into GitHub Actions
  • Exercise – Set up automated evaluations
  • Knowledge check
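
The batch-evaluation-in-CI pattern above can be sketched as: run an evaluator over a dataset, aggregate, and gate on a threshold, which is what a GitHub Actions step would enforce. The dataset, `agent` stand-in, and threshold are illustrative assumptions.

```python
# Batch-evaluation sketch with a CI-style pass/fail gate (illustrative).

dataset = [
    {"input": "2+2", "expected": "4"},
    {"input": "capital of France", "expected": "Paris"},
]

def agent(prompt):
    """Stand-in for the generative AI application under test."""
    answers = {"2+2": "4", "capital of France": "Paris"}
    return answers.get(prompt, "I don't know")

def batch_evaluate(data, threshold=0.9):
    """Score every row, then gate on aggregate accuracy as CI would."""
    correct = sum(agent(row["input"]) == row["expected"] for row in data)
    accuracy = correct / len(data)
    return accuracy, accuracy >= threshold

accuracy, passed = batch_evaluate(dataset)
print(accuracy, passed)
```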

Monitor your generative AI application

  • Why do you need to monitor?
  • Understand key metrics to monitor
  • Explore how to monitor with Azure
  • Integrate monitoring into your application
  • Interpret monitoring results
  • Exercise – Enable monitoring for a generative AI application
  • Knowledge check
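
The "key metrics" step above amounts to aggregating per-request records into a few numbers worth alerting on. The records below are fabricated examples; in production these would come from Azure monitoring rather than an in-memory list.

```python
import statistics

# Monitoring sketch: aggregate per-request records into key metrics
# (latency percentile, token usage, error rate). Data is made up.

requests = [
    {"latency_ms": 120, "tokens": 350, "error": False},
    {"latency_ms": 340, "tokens": 900, "error": False},
    {"latency_ms": 95,  "tokens": 200, "error": True},
    {"latency_ms": 210, "tokens": 540, "error": False},
]

def summarize(records):
    latencies = sorted(r["latency_ms"] for r in records)
    return {
        "p50_latency_ms": statistics.median(latencies),
        "avg_tokens": sum(r["tokens"] for r in records) / len(records),
        "error_rate": sum(r["error"] for r in records) / len(records),
    }

summary = summarize(requests)
print(summary)
```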

Analyze and debug your generative AI application with tracing

  • Why do you need tracing?
  • Identify what to trace in generative AI applications
  • Implement tracing in generative AI applications
  • Debug complex workflows with advanced tracing patterns
  • Make informed decisions through trace data analysis
  • Exercise – Enable tracing for a generative AI application
  • Knowledge check
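
The core idea of tracing — named, timed, nested spans that reconstruct a workflow after the fact — can be sketched with a context manager. This is a toy analogue, not the SDK-based tracing the module teaches; span names and sleeps are illustrative.

```python
import time
from contextlib import contextmanager

# Tracing sketch: record named spans with durations so a workflow can be
# debugged from its trace (a real app would use a tracing SDK instead).

trace = []

@contextmanager
def span(name):
    start = time.perf_counter()
    try:
        yield
    finally:
        # Spans are appended as they finish, so inner spans appear first.
        trace.append({"name": name, "duration_s": time.perf_counter() - start})

with span("handle_request"):
    with span("retrieve_context"):
        time.sleep(0.01)
    with span("call_model"):
        time.sleep(0.02)

print([s["name"] for s in trace])
```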