Course Overview:
Welcome to the AWS MLOps & LLMOps Training Course, an immersive learning experience for individuals looking to master machine learning operations (MLOps) and large language model operations (LLMOps) within the Amazon Web Services (AWS) ecosystem. This comprehensive course provides a deep dive into the tools, practices, and methodologies needed to deploy, monitor, and manage machine learning models and large language models in production at scale.
Course Objectives:
By the end of this course, participants will:
- Understand the core principles and practices of MLOps and LLMOps, including the importance of automation, monitoring, and governance in the ML lifecycle.
- Gain proficiency with AWS services and tools relevant to MLOps and LLMOps, such as Amazon SageMaker, AWS Lambda, Amazon CloudWatch, and AWS IAM.
- Learn to design, implement, and manage robust, scalable, and secure ML and LLM pipelines on AWS.
- Acquire hands-on experience through practical labs and projects, applying best practices for versioning, testing, and deploying ML and LLM models.
- Develop skills to monitor and optimize model performance, manage data and model drift, and ensure compliance with industry standards and regulations.
Curriculum
- 21 Sections
- 115 Lessons
- 10 Weeks
- Introduction (4 lessons)
- MLOps Fundamentals (3 lessons)
- AWS and MLOps (7 lessons)
- AWS-Specific Tools and Configurations (11 lessons)
- 4.1 DevOps Lifecycle Tools in AWS
- 4.2 Creating and Configuring an AWS Account
- 4.3 Security Setup: MFA, IAM Accounts, and Policies
- 4.4 Introduction to S3 Buckets and EC2 Instances
- 4.5 AWS-Specific Tools and Configurations
- 4.6 Creation of S3 Bucket from Console
- 4.7 Creation of S3 Bucket from CLI
- 4.8 Version Enablement in S3
- 4.9 Introduction to EC2 Instances
- 4.10 Launch EC2 Instance & SSH into EC2 Instances
- 4.11 Housekeeping Activity
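As a preview of lessons 4.6–4.8, creating a bucket and enabling versioning from the CLI looks roughly like this. This is a sketch only: it assumes the AWS CLI is installed with credentials configured, and the bucket name is a placeholder (S3 bucket names are globally unique, so you must choose your own).

```shell
# Sketch: assumes configured AWS credentials; bucket name is a placeholder.
aws s3api create-bucket --bucket my-mlops-artifacts-demo --region us-east-1
# (Outside us-east-1, also pass:
#  --create-bucket-configuration LocationConstraint=<region>)

# Enable versioning so object history is retained (lesson 4.8)
aws s3api put-bucket-versioning \
    --bucket my-mlops-artifacts-demo \
    --versioning-configuration Status=Enabled

# Verify the versioning status
aws s3api get-bucket-versioning --bucket my-mlops-artifacts-demo
```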
- Linux and Bash for MLOps (3 lessons)
- Core Concepts (10 lessons)
- 6.1 CI/CD Pipeline Introduction
- 6.2 Getting Started with AWS CodeCommit & Distributed Version Control Systems (DVCS)
- 6.3 Initial Configuration & Basic Git Commands
- 6.4 Setting Up Your Git Workspace
- 6.5 Understanding Git Workflow
- 6.6 Adding Files to the Staging Area & Understanding Staged Differences
- 6.7 Unstaging, Resetting, and Reverting Changes in Git
- 6.8 Working with AWS CodeCommit: Remote Commands, Security, and Integrations
- 6.9 Cloning, Branching, and Handling Git Branches: Hands-On Parts 1 & 2
- 6.10 Resolving Git Conflicts, Rebasing vs. Merging, and Using Git Stash
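The core Git workflow from lessons 6.3–6.10 can be sketched end to end in a throwaway repository; the file names, branch name, and commit messages here are illustrative only.

```shell
# Minimal sketch of the stage -> commit -> branch -> merge cycle.
set -e
repo=$(mktemp -d)                 # throwaway working directory
cd "$repo"
git init -q .
git config user.email "student@example.com"   # local identity, required for commits
git config user.name "Student"

echo "epochs: 10" > train.yaml
git add train.yaml                # move the change into the staging area (lesson 6.6)
git commit -qm "Add training config"

git checkout -qb feature/tuning   # create a branch for experimental work (lesson 6.9)
echo "epochs: 20" > train.yaml
git commit -qam "Increase epochs" # -a stages the modified tracked file before committing

git checkout -q -                 # switch back to the default branch
git merge -q feature/tuning       # fast-forward merge; no conflict in this case
```

With AWS CodeCommit (lesson 6.8), the same local workflow applies; you would additionally add the repository as a remote and push to it.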
- Deployment & Security (8 lessons)
- 7.1 Introduction to AWS CodeDeploy & YAML
- 7.2 First Steps with AWS CodeDeploy: Hands-On Introduction and Deep Dive
- 7.3 Exploring AWS CodePipeline: Creation and Automation with Manual Approval
- 7.4 Introduction to Docker: Basics & Installation
- 7.5 Pull the Image from Docker Desktop
- 7.6 Dockerfile
- 7.7 Push the Docker Image to ECR
- 7.8 Hands-On: Amazon ECR for AWS CodeBuild
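The build-and-push flow from lessons 7.6–7.7 can be sketched as follows. This is illustrative only: it assumes Docker and the AWS CLI are installed with credentials configured, and the account ID, region, and repository name are placeholders.

```shell
# Sketch: placeholders below must be replaced with your own values.
ACCOUNT=123456789012
REGION=us-east-1
REPO=mlops-demo

# Build an image from the Dockerfile in the current directory (lesson 7.6)
docker build -t "$REPO:latest" .

# Authenticate Docker to the private ECR registry
aws ecr get-login-password --region "$REGION" \
  | docker login --username AWS --password-stdin \
      "$ACCOUNT.dkr.ecr.$REGION.amazonaws.com"

# Tag and push the image to ECR (lesson 7.7)
docker tag "$REPO:latest" "$ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPO:latest"
docker push "$ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPO:latest"
```

The same pushed image can then be referenced from an AWS CodeBuild project, as covered in lesson 7.8.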
- Amazon SageMaker & Feature Engineering (11 lessons)
- 8.1 Why Amazon SageMaker is Preferred for Machine Learning Workflows
- 8.2 Domain Creation, Studio Setup, and Clean-Up Activities in SageMaker
- 8.3 Feature Engineering Essentials, Data Wrangler Setup, and Transformation Techniques
- 8.4 Data Quality and Insights Report
- 8.5 Univariate Analysis & Bias Report
- 8.6 Target Leakage
- 8.7 Data Transformation
- 8.8 Data Transformation: Custom Script
- 8.9 Export to S3
- 8.10 Feature Engineering on a SageMaker Notebook Instance
- 8.11 Summary
- Advanced Concepts (3 lessons)
- Building and Managing MLOps Pipelines (3 lessons)
- Packaging, Deployment, and Kubernetes (3 lessons)
- LLMOps (4 lessons)
- Fundamentals of Large Language Models (LLMs) (3 lessons)
- Development Environment Setup (3 lessons)
- Data Preparation and Preprocessing (3 lessons)
- Model Training and Evaluation (3 lessons)
- Continuous Integration (CI) for LLM Development (3 lessons)
- Continuous Deployment (CD) for LLM Models (3 lessons)
- Use Cases and Applications (12 lessons)
- 19.1 Question Answering
- 19.2 Building an LLM-Based Question Answering System
- 19.3 Deploying the Question Answering Model Using CI/CD
- 19.4 Text Generation
- 19.5 Developing an LLM-Based Text Generation Application
- 19.6 Automating the Deployment of the Text Generation Model
- 19.7 Sentiment Analysis
- 19.8 Implementing an LLM-Based Sentiment Analysis Pipeline
- 19.9 Integrating the Sentiment Analysis Model into a CI/CD Workflow
- 19.10 Named Entity Recognition (NER)
- 19.11 Creating an LLM-Based NER System
- 19.12 Setting Up a CI/CD Pipeline for NER Model Deployment
- Capstone Projects (12 lessons)
- 20.1 Knowledge Base Assistant
- 20.2 Building an LLM-Powered Knowledge Base Assistant
- 20.3 Implementing CI/CD for the Assistant’s Backend and Frontend
- 20.4 Creative Writing Tool
- 20.5 Developing an LLM-Based Creative Writing Tool
- 20.6 Automating the Deployment and Scaling of the Writing Model
- 20.7 Customer Support Chatbot
- 20.8 Creating an LLM-Based Customer Support Chatbot
- 20.9 Setting Up a CI/CD Pipeline for Chatbot Training and Deployment
- 20.10 Content Generation Platform
- 20.11 Building an LLM-Powered Content Generation Platform
- 20.12 Implementing CI/CD for the Platform’s Backend and Frontend
- Best Practices and Future Trends (3 lessons)