Certified | Online | On-site | Hybrid

MLOps, LLMOps & AI Deployment Engineering (Advanced)

Take your AI from notebooks to production-grade ML & GenAI systems

Duration: 5 days
Rating: 4.8/5.0
Level: Advanced
1500+ users onboarded

Who will Benefit from this Training?

  1. DevOps & Platform Engineers
  2. Data Engineers & Analytics Engineers
  3. ML Engineers / Data Scientists moving into production
  4. Cloud / SRE teams supporting AI workloads
  5. Teams building fraud / risk / forecasting models (classical ML)
  6. Teams building AI copilots, RAG-based assistants, and LLM-powered products (GenAI)

What You'll Learn

  • Design MLOps & LLMOps architectures for cloud, on-prem, or hybrid environments
  • Use MLflow for experiment tracking, model registry & lifecycle management
  • Implement Feast as a feature store for consistent training & inference data
  • Build CI/CD pipelines for ML using GitHub Actions or Azure DevOps
  • Serve models using BentoML and Seldon Core on Kubernetes
  • Orchestrate ML workflows with Kubeflow Pipelines
  • Scale inference with Kubernetes autoscaling and modern serving patterns
  • Monitor models with Prometheus & Grafana, plus LLM observability using tools like Arize Phoenix / LangSmith

Training Objectives

  • Deploy a production-grade RAG system with vector DB + embedding model + chat model

Build a high-performing, job-ready tech team.

Personalise your team's upskilling roadmap and design a tailored, hands-on training program with Uptut

Key training modules

Comprehensive, hands-on modules designed to take you from basics to advanced concepts
  • Foundations of MLOps & AI Deployment
    1. MLOps vs DevOps vs DataOps vs LLMOps
    2. Architecture of modern ML & LLM systems
    3. From experimentation to production: challenges & patterns
  • Experiment Tracking & Model Management (MLflow)
    1. Tracking metrics, parameters & artifacts
    2. Model registry: staging, production, rollback
    3. Integrating MLflow with CI/CD and storage
  • Feature Stores with Feast
    1. Why feature stores matter in production
    2. Offline vs online features
    3. Building and using a feature repository with Feast
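Feast itself works from a declarative feature repository, but the offline/online split it manages can be illustrated with a library-free sketch. The feature names and the `get_online_features` helper below are invented for illustration and are not Feast's API:

```python
# Toy illustration of the offline/online feature-store split (not the Feast API):
# the same feature definitions feed both batch training data and low-latency lookups.
offline_store = [  # historical rows used to build training sets
    {"user_id": 1, "txn_count_7d": 4, "avg_amount": 52.0},
    {"user_id": 2, "txn_count_7d": 9, "avg_amount": 17.5},
]

# "Materialize" the latest values into an online store for serving.
online_store = {row["user_id"]: row for row in offline_store}

def get_online_features(user_id, feature_names):
    """Low-latency lookup at inference time, same schema as training."""
    row = online_store[user_id]
    return {name: row[name] for name in feature_names}

features = get_online_features(2, ["txn_count_7d", "avg_amount"])
```

The point of the pattern is that training and serving read the same feature definitions, which eliminates train/serve skew.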
  • CI/CD for ML (GitHub Actions / Azure DevOps)
    1. CI/CD patterns for ML & GenAI
    2. Automated training, validation & deployment
    3. Approval workflows and gated promotions
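A gated promotion along these lines might look like the following illustrative GitHub Actions workflow. The job names, scripts, and the `production` environment are assumptions for the sketch, not a prescribed setup:

```yaml
# Illustrative GitHub Actions workflow: retrain and validate on every push,
# then gate deployment behind a protected-environment approval.
name: ml-ci
on:
  push:
    branches: [main]
jobs:
  train-and-validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: python train.py            # logs the run to MLflow
      - run: python validate.py         # fails the job if metrics regress
  deploy:
    needs: train-and-validate
    environment: production             # approval gate configured in repo settings
    runs-on: ubuntu-latest
    steps:
      - run: echo "promote model to production registry stage"
```

The `environment: production` line is what turns the deploy job into a gated promotion: a reviewer must approve before it runs.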
  • Production Model Serving with BentoML & Seldon
    1. Production-ready model servers vs custom APIs
    2. Packaging models with BentoML
    3. Kubernetes-native deployment with Seldon Core
    4. A/B testing, canary and shadow deployments
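A canary rollout in Seldon Core is expressed as two predictors with a traffic weighting. A minimal sketch, with illustrative names and model URIs:

```yaml
# Illustrative SeldonDeployment: 90/10 canary split between two model versions.
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: fraud-model
spec:
  predictors:
    - name: stable
      traffic: 90
      graph:
        name: classifier
        implementation: SKLEARN_SERVER
        modelUri: s3://models/fraud/v1
    - name: canary
      traffic: 10
      graph:
        name: classifier
        implementation: SKLEARN_SERVER
        modelUri: s3://models/fraud/v2
```

Shifting the `traffic` weights promotes the canary gradually; a shadow deployment instead mirrors traffic to the new version without returning its responses.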
  • Pipelines & Orchestration with Kubeflow
    1. Designing reusable pipeline components
    2. End-to-end ML workflows
    3. Scheduling, caching & lineage
  • Scaling Inference on Kubernetes
    1. Horizontal & event-based autoscaling
    2. CPU vs GPU inference strategies
    3. Load testing and performance tuning
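Horizontal autoscaling for a model server can be sketched with a standard Kubernetes HPA; the deployment name and CPU target below are illustrative:

```yaml
# Illustrative HorizontalPodAutoscaler scaling a model-serving deployment on CPU.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: model-server
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: model-server
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

For GPU inference or queue-driven workloads, event-based autoscalers keyed on custom metrics (e.g. request backlog) are the usual alternative to plain CPU utilization.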
  • Monitoring, Observability & LLM Evaluation
    1. Classical monitoring: latency, errors, data drift
    2. LLM observability: hallucination detection, RAG quality, trace analysis
    3. Dashboards and alerts with Prometheus, Grafana & LLM tooling
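One common data-drift signal is the population stability index (PSI) between training and live score distributions. A minimal stdlib sketch; the bin count and the widely used 0.2 alert threshold are conventions, not fixed rules:

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index between two numeric samples.
    Bin edges are taken from the expected (training) distribution."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins
    edges = [lo + i * width for i in range(1, bins)]

    def fractions(values):
        counts = [0] * bins
        for v in values:
            i = sum(v > e for e in edges)  # which bin v falls into
            counts[i] += 1
        # Clamp zero fractions so the log term stays defined.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train_scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
live_scores  = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]  # identical -> PSI of 0
drift = psi(train_scores, live_scores)
```

In practice the PSI would be exported as a Prometheus gauge so Grafana can alert when it crosses the chosen threshold.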
  • Governance, Security & Lifecycle Management
    1. Model approvals, access control & audit trails
    2. Compliance, rollback strategies & long-term lifecycle management
  • Classical MLOps Mini-Capstone
    1. Build and deploy a traditional ML model with MLflow, Feast & Kubernetes
    2. Implement CI/CD and monitoring end to end
  • LLMOps & Serving Large Models
    1. LLM serving patterns: vLLM vs Ollama vs TGI
    2. Token-per-second optimization & KV cache usage
    3. Quantization strategies for large models (e.g. 70B) on limited hardware
    4. Architectures for secure enterprise LLM serving
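The KV-cache pressure behind these serving choices follows from simple arithmetic. A back-of-envelope sketch, assuming a Llama-2-70B-style configuration (80 layers, 8 grouped KV heads, head dimension 128, fp16 values):

```python
# Back-of-envelope KV-cache sizing: why long contexts and large batches
# dominate GPU memory when serving LLMs.
def kv_cache_bytes(seq_len, n_layers=80, n_kv_heads=8, head_dim=128, dtype_bytes=2):
    # 2x for the key tensor plus the value tensor, per layer, per token.
    return 2 * n_layers * n_kv_heads * head_dim * dtype_bytes * seq_len

per_seq = kv_cache_bytes(seq_len=4096)
batch_of_16 = 16 * per_seq
print(f"one 4k sequence: {per_seq / 2**30:.2f} GiB, "
      f"batch of 16: {batch_of_16 / 2**30:.1f} GiB")
```

Numbers like these are why paged KV-cache management (as in vLLM) and quantization matter so much for serving 70B-class models on limited hardware.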
  • Capstone: Deploy a Production-Grade RAG System
    Teams design and deploy a complete RAG-based AI application:
    1. Stand up a vector database
    2. Deploy an embedding model for document indexing
    3. Serve a chat model (e.g. Llama / Mistral) via vLLM
    4. Implement the RAG retrieval & generation pipeline
    5. Add monitoring for latency, relevance & hallucinations
    6. Deploy everything on Kubernetes with autoscaling & dashboards
    This capstone ties together MLOps, LLMOps and platform engineering into one real-world project.
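The retrieval step at the heart of the capstone can be sketched end to end in stdlib Python. The bag-of-words "embeddings" and in-memory "vector DB" below are toy stand-ins for the real embedding model and vector database:

```python
import math
from collections import Counter

# Toy stand-ins for the capstone components: a real system would use an
# embedding model and a vector database instead of bag-of-words vectors.
def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "Invoices are due within 30 days of issue.",
    "Refunds are processed within 5 business days.",
]
index = [(doc, embed(doc)) for doc in docs]  # the "vector DB"

def retrieve(query, k=1):
    q = embed(query)
    return [d for d, v in sorted(index, key=lambda dv: -cosine(q, dv[1]))][:k]

# Retrieval-augmented generation: fetch context, then prompt the chat model.
context = retrieve("How long do refunds take?")[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: How long do refunds take?"
```

In the capstone the same shape holds, with the prompt sent to a vLLM-served chat model and each stage instrumented for latency, relevance, and hallucination metrics.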


Training Delivery Format

Flexible, comprehensive training designed to fit your schedule and learning preferences
  • Opt-in Certifications: AWS, Scrum.org, DASA & more
  • 100% Live: on-site/online training
  • Hands-on: labs and capstone projects
  • Lifetime Access: to training material and sessions

How Does Personalised Training Work?

1. Skill-Gap Assessment: analysing skill gaps and assessing business requirements to craft a unique program

2. Personalisation: customising curriculum and projects to prepare your team for challenges within your industry

3. Implementation: supplementing training with consulting support to ensure implementation in real projects

Why MLOps, LLMOps & AI Deployment Engineering for your business?

  • Deploy ML and LLM models reliably on Kubernetes
  • Build end-to-end ML & GenAI pipelines with CI/CD and approvals
  • Implement feature stores, model registries, and governance
  • Monitor both classical models and LLMs for performance, drift & hallucinations
  • Design and deploy RAG systems with vector databases and scalable inference

Lead the Digital Landscape with Cutting-Edge Tech and In-House "Techsperts"

Discover the power of digital transformation with train-to-deliver programs from Uptut's experts. Backed by 50,000+ professionals across the world's leading tech innovators.

Frequently Asked Questions

1. What are the pre-requisites for this training?

This is an advanced training, so prior exposure to Python, cloud/DevOps tooling, or ML workflows will help your team get the most out of it. The curriculum still opens with foundations before progressing to advanced production topics.

2. Will my team get any practical experience with this training?

With our focus on experiential learning, we have made the training as hands-on as possible with assignments, quizzes and capstone projects, and a lab where trainees will learn by doing tasks live.

3. What is your mode of delivery - online or on-site?

We conduct both online and on-site training sessions. You can choose any according to the convenience of your team.

4. Will trainees get certified?

Yes, all trainees will get certificates issued by Uptut under the guidance of industry experts.

5. What do we do if we need further support after the training?

Our experienced mentors are available for consultations whenever your team needs further assistance applying the training to real projects. Just book a consultation to get support.
