LLMOps – Productionalizing Real-world Applications with LLMs

10 AUGUST 2024 | 09:30 AM - 05:30 PM | La Marvella, 2nd Block, Jayanagar, Bengaluru

About the workshop

LLMs have taken the world by storm since their inception, and the past year has marked a significant shift in the AI industry and its impact on our day-to-day lives.

For engineers working on LLMs, collaborating on, training, scaling, and monitoring such massive models has become increasingly complex. LLMOps encompasses the practices, techniques, and tools needed to operate large language models in production, and the infrastructure it provides drives efficiency, agility, security, and scalability for engineers and end users alike.

Join us in this immersive LLMOps workshop, where we'll embark on a day-long journey, delving into various modules crafted to equip you with actionable insights and hands-on skills to harness the full potential of LLMs.

Prerequisite: AWS account with full access to Sagemaker, EKS, and Bedrock



Modules

  • Introduction to LLMs and their operational mechanisms.
  • Overview of evaluation metrics in the LLM landscape (see the perplexity sketch after this list).
  • Key operational disparities between MLOps and LLMOps.
  • Current trends and the evolving state of Generative AI.
  • A glimpse into the tools and technologies shaping the LLMOps ecosystem.
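
To make the evaluation bullet above concrete, here is a minimal sketch of one common metric, perplexity, computed from per-token log-probabilities. The numbers are made up for illustration and do not come from any particular model.

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(-average log-probability per token); lower is better."""
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(-avg_logprob)

# Hypothetical per-token log-probabilities for one generated sequence.
logprobs = [-0.21, -1.35, -0.08, -2.47, -0.66]
print(f"Perplexity: {perplexity(logprobs):.2f}")
```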

  • Identifying suitable problems for LLM solutions.
  • Setting up AWS Sagemaker and AWS Bedrock.
  • Leveraging foundation models for diverse real-world tasks.
  • Enhancing LLM outputs through prompt engineering.
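
As a small taste of this module, the sketch below calls a foundation model on AWS Bedrock through boto3 and applies a basic prompt-engineering pattern (an explicit role plus output constraints). The region, model ID, and request schema are illustrative assumptions; they depend on which Bedrock model your account has access to.

```python
import json
import boto3

# Bedrock runtime client (assumes AWS credentials and Bedrock model access are set up).
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Prompt-engineering pattern: give the model a role plus explicit output constraints.
prompt = (
    "You are a support assistant for an online store. Summarize the customer "
    "message below in one sentence and label its sentiment as positive, neutral, "
    "or negative.\n\n"
    "Customer message: The package arrived two weeks late and the box was damaged."
)

# Model ID and request body follow the Anthropic messages format; other Bedrock
# models expect different schemas.
response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 200,
        "messages": [{"role": "user", "content": prompt}],
    }),
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```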

  • Exploring various fine-tuning and optimization techniques.
  • Model serving strategies on AWS.
  • Hands-on LLM fine-tuning and optimization lab on AWS Sagemaker.
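
One of the techniques this module covers is parameter-efficient fine-tuning. Below is a minimal LoRA setup sketch using the Hugging Face transformers and peft libraries; the base model, target modules, and hyperparameters are placeholders, and in the workshop the actual training would run as a Sagemaker training job.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"  # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA trains small low-rank adapter matrices instead of all model weights.
lora_config = LoraConfig(
    r=8,                                   # rank of the adapter matrices
    lora_alpha=16,                         # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections; model-dependent
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```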

  • Understanding AWS Sagemaker features.
  • Exploring common AWS services. 
  • Sagemaker Feature Store and vector databases.
  • Implementing Sagemaker pipelines for seamless training, evaluation, deployment, and real-time monitoring.
  • Fully automated re-training and deployment using model evaluation.
  • Hands-on session building an end-to-end pipeline on Sagemaker.
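
For orientation, here is a minimal sketch of what a Sagemaker pipeline definition looks like with the Sagemaker Python SDK. The role ARN, image URI, and S3 paths are placeholders; a complete workshop pipeline would add evaluation, conditional model registration, deployment, and monitoring steps.

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

estimator = Estimator(
    image_uri="<your-training-image-uri>",                   # placeholder
    role=role,
    instance_count=1,
    instance_type="ml.g5.2xlarge",
    output_path="s3://<your-bucket>/llm-finetune/output",    # placeholder
    sagemaker_session=session,
)

train_step = TrainingStep(
    name="FineTuneLLM",
    estimator=estimator,
    inputs={"train": TrainingInput("s3://<your-bucket>/llm-finetune/train")},
)

pipeline = Pipeline(name="llm-finetune-pipeline", steps=[train_step],
                    sagemaker_session=session)
pipeline.upsert(role_arn=role)  # create or update the pipeline definition
pipeline.start()                # kick off an execution
```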

  • Unveiling the internal workings of Kubernetes.
  • Deploying LLMs on Kubernetes with auto-scaling capabilities.
  • Evaluating LLM performance through real-world load testing (see the sketch after this list).
  • Monitoring LLM performance in production environments.
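
The load-testing bullet above boils down to something like the sketch below: fire concurrent requests at a deployed LLM endpoint and report latency percentiles. The endpoint URL and payload are placeholders for whatever service your model sits behind.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

ENDPOINT = "http://<your-llm-service>/generate"  # placeholder endpoint
PAYLOAD = {"prompt": "Summarize: LLMOps covers operating LLMs in production.",
           "max_tokens": 64}

def one_request(_):
    start = time.perf_counter()
    requests.post(ENDPOINT, json=PAYLOAD, timeout=60)
    return time.perf_counter() - start

# 100 requests with 10 concurrent workers.
with ThreadPoolExecutor(max_workers=10) as pool:
    latencies = sorted(pool.map(one_request, range(100)))

print(f"p50 latency: {statistics.median(latencies):.2f}s")
print(f"p95 latency: {latencies[int(0.95 * len(latencies))]:.2f}s")
```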

  • Exploring vector databases and embedding techniques.
  • Understanding RAG pipelines and their versatile applications.
  • Weighing the pros and cons of RAG utilization.
  • Hands-on session implementing and enhancing RAG techniques for improved performance and accuracy.
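
The core RAG loop discussed in this module can be sketched in a few lines: embed a small document set, retrieve the chunks most similar to a query by cosine similarity, and pack them into the prompt. The documents are placeholders, and sentence-transformers is used here purely for illustration; in practice the vectors would live in a vector database.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [  # placeholder knowledge base
    "Refunds are processed within 5 business days.",
    "Orders over Rs. 2000 ship free within India.",
    "Support is available 24x7 via chat and email.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query, k=2):
    """Return the k documents most similar to the query (cosine similarity)."""
    query_vector = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ query_vector
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

query = "How long do refunds take?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# `prompt` would then be sent to an LLM, e.g. via Bedrock as in the earlier sketch.
print(prompt)
```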

  • Grasping governance principles in LLMOps and their significance.
  • Best practices for ensuring data security in LLMOps.
  • Overview of EU AI Draft Regulations and their implications.
  • Addressing common LLM vulnerabilities and strategies for effective guardrails.
  • Hands-on implementation of guardrails on Sagemaker.
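
As a purely illustrative guardrail, the sketch below redacts obvious PII and blocks disallowed topics before a prompt reaches the model. A production setup would typically rely on managed options (for example, guardrails configured alongside Sagemaker or Bedrock) rather than hand-rolled rules, but the idea is the same.

```python
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{8,}\d"),
}
BLOCKED_TOPICS = ("credit card dump", "malware")  # illustrative deny-list

def apply_guardrails(prompt: str) -> str:
    """Redact PII and refuse disallowed topics before the prompt reaches the model."""
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        raise ValueError("Prompt blocked by topic guardrail")
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label.upper()}]", prompt)
    return prompt

print(apply_guardrails("Contact me at jane@example.com or +91 98765 43210."))
```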

Throughout our sessions, we aim to foster a collaborative and engaging environment, where participants can actively learn and discuss LLMs.

Get ready to embark on an enriching journey into the realm of LLMOps!

Prerequisites

  • Strong Python programming skills
  • Familiarity with machine learning and deep learning concepts
  • Basic understanding of Large Language Models (LLMs) and their applications
  • Experience with cloud computing, particularly AWS services
  • Basic knowledge of containerization and Kubernetes
  • Familiarity with MLOps concepts and practices
  • AWS account with full access to Sagemaker, EKS, and Bedrock
*Note: These details are tentative and subject to change.

Certificate of Participation

Receive a digital (blockchain-enabled) and physical certificate to showcase your accomplishment to the world

  • Earn your certificate
  • Share your achievement
Book Tickets
