Lightning Lessons

Architect Your LLM Twin

Hosted by Paul Iusztin


What you'll learn

The end-to-end system design of your LLM Twin

→ Using the 3-pipeline architecture & MLOps good practices

Design a data collection pipeline

→ data crawling, ETLs, CDC, AWS

Design a feature pipeline

→ streaming engine in Python, data ingestion for fine-tuning & RAG, vector DBs

Design a training pipeline

→ create a custom dataset, fine-tuning, model registries, experiment trackers, LLM evaluation

Design an inference pipeline

→ real-time deployment, REST API, RAG, LLM monitoring
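The four designs above fit together as a 3-pipeline architecture: the feature pipeline feeds a store that both the training and inference pipelines read from. As a rough sketch of how the pieces connect (not the session's actual code — the class names, the in-memory feature store, and the toy retrieval logic here are illustrative stand-ins for the real Qdrant vector DB, model registry, and fine-tuned LLM):

```python
from dataclasses import dataclass


@dataclass
class Document:
    text: str


class FeaturePipeline:
    """Ingests raw documents into a feature store (here, a plain list;
    in the real system: clean, chunk, embed, and push to a vector DB)."""

    def __init__(self):
        self.feature_store: list[Document] = []

    def run(self, raw_docs: list[str]) -> list[Document]:
        for doc in raw_docs:
            self.feature_store.append(Document(doc.strip().lower()))
        return self.feature_store


class TrainingPipeline:
    """Builds a dataset from the feature store and produces a 'model'
    (in the real system: fine-tune an LLM, log it to a model registry)."""

    def run(self, feature_store: list[Document]) -> dict:
        dataset = [doc.text for doc in feature_store]
        return {"model": "llm-twin-v1", "dataset_size": len(dataset)}


class InferencePipeline:
    """Serves queries using the trained model plus RAG-style retrieval
    (in the real system: embed the query, search the vector DB, prompt the LLM)."""

    def __init__(self, model: dict, feature_store: list[Document]):
        self.model = model
        self.feature_store = feature_store

    def answer(self, query: str) -> str:
        # Toy retrieval: substring match instead of vector similarity search.
        hits = [d.text for d in self.feature_store if query.lower() in d.text]
        return f"{self.model['model']} answered using {len(hits)} retrieved chunk(s)"


# Wiring the three pipelines together end to end:
features = FeaturePipeline().run(["My post about MLOps ", "Another post about RAG"])
model = TrainingPipeline().run(features)
print(InferencePipeline(model, features).answer("MLOps"))
```

The key design property this sketch illustrates is decoupling: each pipeline can be developed, deployed, and scaled independently, communicating only through shared storage (feature store, model registry) rather than direct calls.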

Why this topic matters

What is your LLM Twin? It is an AI character that writes like you, built by incorporating your style and voice into an LLM.

With an LLM Twin, you can generate posts or articles that sound like you in the blink of an eye.

In this session, you will learn how to design a production-ready LLM Twin of yourself, powered by LLMs, vector DBs, and LLMOps good practices.

You'll learn from

Paul Iusztin

Senior ML & MLOps engineer @ Metaphysic | Co-Founder @ Decoding ML

I am a senior machine learning engineer and contractor with 7+ years of experience. I design and implement modular, scalable, and production-ready ML systems for startups worldwide.


My mission is to build data-intensive AI/ML systems that bring innovation to the world.


I am also the co-founder of Decoding ML—a weekly eLearning ecosystem on Medium and Substack for production-grade ML & MLOps content.

The LLM Twin is built with

Comet
bytewax
Qdrant
Qwak
