Automating Evals With Claude Code + Phoenix

Hosted by Mikyo King and Hamel Husain

Thu, Feb 19, 2026

9:00 PM UTC (1 hour)

Virtual (Zoom)

Free to join

767 students


Go deeper with a course

Featured in Lenny’s List
AI Evals For Engineers & PMs
Hamel Husain and Shreya Shankar

What you'll learn

Connect Claude Code to Phoenix observability data

Set up the toolchain to give Claude Code direct CLI access to your Phoenix traces, spans, and performance metrics.

Use CLI commands to fetch traces and debug agents

Learn the key Phoenix CLI tools for retrieving observability data and spotting errors in agent-generated code.

Prompt AI to analyze system behavior in real time

Practice natural language prompts that enable Claude Code to interpret traces, diagnose issues, and suggest fixes.

Why this topic matters

As AI agents like Claude Code write more of our code, they need the same visibility into system behavior that human developers have relied on. Observability data (traces, spans, latency, and errors) is no longer just for dashboards and human eyes. This hands-on session covers how to give Claude Code direct access to your Phoenix observability data via the CLI.
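To make the idea concrete, here is a minimal sketch of the kind of trace triage an agent can perform once it can read observability data. The span records below are a hypothetical simplification for illustration, not Phoenix's actual span schema or CLI output.

```python
def find_problem_spans(spans, latency_threshold_ms=1000):
    """Return spans that errored or exceeded a latency threshold.

    `spans` is a list of dicts with hypothetical keys "name",
    "status", and "latency_ms" -- a stand-in for real trace data.
    """
    return [
        span
        for span in spans
        if span.get("status") == "ERROR"
        or span.get("latency_ms", 0) > latency_threshold_ms
    ]

# Example trace: two healthy spans and one failing, slow LLM call.
trace = [
    {"name": "retriever", "status": "OK", "latency_ms": 120},
    {"name": "llm_call", "status": "ERROR", "latency_ms": 4300},
    {"name": "postprocess", "status": "OK", "latency_ms": 15},
]

print([s["name"] for s in find_problem_spans(trace)])  # prints ['llm_call']
```

In the session, this filtering step is done by prompting Claude Code over real Phoenix traces rather than by hand-written code like the above.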

You'll learn from

Mikyo King

Head of Open Source, building the future of AI observability at Arize AI

Mikyo is an entrepreneurial full-stack engineer and technical leader who is passionate about building tools that have a profound impact.

Hamel Husain

ML Engineer with 20 years of experience

Hamel is a machine learning engineer with over 20 years of experience. He has worked at innovative companies such as Airbnb and GitHub, where his work included early LLM research on code understanding that was used by OpenAI. He has also led and contributed to numerous popular open-source machine learning tools. Hamel is currently an independent consultant helping companies build AI products.

Previously at

Arize AI
GitHub
Airbnb
Apple
Cisco

Sign up to join this lesson
