End-to-End AI Applications Day 1 Lab

Module
58 mins
Python · MLOps · Git · APIs

Description

In this lab, Zach walks through MLOps for LangChain agents, focusing on the new Tests section and how to run it in CI with nightly evals and critical runs on PRs. He shows how pytest filters critical tests using marks, and warns that eval tests call OpenAI and can burn through tokens quickly, so they are gated. He explains how seeded cases and judge rubrics work, and highlights that judge scores can miss real artifact quality unless the DAG is passed through properly. Finally, he demonstrates DSPy-based auto-prompt optimization.
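The mark-based gating described above can be sketched roughly as follows. The marker names (`critical`, `evals`), the `RUN_EVALS` environment variable, and the test names are illustrative assumptions, not taken from the lesson itself:

```python
# Sketch: gating expensive eval tests with pytest marks.
# Marker names, env var, and test names are hypothetical examples.
import os

import pytest

# Markers would be registered in pytest.ini / pyproject.toml, e.g.:
# [tool.pytest.ini_options]
# markers = [
#     "critical: fast checks run on every PR",
#     "evals: LLM-judged tests that call OpenAI and cost tokens",
# ]

# Skip eval tests unless CI (e.g. the nightly job) opts in explicitly,
# so a local `pytest` run never burns OpenAI tokens by accident.
requires_eval_opt_in = pytest.mark.skipif(
    os.environ.get("RUN_EVALS") != "1",
    reason="eval tests call OpenAI; set RUN_EVALS=1 to enable",
)


@pytest.mark.critical
def test_agent_returns_answer():
    # Cheap smoke test; selected on every PR with: pytest -m critical
    assert True


@pytest.mark.evals
@requires_eval_opt_in
def test_judge_scores_artifact():
    # Nightly only: RUN_EVALS=1 pytest -m evals
    assert True
```

Under this scheme the PR pipeline runs `pytest -m critical`, while the nightly job sets `RUN_EVALS=1` and runs `pytest -m evals`, matching the gated, token-aware split described in the lesson.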