Initial Checks
- I confirm that I'm using the latest version of Pydantic AI
- I confirm that I searched for my issue in https://github.com/pydantic/pydantic-ai/issues before opening this issue
Description
I am using pytest in my project and wanted to run the LLM evals the same way; however, the logging is not sent to Logfire when the tests are executed with pytest.
Example Code
```python
import logfire
import pytest
from pydantic_evals import Dataset

from agents import my_agent
from models import InputModel, OutputModel

TEST_FILE = "llm-evals/file.yaml"

...

@pytest.mark.asyncio
async def test_therapeutic_area_agent():
    dataset = Dataset[InputModel, OutputModel].from_file(TEST_FILE)
    with logfire.span("test"):
        logfire.info(f"Loaded dataset with {len(dataset.cases)} cases")
        # Run the evals against the agent; the spans from this call
        # should appear in Logfire, but don't when invoked via pytest.
        report = await dataset.evaluate(my_agent, name="my_agent")
        report.print(
            include_input=True,
            include_output=True,
            include_expected_output=True,
            include_durations=True,
        )
        logfire.info(f"Average score {report.averages().assertions:.2f}")
```
Python, Pydantic AI & LLM client version
```toml
pydantic-ai-slim = {extras = ["anthropic", "openai"], version = "^0.0.55"}
pydantic-evals = {extras = ["logfire"], version = "^0.0.55"}
```
AnthropicProvider