For a complete example, see the Pydantic AI example on GitHub.
Pydantic AI emits OpenTelemetry traces internally; once Gentrace is initialized, it captures those traces automatically, giving you tracing without any additional configuration.

Prerequisites

You will need a Gentrace API key, a Gentrace pipeline ID, and an OpenAI API key (see Environment Variables below).

Installation

pip install gentrace pydantic-ai

OpenAI model support ships with the pydantic-ai package, so no separate provider package is required.

Configuration

Simply initialize Gentrace before using Pydantic AI:
pydantic_ai_simple.py
import asyncio
import os
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from gentrace import init, interaction

# Initialize Gentrace (will capture Pydantic AI's OTEL traces)
init(
    api_key=os.getenv("GENTRACE_API_KEY"),
    base_url=os.getenv("GENTRACE_BASE_URL", "https://gentrace.ai/api"),
)

# Create a simple Pydantic AI agent
agent = Agent(
    OpenAIModel("gpt-4o-mini"),
    system_prompt="You are a helpful assistant that gives concise answers.",
)

@interaction(name="pydantic_ai_chat", pipeline_id=os.getenv("GENTRACE_PIPELINE_ID", ""))
async def chat_with_agent(prompt: str) -> str:
    result = await agent.run(prompt)
    return result.output

# Usage

async def main():
    response = await chat_with_agent("What is 2+2?")
    print(f"Agent says: {response}")

if __name__ == "__main__":
    asyncio.run(main())

Environment Variables

.env
GENTRACE_API_KEY=your-gentrace-api-key
GENTRACE_PIPELINE_ID=your-pipeline-id
OPENAI_API_KEY=your-openai-api-key
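
If you keep these values in a .env file during local development, one common approach (an assumption here, not something Gentrace requires) is to load them with python-dotenv before calling init():

from dotenv import load_dotenv

load_dotenv()  # reads .env into the process environment so the os.getenv(...) calls above resolve

# ...then call gentrace.init(...) and create your agents as shown earlier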

How It Works

Pydantic AI automatically generates OpenTelemetry traces for:
  • Agent invocations
  • Model calls
  • Tool usage
  • Retries and validation
Since Gentrace is OpenTelemetry-compatible, it captures all of these traces automatically without requiring any additional instrumentation (see the tool-calling sketch below).
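
For example, here is a minimal sketch of a tool-using agent. It assumes the same init() call and environment variables shown above; the file name, agent, tool, and interaction name are illustrative only. When ask_weather runs, the agent run, the underlying model calls, and the tool call each produce spans that Gentrace records under the interaction.
pydantic_ai_tools.py
import os

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from gentrace import interaction

weather_agent = Agent(
    OpenAIModel("gpt-4o-mini"),
    system_prompt="Answer weather questions using the provided tool.",
)

@weather_agent.tool_plain
def get_temperature(city: str) -> str:
    """Return the temperature for a city (hard-coded for illustration)."""
    return f"It is 21°C in {city}."

@interaction(name="pydantic_ai_weather", pipeline_id=os.getenv("GENTRACE_PIPELINE_ID", ""))
async def ask_weather(question: str) -> str:
    # The agent run, model calls, and tool call all emit OpenTelemetry spans
    # that Gentrace captures under this interaction.
    result = await weather_agent.run(question)
    return result.output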

Supported Models

Pydantic AI supports multiple models through different providers, and Gentrace traces all of them the same way (see the sketch after this list):
  • OpenAI (GPT-4, GPT-3.5)
  • Anthropic (Claude)
  • Google (Gemini)
  • Groq
  • Mistral
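
Switching providers only changes the model class passed to Agent; the Gentrace setup and the @interaction-wrapped function stay the same. As a sketch, assuming the Anthropic dependency is installed and ANTHROPIC_API_KEY is set (the model name here is illustrative):

from pydantic_ai import Agent
from pydantic_ai.models.anthropic import AnthropicModel

claude_agent = Agent(
    AnthropicModel("claude-3-5-sonnet-latest"),
    system_prompt="You are a helpful assistant that gives concise answers.",
)

# Reuse the same @interaction-wrapped function pattern from above to trace runs.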