interaction()
The `interaction()` function wraps your AI functions with OpenTelemetry tracing to track interactions within a Gentrace pipeline. It creates spans for function execution, records arguments and outputs, and automatically manages OpenTelemetry baggage to ensure proper sampling and tracing context.
When used with the `GentraceSpanProcessor` (configured in your OpenTelemetry setup), baggage values such as `gentrace.sample=true` are automatically copied to span attributes. This ensures that the OpenTelemetry Collector can properly identify, filter, and route Gentrace-related traces without requiring manual attribute management in your code. A minimal setup is sketched below.
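The following is a minimal sketch of a Node OpenTelemetry setup that registers the processor. It assumes `GentraceSpanProcessor` is exported from the `gentrace` package, takes no constructor arguments, and that a recent `@opentelemetry/sdk-node` that accepts `spanProcessors` is installed; see the OpenTelemetry Setup Guide for the authoritative configuration.

```typescript
// Minimal sketch: register GentraceSpanProcessor alongside an OTLP exporter.
// Assumptions: GentraceSpanProcessor is exported from 'gentrace' and takes no arguments.
import { NodeSDK } from '@opentelemetry/sdk-node';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { GentraceSpanProcessor } from 'gentrace';

const sdk = new NodeSDK({
  spanProcessors: [
    // Copies Gentrace baggage entries (e.g. gentrace.sample=true) onto span attributes.
    new GentraceSpanProcessor(),
    // Exports spans to your OpenTelemetry Collector (or another OTLP endpoint).
    new BatchSpanProcessor(new OTLPTraceExporter()),
  ],
});

sdk.start();
```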
Basic usage
```typescript
import { init, interaction } from 'gentrace';
import { Anthropic } from '@anthropic-ai/sdk';

init({
  apiKey: process.env.GENTRACE_API_KEY,
});

const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

const PIPELINE_ID = process.env.GENTRACE_PIPELINE_ID!;

async function queryAI(prompt: string): Promise<string> {
  const response = await anthropic.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    messages: [{ role: 'user', content: prompt }],
    max_tokens: 1024,
  });
  return response.content[0].text;
}

const tracedQueryAI = interaction('Query AI', queryAI, {
  pipelineId: PIPELINE_ID,
});

// Use the traced function
const result = await tracedQueryAI('What is the capital of France?');
console.log(result);
```
```python
import asyncio
import os

from anthropic import AsyncAnthropic
from gentrace import init, interaction

init(api_key=os.environ["GENTRACE_API_KEY"])

PIPELINE_ID = os.environ["GENTRACE_PIPELINE_ID"]

anthropic = AsyncAnthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

# Define your AI function and wrap it with interaction tracing
@interaction(pipeline_id=PIPELINE_ID, name="Query AI")
async def query_ai(prompt: str) -> str:
    response = await anthropic.messages.create(
        model="claude-3-5-sonnet-20241022",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=1024,
    )
    return response.content[0].text

# Use the traced function
async def main() -> None:
    result = await query_ai("What is the capital of France?")
    print(result)

asyncio.run(main())
```
Overview
An interaction in Gentrace represents a single AI function call or operation within your pipeline. The `interaction()` function:

- Creates OpenTelemetry spans for function execution with detailed tracing
- Records function arguments and outputs as span events for debugging
- Manages OpenTelemetry baggage by setting `gentrace.sample="true"` for proper sampling
- Associates with pipelines by adding the `gentrace.pipeline_id` attribute
- Handles errors gracefully by recording exceptions and setting span status
Parameters
Function signature
```typescript
function interaction<F extends (...args: any[]) => any>(
  name: string,
  fn: F,
  options: InteractionOptions
): F
```
Parameters
- `name` (string, required): The name for the OpenTelemetry span
- `fn` (function, required): The function to wrap with tracing
- `options` (InteractionOptions, required): Configuration options
InteractionOptions
```typescript
type InteractionOptions = {
  pipelineId: string;
  attributes?: Record<string, any>;
}
```
- `pipelineId` (string, required): The UUID of the Gentrace pipeline this interaction belongs to
- `attributes` (object, optional): Additional attributes to set on the span
Decorator signature
```python
def interaction(
    *,
    pipeline_id: str,
    name: Optional[str] = None,
    attributes: Optional[Dict[str, Any]] = None,
) -> Callable[[F], F]
```
Parameters
- `pipeline_id` (str, required): The UUID of the Gentrace pipeline this interaction belongs to
- `name` (str, optional): Custom name for the OpenTelemetry span. Defaults to the function's `__name__`
- `attributes` (dict, optional): Additional attributes to set on the span
Additional attributes
You can add custom attributes to the OpenTelemetry span using the `attributes` option.
```typescript
import { interaction } from 'gentrace';
import OpenAI from 'openai';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
const PIPELINE_ID = process.env.GENTRACE_PIPELINE_ID!;

async function generateText(prompt: string, temperature: number): Promise<string> {
  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: prompt }],
    temperature,
  });
  return response.choices[0].message.content || '';
}

const tracedGenerateText = interaction('Generate Text', generateText, {
  pipelineId: PIPELINE_ID,
  attributes: {
    provider: 'openai',
    version: '1.0.0',
  },
});

const result = await tracedGenerateText('Write a haiku about coding', 0.7);
```
```python
import asyncio
import os

from gentrace import interaction
from openai import AsyncOpenAI

openai = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment
PIPELINE_ID = os.environ["GENTRACE_PIPELINE_ID"]

@interaction(
    pipeline_id=PIPELINE_ID,
    name="Generate Text",
    attributes={
        "model": "gpt-4o",
        "provider": "openai",
        "version": "1.0.0",
    },
)
async def generate_text(prompt: str, temperature: float) -> str:
    response = await openai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message.content or ""

async def main() -> None:
    result = await generate_text("Write a haiku about coding", 0.7)
    print(result)

asyncio.run(main())
```
OpenTelemetry integration
The `interaction()` function provides deep integration with OpenTelemetry:
Span creation and attributes
- Span Name: Uses the provided name parameter
- Pipeline ID: Automatically adds the `gentrace.pipeline_id` attribute (sketched after this list)
- Custom Attributes: Merges any additional attributes you provide
- Function Metadata: Records function name and execution context
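For intuition only, here is a rough manual equivalent of the span that `interaction()` creates around the `queryAI` function from the basic usage example; the exact attributes and events Gentrace records may differ.

```typescript
// Rough manual equivalent of the span interaction() creates (for intuition only).
// `queryAI` is the unwrapped AI function from the basic usage example above.
import { trace, SpanStatusCode } from '@opentelemetry/api';

const tracer = trace.getTracer('my-app');
const PIPELINE_ID = process.env.GENTRACE_PIPELINE_ID!;

async function queryAIManually(prompt: string): Promise<string> {
  return tracer.startActiveSpan(
    'Query AI', // span name = the `name` argument you pass to interaction()
    {
      attributes: {
        'gentrace.pipeline_id': PIPELINE_ID, // added automatically by interaction()
        provider: 'openai',                  // example of a custom attribute you might pass
      },
    },
    async (span) => {
      try {
        return await queryAI(prompt); // the wrapped function's actual work
      } catch (err) {
        span.recordException(err as Error);
        span.setStatus({ code: SpanStatusCode.ERROR });
        throw err;
      } finally {
        span.end();
      }
    },
  );
}
```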
Baggage management
The function automatically manages OpenTelemetry baggage:
```typescript
// Automatically sets baggage for the duration of the function
baggage.setEntry('gentrace.sample', { value: 'true' })
```
This ensures that:
- All nested spans are properly sampled
- Gentrace can identify and process the traces
- Context is preserved across async boundaries (a manual equivalent is sketched after this list)
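Conceptually, this is similar to managing the baggage yourself with the OpenTelemetry API, as in the sketch below; `interaction()` simply does it for you for the duration of the wrapped call. The `withGentraceSampling` helper here is hypothetical, not part of the SDK.

```typescript
// Rough manual equivalent of the baggage handling interaction() performs.
// `withGentraceSampling` is a hypothetical helper, shown only for illustration.
import { context, propagation } from '@opentelemetry/api';

async function withGentraceSampling<T>(fn: () => Promise<T>): Promise<T> {
  // Start from any baggage already on the active context, then add gentrace.sample=true.
  const current = propagation.getBaggage(context.active()) ?? propagation.createBaggage();
  const baggage = current.setEntry('gentrace.sample', { value: 'true' });

  // Run the function with the updated baggage; nested spans and async work inherit it.
  return context.with(propagation.setBaggage(context.active(), baggage), fn);
}
```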
Event recording
Function arguments and outputs are recorded as span events (a rough manual equivalent follows this list):

- Arguments Event: `gentrace.fn.args` with serialized function arguments
- Output Event: `gentrace.fn.output` with serialized return value
- Exception Events: Automatic exception recording with stack traces
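At the OpenTelemetry level, this corresponds roughly to the sketch below. The event names come from the list above, but the attribute keys inside each event are illustrative and may differ from Gentrace's exact schema.

```typescript
// Rough sketch of recording arguments and output as span events.
// The attribute keys inside each event are illustrative, not Gentrace's exact schema.
import { trace } from '@opentelemetry/api';

const span = trace.getActiveSpan();
if (span) {
  span.addEvent('gentrace.fn.args', { args: JSON.stringify(['What is the capital of France?']) });
  // ... call the wrapped function ...
  span.addEvent('gentrace.fn.output', { output: JSON.stringify('Paris') });
}
```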
Error handling
The `interaction()` function handles errors gracefully and automatically associates all errors and exceptions with the OpenTelemetry span:
```typescript
import { interaction } from 'gentrace';

const PIPELINE_ID = '8c0c49f4-4a16-4983-afe2-5fc7f91c918c';

async function riskyAIFunction(input: string): Promise<string> {
  if (input.length === 0) {
    throw new Error('Input cannot be empty');
  }
  // AI processing logic
  return `Processed: ${input}`;
}

const tracedRiskyFunction = interaction('Risky AI Function', riskyAIFunction, {
  pipelineId: PIPELINE_ID,
});

try {
  const result = await tracedRiskyFunction('');
} catch (error) {
  // Error is automatically recorded in the OpenTelemetry span with:
  // - Exception event with stack trace
  // - Span status set to ERROR
  // - Error type and message as attributes
}
```
```python
import asyncio

from gentrace import interaction

PIPELINE_ID = "9356cfde-a9ec-4fdb-aaa4-433bb6840a86"

@interaction(pipeline_id=PIPELINE_ID, name="Risky AI Function")
async def risky_ai_function(input_text: str) -> str:
    if len(input_text) == 0:
        raise ValueError("Input cannot be empty")
    return f"Processed: {input_text}"

async def main():
    try:
        result = await risky_ai_function("")
    except ValueError as e:
        print(f"Function failed: {e}")
        # Exception is automatically recorded in the OpenTelemetry span with:
        # - Exception event with stack trace
        # - Span status set to ERROR
        # - Error type and message as attributes

asyncio.run(main())
```
OTEL span error integration
When errors occur within interactions:
- Automatic Capture: All `Error` objects (TypeScript) and exceptions (Python) are automatically captured as span events
- Stack Traces: Full stack traces are preserved in the span for debugging
- Error Attributes: Error messages, types, and metadata are recorded as span attributes
- Span Status: The span status is automatically set to `ERROR` when unhandled exceptions occur
Usage in experiments
The `interaction()` function works seamlessly with Gentrace experiments:
```typescript
import { experiment, evalOnce, interaction } from 'gentrace';
import OpenAI from 'openai';

const openai = new OpenAI();
const PIPELINE_ID = process.env.GENTRACE_PIPELINE_ID!;

// Define your AI function
async function summarizeText(text: string): Promise<string> {
  // Your summarization logic
  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      { role: 'system', content: 'Summarize the following text concisely.' },
      { role: 'user', content: text },
    ],
  });
  return response.choices[0].message.content || '';
}

// Wrap with interaction tracing
const tracedSummarizeText = interaction('Summarize Text', summarizeText, {
  pipelineId: PIPELINE_ID,
  attributes: {
    model: 'gpt-4o',
    task: 'summarization',
  },
});

// Use in experiments
experiment(PIPELINE_ID, async () => {
  await evalOnce('summarization-test', async () => {
    const longText = 'This is a very long article about artificial intelligence...';
    const summary = await tracedSummarizeText(longText);

    // Evaluate the summary quality
    return {
      summary,
      length: summary.length,
      quality_score: calculateQualityScore(summary),
    };
  });
});
```
```python
import os

from gentrace import experiment, eval, interaction
from openai import AsyncOpenAI

openai = AsyncOpenAI()
PIPELINE_ID = os.environ["GENTRACE_PIPELINE_ID"]

# Define your AI function
async def summarize_text(text: str) -> str:
    # Your summarization logic
    response = await openai.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Summarize the following text concisely."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content or ""

# Wrap with interaction tracing
@interaction(
    pipeline_id=PIPELINE_ID,
    name="Summarize Text",
    attributes={
        "model": "gpt-4o",
        "task": "summarization",
    },
)
async def traced_summarize_text(text: str) -> str:
    return await summarize_text(text)

# Use in experiments
@experiment(pipeline_id=PIPELINE_ID)
async def summarization_experiment() -> None:
    @eval(name="summarization-test")
    async def summarization_test() -> dict:
        long_text = "This is a very long article about artificial intelligence..."
        summary = await traced_summarize_text(long_text)

        # Evaluate the summary quality
        return {
            "summary": summary,
            "length": len(summary),
            "quality_score": calculate_quality_score(summary),
        }

    await summarization_test()
```
Requirements
- OpenTelemetry Setup: The `interaction()` function requires OpenTelemetry to be configured for tracing. See the OpenTelemetry Setup Guide for configuration details.
- Valid Pipeline ID: Must provide a valid UUID for an existing Gentrace pipeline
- API Key: A Gentrace API key must be configured via `init()`
- Function Compatibility: Works with both synchronous and asynchronous functions (see the sketch after this list)
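For example, wrapping a synchronous helper uses the same TypeScript API documented above; the helper itself is hypothetical and shown only to illustrate that no `async` is required.

```typescript
// Sketch: interaction() can also wrap a synchronous function (hypothetical helper).
import { interaction } from 'gentrace';

const PIPELINE_ID = process.env.GENTRACE_PIPELINE_ID!;

function buildPrompt(topic: string): string {
  return `Write a short, friendly explanation of ${topic}.`;
}

const tracedBuildPrompt = interaction('Build Prompt', buildPrompt, {
  pipelineId: PIPELINE_ID,
});

const prompt = tracedBuildPrompt('vector databases');
```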
Related functions
- `init()` - Initialize the Gentrace SDK
- `traced()` - Lower-level function tracing without pipeline association
- `experiment()` - Create testing contexts for grouping related evaluations
- `eval()` / `evalOnce()` - Run individual test cases within experiments