Prerequisites

  • Node.js 20+ (for TypeScript) or Python 3.8+ (for Python)
  • An OpenAI API key (or another supported LLM provider)
  • A Gentrace account (sign up free)

Getting Started

1. Get your API key

Generate a Gentrace API key:
  1. Log in to Gentrace
  2. Navigate to API Keys
  3. Click “Create key” and copy the key
Keep your API key secure and never commit it to version control. Use environment variables instead.
2. Install the SDK

npm install gentrace
3. Initialize Gentrace

The Gentrace SDK now automatically configures OpenTelemetry for you. Simply call init() with your API key:
quickstart.ts
import { init } from 'gentrace';

// Initialize Gentrace with automatic OpenTelemetry setup
init({
  apiKey: process.env.GENTRACE_API_KEY,
});
The init() function automatically:
  • Configures the OpenTelemetry SDK with the proper exporters
  • Sets up authentication with Gentrace
  • Registers shutdown handlers to ensure traces are flushed correctly
  • Configures sampling and span processing
4. Create your first traced interaction

Now let’s create a simple interaction for Gentrace to trace.
The interaction() wrapper is required for spans to appear in your pipeline: it associates your spans with a specific pipeline via its pipelineId. Traces from code that isn’t wrapped in an interaction() won’t appear in the Gentrace dashboard.
example.ts
import { init, interaction } from 'gentrace';

init({
  apiKey: process.env.GENTRACE_API_KEY,
});

const boringInteraction = interaction(
  'boring-interaction',
  async () => {
    return 'TODO: replace me with a call to an LLM or agent';
  },
  { pipelineId: 'your-pipeline-id' } // replace with your pipeline's ID from the Gentrace dashboard
);

boringInteraction();
5. View your trace

After running the code above:
  1. Go to Gentrace
  2. Navigate to the Pipelines section
  3. You’ll see your trace with full details.
If you’re not seeing traces in your Gentrace pipeline, visit https://gentrace.ai/s/otel-metrics to monitor span ingestion. This shows distributions of accepted, rejected, and buffered spans to help identify configuration issues.

What’s next?

Now it’s time to replace your boring interaction with a real agent or LLM call. Head to the instrumentations page to pick your LLM library or agent framework.