Version: 4.7.8

Metadata

You can attach run-level metadata with our SDKs to associate related information with your generative output.

Usage

You can add run-level context using both our simple and advanced SDKs.

Supported keys

Some keys only work on certain types of AI generations. For example, user IDs can only be associated with an entire generation, whereas metadata can be assigned directly to particular steps within a generation. Visit the detailed page for each context type to learn more about its semantics.

Simple SDK

Every simple invocation allows you to specify a gentrace parameter, where you can supply specific context key/value pairs. Here's an example from our OpenAI simple plugin.

typescript
import { init } from "@gentrace/core";
import { OpenAI } from "@gentrace/openai";

init({
  apiKey: process.env.GENTRACE_API_KEY,
});

const openai = new OpenAI({
  apiKey: process.env.OPENAI_KEY,
});

await openai.chat.completions.create({
  messages: [
    {
      role: "user",
      contentTemplate: "Hello! My name is {{ name }}. Write a brief essay about Maine.",
      contentInputs: { name: "Vivek" },
    },
  ],
  model: "gpt-3.5-turbo",
  pipelineSlug: "simple-pipeline",
  stream: true,

  // Context is specified with the `gentrace` key. In this case, the user ID will be
  // associated with this generation.
  gentrace: {
    userId: "TWFuIGlzIGRpc3Rpbmd1aXNoZW",
  },
});

Advanced SDK

You can specify context when you initialize a PipelineRun instance with the pipeline.start() function.

typescript
import { Pipeline } from "@gentrace/core";
import { initPlugin } from "@gentrace/openai";

const plugin = await initPlugin({
  apiKey: process.env.OPENAI_KEY,
});

const pipeline = new Pipeline({
  slug: "advanced-pipeline",
  plugins: {
    openai: plugin,
  },
});

// Context is specified directly as an object. This context is associated with all
// steps in the generative pipeline.
const runner = pipeline.start({
  userId: "TWFuIGlzIGRpc3Rpbmd1aXNoZW",
});

const openai = runner.openai;

const chatCompletionResponse = await openai.chat.completions.create({
  messages: [{ role: "user", content: "Hello!" }],
  model: "gpt-3.5-turbo",
  stream: true,
});
 

You can also associate context with any individual step via the gentrace parameter.

typescript
import { Pipeline } from "@gentrace/core";
import { initPlugin } from "@gentrace/openai";

const plugin = await initPlugin({
  apiKey: process.env.OPENAI_KEY,
});

const pipeline = new Pipeline({
  slug: "advanced-pipeline",
  plugins: {
    openai: plugin,
  },
});

const runner = pipeline.start();

const openai = runner.openai;

const chatCompletionResponse = await openai.chat.completions.create({
  messages: [{ role: "user", content: "Hello!" }],
  model: "gpt-3.5-turbo",
  stream: true,
  // Context is applied directly to this AI step
  gentrace: {
    userId: "TWFuIGlzIGRpc3Rpbmd1aXNoZW",
  },
});

Certain step methods like measure() and checkpoint() require that context is explicitly passed in a context key.

typescript
import { Pipeline } from "@gentrace/core";
import { initPlugin } from "@gentrace/openai";

const plugin = await initPlugin({
  apiKey: process.env.OPENAI_KEY,
});

const pipeline = new Pipeline({
  slug: "advanced-pipeline",
  plugins: {
    openai: plugin,
  },
});

const runner = pipeline.start();

const outputs = await runner.measure(
  (pageDescription) => {
    // ... Omitted logic for creating HTML from provided inputs
    return {
      koalaHtml: htmlOutput,
    };
  },
  ["Create HTML for a site about koalas"],
  {
    context: {
      render: {
        type: "html",
        key: "koalaHtml",
      },
    },
  },
);

await runner.submit();

OpenAI automatic capture (cost, speed)

Some metadata attributes are automatically captured by our OpenAI SDK. These include:

  • cost in US$
  • speed in milliseconds
typescript
const runner = pipeline.start();

const openai = runner.openai;

const chatCompletionResponse = await openai.chat.completions.create({
  messages: [{ role: "user", content: "Hello!" }],
  model: "gpt-3.5-turbo",
  stream: true,
});

// ✅ By invoking the create() function, the cost and speed of that generation
// are automatically captured.

Streaming costs

OpenAI does not return usage information for streaming completions. When this happens, we estimate the cost by tokenizing the input/output information with tiktoken.
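As a rough sketch of how such an estimate works: count the prompt and completion tokens with a tokenizer like tiktoken, then multiply by per-token rates. The rates below are hypothetical placeholders, not OpenAI's actual pricing, and the helper name is illustrative rather than part of the SDK.

```typescript
// Hypothetical per-million-token rates, in cents. Real rates vary by model.
const PROMPT_CENTS_PER_MILLION = 50;
const COMPLETION_CENTS_PER_MILLION = 150;

// Estimate the cost (in cents) of a streamed completion from token counts,
// e.g. counts produced by tokenizing the input/output with tiktoken.
function estimateStreamedCostCents(
  promptTokens: number,
  completionTokens: number
): number {
  return (
    (promptTokens * PROMPT_CENTS_PER_MILLION +
      completionTokens * COMPLETION_CENTS_PER_MILLION) /
    1_000_000
  );
}

// 2,000 prompt tokens and 1,000 completion tokens at the rates above:
console.log(estimateStreamedCostCents(2000, 1000)); // 0.25 cents
```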

Streaming tool call costs are estimates

OpenAI does not disclose how tool call definitions are tokenized internally. In this case, we estimate costs with the following heuristic. Note: Your actual OpenAI costs may deviate slightly from this heuristic.

typescript
// Constants for estimating tool call costs
const ONE_TOOL_PROMPT_TOKENS = 19;
const MULTI_TOOL_INITIAL_JUMP = 35;
const MULTI_TOOL_PROGRESSIVE_JUMP = 12;

function estimateToolCallTokens(
  toolCallDefinitionTokenCount: number,
  numToolCalls: number
): number {
  if (numToolCalls === 1) {
    return toolCallDefinitionTokenCount + ONE_TOOL_PROMPT_TOKENS;
  } else {
    const offset =
      ONE_TOOL_PROMPT_TOKENS +
      (numToolCalls - 2) * MULTI_TOOL_PROGRESSIVE_JUMP +
      MULTI_TOOL_INITIAL_JUMP;
    return toolCallDefinitionTokenCount + offset;
  }
}
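To make the heuristic concrete, here is a standalone worked example (a self-contained copy of the heuristic, included for illustration):

```typescript
// Standalone copy of the tool-call token heuristic, for illustration.
const ONE_TOOL_PROMPT_TOKENS = 19;
const MULTI_TOOL_INITIAL_JUMP = 35;
const MULTI_TOOL_PROGRESSIVE_JUMP = 12;

function estimateToolCallTokens(
  toolCallDefinitionTokenCount: number,
  numToolCalls: number
): number {
  if (numToolCalls === 1) {
    return toolCallDefinitionTokenCount + ONE_TOOL_PROMPT_TOKENS;
  }
  const offset =
    ONE_TOOL_PROMPT_TOKENS +
    (numToolCalls - 2) * MULTI_TOOL_PROGRESSIVE_JUMP +
    MULTI_TOOL_INITIAL_JUMP;
  return toolCallDefinitionTokenCount + offset;
}

// One tool call whose definitions tokenize to 120 tokens:
// 120 + 19 = 139 estimated tokens.
console.log(estimateToolCallTokens(120, 1)); // 139

// Three tool calls: 120 + 19 + (3 - 2) * 12 + 35 = 186 estimated tokens.
console.log(estimateToolCallTokens(120, 3)); // 186
```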

Manual cost capture

To capture costs from fine-tuned OpenAI models and other LLM providers, you can manually specify the cost of your model's generation by providing the following structure in your submitted output:

Data structure

json
{
  "cost": {
    "type": "cents",
    "value": 100
  }
}

Simple

To capture costs with the simple test result submission flow, specify the cost in the output object.

typescript
const tcs = await getTestCases(pipelineSlug);

const outputs = await Promise.all(
  tcs.map((testCase) => {
    // ... Omit logic for creating generative output
    return {
      generativeOutput: "... ",
      cost: {
        type: "cents",
        value: 100,
      },
    };
  })
);

await submitTestResult(pipelineSlug, tcs, outputs);

Advanced

To capture costs with the advanced SDK runner, specify the cost using a custom measure() function.

typescript
const outputs = await runner.measure(
  (pageDescription) => {
    // ... Omit logic for generating HTML from page description
    return {
      html: htmlOutput,
      cost: {
        type: "cents",
        value: 100,
      },
    };
  },
  ["Create HTML for a site about koalas"],
);
 
await runner.submit();