Version: 3.0.3


Processors transform outputs before the information is processed by an evaluator. They are plain JavaScript functions that return a single object, which is passed to an evaluator under the `processed` key.

Problem: Messy output data

Let's say we have a basic AI feature that composes an email with OpenAI. We collect the step information automatically using our pipeline runner, created by `pipeline.start()`.

```typescript
import { Pipeline, PipelineRun } from "@gentrace/core";
import { initPlugin } from "@gentrace/openai";

const PIPELINE_SLUG = "compose";

const plugin = await initPlugin({
  apiKey: process.env.OPENAI_KEY,
});

const pipeline = new Pipeline({
  slug: PIPELINE_SLUG,
  plugins: {
    openai: plugin,
  },
});

export const compose = async (
  sender: string,
  receiver: string,
  query: string
): Promise<[any, PipelineRun]> => {
  // This runner automatically captures and meters invocations to OpenAI
  const runner = pipeline.start();

  // Near type-match of the official OpenAI Node.js package handle
  const openai = await runner.openai;

  const emailDraftResponse = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    temperature: 0.8,
    messages: [
      {
        role: "system",
        content: `Write an email on behalf of ${sender} to ${receiver}: ${query}`,
      },
    ],
  });

  const emailDraft = emailDraftResponse.choices[0]!.message!.content;

  await runner.submit();

  return [emailDraft, runner];
};
```

When Gentrace receives this information, the data will have this clunky structure.

```json
{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "created": 1691678980,
  "model": "gpt-3.5-turbo-0613",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "<Email Content>"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 531,
    "completion_tokens": 256,
    "total_tokens": 787
  }
}
```

The output contains mostly unnecessary information. We only care about the email draft content nested at `choices[0].message.content`. Ideally, we would pre-compute this information and store it in a way that's easy for our evaluator to access.

Creating a processor

To create a processor, you should navigate to the new evaluator creation flow for your desired pipeline.

Creating processor

Press the add button under the processor section to open the processor creation modal. Then, define your transformation as a JavaScript function. The function will be passed:

  • an `outputs` object, which contains the final raw output from the pipeline
  • a `steps` array, which contains the full list of intermediate steps taken by the pipeline

For this example, we created a simple transformation that accesses the message content and stores it under the `emailDraft` key on the returned object.
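As a concrete illustration, a processor along these lines would pull the draft out of the raw chat completion response. This is a sketch: the function name is arbitrary, and the exact signature shown in the processor modal may differ slightly.

```javascript
// Sketch of a processor for this example. Assumes the function receives the
// raw `outputs` object (the chat completion response) and the `steps` array,
// and returns the object that evaluators will see under `processed`.
function processOutputs(outputs, steps) {
  return {
    // Extract the draft from the nested chat completion structure
    emailDraft: outputs.choices[0].message.content,
  };
}
```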

Test and create processor

Simple processor function

Once you're done writing the function, test that it works correctly on the existing pipeline data (using the data dropdown) and create the processor.

Using processed data in evaluators

All processed data returned by the function is available to evaluators under the `processed` key.
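For instance, a heuristic evaluator could score the processed draft directly. The function name, signature, and scoring convention below are illustrative assumptions, not the exact Gentrace evaluator API; the point is that the evaluator reads `processed.emailDraft` instead of digging through the raw completion.

```javascript
// Hypothetical heuristic evaluator sketch (name and signature are
// illustrative). It reads the processor's output from `processed` and
// returns 1 if a non-empty draft exists, 0 otherwise.
function evaluateDraft({ processed }) {
  const draft = processed.emailDraft;
  return typeof draft === "string" && draft.trim().length > 0 ? 1 : 0;
}
```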

Processed values in AI evaluator

Processed values in heuristic evaluator


You can also use processors to compute properties on the intermediate steps in a complex AI pipeline. Read this guide to learn more about step transformations.