PipelineRun class - OpenAI handler
The PipelineRun class instance exposes an OpenAI handler that simplifies capturing generative output in Gentrace.
For a guided walkthrough of the Gentrace OpenAI integration, visit our docs here.
Usage
Once the PipelineRun instance is created, you can create a handle to communicate with OpenAI in one of two ways.
Simple
```typescript
const openai = new OpenAI({
  apiKey: process.env.OPENAI_KEY,
});

const chatCompletionResponse = await openai.chat.completions.create({
  messages: [{ role: "user", content: "Hello! What's the capital of Maine?" }],
  model: "gpt-3.5-turbo",
  stream: true,
});
```
```python
import os

import gentrace

gentrace.init(
    api_key=os.getenv("GENTRACE_API_KEY"),
)

openai = gentrace.OpenAI(api_key=os.getenv("OPENAI_KEY"))

result = openai.chat.completions.create(
    pipeline_slug="example-pipeline",
    messages=[
        {
            "role": "user",
            "content": "Hello! What's the capital of Maine?",
        }
    ],
    model="gpt-3.5-turbo",
)
```
Advanced
The advanced method allows you to attach multiple steps to a single PipelineRun instance.
```typescript
const runner = pipeline.start();

const openai = runner.openai;

const chatCompletionResponse = await openai.chat.completions.create({
  messages: [{ role: "user", content: "Hello! What's the capital of Maine?" }],
  model: "gpt-3.5-turbo",
  stream: true,
});

await runner.submit();
```
```python
import os

import gentrace

gentrace.init(
    api_key=os.getenv("GENTRACE_API_KEY"),
)

pipeline = gentrace.Pipeline(
    "example-pipeline",
    openai_config={
        "api_key": os.getenv("OPENAI_KEY"),
    },
)

runner = pipeline.start()

openai = runner.get_openai()

result = openai.chat.completions.create(
    messages=[{"role": "user", "content": "Hello! What's the capital of Maine?"}],
    model="gpt-3.5-turbo",
)
```
Chat completion
The openai.chat.completions.create() method wraps the equivalent OpenAI Node.js chat completion API.
Arguments
All of the original method parameters are supported. The key-value pairs below augment the defaults.
pipelineSlug?: string
If you're using the Simple SDK, you can specify the pipeline slug here.
messages: { role: string, content: string, contentTemplate?: string, contentInputs?: object }[]
Unlike the original parameter, the messages array optionally allows templated values in each element via the contentTemplate and contentInputs keys.
```typescript
const runner = pipeline.start();

const openai = runner.openai;

const chatCompletionResponse = await openai.chat.completions.create({
  messages: [
    {
      role: "user",
      contentTemplate: "Hello {{ name }}!",
      contentInputs: { name: "Vivek" },
    },
  ],
  model: "gpt-3.5-turbo",
  stream: true,
});
```
gentrace?: object
This object contains Gentrace context. Learn about context here.
Return value
Resolves to the original OpenAI response. If you're using the Simple SDK, the response has an additional pipelineRunId.
pipelineRunId?: string (UUID)
Only available if you're using the Simple SDK.
We have updated our Python SDK to support the latest 1.x.x version of the OpenAI Python SDK. You can learn more about the new interface here.
We have deprecated support for the 0.x.x OpenAI Python SDK. You can view the old documentation here.
The openai.chat.completions.create() method wraps the equivalent OpenAI Python chat completion API.
Arguments
All of the original method parameters are supported. The key-value pairs below augment the defaults.
pipeline_slug?: string
If you're using the Simple SDK, you can specify the pipeline slug here.
messages
Unlike the original parameter, the messages array optionally allows templated values in each element via the content_template and content_inputs keys.
```python
import os

import gentrace

gentrace.init(
    api_key=os.getenv("GENTRACE_API_KEY"),
)

pipeline = gentrace.Pipeline(
    "example-pipeline",
    openai_config={
        "api_key": os.getenv("OPENAI_KEY"),
    },
)

runner = pipeline.start()

openai = runner.get_openai()

result = openai.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content_template": "Hello {{ name }}!",
            "content_inputs": {"name": "Vivek"},
        }
    ],
    model="gpt-3.5-turbo",
)
```
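To make the substitution behavior concrete, here is a minimal, hypothetical sketch of Mustache-style interpolation like the content_template / content_inputs rendering described above. This is only an illustration of the templating semantics, not the SDK's actual implementation.

```python
import re


def render_template(template: str, inputs: dict) -> str:
    """Illustrative Mustache-style interpolation: replaces {{ key }} with inputs[key]."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda match: str(inputs[match.group(1)]),
        template,
    )


# Rendering a templated message like the ones described above yields a plain
# content string that can be sent to OpenAI.
message = render_template("Hello {{ name }}!", {"name": "Vivek"})
print(message)  # Hello Vivek!
```

Gentrace records both the template and the inputs, which is what makes the templated form more useful for analysis than a pre-rendered string.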
gentrace?: object
This object contains Gentrace context. Learn about context here.
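As a sketch of what that context can look like: the key name below ("environment") is hypothetical, while the nested { "type": ..., "value": ... } shape follows the metadata examples elsewhere on this page.

```python
# Hypothetical Gentrace context dict; the nested "type"/"value" shape mirrors
# the metadata examples shown in the structured outputs section of this page.
gentrace_context = {
    "metadata": {
        "environment": {"type": "string", "value": "staging"},
    },
}

# It would be passed through the `gentrace` keyword argument, e.g.:
# openai.chat.completions.create(..., gentrace=gentrace_context)
```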
Return value
Resolves to the original OpenAI response. If you're using the Simple SDK, the response has an additional pipelineRunId.
pipelineRunId?: string (UUID)
Only available if you're using the Simple SDK.
Embedding
The openai.embeddings.create() method wraps the equivalent OpenAI Node.js embedding API.
Arguments
All of the original method parameters are supported. The key-value pairs below augment the defaults.
pipelineSlug?: string
If you're using the Simple SDK, you can specify the pipeline slug here.
```typescript
const runner = pipeline.start();

const openai = runner.openai;

const embeddingResponse = await openai.embeddings.create({
  model: "text-embedding-ada-002",
  input: "The capital of Maine is Augusta",
});

await runner.submit();
```
gentrace?: object
This object contains Gentrace context. Learn about context here.
Return value
Resolves to the original OpenAI response. If you're using the Simple SDK, the response has an additional pipelineRunId.
pipelineRunId?: string (UUID)
Only available if you're using the Simple SDK.
The openai.embeddings.create() method wraps the equivalent OpenAI Python embedding API.
Arguments
All of the original method parameters are supported. The key-value pairs below augment the defaults.
pipeline_slug?: string
If you're using the Simple SDK, you can specify the pipeline slug here.
```python
import os

import gentrace

gentrace.init(
    api_key=os.getenv("GENTRACE_API_KEY"),
)

pipeline = gentrace.Pipeline(
    "example-pipeline",
    openai_config={
        "api_key": os.getenv("OPENAI_KEY"),
    },
)

runner = pipeline.start()

openai = runner.get_openai()

response = openai.embeddings.create(
    input="The capital of Maine is Augusta",
    model="text-embedding-3-small",
)
```
gentrace?: object
This object contains Gentrace context. Learn about context here.
Return value
Resolves to the original OpenAI response. If you're using the Simple SDK, the response has an additional pipelineRunId.
pipelineRunId?: string (UUID)
Only available if you're using the Simple SDK.
Structured Outputs
The openai.beta.chat.completions.parse() method wraps the equivalent OpenAI structured output API for chat completions.
Structured outputs are currently in beta with OpenAI. This feature may be subject to changes or updates as OpenAI continues to develop and refine it.
This allows you to define a specific response structure, making it easier to use the generated content. Gentrace's OpenAI integration fully supports this feature. For more details, see the OpenAI documentation on structured outputs.
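To illustrate what a parsed result looks like, here is a local sketch that validates a hand-written JSON payload against a Pydantic model with model_validate_json (Pydantic v2). The payload values are invented for illustration; the wrapped parse() method performs conceptually similar validation against your response_format model.

```python
from pydantic import BaseModel


class Step(BaseModel):
    explanation: str
    output: str


class MathReasoning(BaseModel):
    steps: list[Step]
    final_answer: str


# A hand-written JSON payload standing in for a model response (values invented).
payload = (
    '{"steps": [{"explanation": "Subtract 7 from both sides", "output": "8x = -30"}],'
    ' "final_answer": "x = -15/4"}'
)

# Pydantic v2: validate and deserialize the JSON into typed objects.
reasoning = MathReasoning.model_validate_json(payload)
print(reasoning.final_answer)  # x = -15/4
```

Because the result is a typed object rather than raw JSON, fields like reasoning.steps[0].explanation can be accessed directly.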
Arguments
All of the original method parameters are supported. The key-value pairs below augment the defaults.
pipelineSlug?: string
pipeline_slug?: string
If you're using the Simple SDK, you can specify the pipeline slug here.
gentrace?: object
This object contains Gentrace context. Learn about context here.
Return value
Resolves to the parsed response according to the specified response_format. If you're using the Simple SDK, the response has an additional pipelineRunId.
pipelineRunId?: string (UUID)
Example
```typescript
const Step = z.object({
  explanation: z.string(),
  output: z.string(),
});

const MathReasoning = z.object({
  steps: z.array(Step),
  final_answer: z.string(),
});

// Omit Gentrace pipeline initialization...

const runner = pipeline.start();

const completion = await runner.openai.beta.chat.completions.parse({
  model: "gpt-4o-2024-08-06",
  messages: [
    {
      role: "system",
      content:
        "You are a helpful math tutor. Guide the user through the solution step by step.",
    },
    { role: "user", content: "how can I solve 8x + 7 = -23" },
  ],
  response_format: zodResponseFormat(MathReasoning, "math_reasoning"),
  gentrace: {
    metadata: {
      problemType: {
        type: "string",
        value: "linear_equation",
      },
    },
  },
});
```
```python
from pydantic import BaseModel


class Step(BaseModel):
    explanation: str
    output: str


class MathReasoning(BaseModel):
    steps: list[Step]
    final_answer: str


result = await openai.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",
    messages=[
        {
            "role": "system",
            "content": "You are a helpful math tutor. "
            "Guide the user through the solution step by step.",
        },
        {"role": "user", "content": "how can I solve 8x + 7 = -23"},
    ],
    response_format=MathReasoning,
    pipeline_slug="math-reasoning-pipeline",
    gentrace={
        "metadata": {
            "problem_type": {"type": "string", "value": "algebra"},
        },
    },
)
```