Get evaluators
Get evaluators for a given pipeline
Query Parameters
- The ID of the Pipeline to retrieve evaluators for. Use "null" to retrieve the organization's evaluator templates.
- Or, the slug of the Pipeline to retrieve evaluators for.
Responses
- 200: Evaluators retrieved successfully
- 404
Schema

data object[] required. An array of evaluator objects; field names below follow the example response.

- id: The ID of the evaluator.
- createdAt: Timestamp in seconds since the UNIX epoch. Can be transformed into a Date object.
- updatedAt: Timestamp in seconds since the UNIX epoch. Can be transformed into a Date object.
- archivedAt: Timestamp in seconds since the UNIX epoch. Can be transformed into a Date object.
- name: The name of the evaluator.
- options: For evaluators with options scoring, the available options to choose from.
- aiModel: For AI evaluators, the AI model to use.
- pipelineId: The ID of the pipeline that the evaluator belongs to.
- processorId: The ID of the processor associated with the evaluator.
- organizationId: The ID of the organization that the evaluator belongs to.
- templateDescription: For evaluator templates, the description of the template.
- heuristicFn: For heuristic evaluators, the heuristic function to use.
- heuristicFnLanguage: For heuristic evaluators, the coding language of the heuristic function (such as "JAVASCRIPT", "PYTHON").
- aiPromptFormat: For AI evaluators, the prompt template that should be sent to the AI model.
- aiImageUrls: For AI image evaluators, the paths to the image URLs.
- humanPrompt: For human evaluators, the instructions for the human to follow.
- classifierValuePath: For classification evaluators, the path to the predicted classification.
- classifierExpectedValuePath: For classification evaluators, the path to the expected classification.
- multiClassOptions: For classification evaluators using multi-class evaluation, the available options to match with.
- who: The type of evaluator (such as "AI", "HEURISTIC", "HUMAN", "CLASSIFIER").
- valueType: The scoring method used by the evaluator (such as "ENUM", "PERCENTAGE").
- runCondition: The run condition of the evaluator (such as "TEST_PROD", "TEST", "PROD", "COMPARISON_2").
- (deprecated field): Use "samplingProbability" instead.
- samplingProbability: When optionally running on production data, the associated sampling probability of this evaluator (from 0 to 100).
{
  "data": [
    {
      "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
      "createdAt": 0,
      "updatedAt": 0,
      "archivedAt": 0,
      "icon": "string",
      "name": "string",
      "options": [ null ],
      "aiModel": "string",
      "pipelineId": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
      "processorId": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
      "organizationId": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
      "templateDescription": "string",
      "heuristicFn": "string",
      "heuristicFnLanguage": "string",
      "aiPromptFormat": "string",
      "aiImageUrls": [ "string" ],
      "humanPrompt": "string",
      "classifierValuePath": "string",
      "classifierExpectedValuePath": "string",
      "multiClassOptions": [ "string" ],
      "who": "string",
      "valueType": "string",
      "runCondition": "string",
      "samplingProbability": 0
    }
  ]
}
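The timestamp fields are seconds since the UNIX epoch, as the schema notes. A short sketch of post-processing a response shaped like the example above; the payload here is the documented sample (trimmed to the fields used), not live data:

```python
import datetime
import json

# Sample response from the docs, trimmed to the fields used below.
sample = json.loads("""
{ "data": [ { "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
              "createdAt": 0, "updatedAt": 0, "archivedAt": 0,
              "name": "string", "who": "string",
              "runCondition": "string", "samplingProbability": 0 } ] }
""")


def to_datetime(epoch_seconds: float) -> datetime.datetime:
    """Convert a schema timestamp (seconds since the UNIX epoch) to a datetime."""
    return datetime.datetime.fromtimestamp(epoch_seconds, tz=datetime.timezone.utc)


evaluators = sample["data"]
for ev in evaluators:
    created = to_datetime(ev["createdAt"])
    # Per the schema, samplingProbability ranges from 0 to 100 when the
    # evaluator also runs on production data.
    assert 0 <= ev["samplingProbability"] <= 100
    print(ev["id"], ev["who"], created.isoformat())
```

The same conversion applies to `updatedAt` and `archivedAt`.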
404: Pipeline not found