Config
Bases: ReferenceCompletionConfigSchema, ObservedConsistencyConfigSchema, SelfReflectionConfigSchema, SemanticEvalsConfigSchema, ModelProviderSchema
Configuration for TLM inference.
This class combines multiple configuration schemas to provide comprehensive control over TLM's inference behavior, including reference completions, consistency checking, self-reflection, semantic evaluation, and model provider settings.
Attributes:

| Name | Type | Description |
|---|---|---|
| `quality_preset` | `QualityPreset` | Quality preset controlling the trade-off between speed and accuracy. |
| `reasoning_effort` | `ReasoningEffort \| None` | Optional reasoning effort level for models that support it. |
| `similarity_measure` | `SimilarityMeasure \| None` | Optional similarity measure used to compare consistency across responses. |
| `constrain_outputs` | `list[str] \| None` | Optional list of allowed output values to constrain responses, e.g. for multiple-choice questions. |
Source code in tlm/config/schema.py
ReferenceCompletionConfigSchema

Bases: BaseModel

Configuration for reference completion generation.

Attributes:

| Name | Type | Description |
|---|---|---|
| `num_reference_completions` | `int \| None` | The attempted number of reference completions to generate. |
Source code in tlm/config/schema.py
ObservedConsistencyConfigSchema

Bases: BaseModel

Configuration for generating additional completions against which to score the consistency of reference completions.

Attributes:

| Name | Type | Description |
|---|---|---|
| `num_consistency_completions` | `int \| None` | The attempted number of observed-consistency completions to generate. |
| `observed_consistency_temperature` | `float \| None` | The temperature to use for generating comparison completions. |
Source code in tlm/config/schema.py
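One plausible way these settings come together: sample `num_consistency_completions` extra completions (at `observed_consistency_temperature`) and measure how often they agree with the reference. The sketch below is a hypothetical illustration using exact string match as the agreement criterion; TLM's actual scoring applies the configurable `SimilarityMeasure` and is not shown here.

```python
import random

def observed_consistency_score(
    reference: str,
    sampler,  # callable producing one comparison completion per call
    num_consistency_completions: int = 4,
) -> float:
    """Fraction of independently sampled completions that agree with the
    reference answer. Illustrative only: exact string match stands in
    for TLM's configurable SimilarityMeasure."""
    completions = [sampler() for _ in range(num_consistency_completions)]
    matches = sum(
        c.strip().lower() == reference.strip().lower() for c in completions
    )
    return matches / num_consistency_completions

# Toy sampler standing in for a temperature-sampled model.
rng = random.Random(0)
sampler = lambda: rng.choice(["Paris", "Paris", "Paris", "Lyon"])
score = observed_consistency_score("Paris", sampler)
print(score)
```

A higher temperature makes the comparison completions more diverse, so agreement with the reference becomes a stronger signal when it does occur.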
SelfReflectionConfigSchema

Bases: BaseModel

Configuration for prompting an LLM-as-judge to score the trustworthiness of reference completions using self-reflection prompts.

Attributes:

| Name | Type | Description |
|---|---|---|
| `self_reflection_temperature` | `float \| None` | The temperature to use for self-reflection completions. |
| `num_self_reflection_completions` | `int \| None` | The attempted number of self-reflection completions to generate. |
Source code in tlm/config/schema.py
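To make the two knobs concrete, the sketch below shows one hypothetical aggregation: run the LLM-as-judge `num_self_reflection_completions` times and average the scores. The judge prompts, score extraction, and TLM's real aggregation are internal and not described by this schema, so everything here beyond the field names is an assumption.

```python
from statistics import mean

def self_reflection_score(
    judge,  # callable: one LLM-as-judge call, returns a score in [0, 1]
    num_self_reflection_completions: int = 3,
) -> float:
    """Average several self-reflection judge scores into one signal.
    Hypothetical sketch: TLM's real prompts and aggregation are not
    part of this schema."""
    scores = [judge() for _ in range(num_self_reflection_completions)]
    return mean(scores)

# Stubbed judge returning canned scores in place of sampled completions.
canned = iter([0.9, 0.8, 1.0])
score = self_reflection_score(lambda: next(canned))
print(round(score, 3))
```

Sampling the judge more than once (at a nonzero `self_reflection_temperature`) smooths out run-to-run variance in the judge's verdict.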
SemanticEvalsConfigSchema

Bases: BaseModel

Configuration for semantic evaluation of reference completions.

Attributes:

| Name | Type | Description |
|---|---|---|
| `use_prompt_evaluation` | `bool \| None` | Whether to incorporate prompt-evaluation scores into the final trustworthiness score. |
| `prompt_evaluation_temperature` | `float \| None` | The temperature to use for prompt-evaluation completions. |
| `semantic_evaluation_temperature` | `float \| None` | The temperature to use when generating completions to score the Evals. |
Source code in tlm/config/schema.py
ModelProviderSchema

Bases: BaseModel

Configuration for the model provider, aligned with the LiteLLM API.

Attributes:

| Name | Type | Description |
|---|---|---|
| `provider` | `str \| None` | The name of the model provider. |
| `api_base` | `str \| None` | The base URL of the model provider's API. |
| `api_key` | `str \| None` | The API key to use for the model provider. |
| `api_version` | `str \| None` | The version of the model provider's API. |