API
ActiveSpan ¶
Represents an active tracing span.
add_event ¶
Add an event to the active span.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | The name of the event. | required |
| attributes | Any | Optional attributes for the event. Can be any serializable type or a pydantic BaseModel. | required |
Source code in python/scouter/stubs.pyi
set_attribute ¶
Set an attribute on the active span.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| key | str | The attribute key. | required |
| value | SerializedType | The attribute value. | required |
set_input ¶
Set the input for the active span.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| input | Any | The input to set. Can be any serializable primitive type (str, int, float, bool, list, dict) or a pydantic BaseModel. | required |
| max_length | int | The maximum length for a given string input. Defaults to 1000. | 1000 |
set_output ¶
Set the output for the active span.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| output | Any | The output to set. Can be any serializable primitive type (str, int, float, bool, list, dict) or a pydantic BaseModel. | required |
| max_length | int | The maximum length for a given string output. Defaults to 1000. | 1000 |
set_status ¶
Set the status of the active span.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| status | str | The status code (e.g., "OK", "ERROR"). | required |
| description | Optional[str] | Optional description for the status. | None |
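Taken together, the methods above are typically called in sequence on a span obtained from a tracer. The sketch below uses a minimal stand-in class (not the real ActiveSpan, which is implemented natively and only typed in python/scouter/stubs.pyi) purely to illustrate the call order and the data each call records; the attribute values are hypothetical.

```python
from typing import Any, Optional


class _ToySpan:
    """Illustrative stand-in for ActiveSpan; records what each call would capture."""

    def __init__(self) -> None:
        self.attributes: dict[str, Any] = {}
        self.events: list[tuple[str, Any]] = []
        self.status: Optional[tuple[str, Optional[str]]] = None
        self.input: Optional[Any] = None
        self.output: Optional[Any] = None

    def set_attribute(self, key: str, value: Any) -> None:
        self.attributes[key] = value

    def add_event(self, name: str, attributes: Any = None) -> None:
        self.events.append((name, attributes))

    def set_input(self, input: Any, max_length: int = 1000) -> None:
        # String inputs are bounded by max_length (default 1000) in this sketch.
        self.input = input[:max_length] if isinstance(input, str) else input

    def set_output(self, output: Any, max_length: int = 1000) -> None:
        self.output = output[:max_length] if isinstance(output, str) else output

    def set_status(self, status: str, description: Optional[str] = None) -> None:
        self.status = (status, description)


# A typical call sequence on an active span:
span = _ToySpan()
span.set_attribute("model", "gpt-4o")            # hypothetical attribute
span.set_input({"question": "What is drift?"})
span.add_event("retrieval_complete", attributes={"docs": 3})
span.set_output("Drift is ...")
span.set_status("OK")
```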
Agent ¶
Agent(
provider: Provider | str,
system_instruction: Optional[
str | List[str] | Message | List[Message]
] = None,
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| provider | Provider \| str | The provider to use for the agent. This can be a Provider enum or a string representing the provider. | required |
| system_instruction | Optional[str \| List[str] \| Message \| List[Message]] | The system message to use for the agent. This can be a string, a list of strings, a Message object, or a list of Message objects. If None, no system message will be used. The system message is added to all tasks that the agent executes; if a given task contains its own system message, the agent's system message is prepended to the task's system message. | None |
id property ¶
The ID of the agent. This is a random uuid7 that is generated when the agent is created.
system_instruction property ¶
The system message to use for the agent. This is a list of Message objects.
execute_prompt ¶
execute_prompt(
prompt: Prompt,
output_type: Optional[Any] = None,
model: Optional[str] = None,
) -> AgentResponse
Execute a prompt.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| prompt | Prompt | The prompt to execute. | required |
| output_type | Optional[Any] | The output type to use for the task. This can either be a Pydantic BaseModel class or a supported potato_head response type such as Score. | None |
| model | Optional[str] | The model to use for the task. If not provided, a default model for the provider is used. | None |
Returns:
| Name | Type | Description |
|---|---|---|
| AgentResponse | AgentResponse | The response from the agent after executing the task. |
execute_task ¶
execute_task(
task: Task,
output_type: Optional[Any] = None,
model: Optional[str] = None,
) -> AgentResponse
Execute a task.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| task | Task | The task to execute. | required |
| output_type | Optional[Any] | The output type to use for the task. This can either be a Pydantic BaseModel class or a supported potato_head response type such as Score. | None |
| model | Optional[str] | The model to use for the task. If not provided, a default model for the provider is used. | None |
Returns:
| Name | Type | Description |
|---|---|---|
| AgentResponse | AgentResponse | The response from the agent after executing the task. |
AgentResponse ¶
log_probs property ¶
Returns the log probabilities of the agent response if supported. This is primarily used for debugging and analysis purposes.
result property ¶
The result of the agent response. This can be a Pydantic BaseModel class or a supported potato_head response type such as Score. If neither is provided, the response json will be returned as a dictionary.
token_usage property ¶
Returns the token usage of the agent response if supported.
AlertDispatchType ¶
AlertThreshold ¶
Enum representing different alert conditions for monitoring metrics.
Attributes:
| Name | Type | Description |
|---|---|---|
| Below | AlertThreshold | Indicates that an alert should be triggered when the metric is below a threshold. |
| Above | AlertThreshold | Indicates that an alert should be triggered when the metric is above a threshold. |
| Outside | AlertThreshold | Indicates that an alert should be triggered when the metric is outside a specified range. |
from_value staticmethod ¶
Creates an AlertThreshold enum member from a string value.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| value | str | The string representation of the alert condition. | required |
Returns:
| Name | Type | Description |
|---|---|---|
| AlertThreshold | AlertThreshold | The corresponding AlertThreshold enum member. |
Attribute ¶
Represents a key-value attribute associated with a span.
AudioUrl ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| url | str | The URL of the audio. | required |
| kind | Literal['audio-url'] | The kind of the content. | 'audio-url' |
BaseModel ¶
BaseTracer ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | The name of the service for tracing. | required |
current_span ¶
Get the current active span.
Returns:
| Name | Type | Description |
|---|---|---|
| ActiveSpan | ActiveSpan | The current active span. Raises an error if no active span exists. |
start_as_current_span ¶
start_as_current_span(
name: str,
kind: Optional[SpanKind] = SpanKind.Internal,
label: Optional[str] = None,
attributes: Optional[dict[str, str]] = None,
baggage: Optional[dict[str, str]] = None,
tags: Optional[dict[str, str]] = None,
parent_context_id: Optional[str] = None,
) -> ActiveSpan
Context manager to start a new span as the current span.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | The name of the span. | required |
| kind | Optional[SpanKind] | The kind of span (e.g., "SERVER", "CLIENT"). | Internal |
| label | Optional[str] | An optional label for the span. | None |
| attributes | Optional[dict[str, str]] | Optional attributes to set on the span. | None |
| baggage | Optional[dict[str, str]] | Optional baggage items to attach to the span. | None |
| tags | Optional[dict[str, str]] | Optional tags to set on the span and trace. | None |
| parent_context_id | Optional[str] | Optional parent span context ID. | None |
Returns:
| Name | Type | Description |
|---|---|---|
| ActiveSpan | ActiveSpan | The active span, for use as a context manager. |
BatchConfig ¶
BatchConfig(
max_queue_size: int = 2048,
scheduled_delay_ms: int = 5000,
max_export_batch_size: int = 512,
)
Configuration for batch exporting of spans.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| max_queue_size | int | The maximum queue size for spans. Defaults to 2048. | 2048 |
| scheduled_delay_ms | int | The delay in milliseconds between export attempts. Defaults to 5000. | 5000 |
| max_export_batch_size | int | The maximum batch size for exporting spans. Defaults to 512. | 512 |
Bin ¶
BinaryContent ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | bytes | The binary data. | required |
| media_type | str | The media type of the binary data. | required |
| kind | str | The kind of the content. | 'binary' |
CharStats ¶
ChatResponse ¶
CommonCrons ¶
CompletionTokenDetails ¶
Details about the completion tokens used in a model response.
ConsoleDispatchConfig ¶
CustomDriftProfile ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | CustomMetricDriftConfig | The configuration for custom metric drift detection. | required |
| metrics | list[CustomMetric] | A list of CustomMetric objects representing the metrics to be monitored. | required |
Example
config = CustomMetricDriftConfig(...)
metrics = [CustomMetric("accuracy", 0.95), CustomMetric("f1_score", 0.88)]
profile = CustomDriftProfile(config, metrics, "1.0.0")
custom_metrics property ¶
Return the custom metric objects that were used to create the drift profile.
from_file staticmethod ¶
Load drift profile from file.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | Path | Path to the file. | required |
model_dump ¶
model_dump_json ¶
model_validate staticmethod ¶
Load drift profile from dictionary.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | Dict[str, Any] | DriftProfile dictionary. | required |
model_validate_json staticmethod ¶
Load drift profile from json.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| json_string | str | JSON string representation of the drift profile. | required |
save_to_json ¶
Save drift profile to json file
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | Optional[Path] | Optional path to save the drift profile. If None, a default path is used. | None |
Returns:
| Type | Description |
|---|---|
| Path | Path to the saved json file. |
update_config_args ¶
update_config_args(
space: Optional[str] = None,
name: Optional[str] = None,
version: Optional[str] = None,
alert_config: Optional[CustomMetricAlertConfig] = None,
) -> None
Inplace operation that updates config args
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| space | Optional[str] | Model space | None |
| name | Optional[str] | Model name | None |
| version | Optional[str] | Model version | None |
| alert_config | Optional[CustomMetricAlertConfig] | Custom metric alert configuration | None |
Returns:
| Type | Description |
|---|---|
| None | None |
CustomMetric ¶
CustomMetric(
name: str,
value: float,
alert_threshold: AlertThreshold,
alert_threshold_value: Optional[float] = None,
)
This class represents a custom metric that uses comparison-based alerting. It applies an alert condition to a single metric value.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | The name of the metric being monitored. This should be a descriptive identifier for the metric. | required |
| value | float | The current value of the metric. | required |
| alert_threshold | AlertThreshold | The condition used to determine when an alert should be triggered. | required |
| alert_threshold_value | Optional[float] | The threshold or boundary value used in conjunction with the alert_threshold. If supplied, this value will be added or subtracted from the provided metric value to determine if an alert should be triggered. | None |
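The interaction between alert_threshold and alert_threshold_value can be sketched in plain Python. This is an illustrative reading of the semantics described above (the boundary, when supplied, widens the baseline metric value into a tolerance band), not Scouter's server-side implementation:

```python
from typing import Optional


def should_alert(
    baseline: float,
    observed: float,
    condition: str,  # "Below", "Above", or "Outside"
    boundary: Optional[float] = None,
) -> bool:
    """Compare an observed metric against a baseline under an alert condition.

    If boundary is given, it offsets the baseline, mirroring how
    alert_threshold_value is added or subtracted from the metric value.
    """
    offset = boundary if boundary is not None else 0.0
    if condition == "Below":
        return observed < baseline - offset
    if condition == "Above":
        return observed > baseline + offset
    if condition == "Outside":
        return abs(observed - baseline) > offset
    raise ValueError(f"unknown condition: {condition}")


# A CustomMetric("accuracy", 0.95, AlertThreshold.Below, 0.05) would then
# alert only once observed accuracy drops below 0.90:
print(should_alert(0.95, 0.92, "Below", 0.05))  # False: within tolerance
print(should_alert(0.95, 0.89, "Below", 0.05))  # True: below 0.90
```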
CustomMetricAlertCondition ¶
CustomMetricAlertCondition(
alert_threshold: AlertThreshold,
alert_threshold_value: Optional[float],
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| alert_threshold | AlertThreshold | The condition that determines when an alert should be triggered. This could be comparisons like 'greater than', 'less than', 'equal to', etc. | required |
| alert_threshold_value | Optional[float] | A numerical boundary used in conjunction with the alert_threshold. This can be None for certain types of comparisons that don't require a fixed boundary. | required |
Example: alert_threshold = CustomMetricAlertCondition(AlertThreshold.Below, 2.0)
CustomMetricAlertConfig ¶
CustomMetricAlertConfig(
dispatch_config: Optional[
SlackDispatchConfig | OpsGenieDispatchConfig
] = None,
schedule: Optional[str | CommonCrons] = None,
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| dispatch_config | Optional[SlackDispatchConfig \| OpsGenieDispatchConfig] | Alert dispatch config. Defaults to console. | None |
| schedule | Optional[str \| CommonCrons] | Schedule to run monitor. Defaults to daily at midnight. | None |
alert_conditions property writable ¶
Return the alert conditions that were set during metric definition.
CustomMetricDriftConfig ¶
CustomMetricDriftConfig(
space: str = "__missing__",
name: str = "__missing__",
version: str = "0.1.0",
sample_size: int = 25,
alert_config: CustomMetricAlertConfig = CustomMetricAlertConfig(),
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| space | str | Model space | "__missing__" |
| name | str | Model name | "__missing__" |
| version | str | Model version. Defaults to 0.1.0. | "0.1.0" |
| sample_size | int | Sample size | 25 |
| alert_config | CustomMetricAlertConfig | Custom metric alert configuration | CustomMetricAlertConfig() |
load_from_json_file staticmethod ¶
Load config from json file.
Args:
path: Path to json file to load config from.
model_dump_json ¶
update_config_args ¶
update_config_args(
space: Optional[str] = None,
name: Optional[str] = None,
version: Optional[str] = None,
alert_config: Optional[CustomMetricAlertConfig] = None,
) -> None
Inplace operation that updates config args.
Args:
space: Model space
name: Model name
version: Model version
alert_config: Custom metric alert configuration
CustomMetricServerRecord ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| space | str | Model space | required |
| name | str | Model name | required |
| version | str | Model version | required |
| metric | str | Metric name | required |
| value | float | Metric value | required |
model_dump_json ¶
DataProfile ¶
Data profile of features
features property ¶
Returns dictionary of features and their data profiles.
model_dump_json ¶
model_validate_json staticmethod ¶
Load Data profile from json.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| json_string | str | JSON string representation of the data profile. | required |
save_to_json ¶
Save data profile to json file
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | Optional[Path] | Optional path to save the data profile. If None, a default path is used. | None |
Returns:
| Type | Description |
|---|---|
| Path | Path to the saved data profile. |
DataProfiler ¶
create_data_profile ¶
create_data_profile(
data: Any,
data_type: Optional[ScouterDataType] = None,
bin_size: int = 20,
compute_correlations: bool = False,
) -> DataProfile
Create a data profile from data.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | Any | Data to create a data profile from. Data can be a numpy array, a polars dataframe or a pandas dataframe. Data must not contain any missing values, NaNs or infinities; these are incompatible with computing quantiles, histograms, and correlations and must be removed or imputed. | required |
| data_type | Optional[ScouterDataType] | Optional data type. Inferred from data if not provided. | None |
| bin_size | int | Optional bin size for histograms. Defaults to 20 bins. | 20 |
| compute_correlations | bool | Whether to compute correlations or not. | False |
Returns:
| Type | Description |
|---|---|
| DataProfile | DataProfile |
Distinct ¶
Doane ¶
DocumentUrl ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| url | str | The URL of the document. | required |
| kind | Literal['document-url'] | The kind of the content. | 'document-url' |
DriftAlertRequest ¶
DriftAlertRequest(
name: str,
space: str,
version: str,
active: bool = False,
limit_datetime: Optional[datetime] = None,
limit: Optional[int] = None,
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Name | required |
| space | str | Space | required |
| version | str | Version | required |
| active | bool | Whether to get active alerts only. | False |
| limit_datetime | Optional[datetime] | Limit datetime for alerts. | None |
| limit | Optional[int] | Limit for number of alerts to return. | None |
DriftRequest ¶
DriftRequest(
name: str,
space: str,
version: str,
time_interval: TimeInterval,
max_data_points: int,
drift_type: DriftType,
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Model name | required |
| space | str | Model space | required |
| version | str | Model version | required |
| time_interval | TimeInterval | Time window for drift request | required |
| max_data_points | int | Maximum data points to return | required |
| drift_type | DriftType | Drift type for request | required |
Drifter ¶
compute_drift ¶
compute_drift(
data: Any,
drift_profile: SpcDriftProfile,
data_type: Optional[ScouterDataType] = None,
) -> SpcDriftMap
compute_drift(
data: Any,
drift_profile: Union[
SpcDriftProfile, PsiDriftProfile, LLMDriftProfile
],
data_type: Optional[ScouterDataType] = None,
) -> Union[SpcDriftMap, PsiDriftMap, LLMDriftMap]
Create a drift map from data.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | Any | Data to compute drift from. Data can be a numpy array, a polars dataframe or a pandas dataframe. | required |
| drift_profile | Union[SpcDriftProfile, PsiDriftProfile, LLMDriftProfile] | Drift profile to use to compute drift map. | required |
| data_type | Optional[ScouterDataType] | Optional data type. Inferred from data if not provided. | None |
Returns:
| Type | Description |
|---|---|
| Union[SpcDriftMap, PsiDriftMap, LLMDriftMap] | SpcDriftMap, PsiDriftMap or LLMDriftMap |
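For the PSI path, the drift values follow the standard population stability index. The pure-Python sketch below shows the textbook formula for intuition only; Scouter's actual binning and thresholds live in its native core:

```python
import math
from typing import Sequence


def psi(expected: Sequence[float], actual: Sequence[float]) -> float:
    """Population stability index between two binned distributions.

    expected/actual are per-bin proportions that each sum to 1.
    PSI = sum((a_i - e_i) * ln(a_i / e_i)); 0 means identical distributions.
    """
    if len(expected) != len(actual):
        raise ValueError("bin counts must match")
    eps = 1e-6  # guard against empty bins
    total = 0.0
    for e, a in zip(expected, actual):
        e = max(e, eps)
        a = max(a, eps)
        total += (a - e) * math.log(a / e)
    return total


baseline = [0.25, 0.25, 0.25, 0.25]  # profile-time bin proportions
current = [0.10, 0.20, 0.30, 0.40]   # serving-time bin proportions
print(round(psi(baseline, current), 4))  # larger PSI means more drift
```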
create_drift_profile ¶
create_drift_profile(
data: Any,
config: SpcDriftConfig,
data_type: Optional[ScouterDataType] = None,
) -> SpcDriftProfile
create_drift_profile(
data: Any,
config: Optional[
Union[
SpcDriftConfig,
PsiDriftConfig,
CustomMetricDriftConfig,
]
] = None,
data_type: Optional[ScouterDataType] = None,
) -> Union[
SpcDriftProfile, PsiDriftProfile, CustomDriftProfile
]
Create a drift profile from data.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | Any | Data to create a drift profile from. Data can be a numpy array, a polars dataframe, a pandas dataframe, or a list of CustomMetric if creating a custom metric profile. Data must not contain any missing values, NaNs or infinities. | required |
| config | Optional[Union[SpcDriftConfig, PsiDriftConfig, CustomMetricDriftConfig]] | Drift config that will be used for monitoring. | None |
| data_type | Optional[ScouterDataType] | Optional data type. Inferred from data if not provided. | None |
Returns:
| Type | Description |
|---|---|
| Union[SpcDriftProfile, PsiDriftProfile, CustomDriftProfile] | SpcDriftProfile, PsiDriftProfile or CustomDriftProfile |
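For the SPC path, drift detection conventionally flags values that fall outside control limits derived from the profiling data. The sketch below shows the classic mean ± k·sigma rule purely as an illustration of the idea; it is not Scouter's implementation, and SpcDriftConfig exposes its own parameters:

```python
import statistics
from typing import Sequence, Tuple


def control_limits(values: Sequence[float], k: float = 3.0) -> Tuple[float, float]:
    """Classic SPC limits: mean +/- k standard deviations of the baseline data."""
    mean = statistics.fmean(values)
    std = statistics.pstdev(values)
    return mean - k * std, mean + k * std


baseline = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7]
lcl, ucl = control_limits(baseline)
for observed in (10.1, 12.5):
    drifted = not (lcl <= observed <= ucl)
    print(f"{observed}: {'drift' if drifted else 'ok'}")
```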
create_llm_drift_profile ¶
create_llm_drift_profile(
config: LLMDriftConfig,
metrics: List[LLMDriftMetric],
workflow: Optional[Workflow] = None,
) -> LLMDriftProfile
Initialize a LLMDriftProfile for LLM evaluation and drift detection.
LLM evaluations are run asynchronously on the scouter server.
Logic flow
- If only metrics are provided, a workflow will be created automatically from the metrics. In this case a prompt is required for each metric.
- If a workflow is provided, it will be parsed and validated for compatibility:
- A list of metrics to evaluate workflow output must be provided
- Metric names must correspond to the final task names in the workflow
Baseline metrics and thresholds will be extracted from the LLMDriftMetric objects.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | LLMDriftConfig | The configuration for the LLM drift profile containing space, name, version, and alert settings. | required |
| metrics | list[LLMDriftMetric] | A list of LLMDriftMetric objects representing the metrics to be monitored. Each metric defines evaluation criteria and alert thresholds. | required |
| workflow | Optional[Workflow] | Optional custom workflow for advanced evaluation scenarios. If provided, the workflow will be validated to ensure proper parameter and response type configuration. | None |
Returns:
| Name | Type | Description |
|---|---|---|
| LLMDriftProfile | LLMDriftProfile | Configured profile ready for LLM drift monitoring. |
Raises:
| Type | Description |
|---|---|
| ProfileError | If workflow validation fails, metrics are empty when no workflow is provided, or if workflow tasks don't match metric names. |
Examples:
Basic usage with metrics only:
>>> config = LLMDriftConfig("my_space", "my_model", "1.0")
>>> metrics = [
... LLMDriftMetric("accuracy", 0.95, AlertThreshold.Above, 0.1, prompt),
... LLMDriftMetric("relevance", 0.85, AlertThreshold.Below, 0.2, prompt2)
... ]
>>> profile = Drifter().create_llm_drift_profile(config, metrics)
Advanced usage with custom workflow:
>>> workflow = create_custom_workflow() # Your custom workflow
>>> metrics = [LLMDriftMetric("final_task", 0.9, AlertThreshold.Above)]
>>> profile = Drifter().create_llm_drift_profile(config, metrics, workflow)
Note
- When using custom workflows, ensure final tasks have Score response types
- Initial workflow tasks must include "input" and/or "response" parameters
- All metric names must match corresponding workflow task names
Embedder ¶
Embedder(
provider: Provider | str,
config: Optional[
OpenAIEmbeddingConfig | GeminiEmbeddingConfig
] = None,
)
Class for creating embeddings.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| provider | Provider \| str | The provider to use for the embedder. This can be a Provider enum or a string representing the provider. | required |
| config | Optional[OpenAIEmbeddingConfig \| GeminiEmbeddingConfig] | The configuration to use for the embedder. This can be a Pydantic BaseModel class representing the configuration for the provider. If no config is provided, defaults to the OpenAI provider configuration. | None |
embed ¶
embed(
input: str | List[str] | PredictRequest,
) -> (
OpenAIEmbeddingResponse
| GeminiEmbeddingResponse
| PredictResponse
)
Create embeddings for input.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| input | str \| List[str] \| PredictRequest | The input to embed. Type depends on provider: OpenAI/Gemini take str \| List[str]; Vertex takes PredictRequest. | required |
Returns:
| Type | Description |
|---|---|
| OpenAIEmbeddingResponse \| GeminiEmbeddingResponse \| PredictResponse | Provider-specific response type: OpenAIEmbeddingResponse for OpenAI, GeminiEmbeddingResponse for Gemini, PredictResponse for Vertex. |
Examples:
# OpenAI
embedder = Embedder(Provider.OpenAI)
response = embedder.embed(input="Test input")
# Gemini
embedder = Embedder(Provider.Gemini, config=GeminiEmbeddingConfig(model="gemini-embedding-001"))
response = embedder.embed(input="Test input")
# Vertex
from potato_head.google import PredictRequest
embedder = Embedder(Provider.Vertex)
response = embedder.embed(input=PredictRequest(text="Test input"))
EqualWidthBinning ¶
This strategy divides the range of values into bins of equal width. Several binning rules are supported to automatically determine the appropriate number of bins based on the input distribution.
Note
Detailed explanations of each method are provided in their respective constructors or documentation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| method | EqualWidthMethods | Specifies how the number of bins should be determined. Options include Manual(num_bins), which explicitly sets the number of bins, and the SquareRoot, Sturges, Rice, Doane, Scott, TerrellScott, and FreedmanDiaconis rules, which infer bin counts from the data. Defaults to Doane(). | Doane() |
method property writable ¶
Specifies how the number of bins should be determined.
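The automatic rules behind EqualWidthMethods are standard histogram bin-count formulas. The pure-Python sketch below shows a few of them (square root, Sturges, Rice, Doane) as commonly defined; Scouter's exact implementations may differ in rounding details:

```python
import math
from typing import Sequence


def square_root_bins(n: int) -> int:
    """Square-root rule: ceil(sqrt(n))."""
    return math.ceil(math.sqrt(n))


def sturges_bins(n: int) -> int:
    """Sturges' rule: ceil(log2(n)) + 1."""
    return math.ceil(math.log2(n)) + 1


def rice_bins(n: int) -> int:
    """Rice rule: ceil(2 * n^(1/3))."""
    return math.ceil(2 * n ** (1 / 3))


def doane_bins(data: Sequence[float]) -> int:
    """Doane's rule: Sturges adjusted for the skewness of the data."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n
    m3 = sum((x - mean) ** 3 for x in data) / n
    g1 = m3 / m2 ** 1.5 if m2 > 0 else 0.0  # sample skewness
    sigma_g1 = math.sqrt(6 * (n - 2) / ((n + 1) * (n + 3)))
    return int(round(1 + math.log2(n) + math.log2(1 + abs(g1) / sigma_g1)))


print(square_root_bins(100), sturges_bins(100), rice_bins(100))  # -> 10 8 10
```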
EvaluationConfig ¶
EvaluationConfig(
embedder: Optional[Embedder] = None,
embedding_targets: Optional[List[str]] = None,
compute_similarity: bool = False,
cluster: bool = False,
compute_histograms: bool = False,
)
Configuration options for LLM evaluation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| embedder | Optional[Embedder] | Optional Embedder instance to use for generating embeddings for similarity-based metrics. If not provided, no embeddings will be generated. | None |
| embedding_targets | Optional[List[str]] | Optional list of context keys to generate embeddings for. If not provided, embeddings will be generated for all string fields in the record context. | None |
| compute_similarity | bool | Whether to compute similarity between embeddings. Default is False. | False |
| cluster | bool | Whether to perform clustering on the embeddings. Default is False. | False |
| compute_histograms | bool | Whether to compute histograms for all calculated features (metrics, embeddings, similarities). Default is False. | False |
ExportConfig ¶
ExportConfig(
endpoint: Optional[str],
protocol: OtelProtocol = OtelProtocol.HttpBinary,
timeout: Optional[int] = None,
)
Configuration for exporting spans.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| endpoint | Optional[str] | The HTTP endpoint for exporting spans. | required |
| protocol | OtelProtocol | The protocol to use for exporting spans. Defaults to HttpBinary. | HttpBinary |
| timeout | Optional[int] | The timeout for HTTP requests in seconds. | None |
FeatureDrift ¶
FeatureProfile ¶
Features ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| features | List[QueueFeature] \| Dict[str, Union[int, float, str]] | List of features or a dictionary of key-value pairs. If a list, each item must be an instance of Feature. If a dictionary, each key is the feature name and each value is the feature value. Supported value types are int, float, and str. | required |
Example
# Passing a list of features
features = Features(
features=[
Feature.int("feature_1", 1),
Feature.float("feature_2", 2.0),
Feature.string("feature_3", "value"),
]
)
# Passing a dictionary (pydantic model) of features
class MyFeatures(BaseModel):
feature1: int
feature2: float
feature3: str
my_features = MyFeatures(
feature1=1,
feature2=2.0,
feature3="value",
)
features = Features(my_features.model_dump())
FreedmanDiaconis ¶
FunctionType ¶
Enumeration of function types.
GeminiEmbeddingConfig ¶
GeminiEmbeddingConfig(
model: Optional[str] = None,
output_dimensionality: Optional[int] = None,
task_type: Optional[EmbeddingTaskType | str] = None,
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| model | Optional[str] | The embedding model to use. If not specified, the default gemini model will be used. | None |
| output_dimensionality | Optional[int] | The output dimensionality of the embeddings. If not specified, a default value will be used. | None |
| task_type | Optional[EmbeddingTaskType \| str] | The type of embedding task to perform. If not specified, the default gemini task type will be used. | None |
GeminiSettings ¶
GeminiSettings(
labels: Optional[dict[str, str]] = None,
tool_config: Optional[ToolConfig] = None,
generation_config: Optional[GenerationConfig] = None,
safety_settings: Optional[list[SafetySetting]] = None,
model_armor_config: Optional[ModelArmorConfig] = None,
extra_body: Optional[dict] = None,
)
Reference
https://cloud.google.com/vertex-ai/generative-ai/docs/reference/rest/v1beta1/projects.locations.endpoints/generateContent
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| labels | Optional[dict[str, str]] | An optional dictionary of labels for the settings. | None |
| tool_config | Optional[ToolConfig] | Configuration for tools like function calling and retrieval. | None |
| generation_config | Optional[GenerationConfig] | Configuration for content generation parameters. | None |
| safety_settings | Optional[list[SafetySetting]] | List of safety settings to apply. | None |
| model_armor_config | Optional[ModelArmorConfig] | Configuration for model armor templates. | None |
| extra_body | Optional[dict] | Additional configuration as a dictionary. | None |
GenerationConfig ¶
GenerationConfig(
stop_sequences: Optional[List[str]] = None,
response_mime_type: Optional[str] = None,
response_modalities: Optional[List[Modality]] = None,
thinking_config: Optional[ThinkingConfig] = None,
temperature: Optional[float] = None,
top_p: Optional[float] = None,
top_k: Optional[int] = None,
candidate_count: Optional[int] = None,
max_output_tokens: Optional[int] = None,
response_logprobs: Optional[bool] = None,
logprobs: Optional[int] = None,
presence_penalty: Optional[float] = None,
frequency_penalty: Optional[float] = None,
seed: Optional[int] = None,
audio_timestamp: Optional[bool] = None,
media_resolution: Optional[MediaResolution] = None,
speech_config: Optional[SpeechConfig] = None,
enable_affective_dialog: Optional[bool] = None,
)
Configuration for content generation with comprehensive parameter control.
This class provides fine-grained control over the generation process including sampling parameters, output format, modalities, and various specialized features.
Examples:
Basic usage with temperature control:
config = GenerationConfig(temperature=0.7)
Multi-modal configuration:
config = GenerationConfig(
    response_modalities=[Modality.TEXT, Modality.AUDIO],
    speech_config=SpeechConfig(language_code="en-US")
)
Advanced sampling with penalties:
config = GenerationConfig(
temperature=0.8,
top_p=0.9,
top_k=40,
presence_penalty=0.1,
frequency_penalty=0.2
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| stop_sequences | Optional[List[str]] | List of strings that will stop generation when encountered. | None |
| response_mime_type | Optional[str] | MIME type for the response format. | None |
| response_modalities | Optional[List[Modality]] | List of modalities to include in the response. | None |
| thinking_config | Optional[ThinkingConfig] | Configuration for reasoning/thinking capabilities. | None |
| temperature | Optional[float] | Controls randomness in generation (0.0-1.0). | None |
| top_p | Optional[float] | Nucleus sampling parameter (0.0-1.0). | None |
| top_k | Optional[int] | Top-k sampling parameter. | None |
| candidate_count | Optional[int] | Number of response candidates to generate. | None |
| max_output_tokens | Optional[int] | Maximum number of tokens to generate. | None |
| response_logprobs | Optional[bool] | Whether to return log probabilities. | None |
| logprobs | Optional[int] | Number of log probabilities to return per token. | None |
| presence_penalty | Optional[float] | Penalty for token presence (-2.0 to 2.0). | None |
| frequency_penalty | Optional[float] | Penalty for token frequency (-2.0 to 2.0). | None |
| seed | Optional[int] | Random seed for deterministic generation. | None |
| audio_timestamp | Optional[bool] | Whether to include timestamps in audio responses. | None |
| media_resolution | Optional[MediaResolution] | Resolution setting for media content. | None |
| speech_config | Optional[SpeechConfig] | Configuration for speech synthesis. | None |
| enable_affective_dialog | Optional[bool] | Whether to enable emotional dialog features. | None |
GetProfileRequest ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Profile name | required |
| space | str | Profile space | required |
| version | str | Profile version | required |
| drift_type | DriftType | Profile drift type (a space/name/version can be associated with more than one drift type). | required |
GrpcConfig ¶
Configuration for gRPC exporting.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| compression | Optional[CompressionType] | Optional compression type for gRPC requests. | None |
Source code in python/scouter/stubs.pyi
GrpcSpanExporter ¶
GrpcSpanExporter(
batch_export: bool = True,
export_config: Optional[ExportConfig] = None,
grpc_config: Optional[GrpcConfig] = None,
sample_ratio: Optional[float] = None,
)
Exporter that sends spans to a gRPC endpoint.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| batch_export | bool | Whether to use batch exporting. Defaults to True. | True |
| export_config | Optional[ExportConfig] | Configuration for exporting spans. | None |
| grpc_config | Optional[GrpcConfig] | Configuration for the gRPC exporter. | None |
| sample_ratio | Optional[float] | The sampling ratio for traces. If None, defaults to always sample. | None |
Source code in python/scouter/stubs.pyi
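The sample_ratio semantics above (None means always sample) can be sketched in plain Python. `should_sample` below is a hypothetical helper shown only to illustrate the behavior, not part of the exporter API:

```python
import random

def should_sample(sample_ratio=None):
    # None -> always sample; otherwise sample with probability sample_ratio
    if sample_ratio is None:
        return True
    return random.random() < sample_ratio

assert should_sample() is True      # default: always sample
assert should_sample(0.0) is False  # ratio 0.0: never sample
assert should_sample(1.0) is True   # random.random() < 1.0 always holds
```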
compression
property
¶
Get the compression type used for exporting spans.
Histogram ¶
HttpConfig ¶
HttpConfig(
server_uri: Optional[str] = None,
username: Optional[str] = None,
password: Optional[str] = None,
auth_token: Optional[str] = None,
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| server_uri | Optional[str] | URL of the HTTP server to publish messages to. If not provided, the value of the HTTP_server_uri environment variable is used. | None |
| username | Optional[str] | Username for basic authentication. | None |
| password | Optional[str] | Password for basic authentication. | None |
| auth_token | Optional[str] | Authorization token to use for authentication. | None |
Source code in python/scouter/stubs.pyi
HttpSpanExporter ¶
HttpSpanExporter(
batch_export: bool = True,
export_config: Optional[ExportConfig] = None,
http_config: Optional[OtelHttpConfig] = None,
sample_ratio: Optional[float] = None,
)
Exporter that sends spans to an HTTP endpoint.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| batch_export | bool | Whether to use batch exporting. Defaults to True. | True |
| export_config | Optional[ExportConfig] | Configuration for exporting spans. | None |
| http_config | Optional[OtelHttpConfig] | Configuration for the HTTP exporter. | None |
| sample_ratio | Optional[float] | The sampling ratio for traces. If None, defaults to always sample. | None |
Source code in python/scouter/stubs.pyi
compression
property
¶
Get the compression type used for exporting spans.
ImageUrl ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| url | str | The URL of the image. | required |
| kind | Literal['image-url'] | The kind of the content. | 'image-url' |
Source code in python/scouter/stubs.pyi
KafkaConfig ¶
KafkaConfig(
username: Optional[str] = None,
password: Optional[str] = None,
brokers: Optional[str] = None,
topic: Optional[str] = None,
compression_type: Optional[str] = None,
message_timeout_ms: int = 600000,
message_max_bytes: int = 2097164,
log_level: LogLevel = LogLevel.Info,
config: Dict[str, str] = {},
max_retries: int = 3,
)
This configuration supports both authenticated (SASL) and unauthenticated connections. When credentials are provided, SASL authentication is automatically enabled with secure defaults.
Authentication Priority (first match wins): 1. Direct parameters (username/password) 2. Environment variables (KAFKA_USERNAME/KAFKA_PASSWORD) 3. Configuration dictionary (sasl.username/sasl.password)
SASL Security Defaults
- security.protocol: "SASL_SSL" (override via KAFKA_SECURITY_PROTOCOL env var)
- sasl.mechanism: "PLAIN" (override via KAFKA_SASL_MECHANISM env var)
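The first-match-wins priority above can be sketched as a small resolver. `resolve_credential` is a hypothetical helper illustrating the lookup order only, not the actual KafkaConfig internals:

```python
import os

def resolve_credential(direct, env_var, config, config_key):
    # 1. Direct parameter wins
    if direct is not None:
        return direct
    # 2. Then the environment variable
    if os.environ.get(env_var):
        return os.environ[env_var]
    # 3. Finally the configuration dictionary
    return config.get(config_key)

# A direct parameter takes precedence over the config dictionary
user = resolve_credential("alice", "KAFKA_USERNAME", {"sasl.username": "bob"}, "sasl.username")
assert user == "alice"

# With no direct value and an unset env var, the config dictionary is used
user = resolve_credential(None, "KAFKA_USERNAME_UNSET", {"sasl.username": "bob"}, "sasl.username")
assert user == "bob"
```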
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| username | Optional[str] | SASL username for authentication. Fallback: KAFKA_USERNAME environment variable. | None |
| password | Optional[str] | SASL password for authentication. Fallback: KAFKA_PASSWORD environment variable. | None |
| brokers | Optional[str] | Comma-separated list of Kafka broker addresses (host:port). Fallback: KAFKA_BROKERS environment variable. Default: "localhost:9092" | None |
| topic | Optional[str] | Target Kafka topic for message publishing. Fallback: KAFKA_TOPIC environment variable. Default: "scouter_monitoring" | None |
| compression_type | Optional[str] | Message compression algorithm. Options: "none", "gzip", "snappy", "lz4", "zstd". Default: "gzip" | None |
| message_timeout_ms | int | Maximum time to wait for message delivery (milliseconds). Default: 600000 (10 minutes) | 600000 |
| message_max_bytes | int | Maximum message size in bytes. Default: 2097164 (~2MB) | 2097164 |
| log_level | LogLevel | Logging verbosity for the Kafka producer. Default: LogLevel.Info | Info |
| config | Dict[str, str] | Additional Kafka producer configuration parameters. See: https://kafka.apache.org/documentation/#producerconfigs Note: Direct parameters take precedence over config dictionary values. | {} |
| max_retries | int | Maximum number of retry attempts for failed message deliveries. Default: 3 | 3 |
Examples:
Basic usage (unauthenticated):
```python
config = KafkaConfig()
```
SASL authentication:
```python
config = KafkaConfig(
    username="my_user",
    password="my_password",
    brokers="secure-kafka:9093",
    topic="secure_topic"
)
```
Advanced configuration:
```python
config = KafkaConfig(
    brokers="kafka:9092",
    compression_type="lz4",
    config={
        "acks": "all",
        "batch.size": "32768",
        "linger.ms": "10"
    }
)
```
Source code in python/scouter/stubs.pyi
LLMAlertConfig ¶
LLMAlertConfig(
dispatch_config: Optional[
SlackDispatchConfig | OpsGenieDispatchConfig
] = None,
schedule: Optional[str | CommonCrons] = None,
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| dispatch_config | Optional[SlackDispatchConfig \| OpsGenieDispatchConfig] | Alert dispatch config. Defaults to console | None |
| schedule | Optional[str \| CommonCrons] | Schedule to run monitor. Defaults to daily at midnight | None |
Source code in python/scouter/stubs.pyi
alert_conditions
property
¶
Return the alert conditions
LLMDriftConfig ¶
LLMDriftConfig(
space: str = "__missing__",
name: str = "__missing__",
version: str = "0.1.0",
sample_rate: int = 5,
alert_config: LLMAlertConfig = LLMAlertConfig(),
)
space: Space to associate with the config
name: Name to associate with the config
version: Version to associate with the config. Defaults to 0.1.0
sample_rate: Sample rate for LLM drift detection. Defaults to 5.
alert_config: Custom metric alert configuration
Source code in python/scouter/stubs.pyi
load_from_json_file
staticmethod
¶
Load config from json file.

Args:
    path: Path to json file to load config from.
model_dump_json ¶
update_config_args ¶
update_config_args(
space: Optional[str] = None,
name: Optional[str] = None,
version: Optional[str] = None,
alert_config: Optional[LLMAlertConfig] = None,
) -> None
Inplace operation that updates config args.

Args:
    space: Space to associate with the config
    name: Name to associate with the config
    version: Version to associate with the config
    alert_config: LLM alert configuration
Source code in python/scouter/stubs.pyi
LLMDriftMetric ¶
LLMDriftMetric(
name: str,
value: float,
alert_threshold: AlertThreshold,
alert_threshold_value: Optional[float] = None,
prompt: Optional[Prompt] = None,
)
Metric for monitoring LLM performance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | The name of the metric being monitored. This should be a descriptive identifier for the metric. | required |
| value | float | The current value of the metric. | required |
| alert_threshold | AlertThreshold | The condition used to determine when an alert should be triggered. | required |
| alert_threshold_value | Optional[float] | The threshold or boundary value used in conjunction with the alert_threshold. If supplied, this value will be added or subtracted from the provided metric value to determine if an alert should be triggered. | None |
| prompt | Optional[Prompt] | Optional prompt associated with the metric. This can be used to provide context or additional information about the metric being monitored. If creating an LLM drift profile from a pre-defined workflow, this can be none. | None |
Source code in python/scouter/stubs.pyi
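A minimal sketch of the add/subtract threshold semantics described above; `exceeds_threshold` is a hypothetical helper shown for illustration only, and the real evaluation happens server-side:

```python
def exceeds_threshold(observed, baseline, direction, threshold_value=0.0):
    # "Above": alert once the observed value rises past baseline + threshold_value
    if direction == "Above":
        return observed > baseline + threshold_value
    # "Below": alert once the observed value falls past baseline - threshold_value
    if direction == "Below":
        return observed < baseline - threshold_value
    raise ValueError(f"unsupported direction: {direction}")

assert exceeds_threshold(1.1, 0.95, "Above", 0.1) is True   # 1.1 > ~1.05
assert exceeds_threshold(1.0, 0.95, "Above", 0.1) is False  # 1.0 <= ~1.05
assert exceeds_threshold(0.6, 0.85, "Below", 0.2) is True   # 0.6 < ~0.65
```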
alert_threshold_value
property
¶
Return the alert_threshold_value
LLMDriftProfile ¶
LLMDriftProfile(
config: LLMDriftConfig,
metrics: list[LLMDriftMetric],
workflow: Optional[Workflow] = None,
)
LLM evaluations are run asynchronously on the scouter server.
Logic flow
- If only metrics are provided, a workflow will be created automatically from the metrics. In this case a prompt is required for each metric.
- If a workflow is provided, it will be parsed and validated for compatibility:
- A list of metrics to evaluate workflow output must be provided
- Metric names must correspond to the final task names in the workflow
Baseline metrics and thresholds will be extracted from the LLMDriftMetric objects.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | LLMDriftConfig | The configuration for the LLM drift profile containing space, name, version, and alert settings. | required |
| metrics | list[LLMDriftMetric] | A list of LLMDriftMetric objects representing the metrics to be monitored. Each metric defines evaluation criteria and alert thresholds. | required |
| workflow | Optional[Workflow] | Optional custom workflow for advanced evaluation scenarios. If provided, the workflow will be validated to ensure proper parameter and response type configuration. | None |
Returns:
| Name | Type | Description |
|---|---|---|
| LLMDriftProfile | LLMDriftProfile | Configured profile ready for LLM drift monitoring. |
Raises:
| Type | Description |
|---|---|
| ProfileError | If workflow validation fails, metrics are empty when no workflow is provided, or if workflow tasks don't match metric names. |
Examples:
Basic usage with metrics only:
>>> config = LLMDriftConfig("my_space", "my_model", "1.0")
>>> metrics = [
... LLMDriftMetric("accuracy", 0.95, AlertThreshold.Above, 0.1, prompt),
... LLMDriftMetric("relevance", 0.85, AlertThreshold.Below, 0.2, prompt2)
... ]
>>> profile = LLMDriftProfile(config, metrics)
Advanced usage with custom workflow:
>>> workflow = create_custom_workflow() # Your custom workflow
>>> metrics = [LLMDriftMetric("final_task", 0.9, AlertThreshold.Above)]
>>> profile = LLMDriftProfile(config, metrics, workflow)
Note
- When using custom workflows, ensure final tasks have Score response types
- Initial workflow tasks must include "input" and/or "response" parameters
- All metric names must match corresponding workflow task names
Source code in python/scouter/stubs.pyi
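The rule that metric names must match the workflow's final task names can be sketched as a simple check. `check_metric_names` is a hypothetical helper illustrating the validation the constructor performs, not the actual implementation:

```python
def check_metric_names(metric_names, final_task_names):
    # Every metric must correspond to a final task in the workflow
    missing = set(metric_names) - set(final_task_names)
    if missing:
        raise ValueError(f"metrics without matching workflow tasks: {sorted(missing)}")
    return True

assert check_metric_names(["accuracy"], ["accuracy", "relevance"]) is True

# A metric with no matching final task fails validation
try:
    check_metric_names(["accuracy", "coherence"], ["accuracy"])
except ValueError as exc:
    assert "coherence" in str(exc)
```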
from_file
staticmethod
¶
Load drift profile from file
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | Path | Path to the json file | required |
Returns:
| Type | Description |
|---|---|
| LLMDriftProfile | LLMDriftProfile |
model_dump ¶
model_dump_json ¶
model_validate
staticmethod
¶
Load drift profile from dictionary
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | Dict[str, Any] | DriftProfile dictionary | required |
model_validate_json
staticmethod
¶
Load drift profile from json
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| json_string | str | JSON string representation of the drift profile | required |
save_to_json ¶
Save drift profile to json file
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | Optional[Path] | Optional path to save the json file. If not provided, a default path will be used. | None |
Returns:
| Type | Description |
|---|---|
| Path | Path to the saved json file. |
Source code in python/scouter/stubs.pyi
update_config_args ¶
update_config_args(
space: Optional[str] = None,
name: Optional[str] = None,
version: Optional[str] = None,
sample_size: Optional[int] = None,
alert_config: Optional[LLMAlertConfig] = None,
) -> None
Inplace operation that updates config args
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | Optional[str] | Model name | None |
| space | Optional[str] | Model space | None |
| version | Optional[str] | Model version | None |
| sample_size | Optional[int] | Sample size | None |
| alert_config | Optional[LLMAlertConfig] | Alert configuration | None |
Source code in python/scouter/stubs.pyi
LLMEvalMetric ¶
Defines an LLM eval metric to use when evaluating LLMs. Inputs and responses can be evaluated against a variety of user-defined metrics.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Name of the metric | required |
| prompt | Prompt | Prompt to use for the metric. For example, a user may create an accuracy analysis prompt or a query reformulation analysis prompt. | required |
Source code in python/scouter/stubs.pyi
LLMEvalRecord ¶
LLM record containing context tied to a Large Language Model interaction that is used to evaluate LLM responses.
Examples:
>>> record = LLMEvalRecord(
...     id="123",
...     context={
...         "input": "What is the capital of France?",
...         "response": "Paris is the capital of France."
...     },
... )
>>> print(record.context["input"])
What is the capital of France?
The context is then used to inject values into the evaluation prompts.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| context | Context | Additional context information as a dictionary or a pydantic BaseModel. During evaluation, this will be merged with the input and response data and passed to the assigned evaluation prompts. So if your evaluation prompts expect additional context via bound variables, include it here. | required |
| id | Optional[str] | Unique identifier for the record. If not provided, a new UUID will be generated. This is helpful for when joining evaluation results back to the original request. | None |
Raises:
| Type | Description |
|---|---|
| TypeError | If context is not a dict or a pydantic BaseModel. |
Source code in python/scouter/stubs.pyi
context
property
¶
Get the contextual information.
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | The context data as a Python object (deserialized from JSON). |
LLMEvalResults ¶
Defines the results of an LLM eval metric
errored_tasks
property
¶
Get a list of record IDs that had errors during evaluation
histograms
property
¶
Get histograms for all calculated features (metrics, embeddings, similarities)
model_dump_json ¶
model_validate_json
staticmethod
¶
Validate and create an LLMEvalResults instance from a JSON string
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| json_string | str | JSON string to validate and create the LLMEvalResults instance from. | required |
Source code in python/scouter/stubs.pyi
to_dataframe ¶
Convert the results to a Pandas or Polars DataFrame.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| polars | bool | Whether to return a Polars DataFrame. If False, a Pandas DataFrame will be returned. | False |
Returns:
| Name | Type | Description |
|---|---|---|
| DataFrame | Any | A Pandas or Polars DataFrame containing the results. |
Source code in python/scouter/stubs.pyi
LLMEvalTaskResult ¶
LLMMetricAlertCondition ¶
alert_threshold (AlertThreshold):
    The condition that determines when an alert should be triggered.
    Must be one of the AlertThreshold enum members like Below, Above, or Outside.
alert_threshold_value (Optional[float], optional):
    A numerical boundary used in conjunction with the alert_threshold.
    This can be None for certain types of comparisons that don't require a fixed boundary.
Example: alert_condition = LLMMetricAlertCondition(AlertThreshold.Below, 2.0)
Source code in python/scouter/stubs.pyi
LLMMetricRecord ¶
LLMRecord ¶
LLM record containing context tied to a Large Language Model interaction that is used to evaluate drift in LLM responses.
Examples:
>>> record = LLMRecord(
... context={
... "input": "What is the capital of France?",
... "response": "Paris is the capital of France."
... },
... )
>>> print(record.context["input"])
What is the capital of France?
The context is then used to inject values into the evaluation prompts.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| context | Context | Additional context information as a dictionary or a pydantic BaseModel. During evaluation, this will be merged with the input and response data and passed to the assigned evaluation prompts. So if your evaluation prompts expect additional context via bound variables, include it here. | required |
| prompt | Optional[Prompt \| SerializedType] | Optional prompt configuration associated with this record. Can be a Potatohead Prompt or a JSON-serializable type. | None |
|
Raises:
| Type | Description |
|---|---|
| TypeError | If context is not a dict or a pydantic BaseModel. |
Source code in python/scouter/stubs.pyi
context
property
¶
Get the contextual information.
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | The context data as a Python object (deserialized from JSON). |
Raises:
| Type | Description |
|---|---|
| TypeError | If the stored JSON cannot be converted to a Python object. |
entity_type
instance-attribute
¶
Type of entity, always EntityType.LLM for LLMRecord instances.
prompt
instance-attribute
¶
Optional prompt configuration associated with this record.
LLMTestServer ¶
Mock server for OpenAI API. This class is used to simulate the OpenAI API for testing purposes.
Source code in python/scouter/stubs.pyi
LatLng ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| latitude | float | The latitude value. | required |
| longitude | float | The longitude value. | required |
Source code in python/scouter/stubs.pyi
LatencyMetrics ¶
LogProbs ¶
tokens
property
¶
The log probabilities of the tokens in the response. This is primarily used for debugging and analysis purposes.
LoggingConfig ¶
LoggingConfig(
show_threads: bool = True,
log_level: LogLevel = LogLevel.Info,
write_level: WriteLevel = WriteLevel.Stdout,
use_json: bool = False,
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| show_threads | bool | Whether to include thread information in log messages. Default is True. | True |
| log_level | LogLevel | Log level for the logger. Default is LogLevel.Info. | Info |
| write_level | WriteLevel | Write level for the logger. Default is WriteLevel.Stdout. | Stdout |
| use_json | bool | Whether to write log messages in JSON format. Default is False. | False |
Source code in python/scouter/stubs.pyi
Manual ¶
Divides the feature range into a fixed number of equally sized bins.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| num_bins | int | The exact number of bins to create. | required |
Source code in python/scouter/stubs.pyi
MediaResolution ¶
Media resolution settings for content generation.
Message ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| content | str \| ImageUrl \| AudioUrl \| BinaryContent \| DocumentUrl | The content of the message. | required |
Source code in python/scouter/stubs.pyi
content
property
¶
The content of the message
bind ¶
Bind context to a specific variable in the prompt. This is an immutable operation meaning that it will return a new Message object with the context bound.
Example with Prompt that contains two messages
```python
prompt = Prompt(
model="openai:gpt-4o",
message=[
"My prompt variable is ${variable}",
"This is another message",
],
system_instruction="system_prompt",
)
bounded_prompt = prompt.message[0].bind("variable", "hello world").unwrap() # we bind "hello world" to "variable"
```
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | The name of the variable to bind. | required |
| value | str | The value to bind the variable to. | required |
Returns:
| Name | Type | Description |
|---|---|---|
| Message | Message | The message with the context bound. |
Source code in python/scouter/stubs.pyi
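The ${variable} placeholder syntax used by bind maps directly onto Python's string.Template. This sketch only mirrors the immutable substitution behavior, not the Message API itself:

```python
from string import Template

message = "My prompt variable is ${variable}"
# Immutable-style bind: produce a new string, leaving the template untouched
bound = Template(message).safe_substitute(variable="hello world")

assert bound == "My prompt variable is hello world"
assert message == "My prompt variable is ${variable}"  # original unchanged
```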
bind_mut ¶
Bind context to a specific variable in the prompt. This is a mutable operation meaning that it will modify the current Message object.
Example with Prompt that contains two messages
```python
prompt = Prompt(
model="openai:gpt-4o",
message=[
"My prompt variable is ${variable}",
"This is another message",
],
system_instruction="system_prompt",
)
prompt.message[0].bind_mut("variable", "hello world") # we bind "hello world" to "variable"
```
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | The name of the variable to bind. | required |
| value | str | The value to bind the variable to. | required |
Returns:
| Name | Type | Description |
|---|---|---|
| Message | Message | The message with the context bound. |
Source code in python/scouter/stubs.pyi
model_dump ¶
Unwrap the message content and serialize it to a dictionary.
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | The message dictionary with keys "content" and "role". |
unwrap ¶
Unwrap the message content.
Returns:
| Type | Description |
|---|---|
| Any | A serializable representation of the message content, which can be a string, list, or dict. |
Metric ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Name of the metric | required |
| value | float \| int | Value to assign to the metric. Can be an int or float but will be converted to float. | required |
Source code in python/scouter/stubs.pyi
Metrics ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| metrics | List[Metric] \| Dict[str, Union[int, float]] | List of metrics or a dictionary of key-value pairs. If a list, each item must be an instance of Metric. If a dictionary, each key is the metric name and each value is the metric value. | required |
Example
```python
# Passing a list of metrics
metrics = Metrics(
    metrics=[
        Metric("metric_1", 1.0),
        Metric("metric_2", 2.5),
        Metric("metric_3", 3),
    ]
)

# Passing a dictionary (pydantic model) of metrics
class MyMetrics(BaseModel):
    metric1: float
    metric2: int

my_metrics = MyMetrics(
    metric1=1.0,
    metric2=2,
)

metrics = Metrics(my_metrics.model_dump())
```
Source code in python/scouter/stubs.pyi
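Both input shapes accepted by Metrics reduce to the same name-to-float mapping. `normalize_metrics` is a hypothetical illustration of that equivalence, not the library's implementation:

```python
def normalize_metrics(metrics):
    # Accept either a dict of name -> value or a list of (name, value) pairs;
    # values are coerced to float, mirroring the Metric conversion rule.
    if isinstance(metrics, dict):
        return {name: float(value) for name, value in metrics.items()}
    return {name: float(value) for name, value in metrics}

assert normalize_metrics({"metric_1": 1.0, "metric_3": 3}) == {"metric_1": 1.0, "metric_3": 3.0}
assert normalize_metrics([("metric_2", 2.5)]) == {"metric_2": 2.5}
```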
MockConfig ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| **kwargs | | Arbitrary keyword arguments to set as attributes. | {} |
Source code in python/scouter/stubs.pyi
Modality ¶
Represents different modalities for content generation.
ModelArmorConfig ¶
The name of the prompt template to use.
response_template_name (Optional[str]):
The name of the response template to use.
Source code in python/scouter/stubs.pyi
ModelSettings ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| settings | OpenAIChatSettings \| GeminiSettings | The settings to use for the model. Currently supports OpenAI and Gemini settings. | required |
Source code in python/scouter/stubs.pyi
settings
property
¶
The settings to use for the model.
NumericStats ¶
ObservabilityMetrics ¶
Observer ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| space | str | Model space | required |
| name | str | Model name | required |
| version | str | Model version | required |
Source code in python/scouter/stubs.pyi
OpenAIChatSettings ¶
OpenAIChatSettings(
*,
max_completion_tokens: Optional[int] = None,
temperature: Optional[float] = None,
top_p: Optional[float] = None,
top_k: Optional[int] = None,
frequency_penalty: Optional[float] = None,
timeout: Optional[float] = None,
parallel_tool_calls: Optional[bool] = None,
seed: Optional[int] = None,
logit_bias: Optional[Dict[str, int]] = None,
stop_sequences: Optional[List[str]] = None,
logprobs: Optional[bool] = None,
audio: Optional[AudioParam] = None,
metadata: Optional[Dict[str, str]] = None,
modalities: Optional[List[str]] = None,
n: Optional[int] = None,
prediction: Optional[Prediction] = None,
presence_penalty: Optional[float] = None,
prompt_cache_key: Optional[str] = None,
reasoning_effort: Optional[str] = None,
safety_identifier: Optional[str] = None,
service_tier: Optional[str] = None,
store: Optional[bool] = None,
stream: Optional[bool] = None,
stream_options: Optional[StreamOptions] = None,
tool_choice: Optional[ToolChoice] = None,
tools: Optional[List[Tool]] = None,
top_logprobs: Optional[int] = None,
verbosity: Optional[str] = None,
extra_body: Optional[Any] = None
)
OpenAI chat completion settings configuration.
This class provides configuration options for OpenAI chat completions, including model parameters, tool usage, and request options.
Examples:
>>> settings = OpenAIChatSettings(
... temperature=0.7,
... max_completion_tokens=1000,
... stream=True
... )
>>> settings.temperature = 0.5
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| max_completion_tokens | Optional[int] | Maximum number of tokens to generate | None |
| temperature | Optional[float] | Sampling temperature (0.0 to 2.0) | None |
| top_p | Optional[float] | Nucleus sampling parameter | None |
| top_k | Optional[int] | Top-k sampling parameter | None |
| frequency_penalty | Optional[float] | Frequency penalty (-2.0 to 2.0) | None |
| timeout | Optional[float] | Request timeout in seconds | None |
| parallel_tool_calls | Optional[bool] | Whether to enable parallel tool calls | None |
| seed | Optional[int] | Random seed for deterministic outputs | None |
| logit_bias | Optional[Dict[str, int]] | Token bias modifications | None |
| stop_sequences | Optional[List[str]] | Sequences where generation should stop | None |
| logprobs | Optional[bool] | Whether to return log probabilities | None |
| audio | Optional[AudioParam] | Audio generation parameters | None |
| metadata | Optional[Dict[str, str]] | Additional metadata for the request | None |
| modalities | Optional[List[str]] | List of modalities to use | None |
| n | Optional[int] | Number of completions to generate | None |
| prediction | Optional[Prediction] | Prediction configuration | None |
| presence_penalty | Optional[float] | Presence penalty (-2.0 to 2.0) | None |
| prompt_cache_key | Optional[str] | Key for prompt caching | None |
| reasoning_effort | Optional[str] | Reasoning effort level | None |
| safety_identifier | Optional[str] | Safety configuration identifier | None |
| service_tier | Optional[str] | Service tier to use | None |
| store | Optional[bool] | Whether to store the conversation | None |
| stream | Optional[bool] | Whether to stream the response | None |
| stream_options | Optional[StreamOptions] | Streaming configuration options | None |
| tool_choice | Optional[ToolChoice] | Tool choice configuration | None |
| tools | Optional[List[Tool]] | Available tools for the model | None |
| top_logprobs | Optional[int] | Number of top log probabilities to return | None |
| verbosity | Optional[str] | Verbosity level for the response | None |
| extra_body | Optional[Any] | Additional request body parameters | None |
Source code in python/scouter/stubs.pyi
OpenAIEmbeddingConfig ¶
OpenAIEmbeddingConfig(
model: str,
dimensions: Optional[int] = None,
encoding_format: Optional[str] = None,
user: Optional[str] = None,
)
OpenAI embedding configuration settings.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| model | str | The embedding model to use. | required |
| dimensions | Optional[int] | The output dimensionality of the embeddings. | None |
| encoding_format | Optional[str] | The encoding format to use for the embeddings. Can be either "float" or "base64". | None |
| user | Optional[str] | The user ID for the embedding request. | None |
Source code in python/scouter/stubs.pyi
OpsGenieDispatchConfig ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| team | str | OpsGenie team to be notified in the event of drift | required |
Source code in python/scouter/stubs.pyi
OtelHttpConfig ¶
OtelHttpConfig(
headers: Optional[dict[str, str]] = None,
compression: Optional[CompressionType] = None,
)
Configuration for HTTP span exporting.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| headers | Optional[dict[str, str]] | Optional HTTP headers to include in requests. | None |
| compression | Optional[CompressionType] | Optional compression type for HTTP requests. | None |
Source code in python/scouter/stubs.pyi
OtelProtocol ¶
Enumeration of protocols for HTTP exporting.
PrebuiltVoiceConfig ¶
PredictRequest ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| instances | List[dict] | A list of instances to be sent in the request. | required |
| parameters | Optional[dict] | Optional parameters for the request. | None |
Source code in python/scouter/stubs.pyi
ProfileStatusRequest ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Model name | required |
| space | str | Model space | required |
| version | str | Model version | required |
| drift_type | DriftType | Profile drift type. (A repo/name/version can be associated with more than one drift type.) | required |
| active | bool | Whether to set the profile as active or inactive | required |
Source code in python/scouter/stubs.pyi
Prompt ¶
Prompt(
message: (
str
| Sequence[
str
| ImageUrl
| AudioUrl
| BinaryContent
| DocumentUrl
]
| Message
| List[Message]
| List[Dict[str, Any]]
),
model: str,
provider: Provider | str,
system_instruction: Optional[str | List[str]] = None,
model_settings: Optional[
ModelSettings | OpenAIChatSettings | GeminiSettings
] = None,
response_format: Optional[Any] = None,
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| message | str \| Sequence[str \| ImageUrl \| AudioUrl \| BinaryContent \| DocumentUrl] \| Message \| List[Message] | The prompt to use. | required |
| model | str | The model to use for the prompt | required |
| provider | Provider \| str | The provider to use for the prompt. | required |
| system_instruction | Optional[str \| List[str]] | The system prompt to use in the prompt. | None |
| model_settings | Optional[ModelSettings \| OpenAIChatSettings \| GeminiSettings] | The model settings to use for the prompt. Defaults to None, which means no model settings will be used. | None |
| response_format | Optional[BaseModel \| Score] | The response format to use for the prompt. This is used for Structured Outputs (https://platform.openai.com/docs/guides/structured-outputs?api-mode=chat). Currently, response_format only supports Pydantic BaseModel classes and the PotatoHead Score class. The provided response_format will be parsed into a JSON schema. | None |
Source code in python/scouter/stubs.pyi
model_identifier
property
¶
Concatenation of provider and model, used for identifying the model in the prompt. This is commonly used with pydantic_ai to identify the model to use for the agent.
response_json_schema
property
¶
The JSON schema for the response if provided.
system_instruction
property
¶
The system message to use in the prompt.
bind ¶
bind(
name: Optional[str] = None,
value: Optional[str | int | float | bool | list] = None,
**kwargs: Any
) -> Prompt
Bind context to a specific variable in the prompt. This is an immutable operation, meaning that it will return a new Prompt object with the context bound. This will iterate over all user messages.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | Optional[str] | The name of the variable to bind. | None |
| value | Optional[str \| int \| float \| bool \| list] | The value to bind the variable to. Must be a JSON serializable type. | None |
| **kwargs | Any | Additional keyword arguments to bind to the prompt. This can be used to bind multiple variables at once. | {} |
Returns:
| Name | Type | Description |
|---|---|---|
| Prompt | Prompt | The prompt with the context bound. |
Source code in python/scouter/stubs.pyi
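The difference between `bind` and `bind_mut` can be sketched as below. The import path and the prompt's placeholder syntax are assumptions for illustration, not confirmed by this reference.

```python
# Hypothetical sketch -- import path and "${var}" placeholder syntax are assumptions.
from scouter import Prompt

prompt = Prompt(
    message="Summarize the following text: ${document}",
    model="gpt-4o",
    provider="openai",
)

# bind() is immutable: it returns a new Prompt and leaves `prompt` unchanged.
bound = prompt.bind(name="document", value="Some text to summarize")

# bind_mut() performs the same substitution in place on the existing object.
prompt.bind_mut(document="Some text to summarize")
```

Both methods also accept keyword arguments, so multiple variables can be bound in one call.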
bind_mut ¶
bind_mut(
name: Optional[str] = None,
value: Optional[str | int | float | bool | list] = None,
**kwargs: Any
) -> Prompt
Bind context to a specific variable in the prompt. This is a mutable operation, meaning that it will modify the current Prompt object. This will iterate over all user messages.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | Optional[str] | The name of the variable to bind. | None |
| value | Optional[str \| int \| float \| bool \| list] | The value to bind the variable to. Must be a JSON serializable type. | None |
| **kwargs | Any | Additional keyword arguments to bind to the prompt. This can be used to bind multiple variables at once. | {} |
Returns:
| Name | Type | Description |
|---|---|---|
| Prompt | Prompt | The prompt with the context bound. |
Source code in python/scouter/stubs.pyi
from_path
staticmethod
¶
Load a prompt from a file.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | Path | The path to the prompt file. | required |
Returns:
| Name | Type | Description |
|---|---|---|
| Prompt | Prompt | The loaded prompt. |
model_dump_json ¶
model_validate_json
staticmethod
¶
Validate the model JSON.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| json_string | str | The JSON string to validate. | required |
Returns: Prompt: The prompt object.
save_prompt ¶
Save the prompt to a file.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | Optional[Path] | The path to save the prompt to. If None, the prompt will be saved to the current working directory. | None |
Source code in python/scouter/stubs.pyi
PromptTokenDetails ¶
PsiAlertConfig ¶
PsiAlertConfig(
dispatch_config: Optional[
SlackDispatchConfig | OpsGenieDispatchConfig
] = None,
schedule: Optional[str | CommonCrons] = None,
features_to_monitor: List[str] = [],
threshold: Optional[
PsiThresholdType
] = PsiChiSquareThreshold(),
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| dispatch_config | Optional[SlackDispatchConfig \| OpsGenieDispatchConfig] | Alert dispatch configuration to use. Defaults to an internal "Console" type where the alerts will be logged to the console. | None |
| schedule | Optional[str \| CommonCrons] | Schedule to run monitor. Defaults to daily at midnight. | None |
| features_to_monitor | List[str] | List of features to monitor. Defaults to an empty list, which means all features. | [] |
| threshold | Optional[PsiThresholdType] | Configuration that helps determine how to calculate PSI critical values. Defaults to PsiChiSquareThreshold, which uses the chi-square distribution. | PsiChiSquareThreshold() |
Source code in python/scouter/stubs.pyi
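A typical configuration, sketched under assumptions: the import paths, Slack channel, and cron string below are illustrative, not prescribed by this reference.

```python
# Hypothetical sketch -- import paths and argument values are assumptions.
from scouter import PsiAlertConfig, SlackDispatchConfig, PsiNormalThreshold

alert_config = PsiAlertConfig(
    dispatch_config=SlackDispatchConfig(channel="#ml-alerts"),  # send alerts to Slack
    schedule="0 0 * * *",                  # cron string: daily at midnight
    features_to_monitor=["age", "income"], # empty list would mean all features
    threshold=PsiNormalThreshold(alpha=0.05),
)
```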
features_to_monitor
property
writable
¶
Return the features to monitor
PsiChiSquareThreshold ¶
Uses the asymptotic chi-square distribution of PSI.
The chi-square method is generally more statistically rigorous than normal approximation, especially for smaller sample sizes.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| alpha | float | Significance level (0.0 to 1.0, exclusive). Common values: 0.05 (95% confidence), 0.01 (99% confidence). | 0.05 |
Raises:
| Type | Description |
|---|---|
| ValueError | If alpha is not in the range (0.0, 1.0) |
Source code in python/scouter/stubs.pyi
PsiDriftConfig ¶
PsiDriftConfig(
space: str = "__missing__",
name: str = "__missing__",
version: str = "0.1.0",
alert_config: PsiAlertConfig = PsiAlertConfig(),
config_path: Optional[Path] = None,
categorical_features: Optional[list[str]] = None,
binning_strategy: (
QuantileBinning | EqualWidthBinning
) = QuantileBinning(num_bins=10),
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| space | str | Model space | '__missing__' |
| name | str | Model name | '__missing__' |
| version | str | Model version. Defaults to 0.1.0 | '0.1.0' |
| alert_config | PsiAlertConfig | Alert configuration | PsiAlertConfig() |
| config_path | Optional[Path] | Optional path to load config from. | None |
| categorical_features | Optional[list[str]] | List of features to treat as categorical for PSI calculation. | None |
| binning_strategy | QuantileBinning \| EqualWidthBinning | Strategy for binning continuous features during PSI calculation. Supports QuantileBinning (R-7 method, Hyndman & Fan Type 7) and EqualWidthBinning, which divides the range of values into fixed-width bins. Default is QuantileBinning with 10 bins. You can also specify methods like Doane's rule with EqualWidthBinning. | QuantileBinning(num_bins=10) |
Source code in python/scouter/stubs.pyi
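A configuration might be built as follows. This is a sketch: the import paths and the `EqualWidthBinning(num_bins=...)` constructor argument are assumptions for illustration.

```python
# Hypothetical sketch -- import paths and constructor arguments are assumptions.
from scouter import PsiDriftConfig, PsiAlertConfig, EqualWidthBinning

config = PsiDriftConfig(
    space="fraud",
    name="credit-model",
    version="1.2.0",
    categorical_features=["country"],          # treated as categorical for PSI
    binning_strategy=EqualWidthBinning(num_bins=20),  # instead of the default QuantileBinning
    alert_config=PsiAlertConfig(),
)

# update_config_args is an in-place update of selected fields.
config.update_config_args(version="1.3.0")
```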
binning_strategy
property
writable
¶
binning_strategy
categorical_features
property
writable
¶
list of categorical features
load_from_json_file
staticmethod
¶
Load config from json file
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | Path | Path to json file to load config from. | required |
model_dump_json ¶
update_config_args ¶
update_config_args(
space: Optional[str] = None,
name: Optional[str] = None,
version: Optional[str] = None,
alert_config: Optional[PsiAlertConfig] = None,
categorical_features: Optional[list[str]] = None,
binning_strategy: Optional[
QuantileBinning | EqualWidthBinning
] = None,
) -> None
In-place operation that updates config args
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| space | Optional[str] | Model space | None |
| name | Optional[str] | Model name | None |
| version | Optional[str] | Model version | None |
| alert_config | Optional[PsiAlertConfig] | Alert configuration | None |
| categorical_features | Optional[list[str]] | Categorical features | None |
| binning_strategy | Optional[QuantileBinning \| EqualWidthBinning] | Binning strategy | None |
Source code in python/scouter/stubs.pyi
PsiDriftMap ¶
Drift map of features
features
property
¶
Returns dictionary of features and their reported drift, if any
model_dump_json ¶
model_validate_json
staticmethod
¶
Load drift map from a JSON string.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| json_string | str | JSON string representation of the drift map | required |
save_to_json ¶
Save drift map to json file
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | Optional[Path] | Optional path to save the drift map. If None, outputs to the current working directory. | None |
Returns:
| Type | Description |
|---|---|
| Path | Path to the saved json file |
Source code in python/scouter/stubs.pyi
PsiDriftProfile ¶
from_file
staticmethod
¶
Load drift profile from file
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | Path | Path to the file | required |
model_dump ¶
model_dump_json ¶
model_validate
staticmethod
¶
Load drift profile from dictionary
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | Dict[str, Any] | DriftProfile dictionary | required |
model_validate_json
staticmethod
¶
Load drift profile from json
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| json_string | str | JSON string representation of the drift profile | required |
save_to_json ¶
Save drift profile to json file
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | Optional[Path] | Optional path to save the drift profile. If None, outputs to the current working directory. | None |
Returns:
| Type | Description |
|---|---|
| Path | Path to the saved json file |
Source code in python/scouter/stubs.pyi
update_config_args ¶
update_config_args(
space: Optional[str] = None,
name: Optional[str] = None,
version: Optional[str] = None,
alert_config: Optional[PsiAlertConfig] = None,
categorical_features: Optional[list[str]] = None,
binning_strategy: Optional[
QuantileBinning | EqualWidthBinning
] = None,
) -> None
In-place operation that updates config args
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| space | Optional[str] | Model space | None |
| name | Optional[str] | Model name | None |
| version | Optional[str] | Model version | None |
| alert_config | Optional[PsiAlertConfig] | Alert configuration | None |
| categorical_features | Optional[list[str]] | Categorical features | None |
| binning_strategy | Optional[QuantileBinning \| EqualWidthBinning] | Binning strategy | None |
Source code in python/scouter/stubs.pyi
PsiFeatureDriftProfile ¶
PsiFixedThreshold ¶
Uses a predetermined PSI threshold value, similar to traditional "rule of thumb" approaches (e.g., 0.10 for moderate drift, 0.25 for significant drift).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| threshold | float | Fixed PSI threshold value (must be positive). Common industry values: 0.10, 0.25 | 0.25 |
Raises:
| Type | Description |
|---|---|
| ValueError | If threshold is not positive |
Source code in python/scouter/stubs.pyi
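For reference, the quantity that a fixed threshold is compared against is the standard PSI statistic over matched bin proportions. The sketch below is plain Python illustrating the formula, independent of this library's implementation:

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index over matched bin proportions.

    PSI = sum_i (a_i - e_i) * ln(a_i / e_i), where e_i and a_i are the
    expected (reference) and actual (current) proportions for bin i.
    Assumes all proportions are strictly positive.
    """
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

# Identical distributions yield zero drift; a shifted distribution yields PSI > 0.
print(psi([0.5, 0.5], [0.5, 0.5]))  # 0.0
print(psi([0.5, 0.5], [0.6, 0.4]))  # ~0.0405, well under the 0.10 "moderate" rule of thumb
```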
PsiNormalThreshold ¶
Uses the asymptotic normal distribution of PSI to calculate critical values for population drift detection.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| alpha | float | Significance level (0.0 to 1.0, exclusive). Common values: 0.05 (95% confidence), 0.01 (99% confidence). | 0.05 |
Raises:
| Type | Description |
|---|---|
| ValueError | If alpha is not in the range (0.0, 1.0) |
Source code in python/scouter/stubs.pyi
PsiServerRecord ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| space | str | Model space | required |
| name | str | Model name | required |
| version | str | Model version | required |
| feature | str | Feature name | required |
| bin_id | int | Bin ID | required |
| bin_count | int | Bin count | required |
Source code in python/scouter/stubs.pyi
model_dump_json ¶
PyTask ¶
Python-specific task interface for Task objects and results
result
property
¶
The result of the task if it has been executed, otherwise None.
QuantileBinning ¶
This strategy uses the R-7 quantile method (Hyndman & Fan Type 7) to compute bin edges. It is the default quantile method in R and provides continuous, median-unbiased estimates that are approximately unbiased for normal distributions.
The R-7 method defines quantiles using
- m = 1 - p
- j = floor(n * p + m)
- h = n * p + m - j
- Q(p) = (1 - h) * x[j] + h * x[j+1]
Reference
Hyndman, R. J. & Fan, Y. (1996). "Sample quantiles in statistical packages." The American Statistician, 50(4), pp. 361–365. PDF: https://www.amherst.edu/media/view/129116/original/Sample+Quantiles.pdf
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| num_bins | int | Number of bins to compute using the R-7 quantile method. | 10 |
Source code in python/scouter/stubs.pyi
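The R-7 definition quoted above can be written out directly. The sketch below illustrates the formula in plain Python (0-indexed, so j here equals the formula's 1-indexed j minus one); it is not this library's implementation. NumPy's default `np.quantile` interpolation ("linear") is the same estimator.

```python
def r7_quantile(values, p):
    """R-7 (Hyndman & Fan Type 7) sample quantile -- the formula quoted above."""
    xs = sorted(float(v) for v in values)
    n = len(xs)
    h = (n - 1) * p       # fractional rank, 0-indexed
    j = int(h)            # floor for non-negative h
    g = h - j             # linear interpolation weight between neighbors
    if j + 1 >= n:
        return xs[-1]
    return (1 - g) * xs[j] + g * xs[j + 1]

print(r7_quantile([4, 1, 3, 2], 0.25))  # 1.75
print(r7_quantile([4, 1, 3, 2], 0.5))   # 2.5
```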
num_bins
property
writable
¶
The number of bins to create using the R-7 quantile method
Quantiles ¶
Queue ¶
Individual queue associated with a drift profile
insert ¶
Insert a record into the queue
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| entity | Union[Features, Metrics, LLMRecord] | Entity to insert into the queue. Can be an instance of Features, Metrics, or LLMRecord. | required |
Example
Source code in python/scouter/stubs.pyi
QueueFeature ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Name of the feature | required |
| value | Any | Value of the feature. Can be an int, float, or string. | required |
Example
Source code in python/scouter/stubs.pyi
categorical
staticmethod
¶
Create a categorical feature
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Name of the feature | required |
| value | str | Value of the feature | required |
float
staticmethod
¶
Create a float feature
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Name of the feature | required |
| value | float | Value of the feature | required |
int
staticmethod
¶
Create an integer feature
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Name of the feature | required |
| value | int | Value of the feature | required |
string
staticmethod
¶
Create a string feature
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Name of the feature | required |
| value | str | Value of the feature | required |
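The typed constructors above might be used together like this. The import path is an assumption for illustration.

```python
# Hypothetical sketch -- import path is an assumption.
from scouter import QueueFeature

features = [
    QueueFeature.int("age", 42),                # integer feature
    QueueFeature.float("income", 55000.0),      # float feature
    QueueFeature.string("note", "free text"),   # string feature
    QueueFeature.categorical("country", "US"),  # categorical feature
]
```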
RabbitMQConfig ¶
RabbitMQConfig(
host: Optional[str] = None,
port: Optional[int] = None,
username: Optional[str] = None,
password: Optional[str] = None,
queue: Optional[str] = None,
max_retries: int = 3,
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| host | Optional[str] | RabbitMQ host. If not provided, the value of the RABBITMQ_HOST environment variable is used. | None |
| port | Optional[int] | RabbitMQ port. If not provided, the value of the RABBITMQ_PORT environment variable is used. | None |
| username | Optional[str] | RabbitMQ username. If not provided, the value of the RABBITMQ_USERNAME environment variable is used. | None |
| password | Optional[str] | RabbitMQ password. If not provided, the value of the RABBITMQ_PASSWORD environment variable is used. | None |
| queue | Optional[str] | RabbitMQ queue to publish messages to. If not provided, the value of the RABBITMQ_QUEUE environment variable is used. | None |
| max_retries | int | Maximum number of retries to attempt when publishing messages. Default is 3. | 3 |
Source code in python/scouter/stubs.pyi
RedisConfig ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| address | str | Redis address. If not provided, the value of the REDIS_ADDR environment variable is used and defaults to "redis://localhost:6379". | None |
| channel | str | Redis channel to publish messages to. If not provided, the value of the REDIS_CHANNEL environment variable is used and defaults to "scouter_monitoring". | None |
Source code in python/scouter/stubs.pyi
ResponseLogProbs ¶
RetrievalConfig ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| lat_lng | LatLng | The latitude and longitude configuration. | required |
| language_code | str | The language code for the retrieval. | required |
Source code in python/scouter/stubs.pyi
Rice ¶
RouteMetrics ¶
RustyLogger ¶
debug ¶
Log a debug message.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| message | str | Message to log. | required |
| args | Any | Additional arguments to format the message. | () |
error ¶
Log an error message.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| message | str | Message to log. | required |
| args | Any | Additional arguments to format the message. | () |
get_logger
staticmethod
¶
Get a logger with the provided name.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | Optional[LoggingConfig] | Logging configuration options. | None |
info ¶
Log an info message.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| message | str | Message to log. | required |
| args | Any | Additional arguments to format the message. | () |
setup_logging
staticmethod
¶
Setup logging with the provided configuration.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | Optional[LoggingConfig] | Logging configuration options. | None |
trace ¶
Log a trace message.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| message | str | Message to log. | required |
| args | Any | Additional arguments to format the message. | () |
warn ¶
Log a warning message.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| message | str | Message to log. | required |
| args | Any | Additional arguments to format the message. | () |
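Typical use of the logger methods above might look like this. The import paths and the placeholder style used for the formatting args are assumptions, not confirmed by this reference.

```python
# Hypothetical sketch -- import paths and "{}" formatting placeholders are assumptions.
from scouter import RustyLogger, LoggingConfig

RustyLogger.setup_logging(LoggingConfig())      # configure logging once at startup
logger = RustyLogger.get_logger(LoggingConfig())

logger.info("processed {} records", 128)        # extra args format the message
logger.warn("drift detected for feature {}", "age")
```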
SafetySetting ¶
SafetySetting(
category: HarmCategory,
threshold: HarmBlockThreshold,
method: Optional[HarmBlockMethod] = None,
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| category | HarmCategory | The category of harm to protect against. | required |
| threshold | HarmBlockThreshold | The threshold for blocking content. | required |
| method | Optional[HarmBlockMethod] | The method used for blocking (if any). | None |
Source code in python/scouter/stubs.pyi
Score ¶
A class representing a score with a score value and a reason. This is typically used as a response type for tasks/prompts that require scoring or evaluation of results.
Example:
Prompt(
model="openai:gpt-4o",
message="What is the score of this response?",
system_instruction="system_prompt",
response_format=Score,
)
Scott ¶
ScouterClient ¶
Helper client for interacting with Scouter Server
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | Optional[HttpConfig] | HTTP configuration for interacting with the server. | None |
Source code in python/scouter/stubs.pyi
download_profile ¶
Download profile
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| request | GetProfileRequest | GetProfileRequest | required |
| path | Optional[Path] | Path to save profile | required |
Returns:
| Type | Description |
|---|---|
| str | Path to downloaded profile |
Source code in python/scouter/stubs.pyi
get_alerts ¶
Get alerts
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| request | DriftAlertRequest | DriftAlertRequest | required |
Returns:
| Type | Description |
|---|---|
| List[Alert] | List[Alert] |
get_binned_drift ¶
Get drift map from server
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| drift_request | DriftRequest | DriftRequest object | required |
Returns:
| Type | Description |
|---|---|
| Any | Drift map of type BinnedMetrics \| BinnedPsiFeatureMetrics \| BinnedSpcFeatureMetrics |
Source code in python/scouter/stubs.pyi
get_paginated_traces ¶
Get paginated traces
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| filters | TraceFilters | TraceFilters object | required |
Returns:
| Type | Description |
|---|---|
| TracePaginationResponse | TracePaginationResponse |
get_tags ¶
Get tags for an entity
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| entity_type | str | Entity type | required |
| entity_id | str | Entity ID | required |
Returns:
| Type | Description |
|---|---|
| TagsResponse | TagsResponse |
get_trace_baggage ¶
Get trace baggage
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| trace_id | str | Trace ID | required |
Returns:
| Type | Description |
|---|---|
| TraceBaggageResponse | TraceBaggageResponse |
get_trace_metrics ¶
Get trace metrics
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| request | TraceMetricsRequest | TraceMetricsRequest | required |
Returns:
| Type | Description |
|---|---|
| TraceMetricsResponse | TraceMetricsResponse |
get_trace_spans ¶
Get trace spans
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| trace_id | str | Trace ID | required |
Returns:
| Type | Description |
|---|---|
| TraceSpansResponse | TraceSpansResponse |
refresh_trace_summary ¶
register_profile ¶
Registers a drift profile with the server
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| profile | Any | Drift profile | required |
| set_active | bool | Whether to set the profile as active or inactive | False |
Returns:
| Type | Description |
|---|---|
| bool | boolean |
Source code in python/scouter/stubs.pyi
update_profile_status ¶
Update profile status
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| request | ProfileStatusRequest | ProfileStatusRequest | required |
Returns:
| Type | Description |
|---|---|
| bool | boolean |
ScouterQueue ¶
Main queue class for Scouter. Publishes drift records to the configured transport
transport_config
property
¶
Return the transport configuration used by the queue
from_path
staticmethod
¶
from_path(
path: Dict[str, Path],
transport_config: Union[
KafkaConfig, RabbitMQConfig, RedisConfig, HttpConfig
],
) -> ScouterQueue
Initializes Scouter queue from one or more drift profile paths
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | Dict[str, Path] | Dictionary of drift profile paths. Each key is a user-defined alias for accessing a queue | required |
| transport_config | Union[KafkaConfig, RabbitMQConfig, RedisConfig, HttpConfig] | Transport configuration for the queue publisher. Can be KafkaConfig, RabbitMQConfig, RedisConfig, or HttpConfig | required |
Example
queue = ScouterQueue.from_path(
    path={
        "spc": Path("spc_profile.json"),
        "psi": Path("psi_profile.json"),
    },
    transport_config=KafkaConfig(
        brokers="localhost:9092",
        topic="scouter_topic",
    ),
)
queue["psi"].insert(
    Features(
        features=[
            Feature("feature_1", 1),
            Feature("feature_2", 2.0),
            Feature("feature_3", "value"),
        ]
    )
)
Source code in python/scouter/stubs.pyi
ScouterTestServer ¶
ScouterTestServer(
cleanup: bool = True,
rabbit_mq: bool = False,
kafka: bool = False,
openai: bool = False,
base_path: Optional[Path] = None,
)
When the test server is used as a context manager, it will start the server in a background thread and set the appropriate env vars so that the client can connect to the server. The server will be stopped when the context manager exits and the env vars will be reset.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| cleanup | bool | Whether to cleanup the server after the test. Defaults to True. | True |
| rabbit_mq | bool | Whether to use RabbitMQ as the transport. Defaults to False. | False |
| kafka | bool | Whether to use Kafka as the transport. Defaults to False. | False |
| openai | bool | Whether to create a mock OpenAITest server. Defaults to False. | False |
| base_path | Optional[Path] | The base path for the server. Defaults to None. This is primarily used for testing loading attributes from a pyproject.toml file. | None |
Source code in python/scouter/stubs.pyi
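The context-manager behavior described above might be used in a test like this. The import paths are assumptions for illustration.

```python
# Hypothetical sketch -- import paths are assumptions.
from scouter import ScouterTestServer, ScouterClient, HttpConfig

# Entering the context starts the server in a background thread and sets the
# env vars the client reads, so a default HttpConfig points at the test server.
with ScouterTestServer():
    client = ScouterClient(HttpConfig())
    # ... register profiles, publish records, query alerts against the test server ...

# On exit the server stops and the env vars are reset.
```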
ServerRecord ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| record | Any | Server record to initialize | required |
Source code in python/scouter/stubs.pyi
record
property
¶
Return the drift server record.
ServerRecords ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| records | List[ServerRecord] | List of server records | required |
Source code in python/scouter/stubs.pyi
SlackDispatchConfig ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| channel | str | Slack channel name for where alerts will be reported | required |
Source code in python/scouter/stubs.pyi
SpanEvent ¶
Represents an event within a span.
SpanKind ¶
Enumeration of span kinds.
SpanLink ¶
Represents a link to another span.
SpcAlert ¶
SpcAlertConfig ¶
SpcAlertConfig(
rule: Optional[SpcAlertRule] = None,
dispatch_config: Optional[
SlackDispatchConfig | OpsGenieDispatchConfig
] = None,
schedule: Optional[str | CommonCrons] = None,
features_to_monitor: List[str] = [],
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| rule | Optional[SpcAlertRule] | Alert rule to use. Defaults to Standard | None |
| dispatch_config | Optional[SlackDispatchConfig \| OpsGenieDispatchConfig] | Alert dispatch config. Defaults to console | None |
| schedule | Optional[str \| CommonCrons] | Schedule to run monitor. Defaults to daily at midnight | None |
| features_to_monitor | List[str] | List of features to monitor. Defaults to an empty list, which means all features. | [] |
Source code in python/scouter/stubs.pyi
features_to_monitor
property
writable
¶
Return the features to monitor
SpcAlertRule ¶
SpcAlertRule(
rule: str = "8 16 4 8 2 4 1 1",
zones_to_monitor: List[AlertZone] = [
AlertZone.Zone1,
AlertZone.Zone2,
AlertZone.Zone3,
AlertZone.Zone4,
],
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| rule | str | Rule to use for alerting. Eight digit integer string. Defaults to '8 16 4 8 2 4 1 1' | '8 16 4 8 2 4 1 1' |
| zones_to_monitor | List[AlertZone] | List of zones to monitor. Defaults to all zones. | [Zone1, Zone2, Zone3, Zone4] |
Source code in python/scouter/stubs.pyi
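A rule can be narrowed to specific zones and attached to an alert config as sketched below. The import paths are assumptions for illustration.

```python
# Hypothetical sketch -- import paths are assumptions.
from scouter import SpcAlertRule, SpcAlertConfig, AlertZone

rule = SpcAlertRule(
    rule="8 16 4 8 2 4 1 1",                             # eight-digit rule string
    zones_to_monitor=[AlertZone.Zone3, AlertZone.Zone4],  # only the outer zones
)
alert_config = SpcAlertConfig(rule=rule, features_to_monitor=["age"])
```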
SpcDriftConfig ¶
SpcDriftConfig(
space: str = "__missing__",
name: str = "__missing__",
version: str = "0.1.0",
sample_size: int = 25,
alert_config: SpcAlertConfig = SpcAlertConfig(),
config_path: Optional[Path] = None,
)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| space | str | Model space | '__missing__' |
| name | str | Model name | '__missing__' |
| version | str | Model version. Defaults to 0.1.0 | '0.1.0' |
| sample_size | int | Sample size | 25 |
| alert_config | SpcAlertConfig | Alert configuration | SpcAlertConfig() |
| config_path | Optional[Path] | Optional path to load config from. | None |
Source code in python/scouter/stubs.pyi
load_from_json_file
staticmethod
¶
Load config from json file
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | Path | Path to json file to load config from. | required |
model_dump_json ¶
update_config_args ¶
update_config_args(
space: Optional[str] = None,
name: Optional[str] = None,
version: Optional[str] = None,
sample_size: Optional[int] = None,
alert_config: Optional[SpcAlertConfig] = None,
) -> None
In-place operation that updates config args
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| space | Optional[str] | Model space | None |
| name | Optional[str] | Model name | None |
| version | Optional[str] | Model version | None |
| sample_size | Optional[int] | Sample size | None |
| alert_config | Optional[SpcAlertConfig] | Alert configuration | None |
Source code in python/scouter/stubs.pyi
SpcDriftMap ¶
Drift map of features
features
property
¶
Returns dictionary of features and their data profiles
model_dump_json ¶
model_validate_json
staticmethod
¶
Load drift map from a JSON string.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| json_string | str | JSON string representation of the drift map | required |
save_to_json ¶
Save drift map to json file
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | Optional[Path] | Optional path to save the drift map. If None, outputs to the current working directory. | None |
Returns:
| Type | Description |
|---|---|
| Path | Path to the saved json file |
Source code in python/scouter/stubs.pyi
SpcDriftProfile ¶
from_file
staticmethod
¶
Load drift profile from file
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | Path | Path to the file | required |
model_dump ¶
model_dump_json ¶
model_validate
staticmethod
¶
Load drift profile from dictionary
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | Dict[str, Any] | DriftProfile dictionary | required |
model_validate_json
staticmethod
¶
Load drift profile from json
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| json_string | str | JSON string representation of the drift profile | required |
save_to_json ¶
Save drift profile to json file
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | Optional[Path] | Optional path to save the drift profile. If None, outputs to the current working directory. | None |
Returns:
| Type | Description |
|---|---|
| Path | Path to the saved json file |
Source code in python/scouter/stubs.pyi
update_config_args ¶
update_config_args(
space: Optional[str] = None,
name: Optional[str] = None,
version: Optional[str] = None,
sample_size: Optional[int] = None,
alert_config: Optional[SpcAlertConfig] = None,
) -> None
In-place operation that updates config args
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| space | Optional[str] | Model space | None |
| name | Optional[str] | Model name | None |
| version | Optional[str] | Model version | None |
| sample_size | Optional[int] | Sample size | None |
| alert_config | Optional[SpcAlertConfig] | Alert configuration | None |
Source code in python/scouter/stubs.pyi
SpcFeatureDrift ¶
SpcFeatureDriftProfile ¶
SpcServerRecord ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| space | str | Model space | required |
| name | str | Model name | required |
| version | str | Model version | required |
| feature | str | Feature name | required |
| value | float | Feature value | required |
Source code in python/scouter/stubs.pyi
model_dump_json ¶
SpeechConfig ¶
SquareRoot ¶
StdoutSpanExporter ¶
Exporter that outputs spans to standard output (stdout).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| batch_export | bool | Whether to use batch exporting. Defaults to False. | False |
| sample_ratio | Optional[float] | The sampling ratio for traces. If None, defaults to always sample. | None |
Source code in python/scouter/stubs.pyi
StringStats ¶
Sturges ¶
TagRecord ¶
Represents a single tag record associated with an entity.
TagsResponse ¶
Response structure containing a list of tag records.
Task ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| agent_id | str | The ID of the agent that will execute the task. | required |
| prompt | Prompt | The prompt to use for the task. | required |
| dependencies | List[str] | The dependencies of the task. | [] |
| id | Optional[str] | The ID of the task. If None, a random uuid7 will be generated. | None |
Source code in python/scouter/stubs.pyi
TaskEvent ¶
details
property
¶
Additional details about the event. This can include information such as error messages or other relevant data.
timestamp
property
¶
The timestamp of the event. This is the time when the event occurred.
updated_at
property
¶
The timestamp of when the event was last updated. This is useful for tracking changes to the event.
TaskList ¶
TerrellScott ¶
TestSpanExporter ¶
Exporter for testing that collects spans in memory.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| batch_export | bool | Whether to use batch exporting. Defaults to True. | True |
Source code in python/scouter/stubs.pyi
ThinkingConfig ¶
TraceBaggageRecord ¶
Represents a single baggage record associated with a trace.
TraceBaggageResponse ¶
Response structure containing trace baggage records.
TraceFilters ¶
TraceFilters(
space: Optional[str] = None,
name: Optional[str] = None,
version: Optional[str] = None,
service_name: Optional[str] = None,
has_errors: Optional[bool] = None,
status_code: Optional[int] = None,
start_time: Optional[datetime] = None,
end_time: Optional[datetime] = None,
limit: Optional[int] = None,
cursor_created_at: Optional[datetime] = None,
cursor_trace_id: Optional[str] = None,
)
A struct for filtering traces, generated from Rust pyclass.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
space
|
Optional[str]
|
Model space filter |
None
|
name
|
Optional[str]
|
Model name filter |
None
|
version
|
Optional[str]
|
Model version filter |
None
|
service_name
|
Optional[str]
|
Service name filter |
None
|
has_errors
|
Optional[bool]
|
Filter by presence of errors |
None
|
status_code
|
Optional[int]
|
Filter by root span status code |
None
|
start_time
|
Optional[datetime]
|
Start time boundary (UTC) |
None
|
end_time
|
Optional[datetime]
|
End time boundary (UTC) |
None
|
limit
|
Optional[int]
|
Maximum number of results to return |
None
|
cursor_created_at
|
Optional[datetime]
|
Pagination cursor: created at timestamp |
None
|
cursor_trace_id
|
Optional[str]
|
Pagination cursor: trace ID |
None
|
Source code in python/scouter/stubs.pyi
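Filters are typically combined with the client's pagination endpoint, as sketched below. The import paths and field values are assumptions for illustration.

```python
# Hypothetical sketch -- import paths and field values are assumptions.
from datetime import datetime, timedelta, timezone
from scouter import TraceFilters, ScouterClient

now = datetime.now(timezone.utc)
filters = TraceFilters(
    space="fraud",
    name="credit-model",
    has_errors=True,                      # only traces containing errors
    start_time=now - timedelta(hours=1),  # UTC time boundaries
    end_time=now,
    limit=50,
)
client = ScouterClient()
page = client.get_paginated_traces(filters)
# To fetch the next page, pass the last item's created-at timestamp and trace ID
# back as cursor_created_at / cursor_trace_id.
```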
TraceListItem ¶
Represents a summary item for a trace in a list view.
TraceMetricBucket ¶
Represents aggregated trace metrics for a specific time bucket.
TraceMetricsRequest ¶
TraceMetricsRequest(
start_time: datetime,
end_time: datetime,
bucket_interval: str,
space: Optional[str] = None,
name: Optional[str] = None,
version: Optional[str] = None,
)
Request payload for fetching trace metrics.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| start_time | datetime | Start time boundary (UTC) | required |
| end_time | datetime | End time boundary (UTC) | required |
| bucket_interval | str | The time interval for metric aggregation buckets (e.g., '1 minutes', '30 minutes') | required |
| space | Optional[str] | Model space filter | None |
| name | Optional[str] | Model name filter | None |
| version | Optional[str] | Model version filter | None |
Source code in python/scouter/stubs.pyi
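The bucket_interval string determines how many buckets the window from start_time to end_time is divided into. A small sketch of that arithmetic, assuming the 'N minutes'/'N hours' format shown above (the `parse_interval` helper is hypothetical and not part of scouter):

```python
import math
from datetime import datetime, timedelta, timezone

def parse_interval(text: str) -> timedelta:
    # Interprets strings like '30 minutes'; the exact set of units the
    # server accepts is an assumption here.
    value, unit = text.split()
    seconds = {"seconds": 1, "minutes": 60, "hours": 3600}[unit]
    return timedelta(seconds=int(value) * seconds)

def bucket_count(start: datetime, end: datetime, interval: str) -> int:
    # Number of aggregation buckets needed to cover [start, end).
    return math.ceil((end - start) / parse_interval(interval))

start = datetime(2024, 1, 1, 0, 0, tzinfo=timezone.utc)
end = datetime(2024, 1, 1, 2, 0, tzinfo=timezone.utc)
```

A two-hour window with a '30 minutes' interval, for example, yields four buckets.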
TraceMetricsResponse ¶
Response structure containing aggregated trace metrics.
TracePaginationResponse ¶
Response structure for paginated trace list requests.
TraceSpan ¶
Detailed information for a single span within a trace.
TraceSpansResponse ¶
Response structure containing a list of spans for a trace.
Usage ¶
Usage statistics for a model response.
completion_tokens
property
¶
The number of completion tokens used in the response.
completion_tokens_details
property
¶
Details about the completion tokens used in the response.
prompt_tokens_details
property
¶
Details about the prompt tokens used in the request.
total_tokens
property
¶
The total number of tokens used in the request and response.
VoiceConfig ¶
Configuration for voice generation.
Source code in python/scouter/stubs.pyi
Workflow ¶
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | The name of the workflow. | required |
Source code in python/scouter/stubs.pyi
is_workflow
property
¶
Returns True if the workflow is a valid workflow, otherwise False. This is used to determine if the workflow can be executed.
add_agent ¶
Add an agent to the workflow.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| agent | Agent | The agent to add to the workflow. | required |
add_task ¶
Add a task to the workflow.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| task | Task | The task to add to the workflow. | required |
| output_type | Optional[Any] | The output type to use for the task. This can be a Pydantic model class. | required |
Source code in python/scouter/stubs.pyi
add_task_output_types ¶
Add output types for tasks in the workflow. This is primarily used when loading a workflow, since Python objects are not serializable.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| task_output_types | Dict[str, Any] | A dictionary mapping task IDs to their output types. Each output type can be a Pydantic model class. | required |
Source code in python/scouter/stubs.pyi
add_tasks ¶
Add multiple tasks to the workflow.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| tasks | List[Task] | The tasks to add to the workflow. | required |
execution_plan ¶
Get the execution plan for the workflow.
Returns:
| Type | Description |
|---|---|
| Dict[str, List[str]] | A dictionary where the keys are task IDs and the values are lists of task IDs that the task depends on. |
Source code in python/scouter/stubs.pyi
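The returned mapping is exactly the predecessor format consumed by Python's standard-library graphlib, so a valid execution order can be derived from it directly. A sketch with a made-up plan (the task IDs are illustrative):

```python
from graphlib import TopologicalSorter
from typing import Dict, List

# Illustrative plan in the documented shape: task ID -> IDs it depends on.
plan: Dict[str, List[str]] = {
    "load": [],
    "clean": ["load"],
    "train": ["clean"],
    "report": ["train", "clean"],
}

# TopologicalSorter takes exactly this predecessor mapping and yields an
# order in which every task appears after all of its dependencies.
order = list(TopologicalSorter(plan).static_order())
```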
is_complete ¶
Check if the workflow is complete.
Returns:
| Name | Type | Description |
|---|---|---|
| bool | bool | True if the workflow is complete, False otherwise. |
model_dump_json ¶
Serialize the workflow to a JSON string.
model_validate_json
staticmethod
¶
Load a workflow from a JSON string.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| json_string | str | The JSON string to validate. | required |
| output_types | Optional[Dict[str, Any]] | A dictionary mapping task IDs to their output types. Each output type can be a Pydantic model class. | required |
Returns:
| Name | Type | Description |
|---|---|---|
| Workflow | Workflow | The workflow object. |
Source code in python/scouter/stubs.pyi
pending_count ¶
Get the number of pending tasks in the workflow.
Returns:
| Name | Type | Description |
|---|---|---|
| int | int | The number of pending tasks in the workflow. |
run ¶
Run the workflow. This will execute all tasks in the workflow and return when all tasks are complete.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| global_context | Optional[Dict[str, Any]] | A dictionary of global context to bind to the workflow. All tasks in the workflow will have this context bound to them. | None |
Source code in python/scouter/stubs.pyi
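Conceptually, binding a global context means every task sees those keys alongside its own parameters. A minimal sketch of one plausible merge, assuming task-level values take precedence on key collisions (the actual precedence rules are not documented here):

```python
from typing import Any, Dict

def bind_context(task_params: Dict[str, Any],
                 global_context: Dict[str, Any]) -> Dict[str, Any]:
    # Global context is applied first, so task-level values win on
    # key collisions (assumed precedence; the real Workflow.run may differ).
    merged = dict(global_context)
    merged.update(task_params)
    return merged
```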
WorkflowResult ¶
events
property
¶
The events that occurred during the workflow execution. This is a list of dictionaries where each dictionary contains information about the event such as the task ID, status, and timestamp.
evaluate_llm ¶
evaluate_llm(
records: List[LLMEvalRecord],
metrics: List[LLMEvalMetric],
config: Optional[EvaluationConfig] = None,
) -> LLMEvalResults
Evaluate LLM responses using the provided evaluation metrics.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| records | List[LLMEvalRecord] | List of LLM evaluation records to evaluate. | required |
| metrics | List[LLMEvalMetric] | List of LLMEvalMetric instances to use for evaluation. | required |
| config | Optional[EvaluationConfig] | Optional EvaluationConfig instance to configure evaluation options. | None |
Returns:
| Type | Description |
|---|---|
| LLMEvalResults | The results of the evaluation. |
Source code in python/scouter/stubs.pyi
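The shape of an evaluation run is records × metrics → scores. The sketch below mirrors that shape with plain dataclasses and a toy length metric; the real LLMEvalRecord and LLMEvalMetric classes live in scouter, and real metrics typically score via an LLM judge rather than a local function.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Record:
    # Stand-in for LLMEvalRecord: an ID plus the response under evaluation.
    id: str
    response: str

@dataclass
class Metric:
    # Stand-in for LLMEvalMetric: a name plus a scoring function.
    name: str
    score: Callable[[str], float]

def evaluate(records: List[Record],
             metrics: List[Metric]) -> Dict[str, Dict[str, float]]:
    # Score every record against every metric, keyed by record ID.
    return {r.id: {m.name: m.score(r.response) for m in metrics}
            for r in records}

records = [Record("r1", "short"), Record("r2", "a much longer response")]
metrics = [Metric("length", lambda s: float(len(s)))]
results = evaluate(records, metrics)
```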
flush_tracer ¶
Flush any buffered spans from the tracer.
get_function_type ¶
Determine the function type (sync, async, generator, async generator).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| func | Callable[..., Any] | The function to analyze. | required |
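A plain-Python approximation of this classification can be built on the standard-library inspect module. The return labels below are assumptions; the real scouter helper may use a different representation.

```python
import inspect
from typing import Any, Callable

def classify(func: Callable[..., Any]) -> str:
    # Check async generators first, then coroutines, then sync generators;
    # anything else is treated as a plain sync function.
    if inspect.isasyncgenfunction(func):
        return "async_generator"
    if inspect.iscoroutinefunction(func):
        return "async"
    if inspect.isgeneratorfunction(func):
        return "generator"
    return "sync"

def plain():
    return 1

async def coro():
    return 1

def gen():
    yield 1

async def agen():
    yield 1
```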
init_tracer ¶
init_tracer(
service_name: str = "scouter_service",
transport_config: Optional[
HttpConfig
| KafkaConfig
| RabbitMQConfig
| RedisConfig
] = None,
exporter: (
HttpSpanExporter
| StdoutSpanExporter
| TestSpanExporter
) = StdoutSpanExporter(),
batch_config: Optional[BatchConfig] = None,
profile_space: Optional[str] = None,
profile_name: Optional[str] = None,
profile_version: Optional[str] = None,
) -> None
Initialize the tracer for a service with specific transport and exporter configurations.
This function configures a service tracer, allowing for the specification of the service name, the transport mechanism for exporting spans, and the chosen span exporter.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| service_name | str | The name of the service this tracer is associated with. This is typically a logical identifier for the application or component. | 'scouter_service' |
| transport_config | HttpConfig \| KafkaConfig \| RabbitMQConfig \| RedisConfig \| None | The configuration detailing how spans should be sent out. The supported configuration types are HttpConfig, KafkaConfig, RabbitMQConfig, and RedisConfig. | None |
| exporter | HttpSpanExporter \| StdoutSpanExporter \| TestSpanExporter \| None | The span exporter implementation to use. Available exporters are HttpSpanExporter, StdoutSpanExporter, and TestSpanExporter. | StdoutSpanExporter() |
| batch_config | BatchConfig \| None | Configuration for the batching process. If provided, spans will be queued and exported in batches according to these settings. | None |
| profile_space | Optional[str] | Drift profile association (optional): the space of the drift profile to associate with the tracer. | None |
| profile_name | Optional[str] | Drift profile association (optional): the name of the associated drift profile or service. | None |
| profile_version | Optional[str] | Drift profile association (optional): the version of the drift profile. | None |