API

ActiveSpan

Represents an active tracing span.

context_id property

context_id: str

Get the context ID of the active span.

span_id property

span_id: str

Get the span ID of the current active span.

Returns:

Name Type Description
str str

The span ID.

trace_id property

trace_id: str

Get the trace ID of the current active span.

Returns:

Name Type Description
str str

The trace ID.

add_event

add_event(name: str, attributes: Any) -> None

Add an event to the active span.

Parameters:

Name Type Description Default
name str

The name of the event.

required
attributes Any

Optional attributes for the event. Can be any serializable type or pydantic BaseModel.

required
Source code in python/scouter/stubs.pyi
def add_event(self, name: str, attributes: Any) -> None:
    """Add an event to the active span.

    Args:
        name (str):
            The name of the event.
        attributes (Any):
            Optional attributes for the event.
            Can be any serializable type or pydantic `BaseModel`.
    """

add_queue_item

add_queue_item(
    alias: str,
    item: Union[Features, Metrics, GenAIEvalRecord],
) -> None

A convenience method for adding queue items into a specified queue associated with the active span. It abstracts away the details of queue management and leverages tracing's sampling capabilities to control data ingestion, so correlated queue records and spans/traces are sampled together under the same sampling decision.

Parameters:

Name Type Description Default
alias str

Alias of the queue to add the item into.

required
item Union[Features, Metrics, GenAIEvalRecord]

Item to add into the queue. Can be an instance of Features, Metrics, or GenAIEvalRecord.

required
Example
features = Features(
    features=[
        Feature("feature_1", 1),
        Feature("feature_2", 2.0),
        Feature("feature_3", "value"),
    ]
)
span.add_queue_item(alias, features)
Source code in python/scouter/stubs.pyi
def add_queue_item(
    self,
    alias: str,
    item: Union[Features, Metrics, GenAIEvalRecord],
) -> None:
    """Helpers to add queue entities into a specified queue associated with the active span.
    This is an convenience method that abstracts away the details of queue management and
    leverages tracing's sampling capabilities to control data ingestion. Thus, correlated queue
    records and spans/traces can be sampled together based on the same sampling decision.

    Args:
        alias (str):
            Alias of the queue to add the item into.
        item (Union[Features, Metrics, GenAIEvalRecord]):
            Item to add into the queue.
            Can be an instance of Features, Metrics, or GenAIEvalRecord.

    Example:
        ```python
        features = Features(
            features=[
                Feature("feature_1", 1),
                Feature("feature_2", 2.0),
                Feature("feature_3", "value"),
            ]
        )
        span.add_queue_item(alias, features)
        ```
    """

set_attribute

set_attribute(key: str, value: SerializedType) -> None

Set an attribute on the active span.

Parameters:

Name Type Description Default
key str

The attribute key.

required
value SerializedType

The attribute value.

required
Source code in python/scouter/stubs.pyi
def set_attribute(self, key: str, value: SerializedType) -> None:
    """Set an attribute on the active span.

    Args:
        key (str):
            The attribute key.
        value (SerializedType):
            The attribute value.
    """

set_input

set_input(input: Any, max_length: int = 1000) -> None

Set the input for the active span.

Parameters:

Name Type Description Default
input Any

The input to set. Can be any serializable primitive type (str, int, float, bool, list, dict), or a pydantic BaseModel.

required
max_length int

The maximum length for a given string input. Defaults to 1000.

1000
Source code in python/scouter/stubs.pyi
def set_input(self, input: Any, max_length: int = 1000) -> None:
    """Set the input for the active span.

    Args:
        input (Any):
            The input to set. Can be any serializable primitive type (str, int, float, bool, list, dict),
            or a pydantic `BaseModel`.
        max_length (int):
            The maximum length for a given string input. Defaults to 1000.
    """

set_output

set_output(output: Any, max_length: int = 1000) -> None

Set the output for the active span.

Parameters:

Name Type Description Default
output Any

The output to set. Can be any serializable primitive type (str, int, float, bool, list, dict), or a pydantic BaseModel.

required
max_length int

The maximum length for a given string output. Defaults to 1000.

1000
Source code in python/scouter/stubs.pyi
def set_output(self, output: Any, max_length: int = 1000) -> None:
    """Set the output for the active span.

    Args:
        output (Any):
            The output to set. Can be any serializable primitive type (str, int, float, bool, list, dict),
            or a pydantic `BaseModel`.
        max_length (int):
            The maximum length for a given string output. Defaults to 1000.

    """

set_status

set_status(
    status: str, description: Optional[str] = None
) -> None

Set the status of the active span.

Parameters:

Name Type Description Default
status str

The status code (e.g., "OK", "ERROR").

required
description Optional[str]

Optional description for the status.

None
Source code in python/scouter/stubs.pyi
def set_status(self, status: str, description: Optional[str] = None) -> None:
    """Set the status of the active span.

    Args:
        status (str):
            The status code (e.g., "OK", "ERROR").
        description (Optional[str]):
            Optional description for the status.
    """

set_tag

set_tag(key: str, value: str) -> None

Set a tag on the active span. Tags are similar to attributes except they are often used for indexing and searching spans/traces. All tags are also set as attributes on the span. Before export, tags are extracted and stored in a separate backend table for efficient querying.

Parameters:

Name Type Description Default
key str

The tag key.

required
value str

The tag value.

required
Source code in python/scouter/stubs.pyi
def set_tag(self, key: str, value: str) -> None:
    """Set a tag on the active span. Tags are similar to attributes
    except they are often used for indexing and searching spans/traces.
    All tags are also set as attributes on the span. Before export, tags are
    extracted and stored in a separate backend table for efficient querying.

    Args:
        key (str):
            The tag key.
        value (str):
            The tag value.
    """

Agent

Agent(
    provider: Provider | str,
    system_instruction: Optional[PromptMessage] = None,
)

Parameters:

Name Type Description Default
provider Provider | str

The provider to use for the agent. This can be a Provider enum or a string representing the provider.

required
system_instruction Optional[PromptMessage]

The system message to use for the agent. This can be a string, a list of strings, a Message object, or a list of Message objects. If None, no system message will be used. This is added to all tasks that the agent executes. If a given task contains its own system message, the agent's system message will be prepended to the task's system message.

None

Example:

    agent = Agent(
        provider=Provider.OpenAI,
        system_instructions="You are a helpful assistant.",
    )

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    provider: Provider | str,
    system_instruction: Optional[PromptMessage] = None,
) -> None:
    """Create an Agent object.

    Args:
        provider (Provider | str):
            The provider to use for the agent. This can be a Provider enum or a string
            representing the provider.
        system_instruction (Optional[PromptMessage]):
            The system message to use for the agent. This can be a string, a list of strings,
            a Message object, or a list of Message objects. If None, no system message will be used.
            This is added to all tasks that the agent executes. If a given task contains its own
            system message, the agent's system message will be prepended to the task's system message.

    Example:
    ```python
        agent = Agent(
            provider=Provider.OpenAI,
            system_instructions="You are a helpful assistant.",
        )
    ```
    """

id property

id: str

The ID of the agent. This is a random uuid7 that is generated when the agent is created.

system_instruction property

system_instruction: List[Any]

The system message to use for the agent. This is a list of Message objects.

execute_prompt

execute_prompt(
    prompt: Prompt, output_type: Optional[Any] = None
) -> AgentResponse

Execute a prompt.

Parameters:

Name Type Description Default
prompt Prompt

The prompt to execute.

required
output_type Optional[Any]

The output type to use for the task. This can either be a Pydantic BaseModel class or a supported potato_head response type such as Score.

None

Returns:

Name Type Description
AgentResponse AgentResponse

The response from the agent after executing the task.

Source code in python/scouter/stubs.pyi
def execute_prompt(
    self,
    prompt: Prompt,
    output_type: Optional[Any] = None,
) -> AgentResponse:
    """Execute a prompt.

    Args:
        prompt (Prompt):
            The prompt to execute.
        output_type (Optional[Any]):
            The output type to use for the task. This can either be a Pydantic `BaseModel` class
            or a supported potato_head response type such as `Score`.

    Returns:
        AgentResponse:
            The response from the agent after executing the task.
    """

execute_task

execute_task(
    task: Task, output_type: Optional[Any] = None
) -> AgentResponse

Execute a task.

Parameters:

Name Type Description Default
task Task

The task to execute.

required
output_type Optional[Any]

The output type to use for the task. This can either be a Pydantic BaseModel class or a supported potato_head response type such as Score.

None

Returns:

Name Type Description
AgentResponse AgentResponse

The response from the agent after executing the task.

Source code in python/scouter/stubs.pyi
def execute_task(
    self,
    task: Task,
    output_type: Optional[Any] = None,
) -> AgentResponse:
    """Execute a task.

    Args:
        task (Task):
            The task to execute.
        output_type (Optional[Any]):
            The output type to use for the task. This can either be a Pydantic `BaseModel` class
            or a supported potato_head response type such as `Score`.
    Returns:
        AgentResponse:
            The response from the agent after executing the task.
    """

AgentResponse

id property

id: str

The ID of the agent response.

log_probs property

log_probs: ResponseLogProbs

Returns the log probabilities of the agent response if supported. This is primarily used for debugging and analysis purposes.

response property

response: _ResponseType

The response of the agent. This can be an OpenAIChatResponse, GenerateContentResponse, or AnthropicMessageResponse depending on the provider used.

structured_output property

structured_output: Any

Returns the structured output of the agent response if supported.

token_usage property

token_usage: Any

Returns the token usage of the agent response if supported.

response_text

response_text() -> str

The response text from the agent if available, otherwise an empty string.

Source code in python/scouter/stubs.pyi
def response_text(self) -> str:
    """The response text from the agent if available, otherwise an empty string."""

AlertCondition

AlertCondition(
    baseline_value: float,
    alert_threshold: AlertThreshold,
    delta: Optional[float],
)
Parameters:

Name Type Description Default
baseline_value float

The baseline value to compare against for alerting.

required
alert_threshold AlertThreshold

The condition that determines when an alert should be triggered. Must be one of the AlertThreshold enum members, such as Below, Above, or Outside.

required
delta Optional[float]

Optional delta value that modifies the baseline to create the alert boundary. The interpretation depends on alert_threshold:
- Above: alert if value > (baseline + delta)
- Below: alert if value < (baseline - delta)
- Outside: alert if value is outside [baseline - delta, baseline + delta]

required

Example: condition = AlertCondition(baseline_value=2.0, alert_threshold=AlertThreshold.Below, delta=None)

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    baseline_value: float,
    alert_threshold: AlertThreshold,
    delta: Optional[float],
):
    """Initialize a AlertCondition instance.
    Args:
        baseline_value (float):
            The baseline value to compare against for alerting.
        alert_threshold (AlertThreshold):
            The condition that determines when an alert should be triggered.
            Must be one of the AlertThreshold enum members like Below, Above, or Outside.
        delta (Optional[float], optional):
            Optional delta value that modifies the baseline to create the alert boundary.
            The interpretation depends on alert_threshold:
            - Above: alert if value > (baseline + delta)
            - Below: alert if value < (baseline - delta)
            - Outside: alert if value is outside [baseline - delta, baseline + delta]
    Example:
        condition = AlertCondition(baseline_value=2.0, alert_threshold=AlertThreshold.Below, delta=None)
    """

lower_bound

lower_bound() -> float

Calculate and return the lower bound for alerting based on baseline and delta.

Source code in python/scouter/stubs.pyi
def lower_bound(self) -> float:
    """Calculate and return the lower bound for alerting based on baseline and delta."""

should_alert

should_alert(value: float) -> bool

Determine if an alert should be triggered based on the provided value.

Source code in python/scouter/stubs.pyi
def should_alert(self, value: float) -> bool:
    """Determine if an alert should be triggered based on the provided value."""

upper_bound

upper_bound() -> float

Calculate and return the upper bound for alerting based on baseline and delta.

Source code in python/scouter/stubs.pyi
def upper_bound(self) -> float:
    """Calculate and return the upper bound for alerting based on baseline and delta."""

AlertDispatchType

to_string staticmethod

to_string() -> str

Return the string representation of the alert dispatch type.

Source code in python/scouter/stubs.pyi
@staticmethod
def to_string() -> str:
    """Return the string representation of the alert dispatch type"""

AlertThreshold

Enum representing different alert conditions for monitoring metrics.

Attributes:

Name Type Description
Below AlertThreshold

Indicates that an alert should be triggered when the metric is below a threshold.

Above AlertThreshold

Indicates that an alert should be triggered when the metric is above a threshold.

Outside AlertThreshold

Indicates that an alert should be triggered when the metric is outside a specified range.

from_value staticmethod

from_value(value: str) -> AlertThreshold

Creates an AlertThreshold enum member from a string value.

Parameters:

Name Type Description Default
value str

The string representation of the alert condition.

required

Returns:

Name Type Description
AlertThreshold AlertThreshold

The corresponding AlertThreshold enum member.

Source code in python/scouter/stubs.pyi
@staticmethod
def from_value(value: str) -> "AlertThreshold":
    """
    Creates an AlertThreshold enum member from a string value.

    Args:
        value (str): The string representation of the alert condition.

    Returns:
        AlertThreshold: The corresponding AlertThreshold enum member.
    """

AlignedEvalResult

Eval Result for a specific evaluation

embedding property

embedding: Dict[str, List[float]]

Get embeddings of embedding targets

error_message property

error_message: Optional[str]

Get the error message if the evaluation failed

eval_set property

eval_set: GenAIEvalSet

Get the eval results

mean_embeddings property

mean_embeddings: Dict[str, float]

Get mean embeddings of embedding targets

record_uid property

record_uid: str

Get the unique identifier for the record associated with this result

similarity_scores property

similarity_scores: Dict[str, float]

Get similarity scores of embedding targets

success property

success: bool

Check if the evaluation was successful

task_count property

task_count: int

Get the total number of tasks in the evaluation
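The embedding-related properties above can be understood with a short sketch: a mean embedding collapses each target's vector to its average, and a similarity score compares two vectors, here via cosine similarity (an assumption about the metric; the docs do not name it):

```python
import math
from typing import Dict, List


def mean_embedding(vector: List[float]) -> float:
    """Average a single embedding vector down to one summary value."""
    return sum(vector) / len(vector)


def cosine_similarity(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm


embeddings: Dict[str, List[float]] = {"response": [1.0, 0.0], "reference": [1.0, 0.0]}
means = {k: mean_embedding(v) for k, v in embeddings.items()}
score = cosine_similarity(embeddings["response"], embeddings["reference"])
print(means)  # one summary value per embedding target
print(score)  # identical vectors score 1.0
```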

AllowedTools

AllowedTools(
    mode: AllowedToolsMode, tools: List[ToolDefinition]
)

Configuration for constraining model to specific tools.

This class specifies a list of tools the model is allowed to use, along with the behavior mode.

Examples:

>>> tools = [ToolDefinition("get_weather")]
>>> allowed = AllowedTools(mode=AllowedToolsMode.Auto, tools=tools)
>>>
>>> # Or from function names
>>> allowed = AllowedTools.from_function_names(
...     mode=AllowedToolsMode.Required,
...     function_names=["get_weather", "get_time"]
... )

Parameters:

Name Type Description Default
mode AllowedToolsMode

The mode for tool usage behavior

required
tools List[ToolDefinition]

List of allowed tools

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    mode: AllowedToolsMode,
    tools: List[ToolDefinition],
) -> None:
    """Initialize allowed tools configuration.

    Args:
        mode (AllowedToolsMode):
            The mode for tool usage behavior
        tools (List[ToolDefinition]):
            List of allowed tools
    """

allowed_tools property

allowed_tools: InnerAllowedTools

The inner allowed tools configuration.

type property

type: str

The configuration type (always 'allowed_tools').

from_function_names staticmethod

from_function_names(
    mode: AllowedToolsMode, function_names: List[str]
) -> AllowedTools

Create AllowedTools from function names.

Parameters:

Name Type Description Default
mode AllowedToolsMode

The mode for tool usage behavior

required
function_names List[str]

List of function names to allow

required

Returns:

Name Type Description
AllowedTools AllowedTools

Configured allowed tools instance

Source code in python/scouter/stubs.pyi
@staticmethod
def from_function_names(
    mode: AllowedToolsMode,
    function_names: List[str],
) -> "AllowedTools":
    """Create AllowedTools from function names.

    Args:
        mode (AllowedToolsMode):
            The mode for tool usage behavior
        function_names (List[str]):
            List of function names to allow

    Returns:
        AllowedTools: Configured allowed tools instance
    """

AllowedToolsMode

Mode for allowed tools constraint behavior.

This enum defines how the model should behave when constrained to specific tools.

Examples:

>>> mode = AllowedToolsMode.Auto
>>> mode.value
'auto'

Auto class-attribute instance-attribute

Auto = 'auto'

Model can pick from allowed tools or generate a message

Required class-attribute instance-attribute

Required = 'required'

Model must call one or more of the allowed tools

Annotations

Annotations attached to OpenAI message content.

This class contains metadata and citations for message content, such as URL citations from web search.

Examples:

>>> # Checking for citations
>>> choice = response.choices[0]
>>> for annotation in choice.message.annotations:
...     print(f"Type: {annotation.type}")
...     for citation in annotation.url_citations:
...         print(f"  {citation.title}")

type property

type: str

The annotation type.

url_citations property

url_citations: List[UrlCitation]

URL citations.

AnthropicMessageResponse

Response from Anthropic chat completion API.

Complete response containing generated content and metadata.

Examples:

>>> response = AnthropicMessageResponse(...)
>>> print(response.content[0].text)
>>> print(f"Stop reason: {response.stop_reason}")
>>> print(f"Usage: {response.usage.total_tokens} tokens")

content property

content: List[Any]

Generated content blocks.

id property

id: str

Response ID.

model property

model: str

Model used.

role property

role: str

Message role (always 'assistant').

stop_reason property

stop_reason: Optional[StopReason]

Reason for stopping.

stop_sequence property

stop_sequence: Optional[str]

Stop sequence matched.

type property

type: str

Response type.

usage property

usage: AnthropicUsage

Token usage statistics.

AnthropicSettings

AnthropicSettings(
    max_tokens: int = 4096,
    metadata: Optional[Metadata] = None,
    service_tier: Optional[str] = None,
    stop_sequences: Optional[List[str]] = None,
    stream: Optional[bool] = None,
    system: Optional[str] = None,
    temperature: Optional[float] = None,
    thinking: Optional[AnthropicThinkingConfig] = None,
    top_k: Optional[int] = None,
    top_p: Optional[float] = None,
    tools: Optional[List[AnthropicTool]] = None,
    tool_choice: Optional[AnthropicToolChoice] = None,
    extra_body: Optional[Any] = None,
)

Settings for Anthropic chat completion requests.

Comprehensive configuration for chat completion behavior.

Examples:

>>> # Basic settings
>>> settings = AnthropicSettings(
...     max_tokens=1024,
...     temperature=0.7
... )
>>>
>>> # Advanced settings with tools
>>> tool = AnthropicTool(name="get_weather", ...)
>>> choice = AnthropicToolChoice(type="auto")
>>> settings = AnthropicSettings(
...     max_tokens=2048,
...     temperature=0.5,
...     tools=[tool],
...     tool_choice=choice
... )

Parameters:

Name Type Description Default
max_tokens int

Maximum tokens to generate

4096
metadata Optional[Metadata]

Request metadata

None
service_tier Optional[str]

Service tier ("auto" or "standard_only")

None
stop_sequences Optional[List[str]]

Stop sequences

None
stream Optional[bool]

Enable streaming

None
system Optional[str]

System prompt

None
temperature Optional[float]

Sampling temperature (0.0-1.0)

None
thinking Optional[AnthropicThinkingConfig]

Thinking configuration

None
top_k Optional[int]

Top-k sampling parameter

None
top_p Optional[float]

Nucleus sampling parameter

None
tools Optional[List[AnthropicTool]]

Available tools

None
tool_choice Optional[AnthropicToolChoice]

Tool choice configuration

None
extra_body Optional[Any]

Additional request parameters

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    max_tokens: int = 4096,
    metadata: Optional[Metadata] = None,
    service_tier: Optional[str] = None,
    stop_sequences: Optional[List[str]] = None,
    stream: Optional[bool] = None,
    system: Optional[str] = None,
    temperature: Optional[float] = None,
    thinking: Optional[AnthropicThinkingConfig] = None,
    top_k: Optional[int] = None,
    top_p: Optional[float] = None,
    tools: Optional[List[AnthropicTool]] = None,
    tool_choice: Optional[AnthropicToolChoice] = None,
    extra_body: Optional[Any] = None,
) -> None:
    """Initialize Anthropic settings.

    Args:
        max_tokens (int):
            Maximum tokens to generate
        metadata (Optional[Metadata]):
            Request metadata
        service_tier (Optional[str]):
            Service tier ("auto" or "standard_only")
        stop_sequences (Optional[List[str]]):
            Stop sequences
        stream (Optional[bool]):
            Enable streaming
        system (Optional[str]):
            System prompt
        temperature (Optional[float]):
            Sampling temperature (0.0-1.0)
        thinking (Optional[AnthropicThinkingConfig]):
            Thinking configuration
        top_k (Optional[int]):
            Top-k sampling parameter
        top_p (Optional[float]):
            Nucleus sampling parameter
        tools (Optional[List[AnthropicTool]]):
            Available tools
        tool_choice (Optional[AnthropicToolChoice]):
            Tool choice configuration
        extra_body (Optional[Any]):
            Additional request parameters
    """

extra_body property

extra_body: Optional[Any]

Extra request parameters.

max_tokens property

max_tokens: int

Maximum tokens.

metadata property

metadata: Optional[Metadata]

Request metadata.

service_tier property

service_tier: Optional[str]

Service tier.

stop_sequences property

stop_sequences: Optional[List[str]]

Stop sequences.

stream property

stream: Optional[bool]

Streaming enabled.

system property

system: Optional[str]

System prompt.

temperature property

temperature: Optional[float]

Sampling temperature.

thinking property

thinking: Optional[AnthropicThinkingConfig]

Thinking configuration.

tool_choice property

tool_choice: Optional[AnthropicToolChoice]

Tool choice configuration.

tools property

tools: Optional[List[AnthropicTool]]

Available tools.

top_k property

top_k: Optional[int]

Top-k parameter.

top_p property

top_p: Optional[float]

Top-p parameter.

model_dump

model_dump() -> Dict[str, Any]

Convert settings to dictionary.

Source code in python/scouter/stubs.pyi
def model_dump(self) -> Dict[str, Any]:
    """Convert settings to dictionary."""

AnthropicThinkingConfig

AnthropicThinkingConfig(
    type: str, budget_tokens: Optional[int] = None
)

Configuration for extended thinking.

Controls Claude's extended thinking feature.

Examples:

>>> # Enable thinking with budget
>>> config = AnthropicThinkingConfig(type="enabled", budget_tokens=2000)
>>>
>>> # Disable thinking
>>> config = AnthropicThinkingConfig(type="disabled", budget_tokens=None)

Parameters:

Name Type Description Default
type str

Configuration type ("enabled" or "disabled")

required
budget_tokens Optional[int]

Token budget for thinking

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    type: str,
    budget_tokens: Optional[int] = None,
) -> None:
    """Initialize thinking configuration.

    Args:
        type (str):
            Configuration type ("enabled" or "disabled")
        budget_tokens (Optional[int]):
            Token budget for thinking
    """

budget_tokens property

budget_tokens: Optional[int]

Token budget.

type property

type: str

Configuration type.

AnthropicTool

AnthropicTool(
    name: str,
    description: Optional[str] = None,
    input_schema: Any = None,
    cache_control: Optional[CacheControl] = None,
)

Tool definition for Anthropic API.

Defines a tool that Claude can use.

Examples:

>>> schema = {
...     "type": "object",
...     "properties": {
...         "location": {"type": "string"}
...     },
...     "required": ["location"]
... }
>>> tool = AnthropicTool(
...     name="get_weather",
...     description="Get weather for a location",
...     input_schema=schema,
...     cache_control=None
... )

Parameters:

Name Type Description Default
name str

Tool name

required
description Optional[str]

Tool description

None
input_schema Any

JSON schema for tool input

None
cache_control Optional[CacheControl]

Cache control settings

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    name: str,
    description: Optional[str] = None,
    input_schema: Any = None,
    cache_control: Optional[CacheControl] = None,
) -> None:
    """Initialize tool definition.

    Args:
        name (str):
            Tool name
        description (Optional[str]):
            Tool description
        input_schema (Any):
            JSON schema for tool input
        cache_control (Optional[CacheControl]):
            Cache control settings
    """

AnthropicToolChoice

AnthropicToolChoice(
    type: str,
    disable_parallel_tool_use: Optional[bool] = None,
    name: Optional[str] = None,
)

Tool choice configuration.

Controls how Claude uses tools.

Examples:

>>> # Automatic tool choice
>>> choice = AnthropicToolChoice(
...     type="auto",
...     disable_parallel_tool_use=False,
...     name=None
... )
>>>
>>> # Specific tool
>>> choice = AnthropicToolChoice(
...     type="tool",
...     disable_parallel_tool_use=False,
...     name="get_weather"
... )
>>>
>>> # No tools
>>> choice = AnthropicToolChoice(
...     type="none",
...     disable_parallel_tool_use=None,
...     name=None
... )

Parameters:

Name Type Description Default
type str

Choice type ("auto", "any", "tool", "none")

required
disable_parallel_tool_use Optional[bool]

Whether to disable parallel tool use

None
name Optional[str]

Specific tool name (required if type is "tool")

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    type: str,
    disable_parallel_tool_use: Optional[bool] = None,
    name: Optional[str] = None,
) -> None:
    """Initialize tool choice configuration.

    Args:
        type (str):
            Choice type ("auto", "any", "tool", "none")
        disable_parallel_tool_use (Optional[bool]):
            Whether to disable parallel tool use
        name (Optional[str]):
            Specific tool name (required if type is "tool")
    """

disable_parallel_tool_use property

disable_parallel_tool_use: Optional[bool]

Disable parallel tool use.

name property

name: Optional[str]

Tool name.

type property

type: str

Choice type.

AnthropicUsage

Token usage statistics.

Token usage information for the request.

Examples:

>>> usage = response.usage
>>> print(f"Input tokens: {usage.input_tokens}")
>>> print(f"Output tokens: {usage.output_tokens}")
>>> print(f"Total: {usage.input_tokens + usage.output_tokens}")
>>> if usage.cache_read_input_tokens:
...     print(f"Cache hits: {usage.cache_read_input_tokens}")

cache_creation_input_tokens property

cache_creation_input_tokens: Optional[int]

Tokens used to create cache.

cache_read_input_tokens property

cache_read_input_tokens: Optional[int]

Tokens read from cache.

input_tokens property

input_tokens: int

Input tokens used.

output_tokens property

output_tokens: int

Output tokens generated.

service_tier property

service_tier: Optional[str]

Service tier used.

ApiKeyConfig

ApiKeyConfig(
    name: Optional[str] = None,
    api_key_secret: Optional[str] = None,
    api_key_string: Optional[str] = None,
    http_element_location: Optional[
        HttpElementLocation
    ] = None,
)

API key authentication configuration.

Configures API key authentication for external APIs.

Examples:

>>> config = ApiKeyConfig(
...     name="X-API-Key",
...     api_key_secret="projects/my-project/secrets/api-key",
...     http_element_location=HttpElementLocation.HttpInHeader
... )

Parameters:

Name Type Description Default
name Optional[str]

Name of the API key parameter

None
api_key_secret Optional[str]

Secret manager resource name

None
api_key_string Optional[str]

Direct API key string

None
http_element_location Optional[HttpElementLocation]

Where to place the API key

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    name: Optional[str] = None,
    api_key_secret: Optional[str] = None,
    api_key_string: Optional[str] = None,
    http_element_location: Optional[HttpElementLocation] = None,
) -> None:
    """Initialize API key configuration.

    Args:
        name (Optional[str]):
            Name of the API key parameter
        api_key_secret (Optional[str]):
            Secret manager resource name
        api_key_string (Optional[str]):
            Direct API key string
        http_element_location (Optional[HttpElementLocation]):
            Where to place the API key
    """

api_key_secret property

api_key_secret: Optional[str]

The secret resource name.

api_key_string property

api_key_string: Optional[str]

The direct API key string.

http_element_location property

http_element_location: Optional[HttpElementLocation]

Where to place the API key.

name property

name: Optional[str]

The API key parameter name.

ApiSpecType

API specification type for external retrieval.

Defines the type of external API used for grounding/retrieval.

Examples:

>>> spec = ApiSpecType.ElasticSearch
>>> spec.value
'ELASTIC_SEARCH'

ApiSpecUnspecified class-attribute instance-attribute

ApiSpecUnspecified = 'ApiSpecType'

Unspecified API spec

ElasticSearch class-attribute instance-attribute

ElasticSearch = 'ApiSpecType'

Elasticsearch API

SimpleSearch class-attribute instance-attribute

SimpleSearch = 'ApiSpecType'

Simple search API

AssertionTask

AssertionTask(
    id: str,
    expected_value: Any,
    operator: ComparisonOperator,
    field_path: Optional[str] = None,
    description: Optional[str] = None,
    depends_on: Optional[Sequence[str]] = None,
    condition: bool = False,
)

Assertion-based evaluation task for LLM monitoring.

Defines a rule-based assertion that evaluates values extracted from LLM context/responses against expected conditions without requiring additional LLM calls. Assertions are efficient, deterministic evaluations ideal for validating structured outputs, checking thresholds, or verifying data constraints.

Assertions can operate on:
  • Nested fields via dot-notation paths (e.g., "response.user.age")
  • Top-level context values when field_path is None
  • String, numeric, boolean, or collection values

Common Use Cases:
  • Validate response structure ("response.status" == "success")
  • Check numeric thresholds ("response.confidence" >= 0.8)
  • Verify required fields exist ("response.user.id" is not None)
  • Validate string patterns ("response.language" contains "en")

Examples:

Basic numeric comparison:

>>> # Context at runtime: {"response": {"user": {"age": 25}}}
>>> task = AssertionTask(
...     id="check_user_age",
...     field_path="response.user.age",
...     operator=ComparisonOperator.GreaterThan,
...     expected_value=18,
...     description="Verify user is an adult"
... )

Checking top-level fields:

>>> # Context at runtime: {"user": {"age": 25}}
>>> task = AssertionTask(
...     id="check_age",
...     field_path="user.age",
...     operator=ComparisonOperator.GreaterThanOrEqual,
...     expected_value=21,
...     description="Check minimum age requirement"
... )

Operating on entire context (no nested path):

>>> # Context at runtime: 25
>>> task = AssertionTask(
...     id="age_threshold",
...     field_path=None,
...     operator=ComparisonOperator.GreaterThan,
...     expected_value=18,
...     description="Validate age value"
... )

String validation:

>>> # Context: {"response": {"status": "completed"}}
>>> task = AssertionTask(
...     id="status_check",
...     field_path="response.status",
...     operator=ComparisonOperator.Equals,
...     expected_value="completed",
...     description="Verify completion status"
... )

Collection membership:

>>> # Context: {"response": {"tags": ["valid", "processed"]}}
>>> task = AssertionTask(
...     id="tag_validation",
...     field_path="response.tags",
...     operator=ComparisonOperator.Contains,
...     expected_value="valid",
...     description="Check for required tag"
... )

With dependencies:

>>> task = AssertionTask(
...     id="confidence_check",
...     field_path="response.confidence",
...     operator=ComparisonOperator.GreaterThan,
...     expected_value=0.9,
...     description="High confidence validation",
...     depends_on=["status_check"]
... )
Note:
  • Field paths use dot-notation for nested access
  • Field paths are case-sensitive
  • When field_path is None, the entire context is used as the value
  • Type mismatches between actual and expected values will fail the assertion
  • Dependencies are executed before this task
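The dot-notation extraction described above can be sketched in plain Python. This is illustrative only; resolve_field_path is a hypothetical helper, not part of Scouter's API:

```python
from typing import Any, Optional

def resolve_field_path(context: Any, field_path: Optional[str]) -> Any:
    """Resolve a dot-notation path (e.g., "response.user.age") against a context.

    When field_path is None, the entire context is returned, mirroring
    AssertionTask's behavior.
    """
    if field_path is None:
        return context
    value = context
    for key in field_path.split("."):
        value = value[key]  # keys are case-sensitive
    return value

ctx = {"response": {"user": {"age": 25}}}
assert resolve_field_path(ctx, "response.user.age") == 25
assert resolve_field_path(25, None) == 25
```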

Parameters:

Name Type Description Default
id str

Unique identifier for the task. Will be converted to lowercase. Used to reference this task in dependencies and results.

required
expected_value Any

The expected value to compare against. Can be any JSON-serializable type: str, int, float, bool, list, dict, or None.

required
operator ComparisonOperator

Comparison operator to use for the assertion. Must be a ComparisonOperator enum value.

required
field_path Optional[str]

Optional dot-notation path to extract value from context (e.g., "response.user.age"). If None, the entire context is used as the comparison value.

None
description Optional[str]

Optional human-readable description of what this assertion validates. Useful for understanding evaluation results.

None
depends_on Optional[Sequence[str]]

Optional list of task IDs that must complete successfully before this task executes. Empty list if not provided.

None
condition bool

If True, this assertion task acts as a condition for subsequent tasks. If the assertion fails, dependent tasks will be skipped and this task will be excluded from final results.

False

Raises:

Type Description
TypeError

If expected_value is not JSON-serializable or if operator is not a valid ComparisonOperator.

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    id: str,
    expected_value: Any,
    operator: ComparisonOperator,
    field_path: Optional[str] = None,
    description: Optional[str] = None,
    depends_on: Optional[Sequence[str]] = None,
    condition: bool = False,
):
    """Initialize an assertion task for rule-based evaluation.

    Args:
        id:
            Unique identifier for the task. Will be converted to lowercase.
            Used to reference this task in dependencies and results.
        expected_value:
            The expected value to compare against. Can be any JSON-serializable
            type: str, int, float, bool, list, dict, or None.
        operator:
            Comparison operator to use for the assertion. Must be a
            ComparisonOperator enum value.
        field_path:
            Optional dot-notation path to extract value from context
            (e.g., "response.user.age"). If None, the entire context
            is used as the comparison value.
        description:
            Optional human-readable description of what this assertion validates.
            Useful for understanding evaluation results.
        depends_on:
            Optional list of task IDs that must complete successfully before
            this task executes. Empty list if not provided.
        condition:
            If True, this assertion task acts as a condition for subsequent tasks.
            If the assertion fails, dependent tasks will be skipped and this task
            will be excluded from final results.

    Raises:
        TypeError: If expected_value is not JSON-serializable or if operator
            is not a valid ComparisonOperator.
    """

depends_on property writable

depends_on: List[str]

List of task IDs this task depends on.

description property writable

description: Optional[str]

Human-readable description of the assertion.

expected_value property

expected_value: Any

Expected value for comparison.

Returns:

Type Description
Any

The expected value as a Python object (deserialized from the internal
JSON representation).

field_path property writable

field_path: Optional[str]

Dot-notation path to field in context, or None for entire context.

id property writable

id: str

Unique task identifier (lowercase).

operator property writable

operator: ComparisonOperator

Comparison operator for the assertion.

Attribute

Represents a key-value attribute associated with a span.

Audio

Audio output from OpenAI chat completions.

This class contains audio data generated by the model when audio output is requested.

Examples:

>>> # Accessing audio from response
>>> choice = response.choices[0]
>>> if choice.message.audio:
...     audio = choice.message.audio
...     print(f"Audio ID: {audio.id}")
...     print(f"Transcript: {audio.transcript}")
...     # audio.data contains base64 encoded audio

data property

data: str

Base64 encoded audio data.

expires_at property

expires_at: int

Unix timestamp when audio expires.

id property

id: str

Audio ID.

transcript property

transcript: str

Audio transcript.

AudioParam

AudioParam(format: str, voice: str)

Audio output configuration for OpenAI chat completions.

This class provides configuration for audio output in chat completions, including format and voice selection for text-to-speech capabilities.

Examples:

>>> audio = AudioParam(format="mp3", voice="alloy")
>>> audio.format
'mp3'
>>> audio.voice
'alloy'

Parameters:

Name Type Description Default
format str

Audio output format (e.g., "mp3", "opus", "aac", "flac", "wav", "pcm")

required
voice str

Voice to use for text-to-speech (e.g., "alloy", "echo", "fable", "onyx", "nova", "shimmer")

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    format: str,
    voice: str,
) -> None:
    """Initialize audio output parameters.

    Args:
        format (str):
            Audio output format (e.g., "mp3", "opus", "aac", "flac", "wav", "pcm")
        voice (str):
            Voice to use for text-to-speech (e.g., "alloy", "echo", "fable",
            "onyx", "nova", "shimmer")
    """

format property

format: str

The audio output format.

voice property

voice: str

The voice to use for text-to-speech.

model_dump

model_dump() -> Dict[str, Any]

Convert audio parameters to a dictionary.

Returns:

Type Description
Dict[str, Any]

Dict[str, Any]: Dictionary representation of audio parameters

Source code in python/scouter/stubs.pyi
def model_dump(self) -> Dict[str, Any]:
    """Convert audio parameters to a dictionary.

    Returns:
        Dict[str, Any]: Dictionary representation of audio parameters
    """

AuthConfig

Authentication configuration wrapper.

Wraps authentication type and configuration.

Examples:

>>> config = AuthConfig(
...     auth_type=AuthType.ApiKeyAuth,
...     auth_config=AuthConfigValue(
...         api_key_config=ApiKeyConfig(...)
...     )
... )

auth_config property

auth_config: AuthConfigValue

The authentication configuration.

auth_type property

auth_type: AuthType

The authentication type.

AuthConfigValue

AuthConfigValue(
    api_key_config: Optional[ApiKeyConfig] = None,
    http_basic_auth_config: Optional[
        HttpBasicAuthConfig
    ] = None,
    google_service_account_config: Optional[
        GoogleServiceAccountConfig
    ] = None,
    oauth_config: Optional[OauthConfig] = None,
    oidc_config: Optional[OidcConfig] = None,
)

Union type for authentication configuration.

Represents one of several authentication methods.

Examples:

>>> # API key auth
>>> config = AuthConfigValue(
...     api_key_config=ApiKeyConfig(...)
... )
>>> # OAuth
>>> config = AuthConfigValue(
...     oauth_config=OauthConfig(...)
... )

Exactly one configuration type must be provided.

Parameters:

Name Type Description Default
api_key_config Optional[ApiKeyConfig]

API key authentication

None
http_basic_auth_config Optional[HttpBasicAuthConfig]

HTTP Basic authentication

None
google_service_account_config Optional[GoogleServiceAccountConfig]

Service account authentication

None
oauth_config Optional[OauthConfig]

OAuth authentication

None
oidc_config Optional[OidcConfig]

OIDC authentication

None

Raises:

Type Description
TypeError

If configuration is invalid

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    api_key_config: Optional[ApiKeyConfig] = None,
    http_basic_auth_config: Optional[HttpBasicAuthConfig] = None,
    google_service_account_config: Optional[GoogleServiceAccountConfig] = None,
    oauth_config: Optional[OauthConfig] = None,
    oidc_config: Optional[OidcConfig] = None,
) -> None:
    """Initialize auth configuration value.

    Exactly one configuration type must be provided.

    Args:
        api_key_config (Optional[ApiKeyConfig]):
            API key authentication
        http_basic_auth_config (Optional[HttpBasicAuthConfig]):
            HTTP Basic authentication
        google_service_account_config (Optional[GoogleServiceAccountConfig]):
            Service account authentication
        oauth_config (Optional[OauthConfig]):
            OAuth authentication
        oidc_config (Optional[OidcConfig]):
            OIDC authentication

    Raises:
        TypeError: If configuration is invalid
    """

AuthType

Authentication type for external APIs.

Specifies the authentication method used to access external APIs.

Examples:

>>> auth = AuthType.ApiKeyAuth
>>> auth.value
'API_KEY_AUTH'

ApiKeyAuth class-attribute instance-attribute

ApiKeyAuth = 'AuthType'

API key authentication

AuthTypeUnspecified class-attribute instance-attribute

AuthTypeUnspecified = 'AuthType'

Unspecified auth type

GoogleServiceAccountAuth class-attribute instance-attribute

GoogleServiceAccountAuth = 'AuthType'

Google service account authentication

HttpBasicAuth class-attribute instance-attribute

HttpBasicAuth = 'AuthType'

HTTP basic authentication

NoAuth class-attribute instance-attribute

NoAuth = 'AuthType'

No authentication

Oauth class-attribute instance-attribute

Oauth = 'AuthType'

OAuth authentication

OidcAuth class-attribute instance-attribute

OidcAuth = 'AuthType'

OIDC authentication

AutoRoutingMode

AutoRoutingMode(
    model_routing_preference: Optional[
        ModelRoutingPreference
    ] = None,
)

Configuration for automatic model routing.

Controls model selection based on routing preferences when using automatic routing features.

Examples:

>>> # Prioritize quality over cost
>>> mode = AutoRoutingMode(
...     model_routing_preference=ModelRoutingPreference.PrioritizeQuality
... )
>>> # Balance quality and cost
>>> mode = AutoRoutingMode(
...     model_routing_preference=ModelRoutingPreference.Balanced
... )

Parameters:

Name Type Description Default
model_routing_preference Optional[ModelRoutingPreference]

Preference for model selection

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    model_routing_preference: Optional[ModelRoutingPreference] = None,
) -> None:
    """Initialize automatic routing configuration.

    Args:
        model_routing_preference (Optional[ModelRoutingPreference]):
            Preference for model selection
    """

model_routing_preference property

model_routing_preference: Optional[ModelRoutingPreference]

The routing preference.

Base64ImageSource

Base64ImageSource(media_type: str, data: str)

Base64-encoded image source.

Image data encoded in base64 format with media type.

Examples:

>>> source = Base64ImageSource(
...     media_type="image/jpeg",
...     data="base64_encoded_data_here"
... )

Parameters:

Name Type Description Default
media_type str

Image media type (e.g., "image/jpeg", "image/png")

required
data str

Base64-encoded image data

required
Source code in python/scouter/stubs.pyi
def __init__(self, media_type: str, data: str) -> None:
    """Initialize base64 image source.

    Args:
        media_type (str):
            Image media type (e.g., "image/jpeg", "image/png")
        data (str):
            Base64-encoded image data
    """

data property

data: str

Base64-encoded image data.

media_type property

media_type: str

Image media type.

type property

type: str

Source type (always 'base64').
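Producing the data string from raw bytes is straightforward with the standard library. A minimal sketch, assuming image_bytes holds your raw image content:

```python
import base64

# Placeholder bytes; in practice, read the image file in binary mode.
image_bytes = b"\x89PNG\r\n\x1a\n"
data = base64.b64encode(image_bytes).decode("ascii")

# The encoding round-trips back to the original bytes.
assert base64.b64decode(data) == image_bytes
```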

Base64PDFSource

Base64PDFSource(data: str)

Base64-encoded PDF source.

PDF document data encoded in base64 format.

Examples:

>>> source = Base64PDFSource(data="base64_encoded_pdf_data")

Parameters:

Name Type Description Default
data str

Base64-encoded PDF data

required
Source code in python/scouter/stubs.pyi
def __init__(self, data: str) -> None:
    """Initialize base64 PDF source.

    Args:
        data (str):
            Base64-encoded PDF data
    """

data property

data: str

Base64-encoded PDF data.

media_type property

media_type: str

Media type (always 'application/pdf').

type property

type: str

Source type (always 'base64').

BaseModel

Bases: Protocol

Protocol for pydantic BaseModel to ensure compatibility with context values.

model_dump

model_dump() -> Dict[str, Any]

Dump the model as a dictionary

Source code in python/scouter/stubs.pyi
def model_dump(self) -> Dict[str, Any]:
    """Dump the model as a dictionary"""

model_dump_json

model_dump_json() -> str

Dump the model as a JSON string

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Dump the model as a JSON string"""

BaseTracer

BaseTracer(name: str)

Parameters:

Name Type Description Default
name str

The name of the service for tracing.

required
Source code in python/scouter/stubs.pyi
def __init__(self, name: str) -> None:
    """Initialize the BaseTracer with a service name.

    Args:
        name (str):
            The name of the service for tracing.
    """

current_span

current_span() -> ActiveSpan

Get the current active span.

Returns:

Name Type Description
ActiveSpan ActiveSpan

The current active span. Raises an error if no active span exists.

Source code in python/scouter/stubs.pyi
def current_span(self) -> ActiveSpan:
    """Get the current active span.

    Returns:
        ActiveSpan:
            The current active span.
            Raises an error if no active span exists.
    """

set_scouter_queue

set_scouter_queue(queue: ScouterQueue) -> None

Add a ScouterQueue to the tracer. This allows the tracer to manage and export queue entities in conjunction with span data for correlated monitoring and observability.

Parameters:

Name Type Description Default
queue ScouterQueue

The ScouterQueue instance to add.

required
Source code in python/scouter/stubs.pyi
def set_scouter_queue(self, queue: "ScouterQueue") -> None:
    """Add a ScouterQueue to the tracer. This allows the tracer to manage
    and export queue entities in conjunction with span data for correlated
    monitoring and observability.

    Args:
        queue (ScouterQueue):
            The ScouterQueue instance to add.
    """

shutdown

shutdown() -> None

Shutdown the tracer and flush any remaining spans.

Source code in python/scouter/stubs.pyi
def shutdown(self) -> None:
    """Shutdown the tracer and flush any remaining spans."""

start_as_current_span

start_as_current_span(
    name: str,
    kind: Optional[SpanKind] = SpanKind.Internal,
    label: Optional[str] = None,
    attributes: Optional[dict[str, str]] = None,
    baggage: Optional[dict[str, str]] = None,
    tags: Optional[dict[str, str]] = None,
    parent_context_id: Optional[str] = None,
    trace_id: Optional[str] = None,
    span_id: Optional[str] = None,
    remote_sampled: Optional[bool] = None,
) -> ActiveSpan

Context manager to start a new span as the current span.

Parameters:

Name Type Description Default
name str

The name of the span.

required
kind Optional[SpanKind]

The kind of span (e.g., "SERVER", "CLIENT").

Internal
label Optional[str]

An optional label for the span.

None
attributes Optional[dict[str, str]]

Optional attributes to set on the span.

None
baggage Optional[dict[str, str]]

Optional baggage items to attach to the span.

None
tags Optional[dict[str, str]]

Optional tags to set on the span and trace.

None
parent_context_id Optional[str]

Optional parent span context ID.

None
trace_id Optional[str]

Optional trace ID to associate with the span. This is useful when linking spans across different services or systems.

None
span_id Optional[str]

Optional span ID to associate with the span. This will be the parent span ID.

None
remote_sampled Optional[bool]

Optional flag indicating if the span was sampled remotely.

None

Returns: ActiveSpan: The newly started span, usable as a context manager.

Source code in python/scouter/stubs.pyi
def start_as_current_span(
    self,
    name: str,
    kind: Optional[SpanKind] = SpanKind.Internal,
    label: Optional[str] = None,
    attributes: Optional[dict[str, str]] = None,
    baggage: Optional[dict[str, str]] = None,
    tags: Optional[dict[str, str]] = None,
    parent_context_id: Optional[str] = None,
    trace_id: Optional[str] = None,
    span_id: Optional[str] = None,
    remote_sampled: Optional[bool] = None,
) -> ActiveSpan:
    """Context manager to start a new span as the current span.

    Args:
        name (str):
            The name of the span.
        kind (Optional[SpanKind]):
            The kind of span (e.g., "SERVER", "CLIENT").
        label (Optional[str]):
            An optional label for the span.
        attributes (Optional[dict[str, str]]):
            Optional attributes to set on the span.
        baggage (Optional[dict[str, str]]):
            Optional baggage items to attach to the span.
        tags (Optional[dict[str, str]]):
            Optional tags to set on the span and trace.
        parent_context_id (Optional[str]):
            Optional parent span context ID.
        trace_id (Optional[str]):
            Optional trace ID to associate with the span. This is useful
            when linking spans across different services or systems.
        span_id (Optional[str]):
            Optional span ID to associate with the span. This will be the parent span ID.
        remote_sampled (Optional[bool]):
            Optional flag indicating if the span was sampled remotely.
    Returns:
        ActiveSpan: The newly started span, usable as a context manager.
    """

BatchConfig

BatchConfig(
    max_queue_size: int = 2048,
    scheduled_delay_ms: int = 5000,
    max_export_batch_size: int = 512,
)

Configuration for batch exporting of spans.

Parameters:

Name Type Description Default
max_queue_size int

The maximum queue size for spans. Defaults to 2048.

2048
scheduled_delay_ms int

The delay in milliseconds between export attempts. Defaults to 5000.

5000
max_export_batch_size int

The maximum batch size for exporting spans. Defaults to 512.

512
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    max_queue_size: int = 2048,
    scheduled_delay_ms: int = 5000,
    max_export_batch_size: int = 512,
) -> None:
    """Initialize the BatchConfig.

    Args:
        max_queue_size (int):
            The maximum queue size for spans. Defaults to 2048.
        scheduled_delay_ms (int):
            The delay in milliseconds between export attempts. Defaults to 5000.
        max_export_batch_size (int):
            The maximum batch size for exporting spans. Defaults to 512.
    """

Behavior

Function execution behavior.

Specifies whether function calls are blocking or non-blocking.

Examples:

>>> behavior = Behavior.Blocking
>>> behavior.value
'BLOCKING'

Blocking class-attribute instance-attribute

Blocking = 'Behavior'

Function execution blocks until complete

NonBlocking class-attribute instance-attribute

NonBlocking = 'Behavior'

Function execution does not block

Unspecified class-attribute instance-attribute

Unspecified = 'Behavior'

Unspecified behavior

Bin

id property

id: int

Return the bin id.

lower_limit property

lower_limit: float

Return the lower limit of the bin.

proportion property

proportion: float

Return the proportion of data found in the bin.

upper_limit property

upper_limit: Optional[float]

Return the upper limit of the bin.

Blob

Blob(
    mime_type: str,
    data: str,
    display_name: Optional[str] = None,
)

Inline binary data.

Contains raw binary data encoded in base64.

Examples:

>>> import base64
>>> image_data = base64.b64encode(image_bytes).decode('utf-8')
>>> blob = Blob(
...     mime_type="image/png",
...     data=image_data,
...     display_name="Example Image"
... )

Parameters:

Name Type Description Default
mime_type str

IANA MIME type

required
data str

Base64-encoded binary data

required
display_name Optional[str]

Optional display name

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    mime_type: str,
    data: str,
    display_name: Optional[str] = None,
) -> None:
    """Initialize binary data blob.

    Args:
        mime_type (str):
            IANA MIME type
        data (str):
            Base64-encoded binary data
        display_name (Optional[str]):
            Optional display name
    """

data property

data: str

The base64-encoded data.

display_name property

display_name: Optional[str]

The display name.

mime_type property

mime_type: str

The MIME type.

BlockedReason

Reason why content was blocked.

Indicates why a prompt or response was blocked by content filters.

Examples:

>>> reason = BlockedReason.Safety
>>> reason.value
'SAFETY'

BlockedReasonUnspecified class-attribute instance-attribute

BlockedReasonUnspecified = 'BlockedReason'

Unspecified reason

Blocklist class-attribute instance-attribute

Blocklist = 'BlockedReason'

Blocked due to blocklist match

ImageSafety class-attribute instance-attribute

ImageSafety = 'BlockedReason'

Blocked for image safety

Jailbreak class-attribute instance-attribute

Jailbreak = 'BlockedReason'

Blocked as jailbreak attempt

ModelArmor class-attribute instance-attribute

ModelArmor = 'BlockedReason'

Blocked by Model Armor

Other class-attribute instance-attribute

Other = 'BlockedReason'

Blocked for other reasons

ProhibitedContent class-attribute instance-attribute

ProhibitedContent = 'BlockedReason'

Contains prohibited content

Safety class-attribute instance-attribute

Safety = 'BlockedReason'

Blocked for safety reasons

CacheControl

CacheControl(cache_type: str, ttl: Optional[str] = None)

Cache control configuration.

Controls prompt caching behavior.

Examples:

>>> # 5 minute cache
>>> cache = CacheControl(cache_type="ephemeral", ttl="5m")
>>>
>>> # 1 hour cache
>>> cache = CacheControl(cache_type="ephemeral", ttl="1h")

Parameters:

Name Type Description Default
cache_type str

Cache type (typically "ephemeral")

required
ttl Optional[str]

Time-to-live ("5m" or "1h")

None
Source code in python/scouter/stubs.pyi
def __init__(self, cache_type: str, ttl: Optional[str] = None) -> None:
    """Initialize cache control.

    Args:
        cache_type (str):
            Cache type (typically "ephemeral")
        ttl (Optional[str]):
            Time-to-live ("5m" or "1h")
    """

Candidate

Response candidate from the model.

A single generated response option with content and metadata.

Examples:

>>> candidate = Candidate(
...     index=0,
...     content=GeminiContent(...),
...     finish_reason=FinishReason.Stop,
...     safety_ratings=[SafetyRating(...)],
...     citation_metadata=CitationMetadata(...)
... )

avg_logprobs property

avg_logprobs: Optional[float]

Average log probability.

citation_metadata property

citation_metadata: Optional[CitationMetadata]

Citation metadata.

content property

content: GeminiContent

Generated content.

finish_message property

finish_message: Optional[str]

Detailed finish reason message.

finish_reason property

finish_reason: Optional[FinishReason]

Why generation stopped.

grounding_metadata property

grounding_metadata: Optional[GroundingMetadata]

Grounding metadata.

index property

index: Optional[int]

Candidate index.

logprobs_result property

logprobs_result: Optional[LogprobsResult]

Detailed log probabilities.

safety_ratings property

safety_ratings: Optional[List[SafetyRating]]

Safety ratings.

url_context_metadata property

url_context_metadata: Optional[UrlContextMetadata]

URL context metadata.

CharStats

max_length property

max_length: int

Maximum string length

mean_length property

mean_length: float

Mean string length

median_length property

median_length: int

Median string length

min_length property

min_length: int

Minimum string length

ChatCompletionMessage

Message from OpenAI chat completion response.

This class represents the model's response message, including text content, tool calls, audio, and annotations.

Examples:

>>> # Accessing message from response
>>> choice = response.choices[0]
>>> message = choice.message
>>> print(f"Role: {message.role}")
>>> print(f"Content: {message.content}")
>>>
>>> # Checking for tool calls
>>> if message.tool_calls:
...     for call in message.tool_calls:
...         print(f"Function: {call.function.name}")

annotations property

annotations: List[Annotations]

Message annotations.

audio property

audio: Optional[Audio]

Audio output if requested.

content property

content: Optional[str]

The message content.

refusal property

refusal: Optional[str]

Refusal reason if model refused request.

role property

role: str

The message role.

tool_calls property

tool_calls: List[ToolCall]

Tool calls made by the model.

ChatMessage

ChatMessage(
    role: str,
    content: Union[
        str,
        List[
            Union[
                str,
                TextContentPart,
                ImageContentPart,
                InputAudioContentPart,
                FileContentPart,
            ]
        ],
    ],
    name: Optional[str] = None,
)

Message for OpenAI chat completions.

This class represents a single message in a chat completion conversation, supporting multiple content types including text, images, audio, and files.

Examples:

>>> # Simple text message
>>> msg = ChatMessage(role="user", content="Hello!")
>>>
>>> # Message with image
>>> image = ImageContentPart(url="https://example.com/image.jpg")
>>> msg = ChatMessage(role="user", content=[image])
>>>
>>> # Mixed content message
>>> msg = ChatMessage(
...     role="user",
...     content=["Describe this image:", image]
... )
>>>
>>> # System message with name
>>> msg = ChatMessage(
...     role="system",
...     content="You are a helpful assistant.",
...     name="assistant_v1"
... )

Parameters:

Name Type Description Default
role str

Message role ("system", "user", "assistant", "tool", "developer")

required
content Union[str, List[...]]

Message content. Can be:
- String: Simple text message
- List: Mixed content with strings and content parts
- ContentPart: Single structured content part

required
name Optional[str]

Optional name for the message

None

Raises:

Type Description
TypeError

If content format is invalid

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    role: str,
    content: Union[
        str,
        List[
            Union[
                str,
                TextContentPart,
                ImageContentPart,
                InputAudioContentPart,
                FileContentPart,
            ]
        ],
    ],
    name: Optional[str] = None,
) -> None:
    """Initialize chat message.

    Args:
        role (str):
            Message role ("system", "user", "assistant", "tool", "developer")
        content (Union[str, List[...]]):
            Message content - can be:
            - String: Simple text message
            - List: Mixed content with strings and content parts
            - ContentPart: Single structured content part
        name (Optional[str]):
            Optional name for the message

    Raises:
        TypeError: If content format is invalid
    """

content property

content: List[
    Union[
        TextContentPart,
        ImageContentPart,
        InputAudioContentPart,
        FileContentPart,
    ]
]

The message content parts.

name property

name: Optional[str]

The message name.

role property

role: str

The message role.

bind

bind(
    name: Optional[str] = None,
    value: Optional[str | int | float | bool | list] = None,
) -> ChatMessage

Bind variables to the message content.

Parameters:

Name Type Description Default
name Optional[str]

The variable name to bind.

None
value Optional[str | int | float | bool | list]

The variable value to bind.

None

Returns:

Name Type Description
ChatMessage ChatMessage

A new ChatMessage instance with bound variables.

Source code in python/scouter/stubs.pyi
def bind(
    self,
    name: Optional[str] = None,
    value: Optional[str | int | float | bool | list] = None,
) -> "ChatMessage":
    """Bind variables to the message content.
    Args:
        name (Optional[str]):
            The variable name to bind.
        value (Optional[Union[str, int, float, bool, list]]):
            The variable value to bind.
    Returns:
        ChatMessage: A new ChatMessage instance with bound variables.
    """

bind_mut

bind_mut(
    name: Optional[str] = None,
    value: Optional[str | int | float | bool | list] = None,
) -> None

Bind variables to the message content in place.

Parameters:

Name Type Description Default
name Optional[str]

The variable name to bind.

None
value Optional[str | int | float | bool | list]

The variable value to bind.

None

Source code in python/scouter/stubs.pyi
def bind_mut(
    self,
    name: Optional[str] = None,
    value: Optional[str | int | float | bool | list] = None,
) -> None:
    """Bind variables to the message content in place.
    Args:
        name (Optional[str]):
            The variable name to bind.
        value (Optional[Union[str, int, float, bool, list]]):
            The variable value to bind.
    Returns:
        None
    """

model_dump

model_dump() -> dict

Dump the message to a dictionary.

Source code in python/scouter/stubs.pyi
def model_dump(self) -> dict:
    """Dump the message to a dictionary."""

text

text() -> str

Get the text content of the first part, if available. Returns an empty string if the first part is not text. This is meant for convenience when working with simple text messages.

Source code in python/scouter/stubs.pyi
def text(self) -> str:
    """Get the text content of the first part, if available. Returns
    an empty string if the first part is not text.
    This is meant for convenience when working with simple text messages.
    """

Choice

Choice from OpenAI chat completion response.

This class represents one possible completion from the model, including the message, finish reason, and optional log probabilities.

Examples:

>>> # Accessing choice from response
>>> choice = response.choices[0]
>>> print(f"Message: {choice.message.content}")
>>> print(f"Finish reason: {choice.finish_reason}")
>>>
>>> # Multiple choices (when n > 1)
>>> for i, choice in enumerate(response.choices):
...     print(f"Choice {i}: {choice.message.content}")

finish_reason property

finish_reason: str

Reason for completion finishing.

logprobs property

logprobs: Optional[LogProbs]

Log probability information.

message property

message: ChatCompletionMessage

The completion message.

Citation

Source citation information.

Citation for a piece of generated content with source details.

Examples:

>>> citation = Citation(
...     start_index=10,
...     end_index=50,
...     uri="https://example.com",
...     title="Example Source",
...     license="CC-BY-4.0",
...     publication_date=GoogleDate(year=2024, month=1, day=1)
... )

end_index property

end_index: Optional[int]

End index in content.

license property

license: Optional[str]

Source license.

publication_date property

publication_date: Optional[GoogleDate]

Publication date.

start_index property

start_index: Optional[int]

Start index in content.

title property

title: Optional[str]

Source title.

uri property

uri: Optional[str]

Source URI.

CitationCharLocation

Character-level citation location in response.

Citation with character-level location information.

Examples:

>>> citation = CitationCharLocation(...)
>>> print(citation.cited_text)
>>> print(f"Characters {citation.start_char_index}-{citation.end_char_index}")

cited_text property

cited_text: str

Cited text.

document_index property

document_index: int

Document index.

document_title property

document_title: str

Document title.

end_char_index property

end_char_index: int

End character index.

file_id property

file_id: str

File ID.

start_char_index property

start_char_index: int

Start character index.

type property

type: str

Citation type.

CitationCharLocationParam

CitationCharLocationParam(
    cited_text: str,
    document_index: int,
    document_title: str,
    end_char_index: int,
    start_char_index: int,
)

Citation with character-level location in document.

Specifies a citation reference using character indices within a document.

Examples:

>>> citation = CitationCharLocationParam(
...     cited_text="Example text",
...     document_index=0,
...     document_title="Document Title",
...     end_char_index=100,
...     start_char_index=50
... )

Parameters:

Name Type Description Default
cited_text str

The text being cited

required
document_index int

Index of the document in the input

required
document_title str

Title of the document

required
end_char_index int

Ending character position

required
start_char_index int

Starting character position

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    cited_text: str,
    document_index: int,
    document_title: str,
    end_char_index: int,
    start_char_index: int,
) -> None:
    """Initialize character location citation.

    Args:
        cited_text (str):
            The text being cited
        document_index (int):
            Index of the document in the input
        document_title (str):
            Title of the document
        end_char_index (int):
            Ending character position
        start_char_index (int):
            Starting character position
    """

cited_text property

cited_text: str

The cited text.

document_index property

document_index: int

Document index.

document_title property

document_title: str

Document title.

end_char_index property

end_char_index: int

End character index.

start_char_index property

start_char_index: int

Start character index.

type property

type: str

Citation type (always 'char_location').
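The character indices can be read as a slice into the source document's text. A quick illustration, assuming `start_char_index` is inclusive and `end_char_index` exclusive (the typical half-open convention for such APIs):

```python
document = "Scouter tracks drift across model features and metrics."
start_char_index, end_char_index = 8, 20

# The cited span is the half-open slice [start, end) of the document text.
cited_text = document[start_char_index:end_char_index]
print(cited_text)  # tracks drift
```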

CitationContentBlockLocation

Content block citation location in response.

Citation with content block-level location information.

Examples:

>>> citation = CitationContentBlockLocation(...)
>>> print(f"Blocks {citation.start_block_index}-{citation.end_block_index}")

cited_text property

cited_text: str

Cited text.

document_index property

document_index: int

Document index.

document_title property

document_title: str

Document title.

end_block_index property

end_block_index: int

End block index.

file_id property

file_id: str

File ID.

start_block_index property

start_block_index: int

Start block index.

type property

type: str

Citation type.

CitationContentBlockLocationParam

CitationContentBlockLocationParam(
    cited_text: str,
    document_index: int,
    document_title: str,
    end_block_index: int,
    start_block_index: int,
)

Citation with content block location in document.

Specifies a citation reference using content block indices within a document.

Examples:

>>> citation = CitationContentBlockLocationParam(
...     cited_text="Example text",
...     document_index=0,
...     document_title="Document Title",
...     end_block_index=5,
...     start_block_index=2
... )

Parameters:

Name Type Description Default
cited_text str

The text being cited

required
document_index int

Index of the document in the input

required
document_title str

Title of the document

required
end_block_index int

Ending content block index

required
start_block_index int

Starting content block index

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    cited_text: str,
    document_index: int,
    document_title: str,
    end_block_index: int,
    start_block_index: int,
) -> None:
    """Initialize content block location citation.

    Args:
        cited_text (str):
            The text being cited
        document_index (int):
            Index of the document in the input
        document_title (str):
            Title of the document
        end_block_index (int):
            Ending content block index
        start_block_index (int):
            Starting content block index
    """

cited_text property

cited_text: str

The cited text.

document_index property

document_index: int

Document index.

document_title property

document_title: str

Document title.

end_block_index property

end_block_index: int

End block index.

start_block_index property

start_block_index: int

Start block index.

type property

type: str

Citation type (always 'content_block_location').

CitationMetadata

Collection of citations.

Contains all citations for a piece of content.

Examples:

>>> metadata = CitationMetadata(
...     citations=[Citation(...), Citation(...)]
... )

citations property

citations: Optional[List[Citation]]

List of citations.

CitationPageLocation

Page-level citation location in response.

Citation with page-level location information.

Examples:

>>> citation = CitationPageLocation(...)
>>> print(f"Pages {citation.start_page_number}-{citation.end_page_number}")

cited_text property

cited_text: str

Cited text.

document_index property

document_index: int

Document index.

document_title property

document_title: str

Document title.

end_page_number property

end_page_number: int

End page number.

file_id property

file_id: str

File ID.

start_page_number property

start_page_number: int

Start page number.

type property

type: str

Citation type.

CitationPageLocationParam

CitationPageLocationParam(
    cited_text: str,
    document_index: int,
    document_title: str,
    end_page_number: int,
    start_page_number: int,
)

Citation with page-level location in document.

Specifies a citation reference using page numbers within a document.

Examples:

>>> citation = CitationPageLocationParam(
...     cited_text="Example text",
...     document_index=0,
...     document_title="Document Title",
...     end_page_number=10,
...     start_page_number=5
... )

Parameters:

Name Type Description Default
cited_text str

The text being cited

required
document_index int

Index of the document in the input

required
document_title str

Title of the document

required
end_page_number int

Ending page number

required
start_page_number int

Starting page number

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    cited_text: str,
    document_index: int,
    document_title: str,
    end_page_number: int,
    start_page_number: int,
) -> None:
    """Initialize page location citation.

    Args:
        cited_text (str):
            The text being cited
        document_index (int):
            Index of the document in the input
        document_title (str):
            Title of the document
        end_page_number (int):
            Ending page number
        start_page_number (int):
            Starting page number
    """

cited_text property

cited_text: str

The cited text.

document_index property

document_index: int

Document index.

document_title property

document_title: str

Document title.

end_page_number property

end_page_number: int

End page number.

start_page_number property

start_page_number: int

Start page number.

type property

type: str

Citation type (always 'page_location').

CitationSearchResultLocationParam

CitationSearchResultLocationParam(
    cited_text: str,
    end_block_index: int,
    search_result_index: int,
    source: str,
    start_block_index: int,
    title: str,
)

Citation from search result.

Specifies a citation reference from a search result with block-level location.

Examples:

>>> citation = CitationSearchResultLocationParam(
...     cited_text="Example text",
...     end_block_index=5,
...     search_result_index=0,
...     source="https://example.com",
...     start_block_index=2,
...     title="Search Result"
... )

Parameters:

Name Type Description Default
cited_text str

The text being cited

required
end_block_index int

Ending content block index

required
search_result_index int

Index of the search result

required
source str

Source URL or identifier

required
start_block_index int

Starting content block index

required
title str

Title of the search result

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    cited_text: str,
    end_block_index: int,
    search_result_index: int,
    source: str,
    start_block_index: int,
    title: str,
) -> None:
    """Initialize search result citation.

    Args:
        cited_text (str):
            The text being cited
        end_block_index (int):
            Ending content block index
        search_result_index (int):
            Index of the search result
        source (str):
            Source URL or identifier
        start_block_index (int):
            Starting content block index
        title (str):
            Title of the search result
    """

cited_text property

cited_text: str

The cited text.

end_block_index property

end_block_index: int

End block index.

search_result_index property

search_result_index: int

Search result index.

source property

source: str

Result source.

start_block_index property

start_block_index: int

Start block index.

title property

title: str

Result title.

type property

type: str

Citation type (always 'search_result_location').

CitationWebSearchResultLocationParam

CitationWebSearchResultLocationParam(
    cited_text: str,
    encrypted_index: str,
    title: str,
    url: str,
)

Citation from web search result.

Specifies a citation reference from a web search result.

Examples:

>>> citation = CitationWebSearchResultLocationParam(
...     cited_text="Example text",
...     encrypted_index="abc123",
...     title="Search Result",
...     url="https://example.com"
... )

Parameters:

Name Type Description Default
cited_text str

The text being cited

required
encrypted_index str

Encrypted search result index

required
title str

Title of the search result

required
url str

URL of the search result

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    cited_text: str,
    encrypted_index: str,
    title: str,
    url: str,
) -> None:
    """Initialize web search result citation.

    Args:
        cited_text (str):
            The text being cited
        encrypted_index (str):
            Encrypted search result index
        title (str):
            Title of the search result
        url (str):
            URL of the search result
    """

cited_text property

cited_text: str

The cited text.

encrypted_index property

encrypted_index: str

Encrypted index.

title property

title: str

Result title.

type property

type: str

Citation type (always 'web_search_result_location').

url property

url: str

Result URL.

CitationsConfigParams

Configuration for citations.

Controls whether citations are enabled for document content.

Examples:

>>> config = CitationsConfigParams()
>>> config.enabled = True

enabled property

enabled: Optional[bool]

Whether citations are enabled.

CitationsSearchResultLocation

Search result citation location in response.

Citation from a search result with block-level information.

Examples:

>>> citation = CitationsSearchResultLocation(...)
>>> print(f"{citation.title} from {citation.source}")

cited_text property

cited_text: str

Cited text.

end_block_index property

end_block_index: int

End block index.

search_result_index property

search_result_index: int

Search result index.

source property

source: str

Result source.

start_block_index property

start_block_index: int

Start block index.

title property

title: str

Result title.

type property

type: str

Citation type.

CitationsWebSearchResultLocation

Web search result citation location in response.

Citation from a web search result.

Examples:

>>> citation = CitationsWebSearchResultLocation(...)
>>> print(f"{citation.title}: {citation.url}")

cited_text property

cited_text: str

Cited text.

encrypted_index property

encrypted_index: str

Encrypted index.

title property

title: str

Result title.

type property

type: str

Citation type.

url property

url: str

Result URL.

CodeExecution

CodeExecution()

Code execution tool configuration.

Enables the model to execute generated code.

This type has no configuration fields.

Examples:

>>> code_exec = CodeExecution()
Source code in python/scouter/stubs.pyi
def __init__(self) -> None:
    """Initialize code execution tool."""

CodeExecutionResult

Result of code execution.

Contains the outcome and output from executing code.

Examples:

>>> result = CodeExecutionResult(
...     outcome=Outcome.OutcomeOk,
...     output="4\n"
... )
>>> # Error result
>>> result = CodeExecutionResult(
...     outcome=Outcome.OutcomeFailed,
...     output="NameError: name 'x' is not defined"
... )

outcome property

outcome: Outcome

The execution outcome.

output property

output: Optional[str]

The output.

CommonCrons

cron property

cron: str

Return the cron expression

get_next

get_next() -> str

Return the next cron time

Source code in python/scouter/stubs.pyi
def get_next(self) -> str:
    """Return the next cron time"""

ComparisonOperator

Comparison operators for assertion-based evaluations.

Defines the available comparison operators that can be used to evaluate assertions against expected values in LLM evaluation workflows.

Examples:

>>> operator = ComparisonOperator.GreaterThan
>>> operator = ComparisonOperator.Equal

ApproximatelyEquals instance-attribute

ApproximatelyEquals: ComparisonOperator

Approximately equals within a tolerance

Contains instance-attribute

Contains: ComparisonOperator

Contains substring or element (in)

ContainsAll instance-attribute

ContainsAll: ComparisonOperator

Contains all specified elements

ContainsAny instance-attribute

ContainsAny: ComparisonOperator

Contains any of the specified elements

ContainsNone instance-attribute

ContainsNone: ComparisonOperator

Contains none of the specified elements

ContainsWord instance-attribute

ContainsWord: ComparisonOperator

Contains a specific word

EndsWith instance-attribute

EndsWith: ComparisonOperator

Ends with substring

Equals instance-attribute

Equals: ComparisonOperator

Equality comparison (==)

GreaterThan instance-attribute

GreaterThan: ComparisonOperator

Greater than comparison (>)

GreaterThanOrEqual instance-attribute

GreaterThanOrEqual: ComparisonOperator

Greater than or equal comparison (>=)

HasLengthEqual instance-attribute

HasLengthEqual: ComparisonOperator

Has specified length equal to

HasLengthGreaterThan instance-attribute

HasLengthGreaterThan: ComparisonOperator

Has specified length greater than

HasLengthGreaterThanOrEqual instance-attribute

HasLengthGreaterThanOrEqual: ComparisonOperator

Has specified length greater than or equal to

HasLengthLessThan instance-attribute

HasLengthLessThan: ComparisonOperator

Has specified length less than

HasLengthLessThanOrEqual instance-attribute

HasLengthLessThanOrEqual: ComparisonOperator

Has specified length less than or equal to

HasUniqueItems instance-attribute

HasUniqueItems: ComparisonOperator

Has unique items

InRange instance-attribute

InRange: ComparisonOperator

Is within a specified numeric range

IsAlphabetic instance-attribute

IsAlphabetic: ComparisonOperator

Is alphabetic

IsAlphanumeric instance-attribute

IsAlphanumeric: ComparisonOperator

Is alphanumeric

IsArray instance-attribute

IsArray: ComparisonOperator

Is an array (list) value

IsBoolean instance-attribute

IsBoolean: ComparisonOperator

Is a boolean value

IsEmail instance-attribute

IsEmail: ComparisonOperator

Is a valid email format

IsEmpty instance-attribute

IsEmpty: ComparisonOperator

Is empty

IsIso8601 instance-attribute

IsIso8601: ComparisonOperator

Is a valid ISO 8601 date format

IsJson instance-attribute

IsJson: ComparisonOperator

Is a valid JSON format

IsLowerCase instance-attribute

IsLowerCase: ComparisonOperator

Is lowercase

IsNegative instance-attribute

IsNegative: ComparisonOperator

Is a negative number

IsNotEmpty instance-attribute

IsNotEmpty: ComparisonOperator

Is not empty

IsNull instance-attribute

IsNull: ComparisonOperator

Is null (None) value

IsNumeric instance-attribute

IsNumeric: ComparisonOperator

Is a numeric value

IsObject instance-attribute

IsObject: ComparisonOperator

Is an object (dict) value

IsPositive instance-attribute

IsPositive: ComparisonOperator

Is a positive number

IsString instance-attribute

IsString: ComparisonOperator

Is a string value

IsUpperCase instance-attribute

IsUpperCase: ComparisonOperator

Is uppercase

IsUrl instance-attribute

IsUrl: ComparisonOperator

Is a valid URL format

IsUuid instance-attribute

IsUuid: ComparisonOperator

Is a valid UUID format

IsZero instance-attribute

IsZero: ComparisonOperator

Is zero

LessThan instance-attribute

LessThan: ComparisonOperator

Less than comparison (<)

LessThanOrEqual instance-attribute

LessThanOrEqual: ComparisonOperator

Less than or equal comparison (<=)

Matches instance-attribute

Matches: ComparisonOperator

Matches regular expression pattern

MatchesRegex instance-attribute

MatchesRegex: ComparisonOperator

Matches a regular expression pattern

NotContains instance-attribute

NotContains: ComparisonOperator

Does not contain substring or element (not in)

NotEqual instance-attribute

NotEqual: ComparisonOperator

Inequality comparison (!=)

NotInRange instance-attribute

NotInRange: ComparisonOperator

Is outside a specified numeric range

StartsWith instance-attribute

StartsWith: ComparisonOperator

Starts with substring
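For intuition, several of these operators map directly onto plain-Python checks. The tolerance shown for `ApproximatelyEquals` is an assumed illustration; scouter's actual default tolerance may differ:

```python
value = "Hello, world"

assert value.startswith("Hello")          # StartsWith
assert value.endswith("world")            # EndsWith
assert "world" in value                   # Contains
assert "xyz" not in value                 # NotContains
assert len(value) > 5                     # HasLengthGreaterThan
assert abs(0.95 - 0.949) <= 0.01          # ApproximatelyEquals (tolerance assumed)
assert 0.5 <= 0.7 <= 1.0                  # InRange
```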

ComparisonResults

Results from comparing two GenAIEvalResults evaluations

baseline_workflow_count property

baseline_workflow_count: int

Get the number of workflows in the baseline evaluation

comparison_workflow_count property

comparison_workflow_count: int

Get the number of workflows in the comparison evaluation

has_missing_tasks property

has_missing_tasks: bool

Check if there are any missing tasks between evaluations

improved_workflows property

improved_workflows: int

Get the count of workflows that improved

mean_pass_rate_delta property

mean_pass_rate_delta: float

Get the mean change in pass rate across all workflows

missing_tasks property

missing_tasks: List[MissingTask]

Get all tasks present in only one evaluation

regressed property

regressed: bool

Check if any workflows regressed in the comparison

regressed_workflows property

regressed_workflows: int

Get the count of workflows that regressed

task_status_changes property

task_status_changes: List[TaskComparison]

Get all tasks where pass/fail status changed

total_workflows property

total_workflows: int

Get the total number of workflows compared

unchanged_workflows property

unchanged_workflows: int

Get the count of workflows with no significant change

workflow_comparisons property

workflow_comparisons: List[WorkflowComparison]

Get all workflow-level comparisons
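For intuition, `mean_pass_rate_delta` is the average of the per-workflow pass-rate changes, and the improved/regressed counts follow from the sign of each delta. A conceptual stand-in with hypothetical pass rates, not the actual implementation:

```python
# Hypothetical per-workflow (baseline, comparison) pass rates.
pass_rates = [(0.80, 0.90), (0.70, 0.65), (0.95, 0.95)]

deltas = [comparison - baseline for baseline, comparison in pass_rates]
mean_pass_rate_delta = sum(deltas) / len(deltas)
improved = sum(1 for d in deltas if d > 0)
regressed = sum(1 for d in deltas if d < 0)
print(round(mean_pass_rate_delta, 4), improved, regressed)
```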

as_table

as_table() -> None

Print comparison results as formatted tables to the console.

Displays:

- Workflow-level summary table
- Task status changes table (if any)
- Missing tasks list (if any)

Source code in python/scouter/stubs.pyi
def as_table(self) -> None:
    """Print comparison results as formatted tables to the console.

    Displays:
    - Workflow-level summary table
    - Task status changes table (if any)
    - Missing tasks list (if any)
    """

print_missing_tasks

print_missing_tasks() -> None

Print a formatted list of missing tasks to the console

Source code in python/scouter/stubs.pyi
def print_missing_tasks(self) -> None:
    """Print a formatted list of missing tasks to the console"""

print_status_changes_table

print_status_changes_table() -> None

Print a formatted table of task status changes to the console

Source code in python/scouter/stubs.pyi
def print_status_changes_table(self) -> None:
    """Print a formatted table of task status changes to the console"""

print_summary_stats

print_summary_stats() -> None

Print summary statistics of the comparison results to the console

Source code in python/scouter/stubs.pyi
def print_summary_stats(self) -> None:
    """Print summary statistics of the comparison results to the console"""

print_summary_table

print_summary_table() -> None

Print a formatted summary table of workflow comparisons to the console

Source code in python/scouter/stubs.pyi
def print_summary_table(self) -> None:
    """Print a formatted summary table of workflow comparisons to the console"""

print_task_aggregate_table

print_task_aggregate_table() -> None

Print a formatted table of aggregated task results to the console

Source code in python/scouter/stubs.pyi
def print_task_aggregate_table(self) -> None:
    """Print a formatted table of aggregated task results to the console"""

CompletionTokenDetails

Detailed token usage for completion output.

This class provides granular information about tokens used in the completion, including reasoning tokens and audio tokens.

Examples:

>>> # Accessing token details
>>> usage = response.usage
>>> details = usage.completion_tokens_details
>>> print(f"Reasoning tokens: {details.reasoning_tokens}")
>>> print(f"Audio tokens: {details.audio_tokens}")

accepted_prediction_tokens property

accepted_prediction_tokens: int

Number of accepted prediction tokens.

audio_tokens property

audio_tokens: int

Number of audio tokens.

reasoning_tokens property

reasoning_tokens: int

Number of reasoning tokens.

rejected_prediction_tokens property

rejected_prediction_tokens: int

Number of rejected prediction tokens.

ComputerUse

ComputerUse(
    environment: ComputerUseEnvironment,
    excluded_predefined_functions: List[str],
)

Computer use tool configuration.

Enables the model to interact with computer interfaces.

Examples:

>>> computer_use = ComputerUse(
...     environment=ComputerUseEnvironment.EnvironmentBrowser,
...     excluded_predefined_functions=["take_screenshot"]
... )

Parameters:

Name Type Description Default
environment ComputerUseEnvironment

Operating environment

required
excluded_predefined_functions List[str]

Functions to exclude from auto-inclusion

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    environment: ComputerUseEnvironment,
    excluded_predefined_functions: List[str],
) -> None:
    """Initialize computer use configuration.

    Args:
        environment (ComputerUseEnvironment):
            Operating environment
        excluded_predefined_functions (List[str]):
            Functions to exclude from auto-inclusion
    """

environment property

environment: ComputerUseEnvironment

The operating environment.

excluded_predefined_functions property

excluded_predefined_functions: List[str]

Excluded functions.

ComputerUseEnvironment

Environment for computer use capabilities.

Specifies the environment in which the model operates when using computer control features.

Examples:

>>> env = ComputerUseEnvironment.EnvironmentBrowser
>>> env.value
'ENVIRONMENT_BROWSER'

EnvironmentBrowser class-attribute instance-attribute

EnvironmentBrowser = 'ENVIRONMENT_BROWSER'

Web browser environment

EnvironmentUnspecified class-attribute instance-attribute

EnvironmentUnspecified = 'ENVIRONMENT_UNSPECIFIED'

Unspecified environment

ConsoleDispatchConfig

ConsoleDispatchConfig()
Source code in python/scouter/stubs.pyi
def __init__(self):
    """Initialize alert config"""

enabled property

enabled: bool

Whether console dispatching is enabled

Content

Content(
    text: Optional[str] = None,
    parts: Optional[List[PredictionContentPart]] = None,
)

Content for predicted outputs, supporting text or structured parts.

This class represents the content of a predicted output, which can be either a simple text string or an array of structured content parts.

Examples:

>>> # Text content
>>> content = Content(text="Predicted response")
>>>
>>> # Structured content
>>> parts = [PredictionContentPart(type="text", text="Part 1")]
>>> content = Content(parts=parts)

Parameters:

Name Type Description Default
text Optional[str]

Simple text content (mutually exclusive with parts)

None
parts Optional[List[PredictionContentPart]]

Structured content parts (mutually exclusive with text)

None

Raises:

Type Description
TypeError

If both text and parts are provided or neither is provided

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    text: Optional[str] = None,
    parts: Optional[List[PredictionContentPart]] = None,
) -> None:
    """Initialize content for predictions.

    Args:
        text (Optional[str]):
            Simple text content (mutually exclusive with parts)
        parts (Optional[List[PredictionContentPart]]):
            Structured content parts (mutually exclusive with text)

    Raises:
        TypeError: If both text and parts are provided or neither is provided
    """

ContentEmbedding

Content embedding result.

Contains the embedding vector values.

Examples:

>>> embedding = ContentEmbedding(
...     values=[0.1, 0.2, 0.3, ...]
... )

values property

values: List[float]

Embedding vector values.
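Embedding vectors like `values` are typically compared with cosine similarity; a minimal sketch:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([0.1, 0.2, 0.3], [0.1, 0.2, 0.3]))  # ≈ 1.0 (same direction)
```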

CustomChoice

CustomChoice(name: str)

Specification for a custom tool to call.

This class identifies a custom tool by name for tool calling.

Examples:

>>> custom = CustomChoice(name="custom_tool")
>>> custom.name
'custom_tool'

Parameters:

Name Type Description Default
name str

Name of the custom tool to call

required
Source code in python/scouter/stubs.pyi
def __init__(self, name: str) -> None:
    """Initialize custom choice.

    Args:
        name (str):
            Name of the custom tool to call
    """

name property

name: str

The custom tool name.

CustomDefinition

CustomDefinition(
    name: str,
    description: Optional[str] = None,
    format: Optional[CustomToolFormat] = None,
)

Definition of a custom tool for OpenAI chat completions.

This class defines a custom tool with optional format constraints.

Examples:

>>> # Simple custom tool
>>> custom = CustomDefinition(
...     name="analyzer",
...     description="Analyze data"
... )
>>>
>>> # With format constraints
>>> format = CustomToolFormat(type="text")
>>> custom = CustomDefinition(
...     name="analyzer",
...     description="Analyze data",
...     format=format
... )

Parameters:

Name Type Description Default
name str

Name of the custom tool

required
description Optional[str]

Description of what the tool does

None
format Optional[CustomToolFormat]

Output format constraints

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    name: str,
    description: Optional[str] = None,
    format: Optional[CustomToolFormat] = None,
) -> None:
    """Initialize custom tool definition.

    Args:
        name (str):
            Name of the custom tool
        description (Optional[str]):
            Description of what the tool does
        format (Optional[CustomToolFormat]):
            Output format constraints
    """

description property

description: Optional[str]

The tool description.

format property

format: Optional[CustomToolFormat]

The output format constraints.

name property

name: str

The tool name.

CustomDriftProfile

CustomDriftProfile(
    config: CustomMetricDriftConfig,
    metrics: list[CustomMetric],
)

Parameters:

Name Type Description Default
config CustomMetricDriftConfig

The configuration for custom metric drift detection.

required
metrics list[CustomMetric]

A list of CustomMetric objects representing the metrics to be monitored.

required
Example

config = CustomMetricDriftConfig(...)
metrics = [CustomMetric("accuracy", 0.95), CustomMetric("f1_score", 0.88)]
profile = CustomDriftProfile(config, metrics)

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    config: CustomMetricDriftConfig,
    metrics: list[CustomMetric],
):
    """Initialize a CustomDriftProfile instance.

    Args:
        config (CustomMetricDriftConfig):
            The configuration for custom metric drift detection.
        metrics (list[CustomMetric]):
            A list of CustomMetric objects representing the metrics to be monitored.

    Example:
        config = CustomMetricDriftConfig(...)
        metrics = [CustomMetric("accuracy", 0.95), CustomMetric("f1_score", 0.88)]
        profile = CustomDriftProfile(config, metrics)
    """

config property

config: CustomMetricDriftConfig

Return the drift config

custom_metrics property

custom_metrics: list[CustomMetric]

Return custom metric objects that were used to create the drift profile

metrics property

metrics: dict[str, float]

Return custom metrics and their corresponding values

scouter_version property

scouter_version: str

Return scouter version used to create DriftProfile

uid property

uid: str

Return the unique identifier for the drift profile

from_file staticmethod

from_file(path: Path) -> CustomDriftProfile

Load drift profile from file

Parameters:

Name Type Description Default
path Path

Path to the file

required
Source code in python/scouter/stubs.pyi
@staticmethod
def from_file(path: Path) -> "CustomDriftProfile":
    """Load drift profile from file

    Args:
        path: Path to the file
    """

model_dump

model_dump() -> Dict[str, Any]

Return dictionary representation of drift profile

Source code in python/scouter/stubs.pyi
def model_dump(self) -> Dict[str, Any]:
    """Return dictionary representation of drift profile"""

model_dump_json

model_dump_json() -> str

Return json representation of drift profile

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return json representation of drift profile"""

model_validate staticmethod

model_validate(data: Dict[str, Any]) -> CustomDriftProfile

Load drift profile from dictionary

Parameters:

Name Type Description Default
data Dict[str, Any]

DriftProfile dictionary

required
Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate(data: Dict[str, Any]) -> "CustomDriftProfile":
    """Load drift profile from dictionary

    Args:
        data:
            DriftProfile dictionary
    """

model_validate_json staticmethod

model_validate_json(json_string: str) -> CustomDriftProfile

Load drift profile from json

Parameters:

Name Type Description Default
json_string str

JSON string representation of the drift profile

required
Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate_json(json_string: str) -> "CustomDriftProfile":
    """Load drift profile from json

    Args:
        json_string:
            JSON string representation of the drift profile

    """

save_to_json

save_to_json(path: Optional[Path] = None) -> Path

Save drift profile to json file

Parameters:

Name Type Description Default
path Optional[Path]

Optional path to save the drift profile. If None, outputs to custom_drift_profile.json

None

Returns:

Type Description
Path

Path to the saved json file

Source code in python/scouter/stubs.pyi
def save_to_json(self, path: Optional[Path] = None) -> Path:
    """Save drift profile to json file

    Args:
        path:
            Optional path to save the drift profile. If None, outputs to `custom_drift_profile.json`

    Returns:
        Path to the saved json file
    """

update_config_args

update_config_args(
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    alert_config: Optional[CustomMetricAlertConfig] = None,
) -> None

Inplace operation that updates config args

Parameters:

Name Type Description Default
space Optional[str]

Model space

None
name Optional[str]

Model name

None
version Optional[str]

Model version

None
alert_config Optional[CustomMetricAlertConfig]

Custom metric alert configuration

None

Returns:

Type Description
None

None

Source code in python/scouter/stubs.pyi
def update_config_args(
    self,
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    alert_config: Optional[CustomMetricAlertConfig] = None,
) -> None:
    """Inplace operation that updates config args

    Args:
        space (Optional[str]):
            Model space
        name (Optional[str]):
            Model name
        version (Optional[str]):
            Model version
        alert_config (Optional[CustomMetricAlertConfig]):
            Custom metric alert configuration

    Returns:
        None
    """

CustomMetric

CustomMetric(
    name: str,
    baseline_value: float,
    alert_threshold: AlertThreshold,
    delta: Optional[float] = None,
)

This class represents a custom metric that uses comparison-based alerting. It applies an alert condition to a single metric value.

Parameters:

Name Type Description Default
name str

The name of the metric being monitored. This should be a descriptive identifier for the metric.

required
baseline_value float

The baseline value of the metric.

required
alert_threshold AlertThreshold

The condition used to determine when an alert should be triggered.

required
delta Optional[float]

The delta value used in conjunction with the alert_threshold. If supplied, this value will be added or subtracted from the provided metric value to determine if an alert should be triggered.

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    name: str,
    baseline_value: float,
    alert_threshold: AlertThreshold,
    delta: Optional[float] = None,
):
    """
    Initialize a custom metric for alerting.

    This class represents a custom metric that uses comparison-based alerting. It applies
    an alert condition to a single metric value.

    Args:
        name (str):
            The name of the metric being monitored. This should be a descriptive identifier for the metric.
        baseline_value (float):
            The baseline value of the metric.
        alert_threshold (AlertThreshold):
            The condition used to determine when an alert should be triggered.
        delta (Optional[float]):
            The delta value used in conjunction with the alert_threshold.
            If supplied, this value will be added or subtracted from the provided metric value to
            determine if an alert should be triggered.

    """

alert_condition property writable

alert_condition: AlertCondition

Return the alert_condition

alert_threshold property

alert_threshold: AlertThreshold

Return the alert_threshold

baseline_value property writable

baseline_value: float

Return the baseline value

delta property

delta: Optional[float]

Return the delta value

name property writable

name: str

Return the metric name
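
The delta semantics described for CustomMetric can be pictured with a small standalone sketch. This is an illustrative assumption, not scouter's implementation: `should_alert`, the `"above"`/`"below"` string stand-ins for AlertThreshold, and the evaluation logic are all hypothetical.

```python
# Illustrative sketch (NOT scouter's implementation) of how a
# comparison-based alert with an optional delta could be evaluated:
# the delta widens the baseline before comparing the observed value.

from typing import Optional


def should_alert(
    observed: float,
    baseline: float,
    threshold: str,  # hypothetical stand-in for AlertThreshold: "above" or "below"
    delta: Optional[float] = None,
) -> bool:
    """Return True when the observed value crosses the (delta-adjusted) baseline."""
    margin = delta or 0.0
    if threshold == "above":
        # Alert when the metric rises past baseline + delta.
        return observed > baseline + margin
    if threshold == "below":
        # Alert when the metric falls past baseline - delta.
        return observed < baseline - margin
    raise ValueError(f"unknown threshold: {threshold!r}")


# A baseline accuracy of 0.95 with delta 0.02 only alerts once
# accuracy drops below 0.93.
print(should_alert(0.94, 0.95, "below", delta=0.02))  # False
print(should_alert(0.92, 0.95, "below", delta=0.02))  # True
```

In other words, the delta adds slack around the baseline so small fluctuations do not trigger alerts.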

CustomMetricAlertConfig

CustomMetricAlertConfig(
    dispatch_config: Optional[
        SlackDispatchConfig | OpsGenieDispatchConfig
    ] = None,
    schedule: Optional[str | CommonCrons] = None,
)

Parameters:

Name Type Description Default
dispatch_config Optional[SlackDispatchConfig | OpsGenieDispatchConfig]

Alert dispatch config. Defaults to console

None
schedule Optional[str | CommonCrons]

Schedule to run monitor. Defaults to daily at midnight

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    dispatch_config: Optional[SlackDispatchConfig | OpsGenieDispatchConfig] = None,
    schedule: Optional[str | CommonCrons] = None,
):
    """Initialize alert config

    Args:
        dispatch_config:
            Alert dispatch config. Defaults to console
        schedule:
            Schedule to run monitor. Defaults to daily at midnight

    """

alert_conditions property writable

alert_conditions: dict[str, AlertCondition]

Return the alert conditions that were set during metric definition

dispatch_config property

dispatch_config: DispatchConfigType

Return the dispatch config

dispatch_type property

dispatch_type: AlertDispatchType

Return the alert dispatch type

schedule property writable

schedule: str

Return the schedule

CustomMetricDriftConfig

CustomMetricDriftConfig(
    space: str = "__missing__",
    name: str = "__missing__",
    version: str = "0.1.0",
    sample_size: int = 25,
    alert_config: CustomMetricAlertConfig = CustomMetricAlertConfig(),
)

Parameters:

Name Type Description Default
space str

Model space

'__missing__'
name str

Model name

'__missing__'
version str

Model version. Defaults to 0.1.0

'0.1.0'
sample_size int

Sample size

25
alert_config CustomMetricAlertConfig

Custom metric alert configuration

CustomMetricAlertConfig()
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    space: str = "__missing__",
    name: str = "__missing__",
    version: str = "0.1.0",
    sample_size: int = 25,
    alert_config: CustomMetricAlertConfig = CustomMetricAlertConfig(),
):
    """Initialize drift config
    Args:
        space:
            Model space
        name:
            Model name
        version:
            Model version. Defaults to 0.1.0
        sample_size:
            Sample size
        alert_config:
            Custom metric alert configuration
    """

alert_config property writable

alert_config: CustomMetricAlertConfig

Return the alert config

drift_type property

drift_type: DriftType

Drift type

name property writable

name: str

Model Name

space property writable

space: str

Model space

uid property writable

uid: str

Unique identifier for the drift config

version property writable

version: str

Model version

load_from_json_file staticmethod

load_from_json_file(path: Path) -> CustomMetricDriftConfig

Load config from json file

Parameters:

Name Type Description Default
path Path

Path to json file to load config from.

required

Source code in python/scouter/stubs.pyi
@staticmethod
def load_from_json_file(path: Path) -> "CustomMetricDriftConfig":
    """Load config from json file
    Args:
        path:
            Path to json file to load config from.
    """

model_dump_json

model_dump_json() -> str

Return the json representation of the config.

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return the json representation of the config."""

update_config_args

update_config_args(
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    alert_config: Optional[CustomMetricAlertConfig] = None,
) -> None

Inplace operation that updates config args

Parameters:

Name Type Description Default
space Optional[str]

Model space

None
name Optional[str]

Model name

None
version Optional[str]

Model version

None
alert_config Optional[CustomMetricAlertConfig]

Custom metric alert configuration

None

Source code in python/scouter/stubs.pyi
def update_config_args(
    self,
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    alert_config: Optional[CustomMetricAlertConfig] = None,
) -> None:
    """Inplace operation that updates config args
    Args:
        space:
            Model space
        name:
            Model name
        version:
            Model version
        alert_config:
            Custom metric alert configuration
    """

CustomMetricRecord

CustomMetricRecord(uid: str, metric: str, value: float)

Parameters:

Name Type Description Default
uid str

Unique identifier for the metric record. Must correspond to an existing entity in Scouter.

required
metric str

Metric name

required
value float

Metric value

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    uid: str,
    metric: str,
    value: float,
):
    """Initialize spc drift server record

    Args:
        uid:
            Unique identifier for the metric record.
            Must correspond to an existing entity in Scouter.
        metric:
            Metric name
        value:
            Metric value
    """

created_at property

created_at: datetime

Return the created at timestamp.

metric property

metric: str

Return the metric name.

uid property

uid: str

Returns the unique identifier.

value property

value: float

Return the metric value.

model_dump_json

model_dump_json() -> str

Return the json representation of the record.

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return the json representation of the record."""

to_dict

to_dict() -> Dict[str, str]

Return the dictionary representation of the record.

Source code in python/scouter/stubs.pyi
def to_dict(self) -> Dict[str, str]:
    """Return the dictionary representation of the record."""

CustomTool

CustomTool(custom: CustomDefinition, type: str)

Custom tool for OpenAI chat completions.

This class wraps a custom tool definition to create a callable tool for the model.

Examples:

>>> custom = CustomDefinition(name="analyzer")
>>> tool = CustomTool(custom=custom, type="custom")
>>> tool.type
'custom'

Parameters:

Name Type Description Default
custom CustomDefinition

The custom tool definition

required
type str

Tool type (typically "custom")

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    custom: CustomDefinition,
    type: str,
) -> None:
    """Initialize custom tool.

    Args:
        custom (CustomDefinition):
            The custom tool definition
        type (str):
            Tool type (typically "custom")
    """

custom property

custom: CustomDefinition

The custom tool definition.

type property

type: str

The tool type.

CustomToolChoice

CustomToolChoice(custom: CustomChoice)

Tool choice configuration for a custom tool.

This class specifies that the model should call a specific custom tool.

Examples:

>>> custom = CustomChoice(name="custom_tool")
>>> tool_choice = CustomToolChoice(custom=custom)
>>> tool_choice.type
'custom'

Parameters:

Name Type Description Default
custom CustomChoice

The custom tool to call

required
Source code in python/scouter/stubs.pyi
def __init__(self, custom: CustomChoice) -> None:
    """Initialize custom tool choice.

    Args:
        custom (CustomChoice):
            The custom tool to call
    """

custom property

custom: CustomChoice

The custom tool specification.

type property

type: str

The tool type (always 'custom').

CustomToolFormat

CustomToolFormat(
    type: Optional[str] = None,
    grammar: Optional[Grammar] = None,
)

Format specification for custom tool outputs.

This class supports either free-form text or grammar-constrained output formats for custom tools.

Examples:

>>> # Text format
>>> format = CustomToolFormat(type="text")
>>>
>>> # Grammar format
>>> grammar = Grammar(definition="...", syntax="lark")
>>> format = CustomToolFormat(grammar=grammar)

Parameters:

Name Type Description Default
type Optional[str]

Format type for text output

None
grammar Optional[Grammar]

Grammar definition for structured output

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    type: Optional[str] = None,
    grammar: Optional[Grammar] = None,
) -> None:
    """Initialize custom tool format.

    Args:
        type (Optional[str]):
            Format type for text output
        grammar (Optional[Grammar]):
            Grammar definition for structured output
    """

DataProfile

Data profile of features

features property

features: Dict[str, FeatureProfile]

Returns dictionary of features and their data profiles

model_dump_json

model_dump_json() -> str

Return json representation of data profile

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return json representation of data profile"""

model_validate_json staticmethod

model_validate_json(json_string: str) -> DataProfile

Load Data profile from json

Parameters:

Name Type Description Default
json_string str

JSON string representation of the data profile

required
Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate_json(json_string: str) -> "DataProfile":
    """Load Data profile from json

    Args:
        json_string:
            JSON string representation of the data profile
    """

save_to_json

save_to_json(path: Optional[Path] = None) -> Path

Save data profile to json file

Parameters:

Name Type Description Default
path Optional[Path]

Optional path to save the data profile. If None, outputs to data_profile.json

None

Returns:

Type Description
Path

Path to the saved data profile

Source code in python/scouter/stubs.pyi
def save_to_json(self, path: Optional[Path] = None) -> Path:
    """Save data profile to json file

    Args:
        path:
            Optional path to save the data profile. If None, outputs to `data_profile.json`

    Returns:
        Path to the saved data profile

    """

DataProfiler

DataProfiler()
Source code in python/scouter/stubs.pyi
def __init__(self):
    """Instantiate DataProfiler class that is
    used to profile data"""

TraceMetricsRequest

TraceMetricsRequest(
    service_name: str,
    start_time: datetime,
    end_time: datetime,
    bucket_interval: str,
)

Request to get trace metrics from the Scouter server.

Parameters:

Name Type Description Default
service_name str

The name of the service to query metrics for.

required
start_time datetime

The start time for the metrics query.

required
end_time datetime

The end time for the metrics query.

required
bucket_interval str

Interval for aggregating metrics (e.g., "1m", "5m").

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    service_name: str,
    start_time: datetime.datetime,
    end_time: datetime.datetime,
    bucket_interval: str,
):
    """
    Initialize a TraceMetricsRequest.

    Args:
        service_name (str):
            The name of the service to query metrics for.
        start_time (datetime):
            The start time for the metrics query.
        end_time (datetime):
            The end time for the metrics query.
        bucket_interval (str):
            Interval for aggregating metrics (e.g., "1m", "5m").
    """

create_data_profile

create_data_profile(
    data: Any,
    data_type: Optional[ScouterDataType] = None,
    bin_size: int = 20,
    compute_correlations: bool = False,
) -> DataProfile

Create a data profile from data.

Parameters:

Name Type Description Default
data Any

Data to create a data profile from. Data can be a numpy array, a polars dataframe or pandas dataframe.

Data is expected to not contain any missing values, NaNs or infinities

These types are incompatible with computing quantiles, histograms, and correlations. These values must be removed or imputed.

required
data_type Optional[ScouterDataType]

Optional data type. Inferred from data if not provided.

None
bin_size int

Optional bin size for histograms. Defaults to 20 bins.

20
compute_correlations bool

Whether to compute correlations or not.

False

Returns:

Type Description
DataProfile

DataProfile

Source code in python/scouter/stubs.pyi
def create_data_profile(
    self,
    data: Any,
    data_type: Optional[ScouterDataType] = None,
    bin_size: int = 20,
    compute_correlations: bool = False,
) -> DataProfile:
    """Create a data profile from data.

    Args:
        data:
            Data to create a data profile from. Data can be a numpy array,
            a polars dataframe or pandas dataframe.

            **Data is expected to not contain any missing values, NaNs or infinities**

            These types are incompatible with computing
            quantiles, histograms, and correlations. These values must be removed or imputed.

        data_type:
            Optional data type. Inferred from data if not provided.
        bin_size:
            Optional bin size for histograms. Defaults to 20 bins.
        compute_correlations:
            Whether to compute correlations or not.

    Returns:
        DataProfile
    """

DataStoreSpec

DataStoreSpec(
    data_store: str, filter: Optional[str] = None
)

Specification for a Vertex AI Search datastore.

Defines a datastore to search with optional filtering.

Examples:

>>> spec = DataStoreSpec(
...     data_store="projects/my-project/locations/us/collections/default/dataStores/my-store",
...     filter="category:electronics"
... )

Parameters:

Name Type Description Default
data_store str

Full resource name of the datastore

required
filter Optional[str]

Optional filter expression

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    data_store: str,
    filter: Optional[str] = None,
) -> None:
    """Initialize datastore specification.

    Args:
        data_store (str):
            Full resource name of the datastore
        filter (Optional[str]):
            Optional filter expression
    """

data_store property

data_store: str

The datastore resource name.

filter property

filter: Optional[str]

The filter expression.

Distinct

count property

count: int

Total number of unique values

percent property

percent: float

Percent of values that are unique
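
The two properties above amount to a simple computation; a pure-Python sketch of the idea (illustrative only, not scouter's implementation):

```python
# Illustrative computation behind Distinct.count and Distinct.percent:
# count is the number of unique values, percent is uniqueness relative
# to the total number of observations.

values = ["a", "b", "a", "c", "b", "a"]

count = len(set(values))               # unique values
percent = 100.0 * count / len(values)  # percent uniqueness

print(count, percent)  # 3 50.0
```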

Doane

Doane()

For more information, please see: https://en.wikipedia.org/wiki/Histogram

Source code in python/scouter/stubs.pyi
def __init__(self):
    """Use the Doane equal-width method.

    For more information, please see: https://en.wikipedia.org/wiki/Histogram
    """

DocumentBlockParam

DocumentBlockParam(
    source: Any,
    cache_control: Optional[CacheControl] = None,
    title: Optional[str] = None,
    context: Optional[str] = None,
    citations: Optional[CitationsConfigParams] = None,
)

Document content block parameter.

Document content with source, optional cache control, title, context, and citations.

Examples:

>>> # PDF document
>>> source = Base64PDFSource(data="...")
>>> block = DocumentBlockParam(
...     source=source,
...     title="Document Title",
...     context="Additional context",
...     citations=CitationsConfigParams()
... )

Parameters:

Name Type Description Default
source Any

Document source (Base64PDFSource, UrlPDFSource, or PlainTextSource)

required
cache_control Optional[CacheControl]

Cache control settings

None
title Optional[str]

Document title

None
context Optional[str]

Additional context about the document

None
citations Optional[CitationsConfigParams]

Citations configuration

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    source: Any,
    cache_control: Optional["CacheControl"] = None,
    title: Optional[str] = None,
    context: Optional[str] = None,
    citations: Optional[CitationsConfigParams] = None,
) -> None:
    """Initialize document block parameter.

    Args:
        source (Any):
            Document source (Base64PDFSource, UrlPDFSource, or PlainTextSource)
        cache_control (Optional[CacheControl]):
            Cache control settings
        title (Optional[str]):
            Document title
        context (Optional[str]):
            Additional context about the document
        citations (Optional[CitationsConfigParams]):
            Citations configuration
    """

cache_control property

cache_control: Optional[CacheControl]

Cache control settings.

citations property

citations: Optional[CitationsConfigParams]

Citations configuration.

context property

context: Optional[str]

Document context.

title property

title: Optional[str]

Document title.

type property

type: str

Content type (always 'document').

DriftAlertPaginationRequest

DriftAlertPaginationRequest(
    uid: str,
    active: bool = False,
    limit: Optional[int] = None,
    cursor_created_at: Optional[datetime] = None,
    cursor_id: Optional[int] = None,
    direction: Optional[
        Literal["next", "previous"]
    ] = "previous",
    start_datetime: Optional[datetime] = None,
    end_datetime: Optional[datetime] = None,
)

Parameters:

Name Type Description Default
uid str

Unique identifier tied to drift profile

required
active bool

Whether to get active alerts only

False
limit Optional[int]

Limit for number of alerts to return

None
cursor_created_at Optional[datetime]

Pagination cursor: created at timestamp

None
cursor_id Optional[int]

Pagination cursor: alert ID

None
direction Optional[Literal['next', 'previous']]

Pagination direction: "next" or "previous"

'previous'
start_datetime Optional[datetime]

Optional start datetime for alert filtering

None
end_datetime Optional[datetime]

Optional end datetime for alert filtering

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    uid: str,
    active: bool = False,
    limit: Optional[int] = None,
    cursor_created_at: Optional[datetime.datetime] = None,
    cursor_id: Optional[int] = None,
    direction: Optional[Literal["next", "previous"]] = "previous",
    start_datetime: Optional[datetime.datetime] = None,
    end_datetime: Optional[datetime.datetime] = None,
) -> None:
    """Initialize drift alert request. Used for paginated alert retrieval.

    Args:
        uid:
            Unique identifier tied to drift profile
        active:
            Whether to get active alerts only
        limit:
            Limit for number of alerts to return
        cursor_created_at:
            Pagination cursor: created at timestamp
        cursor_id:
            Pagination cursor: alert ID
        direction:
            Pagination direction: "next" or "previous"
        start_datetime:
            Optional start datetime for alert filtering
        end_datetime:
            Optional end datetime for alert filtering
    """

DriftRequest

DriftRequest(
    uid: str,
    space: str,
    time_interval: TimeInterval,
    max_data_points: int,
    start_datetime: Optional[datetime] = None,
    end_datetime: Optional[datetime] = None,
)

Parameters:

Name Type Description Default
uid str

Unique identifier tied to drift profile

required
space str

Space associated with drift profile

required
time_interval TimeInterval

Time window for drift request

required
max_data_points int

Maximum data points to return

required
start_datetime Optional[datetime]

Optional start datetime for drift request

None
end_datetime Optional[datetime]

Optional end datetime for drift request

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    uid: str,
    space: str,
    time_interval: TimeInterval,
    max_data_points: int,
    start_datetime: Optional[datetime.datetime] = None,
    end_datetime: Optional[datetime.datetime] = None,
) -> None:
    """Initialize drift request

    Args:
        uid:
            Unique identifier tied to drift profile
        space:
            Space associated with drift profile
        time_interval:
            Time window for drift request
        max_data_points:
            Maximum data points to return
        start_datetime:
            Optional start datetime for drift request
        end_datetime:
            Optional end datetime for drift request
    """

Drifter

Drifter()
Source code in python/scouter/stubs.pyi
def __init__(self) -> None:
    """Instantiate Rust Drifter class that is
    used to create monitoring profiles and compute drift.
    """

compute_drift

compute_drift(
    data: Any,
    drift_profile: SpcDriftProfile,
    data_type: Optional[ScouterDataType] = None,
) -> SpcDriftMap
compute_drift(
    data: Any,
    drift_profile: PsiDriftProfile,
    data_type: Optional[ScouterDataType] = None,
) -> PsiDriftMap
compute_drift(
    data: List[GenAIEvalRecord],
    drift_profile: GenAIEvalProfile,
    data_type: Optional[ScouterDataType] = None,
) -> GenAIEvalResultSet
compute_drift(
    data: Any,
    drift_profile: Union[
        SpcDriftProfile, PsiDriftProfile, GenAIEvalProfile
    ],
    data_type: Optional[ScouterDataType] = None,
) -> Union[SpcDriftMap, PsiDriftMap, GenAIEvalResultSet]

Create a drift map from data.

Parameters:

Name Type Description Default
data Any

Data to create a data profile from. Data can be a numpy array, a polars dataframe or a pandas dataframe.

required
drift_profile Union[SpcDriftProfile, PsiDriftProfile, GenAIEvalProfile]

Drift profile to use to compute drift map

required
data_type Optional[ScouterDataType]

Optional data type. Inferred from data if not provided.

None

Returns:

Type Description
Union[SpcDriftMap, PsiDriftMap, GenAIEvalResultSet]

SpcDriftMap, PsiDriftMap or GenAIEvalResultSet

Source code in python/scouter/stubs.pyi
def compute_drift(  # type: ignore
    self,
    data: Any,
    drift_profile: Union[SpcDriftProfile, PsiDriftProfile, GenAIEvalProfile],
    data_type: Optional[ScouterDataType] = None,
) -> Union[SpcDriftMap, PsiDriftMap, GenAIEvalResultSet]:
    """Create a drift map from data.

    Args:
        data:
            Data to create a data profile from. Data can be a numpy array,
            a polars dataframe or a pandas dataframe.
        drift_profile:
            Drift profile to use to compute drift map
        data_type:
            Optional data type. Inferred from data if not provided.

    Returns:
        SpcDriftMap, PsiDriftMap or GenAIEvalResultSet
    """

create_drift_profile

create_drift_profile(
    data: Any,
    config: SpcDriftConfig,
    data_type: Optional[ScouterDataType] = None,
) -> SpcDriftProfile
create_drift_profile(
    data: Any, data_type: Optional[ScouterDataType] = None
) -> SpcDriftProfile
create_drift_profile(
    data: Any,
    config: PsiDriftConfig,
    data_type: Optional[ScouterDataType] = None,
) -> PsiDriftProfile
create_drift_profile(
    data: Union[CustomMetric, List[CustomMetric]],
    config: CustomMetricDriftConfig,
    data_type: Optional[ScouterDataType] = None,
) -> CustomDriftProfile
create_drift_profile(
    data: Any,
    config: Optional[
        Union[
            SpcDriftConfig,
            PsiDriftConfig,
            CustomMetricDriftConfig,
        ]
    ] = None,
    data_type: Optional[ScouterDataType] = None,
) -> Union[
    SpcDriftProfile, PsiDriftProfile, CustomDriftProfile
]

Create a drift profile from data.

Parameters:

Name Type Description Default
data Any

Data to create a data profile from. Data can be a numpy array, a polars dataframe, pandas dataframe or a list of CustomMetric if creating a custom metric profile.

Data is expected to not contain any missing values, NaNs or infinities

required
config Optional[Union[SpcDriftConfig, PsiDriftConfig, CustomMetricDriftConfig]]

Drift config that will be used for monitoring

None
data_type Optional[ScouterDataType]

Optional data type. Inferred from data if not provided.

None

Returns:

Type Description
Union[SpcDriftProfile, PsiDriftProfile, CustomDriftProfile]

SpcDriftProfile, PsiDriftProfile or CustomDriftProfile

Source code in python/scouter/stubs.pyi
def create_drift_profile(  # type: ignore
    self,
    data: Any,
    config: Optional[Union[SpcDriftConfig, PsiDriftConfig, CustomMetricDriftConfig]] = None,
    data_type: Optional[ScouterDataType] = None,
) -> Union[SpcDriftProfile, PsiDriftProfile, CustomDriftProfile]:
    """Create a drift profile from data.

    Args:
        data:
            Data to create a data profile from. Data can be a numpy array,
            a polars dataframe, pandas dataframe or a list of CustomMetric if creating
            a custom metric profile.

            **Data is expected to not contain any missing values, NaNs or infinities**

        config:
            Drift config that will be used for monitoring
        data_type:
            Optional data type. Inferred from data if not provided.

    Returns:
        SpcDriftProfile, PsiDriftProfile or CustomDriftProfile
    """

create_genai_drift_profile

create_genai_drift_profile(
    config: GenAIEvalConfig,
    tasks: Sequence[LLMJudgeTask | AssertionTask],
) -> GenAIEvalProfile

Initialize a GenAIEvalProfile for LLM evaluation and drift detection.

LLM evaluations are run asynchronously on the scouter server.

Overview

GenAI evaluations are defined using assertion tasks and LLM judge tasks. Assertion tasks evaluate specific metrics based on model responses, and do not require the use of an LLM judge or extra call. It is recommended to use assertion tasks whenever possible to reduce cost and latency. LLM judge tasks leverage an additional LLM call to evaluate model responses based on more complex criteria. Together, these tasks provide a flexible framework for monitoring LLM performance and detecting drift over time.

Parameters:

Name Type Description Default
config GenAIEvalConfig

The configuration for the GenAI drift profile containing space, name, version, and alert settings.

required
tasks List[LLMJudgeTask | AssertionTask]

List of evaluation tasks to include in the profile. Can contain both AssertionTask and LLMJudgeTask instances. At least one task (assertion or LLM judge) is required.

required

Returns:

Name Type Description
GenAIEvalProfile GenAIEvalProfile

Configured profile ready for GenAI drift monitoring.

Raises:

Type Description
ProfileError

If workflow validation fails, metrics are empty when no workflow is provided, or if workflow tasks don't match metric names.

Examples:

Basic usage with metrics only:

>>> config = GenAIEvalConfig("my_space", "my_model", "1.0")
>>> tasks = [
...     LLMJudgeTask(
...         id="response_relevance",
...         prompt=relevance_prompt,
...         expected_value=7,
...         field_path="score",
...         operator=ComparisonOperator.GreaterThanOrEqual,
...         description="Ensure relevance score >= 7"
...     )
... ]
>>> profile = Drifter().create_genai_drift_profile(config, tasks)
Source code in python/scouter/stubs.pyi
def create_genai_drift_profile(
    self, config: GenAIEvalConfig, tasks: Sequence[LLMJudgeTask | AssertionTask]
) -> GenAIEvalProfile:
    """Initialize a GenAIEvalProfile for LLM evaluation and drift detection.

    LLM evaluations are run asynchronously on the scouter server.

    Overview:
        GenAI evaluations are defined using assertion tasks and LLM judge tasks.
        Assertion tasks evaluate specific metrics based on model responses, and do not require
        the use of an LLM judge or extra call. It is recommended to use assertion tasks whenever possible
        to reduce cost and latency. LLM judge tasks leverage an additional LLM call to evaluate
        model responses based on more complex criteria. Together, these tasks provide a flexible framework
        for monitoring LLM performance and detecting drift over time.


    Args:
        config (GenAIEvalConfig):
            The configuration for the GenAI drift profile containing space, name,
            version, and alert settings.
        tasks (List[LLMJudgeTask | AssertionTask]):
            List of evaluation tasks to include in the profile. Can contain
            both AssertionTask and LLMJudgeTask instances. At least one task
            (assertion or LLM judge) is required.

    Returns:
        GenAIEvalProfile: Configured profile ready for GenAI drift monitoring.

    Raises:
        ProfileError: If task validation fails or the task list is empty.

    Examples:
        Basic usage with an LLM judge task:

        >>> config = GenAIEvalConfig("my_space", "my_model", "1.0")
        >>> tasks = [
        ...     LLMJudgeTask(
        ...         id="response_relevance",
        ...         prompt=relevance_prompt,
        ...         expected_value=7,
        ...         field_path="score",
        ...         operator=ComparisonOperator.GreaterThanOrEqual,
        ...         description="Ensure relevance score >= 7"
        ...     )
        ... ]
        >>> profile = Drifter().create_genai_drift_profile(config, tasks)

    """

DynamicRetrievalConfig

DynamicRetrievalConfig(
    mode: Optional[DynamicRetrievalMode] = None,
    dynamic_threshold: Optional[float] = None,
)

Configuration for dynamic retrieval behavior.

Controls when and how retrieval is triggered.

Examples:

>>> config = DynamicRetrievalConfig(
...     mode=DynamicRetrievalMode.ModeDynamic,
...     dynamic_threshold=0.5
... )

Parameters:

Name Type Description Default
mode Optional[DynamicRetrievalMode]

Retrieval mode

None
dynamic_threshold Optional[float]

Threshold for dynamic retrieval

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    mode: Optional[DynamicRetrievalMode] = None,
    dynamic_threshold: Optional[float] = None,
) -> None:
    """Initialize dynamic retrieval configuration.

    Args:
        mode (Optional[DynamicRetrievalMode]):
            Retrieval mode
        dynamic_threshold (Optional[float]):
            Threshold for dynamic retrieval
    """

dynamic_threshold property

dynamic_threshold: Optional[float]

The dynamic threshold.

mode property

mode: Optional[DynamicRetrievalMode]

The retrieval mode.

DynamicRetrievalMode

Mode for dynamic retrieval behavior.

Controls when the model triggers retrieval operations.

Examples:

>>> mode = DynamicRetrievalMode.ModeDynamic
>>> mode.value
'MODE_DYNAMIC'

ModeDynamic class-attribute instance-attribute

ModeDynamic = 'MODE_DYNAMIC'

Trigger retrieval only when necessary

ModeUnspecified class-attribute instance-attribute

ModeUnspecified = 'MODE_UNSPECIFIED'

Unspecified mode (always trigger)

ElasticSearchParams

ElasticSearchParams(
    index: str,
    search_template: str,
    num_hits: Optional[int] = None,
)

Parameters for Elasticsearch API.

Configures Elasticsearch index and search template.

Examples:

>>> params = ElasticSearchParams(
...     index="my-index",
...     search_template="my-template",
...     num_hits=10
... )

Parameters:

Name Type Description Default
index str

Elasticsearch index name

required
search_template str

Search template name

required
num_hits Optional[int]

Number of hits to request

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    index: str,
    search_template: str,
    num_hits: Optional[int] = None,
) -> None:
    """Initialize Elasticsearch parameters.

    Args:
        index (str):
            Elasticsearch index name
        search_template (str):
            Search template name
        num_hits (Optional[int]):
            Number of hits to request
    """

index property

index: str

The Elasticsearch index.

num_hits property

num_hits: Optional[int]

Number of hits.

search_template property

search_template: str

The search template.

Embedder

Embedder(
    provider: Provider | str,
    config: Optional[
        OpenAIEmbeddingConfig | GeminiEmbeddingConfig
    ] = None,
)

Class for creating embeddings.

Parameters:

Name Type Description Default
provider Provider | str

The provider to use for the embedder. This can be a Provider enum or a string representing the provider.

required
config Optional[OpenAIEmbeddingConfig | GeminiEmbeddingConfig]

The configuration to use for the embedder. This can be a Pydantic BaseModel class representing the configuration for the provider. If no config is provided, defaults to OpenAI provider configuration.

None
Source code in python/scouter/stubs.pyi
def __init__(  # type: ignore
    self,
    provider: Provider | str,
    config: Optional[OpenAIEmbeddingConfig | GeminiEmbeddingConfig] = None,
) -> None:
    """Create an Embedder object.

    Args:
        provider (Provider | str):
            The provider to use for the embedder. This can be a Provider enum or a string
            representing the provider.
        config (Optional[OpenAIEmbeddingConfig | GeminiEmbeddingConfig]):
            The configuration to use for the embedder. This can be a Pydantic BaseModel class
            representing the configuration for the provider. If no config is provided,
            defaults to OpenAI provider configuration.
    """

embed

embed(
    input: str | List[str] | PredictRequest,
) -> (
    OpenAIEmbeddingResponse
    | GeminiEmbeddingResponse
    | PredictResponse
)

Create embeddings for input.

Parameters:

Name Type Description Default
input str | List[str] | PredictRequest

The input to embed. Type depends on provider: OpenAI and Gemini accept str | List[str]; Vertex accepts PredictRequest.

required

Returns:

Type Description
OpenAIEmbeddingResponse | GeminiEmbeddingResponse | PredictResponse

Provider-specific response type: OpenAIEmbeddingResponse for OpenAI, GeminiEmbeddingResponse for Gemini, PredictResponse for Vertex.

Examples:

## OpenAI
embedder = Embedder(Provider.OpenAI)
response = embedder.embed(input="Test input")

## Gemini
embedder = Embedder(Provider.Gemini, config=GeminiEmbeddingConfig(model="gemini-embedding-001"))
response = embedder.embed(input="Test input")

## Vertex
from potato_head.google import PredictRequest
embedder = Embedder(Provider.Vertex)
response = embedder.embed(input=PredictRequest(text="Test input"))
Source code in python/scouter/stubs.pyi
def embed(
    self,
    input: str | List[str] | PredictRequest,
) -> OpenAIEmbeddingResponse | GeminiEmbeddingResponse | PredictResponse:
    """Create embeddings for input.

    Args:
        input: The input to embed. Type depends on provider:
            - OpenAI/Gemini: str | List[str]
            - Vertex: PredictRequest

    Returns:
        Provider-specific response type.
        OpenAIEmbeddingResponse for OpenAI,
        GeminiEmbeddingResponse for Gemini,
        PredictResponse for Vertex.

    Examples:
        ```python
        ## OpenAI
        embedder = Embedder(Provider.OpenAI)
        response = embedder.embed(input="Test input")

        ## Gemini
        embedder = Embedder(Provider.Gemini, config=GeminiEmbeddingConfig(model="gemini-embedding-001"))
        response = embedder.embed(input="Test input")

        ## Vertex
        from potato_head.google import PredictRequest
        embedder = Embedder(Provider.Vertex)
        response = embedder.embed(input=PredictRequest(text="Test input"))
        ```
    """

EmbeddingObject

Single embedding from OpenAI embedding response.

This class represents one embedding vector from the response.

Examples:

>>> # Accessing embeddings
>>> for embedding in response.data:
...     print(f"Index: {embedding.index}")
...     print(f"Dimensions: {len(embedding.embedding)}")

embedding property

embedding: List[float]

The embedding vector.

index property

index: int

Index in the input list.

object property

object: str

Object type (always 'embedding').

EmbeddingTaskType

Task type for embedding generation.

Specifies the intended use case for embeddings, which may affect how they are computed.

Examples:

>>> task = EmbeddingTaskType.RetrievalDocument
>>> task.value
'RETRIEVAL_DOCUMENT'

Classification class-attribute instance-attribute

Classification = 'CLASSIFICATION'

Classification tasks

Clustering class-attribute instance-attribute

Clustering = 'CLUSTERING'

Clustering tasks

RetrievalDocument class-attribute instance-attribute

RetrievalDocument = 'RETRIEVAL_DOCUMENT'

Document for retrieval tasks

RetrievalQuery class-attribute instance-attribute

RetrievalQuery = 'RETRIEVAL_QUERY'

Query for retrieval tasks

SemanticSimilarity class-attribute instance-attribute

SemanticSimilarity = 'SEMANTIC_SIMILARITY'

Semantic similarity comparison

TaskTypeUnspecified class-attribute instance-attribute

TaskTypeUnspecified = 'TASK_TYPE_UNSPECIFIED'

Unspecified task type

EnterpriseWebSearch

EnterpriseWebSearch(
    exclude_domains: Optional[List[str]] = None,
    blocking_confidence: Optional[
        PhishBlockThreshold
    ] = None,
)

Enterprise web search tool configuration.

Configures enterprise-grade web search with compliance features.

Examples:

>>> search = EnterpriseWebSearch(
...     exclude_domains=["example.com"],
...     blocking_confidence=PhishBlockThreshold.BlockHighAndAbove
... )

Parameters:

Name Type Description Default
exclude_domains Optional[List[str]]

Domains to exclude from results

None
blocking_confidence Optional[PhishBlockThreshold]

Phishing blocking threshold

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    exclude_domains: Optional[List[str]] = None,
    blocking_confidence: Optional[PhishBlockThreshold] = None,
) -> None:
    """Initialize enterprise web search configuration.

    Args:
        exclude_domains (Optional[List[str]]):
            Domains to exclude from results
        blocking_confidence (Optional[PhishBlockThreshold]):
            Phishing blocking threshold
    """

blocking_confidence property

blocking_confidence: Optional[PhishBlockThreshold]

Phishing blocking threshold.

exclude_domains property

exclude_domains: Optional[List[str]]

Domains to exclude.

EqualWidthBinning

EqualWidthBinning(method: EqualWidthMethods = Doane())

This strategy divides the range of values into bins of equal width. Several binning rules are supported to automatically determine the appropriate number of bins based on the input distribution.

Note

Detailed explanations of each method are provided in their respective constructors or documentation.

Parameters:

Name Type Description Default
method EqualWidthMethods

Specifies how the number of bins should be determined. Options include Manual(num_bins), which explicitly sets the number of bins, and the SquareRoot, Sturges, Rice, Doane, Scott, TerrellScott, and FreedmanDiaconis rules, which infer bin counts from data. Defaults to Doane().

Doane()
Source code in python/scouter/stubs.pyi
def __init__(self, method: EqualWidthMethods = Doane()):
    """Initialize the equal-width binning configuration.

    This strategy divides the range of values into bins of equal width.
    Several binning rules are supported to automatically determine the
    appropriate number of bins based on the input distribution.

    Note:
        Detailed explanations of each method are provided in their respective
        constructors or documentation.

    Args:
        method:
            Specifies how the number of bins should be determined.
            Options include:
              - Manual(num_bins): Explicitly sets the number of bins.
              - SquareRoot, Sturges, Rice, Doane, Scott, TerrellScott,
                FreedmanDiaconis: Rules that infer bin counts from data.
            Defaults to Doane().
    """

method property writable

method: EqualWidthMethods

Specifies how the number of bins should be determined.
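The automatic rules differ only in how they map sample size to a bin count. A rough pure-Python sketch of three of the simpler rules, using the standard textbook formulas rather than scouter's own implementation:

```python
import math

def sturges(n: int) -> int:
    # Sturges' rule: bins = ceil(log2(n)) + 1
    return math.ceil(math.log2(n)) + 1

def square_root(n: int) -> int:
    # Square-root rule: bins = ceil(sqrt(n))
    return math.ceil(math.sqrt(n))

def rice(n: int) -> int:
    # Rice rule: bins = ceil(2 * n^(1/3))
    return math.ceil(2 * n ** (1 / 3))

print(sturges(1024))      # 11
print(square_root(100))   # 10
print(rice(1000))         # 20
```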

EvaluationConfig

EvaluationConfig(
    embedder: Optional[Embedder] = None,
    embedding_targets: Optional[List[str]] = None,
    compute_similarity: bool = False,
    compute_histograms: bool = False,
)

Configuration options for LLM evaluation.

Parameters:

Name Type Description Default
embedder Optional[Embedder]

Optional Embedder instance to use for generating embeddings for similarity-based metrics. If not provided, no embeddings will be generated.

None
embedding_targets Optional[List[str]]

Optional list of context keys to generate embeddings for. If not provided, embeddings will be generated for all string fields in the record context.

None
compute_similarity bool

Whether to compute similarity between embeddings. Default is False.

False
compute_histograms bool

Whether to compute histograms for all calculated features (metrics, embeddings, similarities). Default is False.

False
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    embedder: Optional[Embedder] = None,
    embedding_targets: Optional[List[str]] = None,
    compute_similarity: bool = False,
    compute_histograms: bool = False,
):
    """
    Initialize the EvaluationConfig with optional parameters.

    Args:
        embedder (Optional[Embedder]):
            Optional Embedder instance to use for generating embeddings for similarity-based metrics.
            If not provided, no embeddings will be generated.
        embedding_targets (Optional[List[str]]):
            Optional list of context keys to generate embeddings for. If not provided, embeddings will
            be generated for all string fields in the record context.
        compute_similarity (bool):
            Whether to compute similarity between embeddings. Default is False.
        compute_histograms (bool):
            Whether to compute histograms for all calculated features (metrics, embeddings, similarities).
            Default is False.
    """

EvaluationTaskType

Types of evaluation tasks for LLM assessments.

Assertion instance-attribute

Assertion: EvaluationTaskType

Assertion-based evaluation task.

HumanValidation instance-attribute

HumanValidation: EvaluationTaskType

Human validation evaluation task.

LLMJudge instance-attribute

LLMJudge: EvaluationTaskType

LLM judge-based evaluation task.

EventDetails

duration property

duration: Optional[timedelta]

The duration of the task execution.

end_time property

end_time: Optional[datetime]

The end time of the task execution.

error property

error: Optional[str]

The error message if the task failed, otherwise None.

prompt property

prompt: Optional[Prompt]

The prompt used for the task.

response property

response: Optional[Any]

The response from the agent after executing the task.

start_time property

start_time: Optional[datetime]

The start time of the task execution.

ExecutableCode

Executable code generated by the model.

Contains code that can be executed to perform computations.

code property

code: str

The code.

language property

language: Language

The programming language.

ExternalApi

ExternalApi(
    api_spec: ApiSpecType,
    endpoint: str,
    auth_config: Optional[AuthConfig] = None,
    simple_search_params: Optional[
        SimpleSearchParams
    ] = None,
    elastic_search_params: Optional[
        ElasticSearchParams
    ] = None,
)

External API retrieval configuration.

Configures retrieval from external APIs.

Examples:

>>> api = ExternalApi(
...     api_spec=ApiSpecType.ElasticSearch,
...     endpoint="https://my-es-cluster.com",
...     auth_config=AuthConfig(...),
...     elastic_search_params=ElasticSearchParams(...)
... )

Parameters:

Name Type Description Default
api_spec ApiSpecType

API specification type

required
endpoint str

API endpoint URL

required
auth_config Optional[AuthConfig]

Authentication configuration

None
simple_search_params Optional[SimpleSearchParams]

Simple search parameters

None
elastic_search_params Optional[ElasticSearchParams]

Elasticsearch parameters

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    api_spec: ApiSpecType,
    endpoint: str,
    auth_config: Optional[AuthConfig] = None,
    simple_search_params: Optional[SimpleSearchParams] = None,
    elastic_search_params: Optional[ElasticSearchParams] = None,
) -> None:
    """Initialize external API configuration.

    Args:
        api_spec (ApiSpecType):
            API specification type
        endpoint (str):
            API endpoint URL
        auth_config (Optional[AuthConfig]):
            Authentication configuration
        simple_search_params (Optional[SimpleSearchParams]):
            Simple search parameters
        elastic_search_params (Optional[ElasticSearchParams]):
            Elasticsearch parameters
    """

api_spec property

api_spec: ApiSpecType

The API specification type.

auth_config property

auth_config: Optional[AuthConfig]

The authentication configuration.

endpoint property

endpoint: str

The API endpoint.

params property

params: Optional[
    Union[SimpleSearchParams, ElasticSearchParams]
]

The API parameters.

FeatureDrift

drift property

drift: List[float]

Return list of drift values

samples property

samples: List[float]

Return list of samples

FeatureMap

features property

features: Dict[str, Dict[str, int]]

Return the feature map.

FeatureProfile

correlations property

correlations: Optional[Dict[str, float]]

Feature correlation values

id property

id: str

Return the id.

numeric_stats property

numeric_stats: Optional[NumericStats]

Return the numeric stats.

string_stats property

string_stats: Optional[StringStats]

Return the string stats.

timestamp property

timestamp: str

Return the timestamp.

Features

Features(
    features: (
        List[QueueFeature]
        | Dict[str, Union[int, float, str]]
    )
)

Parameters:

Name Type Description Default
features List[QueueFeature] | Dict[str, Union[int, float, str]]

List of features or a dictionary of key-value pairs. If a list, each item must be an instance of Feature. If a dictionary, each key is the feature name and each value is the feature value. Supported types for values are int, float, and string.

required
Example
# Passing a list of features
features = Features(
    features=[
        Feature.int("feature_1", 1),
        Feature.float("feature_2", 2.0),
        Feature.string("feature_3", "value"),
    ]
)

# Passing a dictionary (pydantic model) of features
class MyFeatures(BaseModel):
    feature1: int
    feature2: float
    feature3: str

my_features = MyFeatures(
    feature1=1,
    feature2=2.0,
    feature3="value",
)

features = Features(my_features.model_dump())
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    features: List[QueueFeature] | Dict[str, Union[int, float, str]],
) -> None:
    """Initialize a features class

    Args:
        features:
            List of features or a dictionary of key-value pairs.
            If a list, each item must be an instance of Feature.
            If a dictionary, each key is the feature name and each value is the feature value.
            Supported types for values are int, float, and string.

    Example:
        ```python
        # Passing a list of features
        features = Features(
            features=[
                Feature.int("feature_1", 1),
                Feature.float("feature_2", 2.0),
                Feature.string("feature_3", "value"),
            ]
        )

        # Passing a dictionary (pydantic model) of features
        class MyFeatures(BaseModel):
            feature1: int
            feature2: float
            feature3: str

        my_features = MyFeatures(
            feature1=1,
            feature2=2.0,
            feature3="value",
        )

        features = Features(my_features.model_dump())
        ```
    """

entity_type property

entity_type: EntityType

Return the entity type

features property

features: List[QueueFeature]

Return the list of features

File

File(
    file_data: Optional[str] = None,
    file_id: Optional[str] = None,
    filename: Optional[str] = None,
)

File reference for OpenAI chat completion messages.

This class represents a file that can be included in a message, either by providing file data directly or referencing a file by ID.

Examples:

>>> # File by ID
>>> file = File(file_id="file-abc123", filename="document.pdf")
>>>
>>> # File with data
>>> file = File(
...     file_data="base64_encoded_data",
...     filename="document.pdf"
... )

Parameters:

Name Type Description Default
file_data Optional[str]

Base64 encoded file data

None
file_id Optional[str]

OpenAI file ID

None
filename Optional[str]

File name

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    file_data: Optional[str] = None,
    file_id: Optional[str] = None,
    filename: Optional[str] = None,
) -> None:
    """Initialize file reference.

    Args:
        file_data (Optional[str]):
            Base64 encoded file data
        file_id (Optional[str]):
            OpenAI file ID
        filename (Optional[str]):
            File name
    """

file_data property

file_data: Optional[str]

The base64 encoded file data.

file_id property

file_id: Optional[str]

The OpenAI file ID.

filename property

filename: Optional[str]

The file name.
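`file_data` carries base64-encoded bytes. A stdlib-only sketch of producing such a string from raw bytes; whether a bare base64 string or a full data URL is expected here is not stated above, so treat the exact format as an assumption:

```python
import base64

# Encode raw bytes into the base64 string form used by `file_data`.
raw = b"%PDF-1.4 example bytes"
file_data = base64.b64encode(raw).decode("ascii")

# Round-trip check: decoding recovers the original bytes.
assert base64.b64decode(file_data) == raw
print(file_data[:8])
```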

FileContentPart

FileContentPart(
    file_data: Optional[str] = None,
    file_id: Optional[str] = None,
    filename: Optional[str] = None,
)

File content part for OpenAI chat messages.

This class represents a file as part of a message's content.

Examples:

>>> file_part = FileContentPart(
...     file_id="file-abc123",
...     filename="document.pdf"
... )
>>> file_part.type
'file'

Parameters:

Name Type Description Default
file_data Optional[str]

Base64 encoded file data

None
file_id Optional[str]

OpenAI file ID

None
filename Optional[str]

File name

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    file_data: Optional[str] = None,
    file_id: Optional[str] = None,
    filename: Optional[str] = None,
) -> None:
    """Initialize file content part.

    Args:
        file_data (Optional[str]):
            Base64 encoded file data
        file_id (Optional[str]):
            OpenAI file ID
        filename (Optional[str]):
            File name
    """

file property

file: File

The file reference.

type property

type: str

The content part type (always 'file').

FileData

FileData(
    mime_type: str,
    file_uri: str,
    display_name: Optional[str] = None,
)

URI-based media data reference.

References media stored in Google Cloud Storage or other URIs.

Examples:

>>> file_data = FileData(
...     mime_type="image/png",
...     file_uri="gs://my-bucket/image.png",
...     display_name="Example Image"
... )

Parameters:

Name Type Description Default
mime_type str

IANA MIME type

required
file_uri str

URI to the file (e.g., gs:// URL)

required
display_name Optional[str]

Optional display name

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    mime_type: str,
    file_uri: str,
    display_name: Optional[str] = None,
) -> None:
    """Initialize file data reference.

    Args:
        mime_type (str):
            IANA MIME type
        file_uri (str):
            URI to the file (e.g., gs:// URL)
        display_name (Optional[str]):
            Optional display name
    """

display_name property

display_name: Optional[str]

The display name.

file_uri property

file_uri: str

The file URI.

mime_type property

mime_type: str

The MIME type.

FileSearch

FileSearch(
    file_search_store_names: List[str],
    metadata_filter: str,
    top_k: int,
)

File search tool configuration.

Enables searching in file stores.

Examples:

>>> file_search = FileSearch(
...     file_search_store_names=["my-store"],
...     metadata_filter="category='docs'",
...     top_k=5
... )

Parameters:

Name Type Description Default
file_search_store_names List[str]

File store names to search

required
metadata_filter str

Metadata filter expression

required
top_k int

Number of top results

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    file_search_store_names: List[str],
    metadata_filter: str,
    top_k: int,
) -> None:
    """Initialize file search configuration.

    Args:
        file_search_store_names (List[str]):
            File store names to search
        metadata_filter (str):
            Metadata filter expression
        top_k (int):
            Number of top results
    """

file_search_store_names property

file_search_store_names: List[str]

File store names.

metadata_filter property

metadata_filter: str

Metadata filter.

top_k property

top_k: int

Number of top results.

Filter

Filter(
    metadata_filter: Optional[str] = None,
    vector_distance_threshold: Optional[float] = None,
    vector_similarity_threshold: Optional[float] = None,
)

Filtering configuration for RAG retrieval.

Configures metadata and vector-based filtering.

Examples:

>>> # Metadata filtering
>>> filter = Filter(
...     metadata_filter="category = 'technical'",
...     vector_similarity_threshold=0.7
... )

Parameters:

Name Type Description Default
metadata_filter Optional[str]

Metadata filter expression

None
vector_distance_threshold Optional[float]

Maximum vector distance

None
vector_similarity_threshold Optional[float]

Minimum vector similarity

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    metadata_filter: Optional[str] = None,
    vector_distance_threshold: Optional[float] = None,
    vector_similarity_threshold: Optional[float] = None,
) -> None:
    """Initialize filter configuration.

    Args:
        metadata_filter (Optional[str]):
            Metadata filter expression
        vector_distance_threshold (Optional[float]):
            Maximum vector distance
        vector_similarity_threshold (Optional[float]):
            Minimum vector similarity
    """

metadata_filter property

metadata_filter: Optional[str]

The metadata filter expression.

vector_distance_threshold property

vector_distance_threshold: Optional[float]

Maximum vector distance threshold.

vector_similarity_threshold property

vector_similarity_threshold: Optional[float]

Minimum vector similarity threshold.
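Conceptually, `vector_similarity_threshold` prunes retrieved hits that score below the minimum. An illustrative pure-Python sketch; the hit structure here is made up, not scouter's:

```python
# Hypothetical retrieved hits with similarity scores.
hits = [
    {"doc": "a", "similarity": 0.91},
    {"doc": "b", "similarity": 0.62},
    {"doc": "c", "similarity": 0.75},
]

# Keep only hits at or above the configured minimum similarity.
vector_similarity_threshold = 0.7
kept = [h for h in hits if h["similarity"] >= vector_similarity_threshold]
print([h["doc"] for h in kept])  # ['a', 'c']
```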

FinishReason

Reason why generation stopped.

Indicates why the model stopped generating tokens.

Examples:

>>> reason = FinishReason.Stop
>>> reason.value
'STOP'

Blocklist class-attribute instance-attribute

Blocklist = 'BLOCKLIST'

Stopped due to blocklist match

FinishReasonUnspecified class-attribute instance-attribute

FinishReasonUnspecified = 'FINISH_REASON_UNSPECIFIED'

Unspecified reason

ImageOther class-attribute instance-attribute

ImageOther = 'IMAGE_OTHER'

Image generation stopped for other reasons

ImageProhibitedContent class-attribute instance-attribute

ImageProhibitedContent = 'IMAGE_PROHIBITED_CONTENT'

Generated image contains prohibited content

ImageRecitation class-attribute instance-attribute

ImageRecitation = 'IMAGE_RECITATION'

Generated image may be recitation

ImageSafety class-attribute instance-attribute

ImageSafety = 'IMAGE_SAFETY'

Generated image violates safety policies

MalformedFunctionCall class-attribute instance-attribute

MalformedFunctionCall = 'MALFORMED_FUNCTION_CALL'

Stopped due to malformed function call

MaxTokens class-attribute instance-attribute

MaxTokens = 'MAX_TOKENS'

Maximum token limit reached

ModelArmor class-attribute instance-attribute

ModelArmor = 'MODEL_ARMOR'

Stopped by Model Armor

NoImage class-attribute instance-attribute

NoImage = 'NO_IMAGE'

Expected image but none generated

Other class-attribute instance-attribute

Other = 'OTHER'

Stopped for other reasons

ProhibitedContent class-attribute instance-attribute

ProhibitedContent = 'PROHIBITED_CONTENT'

Stopped due to prohibited content

Recitation class-attribute instance-attribute

Recitation = 'RECITATION'

Stopped due to potential recitation

Safety class-attribute instance-attribute

Safety = 'SAFETY'

Stopped due to safety concerns

Spii class-attribute instance-attribute

Spii = 'SPII'

Stopped due to sensitive personally identifiable information

Stop class-attribute instance-attribute

Stop = 'STOP'

Natural stopping point or stop sequence reached

UnexpectedToolCall class-attribute instance-attribute

UnexpectedToolCall = 'UNEXPECTED_TOOL_CALL'

Unexpected tool call generated

FreedmanDiaconis

FreedmanDiaconis()

Use the Freedman–Diaconis equal-width binning rule. For more information, please see: https://en.wikipedia.org/wiki/Histogram

Source code in python/scouter/stubs.pyi
def __init__(self):
    """Use the Freedman–Diaconis equal-width method.

    For more information, please see: https://en.wikipedia.org/wiki/Histogram
    """

Function

Function call information from OpenAI tool calls.

This class represents a function call made by the model, including the function name and JSON-formatted arguments.

Examples:

>>> func = Function(
...     name="get_weather",
...     arguments='{"location": "San Francisco"}'
... )
>>> func.name
'get_weather'

arguments property

arguments: str

The JSON-formatted function arguments.

name property

name: str

The function name.
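Since `arguments` arrives as a JSON string, it must be parsed before use. A minimal sketch using a stand-in dataclass with the same two fields, because real Function instances come from model responses:

```python
import json
from dataclasses import dataclass

@dataclass
class Function:  # stand-in mirroring the documented name/arguments fields
    name: str
    arguments: str

# Parse the JSON-formatted arguments into a dictionary.
func = Function(name="get_weather", arguments='{"location": "San Francisco"}')
args = json.loads(func.arguments)
print(args["location"])  # San Francisco
```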

FunctionCall

FunctionCall(
    name: str,
    id: Optional[str] = None,
    args: Optional[Dict[str, Any]] = None,
    will_continue: Optional[bool] = None,
    partial_args: Optional[List[PartialArgs]] = None,
)

Function call request from the model.

Represents a function that the model wants to call, including the function name and arguments.

Examples:

>>> call = FunctionCall(
...     name="get_weather",
...     args={"location": "San Francisco", "units": "celsius"},
...     id="call_123"
... )

Parameters:

Name Type Description Default
name str

Function name to call

required
id Optional[str]

Unique call identifier

None
args Optional[Dict[str, Any]]

Function arguments as dictionary

None
will_continue Optional[bool]

Whether more parts of the call will follow

None
partial_args Optional[List[PartialArgs]]

Incrementally streamed arguments

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    name: str,
    id: Optional[str] = None,
    args: Optional[Dict[str, Any]] = None,
    will_continue: Optional[bool] = None,
    partial_args: Optional[List[PartialArgs]] = None,
) -> None:
    """Initialize function call.

    Args:
        name (str):
            Function name to call
        id (Optional[str]):
            Unique call identifier
        args (Optional[Dict[str, Any]]):
            Function arguments as dictionary
        will_continue (Optional[bool]):
            Whether more parts of the call will follow
        partial_args (Optional[List[PartialArgs]]):
            Incrementally streamed arguments
    """

args property

args: Optional[Dict[str, Any]]

The function arguments.

id property

id: Optional[str]

The call identifier.

name property

name: str

The function name.

partial_args property

partial_args: Optional[List[PartialArgs]]

Partial arguments.

will_continue property

will_continue: Optional[bool]

Whether more parts follow.

FunctionCallingConfig

FunctionCallingConfig(
    mode: Optional[Mode] = None,
    allowed_function_names: Optional[List[str]] = None,
)

Configuration for function calling behavior.

Controls how the model handles function calls, including whether functions are required and which functions are allowed.

Examples:

>>> # Auto mode - model decides
>>> config = FunctionCallingConfig(mode=Mode.Auto)
>>> # Require specific functions
>>> config = FunctionCallingConfig(
...     mode=Mode.Any,
...     allowed_function_names=["get_weather", "search_web"]
... )
>>> # Disable function calling
>>> config = FunctionCallingConfig(mode=Mode.None_Mode)

Parameters:

Name Type Description Default
mode Optional[Mode]

Function calling mode

None
allowed_function_names Optional[List[str]]

List of allowed function names (for ANY mode)

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    mode: Optional[Mode] = None,
    allowed_function_names: Optional[List[str]] = None,
) -> None:
    """Initialize function calling configuration.

    Args:
        mode (Optional[Mode]):
            Function calling mode
        allowed_function_names (Optional[List[str]]):
            List of allowed function names (for ANY mode)
    """

allowed_function_names property

allowed_function_names: Optional[List[str]]

Allowed function names.

mode property

mode: Optional[Mode]

The function calling mode.

FunctionChoice

FunctionChoice(name: str)

Specification for a specific function to call.

This class identifies a specific function by name for tool calling.

Examples:

>>> function = FunctionChoice(name="get_weather")
>>> function.name
'get_weather'

Parameters:

Name Type Description Default
name str

Name of the function to call

required
Source code in python/scouter/stubs.pyi
def __init__(self, name: str) -> None:
    """Initialize function choice.

    Args:
        name (str):
            Name of the function to call
    """

name property

name: str

The function name.

FunctionDeclaration

Function declaration for tool use.

Defines a function that the model can call, including its name, description, parameters, and return type.

Examples:

>>> func = FunctionDeclaration(
...     name="get_weather",
...     description="Get current weather for a location",
...     parameters=Schema(
...         type=SchemaType.Object,
...         properties={
...             "location": Schema(type=SchemaType.String),
...             "units": Schema(
...                 type=SchemaType.String,
...                 enum_=["celsius", "fahrenheit"]
...             )
...         },
...         required=["location"]
...     )
... )

behavior property

behavior: Optional[Behavior]

Execution behavior (blocking/non-blocking).

description property

description: str

The function description.

name property

name: str

The function name.

parameters property

parameters: Optional[Schema]

Parameter schema.

parameters_json_schema property

parameters_json_schema: Optional[Any]

Parameters as raw JSON schema.

response property

response: Optional[Schema]

Response schema.

response_json_schema property

response_json_schema: Optional[Any]

Response as raw JSON schema.

FunctionDefinition

FunctionDefinition(
    name: str,
    description: Optional[str] = None,
    parameters: Optional[Any] = None,
    strict: Optional[bool] = None,
)

Definition of a function tool for OpenAI chat completions.

This class defines a function that can be called by the model, including its name, description, parameters schema, and strict mode setting.

Examples:

>>> # Simple function
>>> func = FunctionDefinition(
...     name="get_weather",
...     description="Get weather for a location"
... )
>>>
>>> # With parameters
>>> params = {
...     "type": "object",
...     "properties": {
...         "location": {"type": "string"}
...     },
...     "required": ["location"]
... }
>>> func = FunctionDefinition(
...     name="get_weather",
...     description="Get weather",
...     parameters=params,
...     strict=True
... )

Parameters:

Name Type Description Default
name str

Name of the function

required
description Optional[str]

Description of what the function does

None
parameters Optional[Any]

JSON schema for function parameters

None
strict Optional[bool]

Whether to use strict schema validation

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    name: str,
    description: Optional[str] = None,
    parameters: Optional[Any] = None,
    strict: Optional[bool] = None,
) -> None:
    """Initialize function definition.

    Args:
        name (str):
            Name of the function
        description (Optional[str]):
            Description of what the function does
        parameters (Optional[Any]):
            JSON schema for function parameters
        strict (Optional[bool]):
            Whether to use strict schema validation
    """

description property

description: Optional[str]

The function description.

name property

name: str

The function name.

strict property

strict: Optional[bool]

Whether strict schema validation is enabled.
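The `parameters` argument is a plain JSON-schema dictionary; a minimal stdlib sketch of building and round-tripping one (the schema content mirrors the example above):

```python
import json

# JSON-schema dict describing a single required string parameter.
params = {
    "type": "object",
    "properties": {"location": {"type": "string"}},
    "required": ["location"],
}

# The schema survives JSON serialization unchanged, so it can be sent as-is.
assert json.loads(json.dumps(params)) == params
```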

FunctionResponse

Function execution result.

Contains the result of executing a function call.

name property

name: str

The function name.

response property

response: Dict[str, Any]

The function response.

FunctionTool

FunctionTool(function: FunctionDefinition, type: str)

Function tool for OpenAI chat completions.

This class wraps a function definition to create a callable tool for the model.

Examples:

>>> func = FunctionDefinition(name="get_weather")
>>> tool = FunctionTool(function=func, type="function")
>>> tool.type
'function'

Parameters:

Name Type Description Default
function FunctionDefinition

The function definition

required
type str

Tool type (typically "function")

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    function: FunctionDefinition,
    type: str,
) -> None:
    """Initialize function tool.

    Args:
        function (FunctionDefinition):
            The function definition
        type (str):
            Tool type (typically "function")
    """

function property

function: FunctionDefinition

The function definition.

type property

type: str

The tool type.
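For orientation, the OpenAI chat-completions wire format that `FunctionTool` models is a `{"type": "function", "function": {...}}` object; a plain-dict sketch of that shape (field names follow the OpenAI convention, not a scouter API):

```python
import json

# Serialized shape of a function tool in the OpenAI chat-completions format.
tool_payload = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get weather for a location",
    },
}

# The payload round-trips through JSON intact.
assert json.loads(json.dumps(tool_payload))["function"]["name"] == "get_weather"
```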

FunctionToolChoice

FunctionToolChoice(function: FunctionChoice)

Tool choice configuration for a specific function.

This class specifies that the model should call a specific function tool.

Examples:

>>> function = FunctionChoice(name="get_weather")
>>> tool_choice = FunctionToolChoice(function=function)
>>> tool_choice.type
'function'

Parameters:

Name Type Description Default
function FunctionChoice

The function to call

required
Source code in python/scouter/stubs.pyi
def __init__(self, function: FunctionChoice) -> None:
    """Initialize function tool choice.

    Args:
        function (FunctionChoice):
            The function to call
    """

function property

function: FunctionChoice

The function specification.

type property

type: str

The tool type (always 'function').

FunctionType

Enumeration of function types.

GeminiContent

GeminiContent(
    parts: Union[
        str,
        Part,
        List[
            Union[
                str,
                Part,
                Blob,
                FileData,
                FunctionCall,
                FunctionResponse,
                ExecutableCode,
                CodeExecutionResult,
            ]
        ],
    ],
    role: Optional[str] = None,
)

Multi-part message content.

Represents a complete message from a user or model, consisting of one or more parts. This is the fundamental message structure for Gemini API.

Examples:

>>> # Simple text message
>>> content = GeminiContent(
...     role="user",
...     parts="What's the weather in San Francisco?"
... )
>>> # Multi-part message with image
>>> content = GeminiContent(
...     role="user",
...     parts=[
...         "What's in this image?",
...         Blob(mime_type="image/png", data=image_data)
...     ]
... )
>>> # Function call response
>>> content = GeminiContent(
...     role="model",
...     parts=[
...         FunctionCall(
...             name="get_weather",
...             args={"location": "San Francisco"}
...         )
...     ]
... )
>>> # Function result
>>> content = GeminiContent(
...     role="function",
...     parts=[
...         FunctionResponse(
...             name="get_weather",
...             response={"output": {"temperature": 72}}
...         )
...     ]
... )

Parameters:

Name Type Description Default
parts Union[str, Part, List[Union[str, Part, Blob, FileData, FunctionCall, FunctionResponse, ExecutableCode, CodeExecutionResult]]]

Content parts of the message

required
role Optional[str]

Role of the message sender (e.g., "user", "model", "function")

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    parts: Union[
        str,
        Part,
        List[
            Union[
                str,
                Part,
                Blob,
                FileData,
                FunctionCall,
                FunctionResponse,
                ExecutableCode,
                CodeExecutionResult,
            ]
        ],
    ],
    role: Optional[str] = None,
) -> None:
    """Initialize message content.

    Args:
        parts (Union[str, Part, List[Union[str, Part, Blob, FileData, FunctionCall, FunctionResponse,
        ExecutableCode, CodeExecutionResult]]]):
            Content parts of the message
        role (Optional[str]):
            Role of the message sender (e.g., "user", "model", "function")
    """

parts property

parts: List[Part]

The message parts.

role property

role: Optional[str]

The role of the message sender.

bind

bind(
    name: Optional[str] = None,
    value: Optional[str | int | float | bool | list] = None,
) -> GeminiContent

Bind variables to the message content.

Parameters:

Name Type Description Default
name Optional[str]

The variable name to bind.

None
value Optional[str | int | float | bool | list]

The variable value to bind.

None

Returns:

Name Type Description
GeminiContent GeminiContent

New content with variables bound.

Source code in python/scouter/stubs.pyi
def bind(
    self,
    name: Optional[str] = None,
    value: Optional[str | int | float | bool | list] = None,
) -> "GeminiContent":
    """Bind variables to the message content.
    Args:
        name (Optional[str]):
            The variable name to bind.
        value (Optional[Union[str, int, float, bool, list]]):
            The variable value to bind.
    Returns:
        GeminiContent:
            New content with variables bound.
    """

bind_mut

bind_mut(
    name: Optional[str] = None,
    value: Optional[str | int | float | bool | list] = None,
) -> None

Bind variables to the message content in place.

Parameters:

Name Type Description Default
name Optional[str]

The variable name to bind.

None
value Optional[str | int | float | bool | list]

The variable value to bind.

None

Source code in python/scouter/stubs.pyi
def bind_mut(
    self,
    name: Optional[str] = None,
    value: Optional[str | int | float | bool | list] = None,
) -> None:
    """Bind variables to the message content in place.
    Args:
        name (Optional[str]):
            The variable name to bind.
        value (Optional[Union[str, int, float, bool, list]]):
            The variable value to bind.
    Returns:
        None
    """

model_dump

model_dump() -> dict

Dump the message to a dictionary.

Source code in python/scouter/stubs.pyi
def model_dump(self) -> dict:
    """Dump the message to a dictionary."""

text

text() -> str

Get the text content of the first part, if available. Returns an empty string if the first part is not text. This is meant for convenience when working with simple text messages.

Source code in python/scouter/stubs.pyi
def text(self) -> str:
    """Get the text content of the first part, if available. Returns
    an empty string if the first part is not text.
    This is meant for convenience when working with simple text messages.
    """

GeminiEmbeddingConfig

GeminiEmbeddingConfig(
    model: Optional[str] = None,
    output_dimensionality: Optional[int] = None,
    task_type: Optional[EmbeddingTaskType] = None,
)

Configuration for Gemini embeddings.

Configures embedding generation including dimensionality and task type.

Examples:

>>> config = GeminiEmbeddingConfig(
...     model="text-embedding-004",
...     output_dimensionality=768,
...     task_type=EmbeddingTaskType.RetrievalDocument
... )

Parameters:

Name Type Description Default
model Optional[str]

Model name

None
output_dimensionality Optional[int]

Output embedding dimensionality

None
task_type Optional[EmbeddingTaskType]

Task type for embeddings

None

Raises:

Type Description
TypeError

If neither model nor task_type is provided

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    model: Optional[str] = None,
    output_dimensionality: Optional[int] = None,
    task_type: Optional[EmbeddingTaskType] = None,
) -> None:
    """Initialize embedding configuration.

    Args:
        model (Optional[str]):
            Model name
        output_dimensionality (Optional[int]):
            Output embedding dimensionality
        task_type (Optional[EmbeddingTaskType]):
            Task type for embeddings

    Raises:
        TypeError: If neither model nor task_type is provided
    """

is_configured property

is_configured: bool

Whether config has parameters set.

model property

model: Optional[str]

The model name.

output_dimensionality property

output_dimensionality: Optional[int]

Output dimensionality.

task_type property

task_type: Optional[EmbeddingTaskType]

Task type.

GeminiEmbeddingResponse

Response from embedding generation.

Contains the generated embedding.

Examples:

>>> response = GeminiEmbeddingResponse(
...     embedding=ContentEmbedding(values=[0.1, 0.2, ...])
... )

embedding property

embedding: ContentEmbedding

The generated embedding.
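`ContentEmbedding.values` is a plain list of floats, so two embeddings can be compared directly; a stdlib sketch of cosine similarity (the vectors here are made up for illustration):

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Identical vectors score ~1.0; orthogonal vectors score ~0.0.
assert abs(cosine_similarity([0.1, 0.2], [0.1, 0.2]) - 1.0) < 1e-9
assert abs(cosine_similarity([1.0, 0.0], [0.0, 1.0])) < 1e-9
```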

GeminiSettings

GeminiSettings(
    labels: Optional[Dict[str, str]] = None,
    tool_config: Optional[ToolConfig] = None,
    generation_config: Optional[GenerationConfig] = None,
    safety_settings: Optional[List[SafetySetting]] = None,
    model_armor_config: Optional[ModelArmorConfig] = None,
    extra_body: Optional[Any] = None,
    cached_content: Optional[str] = None,
    tools: Optional[List[GeminiTool]] = None,
)

Settings for Gemini/Google API requests.

Comprehensive configuration for all aspects of model behavior including generation, safety, tools, and more.

Examples:

>>> settings = GeminiSettings(
...     generation_config=GenerationConfig(
...         temperature=0.7,
...         max_output_tokens=1024
...     ),
...     safety_settings=[
...         SafetySetting(
...             category=HarmCategory.HarmCategoryHateSpeech,
...             threshold=HarmBlockThreshold.BlockMediumAndAbove
...         )
...     ],
...     tool_config=ToolConfig(
...         function_calling_config=FunctionCallingConfig(mode=Mode.Auto)
...     )
... )

Parameters:

Name Type Description Default
labels Optional[Dict[str, str]]

Metadata labels

None
tool_config Optional[ToolConfig]

Tool configuration

None
generation_config Optional[GenerationConfig]

Generation configuration

None
safety_settings Optional[List[SafetySetting]]

Safety filter settings

None
model_armor_config Optional[ModelArmorConfig]

Model Armor configuration

None
extra_body Optional[Any]

Additional request parameters

None
cached_content Optional[str]

Cached content resource name

None
tools Optional[List[GeminiTool]]

Tools available to the model

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    labels: Optional[Dict[str, str]] = None,
    tool_config: Optional[ToolConfig] = None,
    generation_config: Optional[GenerationConfig] = None,
    safety_settings: Optional[List[SafetySetting]] = None,
    model_armor_config: Optional[ModelArmorConfig] = None,
    extra_body: Optional[Any] = None,
    cached_content: Optional[str] = None,
    tools: Optional[List[GeminiTool]] = None,
) -> None:
    """Initialize Gemini settings.

    Args:
        labels (Optional[Dict[str, str]]):
            Metadata labels
        tool_config (Optional[ToolConfig]):
            Tool configuration
        generation_config (Optional[GenerationConfig]):
            Generation configuration
        safety_settings (Optional[List[SafetySetting]]):
            Safety filter settings
        model_armor_config (Optional[ModelArmorConfig]):
            Model Armor configuration
        extra_body (Optional[Any]):
            Additional request parameters
        cached_content (Optional[str]):
            Cached content resource name
        tools (Optional[List[GeminiTool]]):
            Tools available to the model
    """

cached_content property

cached_content: Optional[str]

Cached content resource name.

extra_body property

extra_body: Optional[Dict[str, Any]]

Additional request parameters.

generation_config property

generation_config: Optional[GenerationConfig]

Generation configuration.

labels property

labels: Optional[Dict[str, str]]

Metadata labels.

model_armor_config property

model_armor_config: Optional[ModelArmorConfig]

Model Armor configuration.

safety_settings property

safety_settings: Optional[List[SafetySetting]]

Safety settings.

tool_config property

tool_config: Optional[ToolConfig]

Tool configuration.

tools property

tools: Optional[List[GeminiTool]]

Available tools.

model_dump

model_dump() -> Dict[str, Any]

Convert settings to dictionary.

Source code in python/scouter/stubs.pyi
def model_dump(self) -> Dict[str, Any]:
    """Convert settings to dictionary."""

settings_type

settings_type() -> str

Get settings type identifier.

Source code in python/scouter/stubs.pyi
def settings_type(self) -> str:
    """Get settings type identifier."""

GeminiThinkingConfig

GeminiThinkingConfig(
    include_thoughts: Optional[bool] = None,
    thinking_budget: Optional[int] = None,
    thinking_level: Optional[ThinkingLevel] = None,
)

Configuration for model thinking/reasoning features.

Controls the model's internal reasoning process, including whether to include thoughts in the response and the computational budget.

Examples:

>>> # Enable high-level thinking with thoughts included
>>> config = GeminiThinkingConfig(
...     include_thoughts=True,
...     thinking_level=ThinkingLevel.High
... )
>>> # Limit thinking budget
>>> config = GeminiThinkingConfig(
...     include_thoughts=False,
...     thinking_budget=1000
... )

Parameters:

Name Type Description Default
include_thoughts Optional[bool]

Whether to include reasoning steps in response

None
thinking_budget Optional[int]

Token budget for thinking process

None
thinking_level Optional[ThinkingLevel]

Depth of reasoning to apply

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    include_thoughts: Optional[bool] = None,
    thinking_budget: Optional[int] = None,
    thinking_level: Optional[ThinkingLevel] = None,
) -> None:
    """Initialize thinking configuration.

    Args:
        include_thoughts (Optional[bool]):
            Whether to include reasoning steps in response
        thinking_budget (Optional[int]):
            Token budget for thinking process
        thinking_level (Optional[ThinkingLevel]):
            Depth of reasoning to apply
    """

include_thoughts property

include_thoughts: Optional[bool]

Whether to include thoughts in response.

thinking_budget property

thinking_budget: Optional[int]

Token budget for thinking.

thinking_level property

thinking_level: Optional[ThinkingLevel]

Level of thinking/reasoning.

GeminiTool

GeminiTool(
    function_declarations: Optional[
        List[FunctionDeclaration]
    ] = None,
    retrieval: Optional[Retrieval] = None,
    google_search_retrieval: Optional[
        GoogleSearchRetrieval
    ] = None,
    code_execution: Optional[CodeExecution] = None,
    google_search: Optional[GoogleSearchNum] = None,
    google_maps: Optional[GoogleMaps] = None,
    enterprise_web_search: Optional[
        EnterpriseWebSearch
    ] = None,
    parallel_ai_search: Optional[ParallelAiSearch] = None,
    computer_use: Optional[ComputerUse] = None,
    url_context: Optional[UrlContext] = None,
    file_search: Optional[FileSearch] = None,
)

Tool definition for model use.

Defines tools/functions that the model can use during generation. Tools enable the model to perform actions or retrieve information.

Examples:

>>> # Function tool
>>> tool = GeminiTool(
...     function_declarations=[
...         FunctionDeclaration(
...             name="get_weather",
...             description="Get weather for a location",
...             parameters=Schema(...)
...         )
...     ]
... )
>>> # Google Search tool
>>> tool = GeminiTool(
...     google_search=GoogleSearchNum(
...         vertex_search=VertexGoogleSearch()
...     )
... )
>>> # Code execution tool
>>> tool = GeminiTool(code_execution=CodeExecution())
>>> # Multiple tools
>>> tool = GeminiTool(
...     function_declarations=[...],
...     google_search=GoogleSearchNum(...),
...     code_execution=CodeExecution()
... )

Parameters:

Name Type Description Default
function_declarations Optional[List[FunctionDeclaration]]

Function declarations

None
retrieval Optional[Retrieval]

Retrieval tool configuration

None
google_search_retrieval Optional[GoogleSearchRetrieval]

Google Search retrieval configuration

None
code_execution Optional[CodeExecution]

Code execution tool

None
google_search Optional[GoogleSearchNum]

Google Search tool

None
google_maps Optional[GoogleMaps]

Google Maps tool

None
enterprise_web_search Optional[EnterpriseWebSearch]

Enterprise web search tool

None
parallel_ai_search Optional[ParallelAiSearch]

Parallel.ai search tool

None
computer_use Optional[ComputerUse]

Computer use tool

None
url_context Optional[UrlContext]

URL context tool

None
file_search Optional[FileSearch]

File search tool

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    function_declarations: Optional[List[FunctionDeclaration]] = None,
    retrieval: Optional[Retrieval] = None,
    google_search_retrieval: Optional[GoogleSearchRetrieval] = None,
    code_execution: Optional[CodeExecution] = None,
    google_search: Optional[GoogleSearchNum] = None,
    google_maps: Optional[GoogleMaps] = None,
    enterprise_web_search: Optional[EnterpriseWebSearch] = None,
    parallel_ai_search: Optional[ParallelAiSearch] = None,
    computer_use: Optional[ComputerUse] = None,
    url_context: Optional[UrlContext] = None,
    file_search: Optional[FileSearch] = None,
) -> None:
    """Initialize tool configuration.

    Args:
        function_declarations (Optional[List[FunctionDeclaration]]):
            Function declarations
        retrieval (Optional[Retrieval]):
            Retrieval tool configuration
        google_search_retrieval (Optional[GoogleSearchRetrieval]):
            Google Search retrieval configuration
        code_execution (Optional[CodeExecution]):
            Code execution tool
        google_search (Optional[GoogleSearchNum]):
            Google Search tool
        google_maps (Optional[GoogleMaps]):
            Google Maps tool
        enterprise_web_search (Optional[EnterpriseWebSearch]):
            Enterprise web search tool
        parallel_ai_search (Optional[ParallelAiSearch]):
            Parallel.ai search tool
        computer_use (Optional[ComputerUse]):
            Computer use tool
        url_context (Optional[UrlContext]):
            URL context tool
        file_search (Optional[FileSearch]):
            File search tool
    """

code_execution property

code_execution: Optional[CodeExecution]

Code execution tool.

computer_use property

computer_use: Optional[ComputerUse]

Computer use tool.

enterprise_web_search property

enterprise_web_search: Optional[EnterpriseWebSearch]

Enterprise web search tool.

file_search property

file_search: Optional[FileSearch]

File search tool.

function_declarations property

function_declarations: Optional[List[FunctionDeclaration]]

Function declarations.

google_maps property

google_maps: Optional[GoogleMaps]

Google Maps tool.

google_search property

google_search: Optional[GoogleSearchNum]

Google Search tool.

google_search_retrieval property

google_search_retrieval: Optional[GoogleSearchRetrieval]

Google Search retrieval configuration.

parallel_ai_search property

parallel_ai_search: Optional[ParallelAiSearch]

Parallel.ai search tool.

retrieval property

retrieval: Optional[Retrieval]

Retrieval configuration.

url_context property

url_context: Optional[UrlContext]

URL context tool.

GenAIAlertConfig

GenAIAlertConfig(
    dispatch_config: Optional[
        SlackDispatchConfig | OpsGenieDispatchConfig
    ] = None,
    schedule: Optional[str | CommonCrons] = None,
    alert_condition: Optional[AlertCondition] = None,
)

Parameters:

Name Type Description Default
dispatch_config Optional[SlackDispatchConfig | OpsGenieDispatchConfig]

Alert dispatch config. Defaults to console

None
schedule Optional[str | CommonCrons]

Schedule to run monitor. Defaults to daily at midnight

None
alert_condition Optional[AlertCondition]

Alert condition for a GenAI drift profile

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    dispatch_config: Optional[SlackDispatchConfig | OpsGenieDispatchConfig] = None,
    schedule: Optional[str | CommonCrons] = None,
    alert_condition: Optional[AlertCondition] = None,
):
    """Initialize alert config

    Args:
        dispatch_config:
            Alert dispatch config. Defaults to console
        schedule:
            Schedule to run monitor. Defaults to daily at midnight
        alert_condition:
            Alert condition for a GenAI drift profile

    """

alert_conditions property

alert_conditions: Optional[AlertCondition]

Return the alert condition

dispatch_config property

dispatch_config: DispatchConfigType

Return the dispatch config

dispatch_type property

dispatch_type: AlertDispatchType

Return the alert dispatch type

schedule property writable

schedule: str

Return the schedule

GenAIEvalConfig

GenAIEvalConfig(
    space: str = "__missing__",
    name: str = "__missing__",
    version: str = "0.1.0",
    sample_ratio: float = 1.0,
    alert_config: GenAIAlertConfig = GenAIAlertConfig(),
)
Parameters:

Name Type Description Default
space str

Space to associate with the config

'__missing__'
name str

Name to associate with the config

'__missing__'
version str

Version to associate with the config. Defaults to 0.1.0

'0.1.0'
sample_ratio float

Sample rate percentage for data collection. Must be between 0.0 and 1.0. Defaults to 1.0 (100%).

1.0
alert_config GenAIAlertConfig

Custom metric alert configuration

GenAIAlertConfig()
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    space: str = "__missing__",
    name: str = "__missing__",
    version: str = "0.1.0",
    sample_ratio: float = 1.0,
    alert_config: GenAIAlertConfig = GenAIAlertConfig(),
):
    """Initialize drift config
    Args:
        space:
            Space to associate with the config
        name:
            Name to associate with the config
        version:
            Version to associate with the config. Defaults to 0.1.0
        sample_ratio:
            Sample rate percentage for data collection. Must be between 0.0 and 1.0.
            Defaults to 1.0 (100%).
        alert_config:
            Custom metric alert configuration
    """

alert_config property writable

alert_config: GenAIAlertConfig

Return the alert config

drift_type property

drift_type: DriftType

Drift type

name property writable

name: str

Model Name

space property writable

space: str

Model space

uid property writable

uid: str

Unique identifier for the drift config

version property writable

version: str

Model version

load_from_json_file staticmethod

load_from_json_file(path: Path) -> GenAIEvalConfig

Load config from a json file.

Parameters:

Name Type Description Default
path Path

Path to json file to load config from.

required

Source code in python/scouter/stubs.pyi
@staticmethod
def load_from_json_file(path: Path) -> "GenAIEvalConfig":
    """Load config from json file
    Args:
        path:
            Path to json file to load config from.
    """

model_dump_json

model_dump_json() -> str

Return the json representation of the config.

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return the json representation of the config."""

update_config_args

update_config_args(
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    alert_config: Optional[GenAIAlertConfig] = None,
) -> None

In-place operation that updates config args.

Parameters:

Name Type Description Default
space Optional[str]

Space to associate with the config

None
name Optional[str]

Name to associate with the config

None
version Optional[str]

Version to associate with the config

None
alert_config Optional[GenAIAlertConfig]

LLM alert configuration

None

Source code in python/scouter/stubs.pyi
def update_config_args(
    self,
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    alert_config: Optional[GenAIAlertConfig] = None,
) -> None:
    """Inplace operation that updates config args
    Args:
        space:
            Space to associate with the config
        name:
            Name to associate with the config
        version:
            Version to associate with the config
        alert_config:
            LLM alert configuration
    """

GenAIEvalDataset

GenAIEvalDataset(
    records: Sequence[GenAIEvalRecord],
    tasks: Sequence[LLMJudgeTask | AssertionTask],
)

Defines the dataset used for LLM evaluation

Parameters:

Name Type Description Default
records Sequence[GenAIEvalRecord]

List of LLM evaluation records to be evaluated.

required
tasks Sequence[LLMJudgeTask | AssertionTask]

List of evaluation tasks to apply to the records.

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    records: Sequence[GenAIEvalRecord],
    tasks: Sequence[LLMJudgeTask | AssertionTask],
):
    """Initialize the GenAIEvalDataset with records and tasks.

    Args:
        records (List[GenAIEvalRecord]):
            List of LLM evaluation records to be evaluated.
        tasks (List[LLMJudgeTask | AssertionTask]):
            List of evaluation tasks to apply to the records.
    """

assertion_tasks property

assertion_tasks: List[AssertionTask]

Get the list of assertion tasks in this dataset

llm_judge_tasks property

llm_judge_tasks: List[LLMJudgeTask]

Get the list of LLM judge tasks in this dataset

records property

records: List[GenAIEvalRecord]

Get the list of LLM evaluation records in this dataset

evaluate

evaluate(
    config: Optional[EvaluationConfig] = None,
) -> GenAIEvalResults

Evaluate the records using the defined tasks.

Parameters:

Name Type Description Default
config Optional[EvaluationConfig]

Optional configuration for the evaluation process.

None

Returns:

Name Type Description
GenAIEvalResults GenAIEvalResults

The results of the evaluation.

Source code in python/scouter/stubs.pyi
def evaluate(
    self,
    config: Optional[EvaluationConfig] = None,
) -> "GenAIEvalResults":
    """Evaluate the records using the defined tasks.

    Args:
        config (Optional[EvaluationConfig]):
            Optional configuration for the evaluation process.

    Returns:
        GenAIEvalResults:
            The results of the evaluation.
    """

print_execution_plan

print_execution_plan() -> None

Print the execution plan for all tasks in the dataset.

Source code in python/scouter/stubs.pyi
def print_execution_plan(self) -> None:
    """Print the execution plan for all tasks in the dataset."""

with_updated_contexts_by_id

with_updated_contexts_by_id(
    updated_contexts: Dict[str, Any]
) -> GenAIEvalDataset

Create a new GenAIEvalDataset with updated contexts for specific records.

Example

updated_contexts = { ... "record_1_uid": {"new_field": "new_value"}, ... "record_2_uid": {"another_field": 123} ... } new_dataset = dataset.with_updated_contexts_by_id(updated_contexts)

Parameters:

Name Type Description Default
updated_contexts Dict[str, Any]

A dictionary mapping record UIDs to their new context data.

required

Returns:

Name Type Description
GenAIEvalDataset GenAIEvalDataset

A new dataset instance with the updated contexts.

Source code in python/scouter/stubs.pyi
def with_updated_contexts_by_id(
    self,
    updated_contexts: Dict[str, Any],
) -> "GenAIEvalDataset":
    """Create a new GenAIEvalDataset with updated contexts for specific records.

    Example:
        >>> updated_contexts = {
        ...     "record_1_uid": {"new_field": "new_value"},
        ...     "record_2_uid": {"another_field": 123}
        ... }
        >>> new_dataset = dataset.with_updated_contexts_by_id(updated_contexts)
    Args:
        updated_contexts (Dict[str, Any]):
            A dictionary mapping record UIDs to their new context data.
    Returns:
        GenAIEvalDataset:
            A new dataset instance with the updated contexts.
    """

GenAIEvalProfile

GenAIEvalProfile(
    config: GenAIEvalConfig,
    tasks: List[Union[AssertionTask, LLMJudgeTask]],
)

Profile for LLM evaluation and drift detection.

GenAIEvalProfile combines assertion tasks and LLM judge tasks into a unified evaluation framework for monitoring LLM performance. Evaluations run asynchronously on the Scouter server, enabling scalable drift detection without blocking your application.

Architecture

The profile automatically orchestrates two types of evaluation tasks:

  1. Assertion Tasks: Fast, deterministic rule-based validations
     • Execute locally without additional LLM calls
     • Ideal for structural validation, threshold checks, pattern matching
     • Zero latency overhead, minimal cost

  2. LLM Judge Tasks: Advanced reasoning-based evaluations
     • Leverage additional LLM calls for complex assessments
     • Automatically compiled into an internal Workflow for execution
     • Support dependencies to chain evaluations and pass results
     • Ideal for semantic similarity, quality assessment, factuality checks
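How an assertion task evaluates a record can be pictured as resolving its dotted `field_path` against the record's context, then applying a comparison operator. A minimal sketch — the `OPERATORS` table and helper names here are illustrative, and the real `ComparisonOperator` enum is richer:

```python
from typing import Any, Dict

def resolve_field_path(data: Dict[str, Any], path: str) -> Any:
    """Walk a dotted field path (e.g. 'metadata.confidence') through nested dicts."""
    value: Any = data
    for key in path.split("."):
        value = value[key]
    return value

# Illustrative subset of comparison operators.
OPERATORS = {
    "GreaterThanOrEqual": lambda actual, expected: actual >= expected,
    "HasLength": lambda actual, expected: (
        expected.get("min", 0) <= len(actual) <= expected.get("max", float("inf"))
    ),
}

record = {"response": "Paris.", "metadata": {"confidence": 0.82}}

# Mirrors AssertionTask(field_path="metadata.confidence",
#                       operator=GreaterThanOrEqual, expected_value=0.7)
assert OPERATORS["GreaterThanOrEqual"](
    resolve_field_path(record, "metadata.confidence"), 0.7
)
# Mirrors AssertionTask(field_path="response", operator=HasLength,
#                       expected_value={"min": 1})
assert OPERATORS["HasLength"](resolve_field_path(record, "response"), {"min": 1})
print("assertions passed")
```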
Task Execution Order

Tasks are executed based on their dependency graph using topological sort:

╔══════════════════════════════════════════════════════════════╗
║              TASK EXECUTION ARCHITECTURE                     ║
╠══════════════════════════════════════════════════════════════╣
║                                                              ║
║  Level 0: Independent Tasks (no dependencies)                ║
║  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐          ║
║  │ Assertion A │  │ Assertion B │  │ LLM Judge X │          ║
║  └──────┬──────┘  └──────┬──────┘  └──────┬──────┘          ║
║         │                │                │                  ║
║         └────────┬───────┴────────┬───────┘                  ║
║                  │                │                          ║
║  Level 1: Tasks depending on Level 0                         ║
║         ┌────────▼────────┐  ┌────▼────────┐                ║
║         │  LLM Judge Y    │  │Assertion C  │                ║
║         │ (depends: A, X) │  │(depends: B) │                ║
║         └────────┬────────┘  └────┬────────┘                ║
║                  │                │                          ║
║  Level 2: Final aggregation tasks                            ║
║                  └────────┬───────┘                          ║
║                  ┌────────▼────────┐                         ║
║                  │  LLM Judge Z    │                         ║
║                  │ (depends: Y, C) │                         ║
║                  └─────────────────┘                         ║
║                                                              ║
╚══════════════════════════════════════════════════════════════╝
Workflow Generation

When LLM judge tasks are present, the profile automatically:

  1. Builds an internal Workflow from LLMJudgeTask configurations
  2. Validates task dependencies form a valid DAG
  3. Ensures Prompt configurations are compatible with execution
  4. Optimizes execution order for parallel processing where possible

Common Use Cases
  • Multi-stage LLM evaluation (relevance → quality → toxicity)
  • Hybrid assertion + LLM judge pipelines (fast checks, then deep analysis)
  • Dependent evaluations (use upstream results in downstream prompts)
  • Cost-optimized monitoring (assertions for 90%, LLM judges for 10%)

Examples:

Pure assertion-based monitoring (no LLM calls):

>>> config = GenAIEvalConfig(
...     space="production",
...     name="chatbot",
...     version="1.0",
...     sample_ratio=10
... )
>>>
>>> tasks = [
...     AssertionTask(
...         id="response_length",
...         field_path="response",
...         operator=ComparisonOperator.HasLength,
...         expected_value={"min": 10, "max": 500},
...         description="Ensure response is reasonable length"
...     ),
...     AssertionTask(
...         id="confidence_threshold",
...         field_path="metadata.confidence",
...         operator=ComparisonOperator.GreaterThanOrEqual,
...         expected_value=0.7,
...         description="Require minimum confidence"
...     )
... ]
>>>
>>> profile = GenAIEvalProfile(
...     config=config,
...     tasks=tasks
... )

LLM judge-based semantic monitoring:

>>> relevance_prompt = Prompt(
...     system_instructions="Evaluate response relevance to query",
...     messages="Query: {{input}}\nResponse: {{response}}\nRate 0-10:",
...     model="gpt-4o-mini",
...     provider=Provider.OpenAI,
...     output_type=Score
... )
>>>
>>> judge_tasks = [
...     LLMJudgeTask(
...         id="relevance_judge",
...         prompt=relevance_prompt,
...         expected_value=7,
...         field_path="score",
...         operator=ComparisonOperator.GreaterThanOrEqual,
...         description="Ensure relevance score >= 7"
...     )
... ]
>>>
>>> profile = GenAIEvalProfile(
...     config=config,
...     tasks=judge_tasks
... )

Hybrid monitoring with dependencies:

>>> # Fast assertion checks first
>>> assertion_tasks = [
...     AssertionTask(
...         id="not_empty",
...         field_path="response",
...         operator=ComparisonOperator.HasLength,
...         expected_value={"min": 1},
...         description="Response must not be empty"
...     )
... ]
>>>
>>> # Deep LLM analysis only if assertions pass
>>> quality_prompt = Prompt(
...     system_instructions="Assess response quality",
...     messages="{{response}}",
...     model="claude-3-5-sonnet-20241022",
...     provider=Provider.Anthropic,
...     output_type=Score
... )
>>>
>>> judge_tasks = [
...     LLMJudgeTask(
...         id="quality_judge",
...         prompt=quality_prompt,
...         expected_value=8,
...         field_path="score",
...         operator=ComparisonOperator.GreaterThanOrEqual,
...         depends_on=["not_empty"],  # Only run if assertion passes
...         description="Quality assessment after validation"
...     )
... ]
>>>
>>> profile = GenAIEvalProfile(
...     config=config,
...     tasks=assertion_tasks + judge_tasks
... )

Multi-stage dependent LLM judges:

>>> # Stage 1: Relevance check
>>> relevance_task = LLMJudgeTask(
...     id="relevance",
...     prompt=relevance_prompt,
...     expected_value=7,
...     field_path="score",
...     operator=ComparisonOperator.GreaterThanOrEqual
... )
>>>
>>> # Stage 2: Toxicity check (only if relevant)
>>> toxicity_prompt = Prompt(...)
>>> toxicity_task = LLMJudgeTask(
...     id="toxicity",
...     prompt=toxicity_prompt,
...     expected_value=0.2,
...     field_path="relevance.score",
...     operator=ComparisonOperator.LessThan,
...     depends_on=["relevance"]  # Chain evaluations
... )
>>>
>>> # Stage 3: Final quality (only if relevant and non-toxic)
>>> quality_task = LLMJudgeTask(
...     id="quality",
...     prompt=quality_prompt,
...     expected_value=8,
...     field_path="toxicity.score",
...     operator=ComparisonOperator.GreaterThanOrEqual,
...     depends_on=["relevance", "toxicity"]  # Multiple deps
... )
>>>
>>> profile = GenAIEvalProfile(
...     config=config,
...     tasks=[relevance_task, toxicity_task, quality_task]
... )
Note
  • At least one task (assertion or LLM judge) is required
  • LLM judge tasks are automatically compiled into an internal Workflow
  • Task dependencies must form a valid DAG (no circular dependencies)
  • Execution order is optimized via topological sort
  • Independent tasks at the same level can execute in parallel
  • Failed tasks halt execution of dependent downstream tasks

Creates a profile that combines assertion tasks and LLM judge tasks into a unified evaluation framework. LLM judge tasks are automatically compiled into an internal Workflow for execution on the Scouter server.

Parameters:

Name Type Description Default
config GenAIEvalConfig

Configuration for the GenAI drift profile containing space, name, version, sample rate, and alert settings.

required
tasks List[Union[AssertionTask, LLMJudgeTask]]

List of evaluation tasks to include in the profile. Can contain both AssertionTask and LLMJudgeTask instances. At least one task (assertion or LLM judge) is required.

required

Returns:

Name Type Description
GenAIEvalProfile

Configured profile ready for GenAI drift monitoring.

Raises:

Type Description
ProfileError

If validation fails due to:
  • Empty task lists (both assertion_tasks and llm_judge_tasks are None/empty)
  • Circular dependencies in task dependency graph
  • Invalid task configurations (malformed prompts, missing fields, etc.)

Examples:

Assertion-only profile:

>>> config = GenAIEvalConfig(space="prod", name="bot", version="1.0")
>>> assertions = [
...     AssertionTask(id="length_check", ...),
...     AssertionTask(id="confidence_check", ...)
... ]
>>> profile = GenAIEvalProfile(config, tasks=assertions)

LLM judge-only profile:

>>> judges = [
...     LLMJudgeTask(id="relevance", prompt=..., ...),
...     LLMJudgeTask(id="quality", prompt=..., depends_on=["relevance"])
... ]
>>> profile = GenAIEvalProfile(config, tasks=judges)

Hybrid profile:

>>> profile = GenAIEvalProfile(
...     config=config,
...     tasks=assertions + judges
... )
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    config: GenAIEvalConfig,
    tasks: List[Union[AssertionTask, LLMJudgeTask]],
):
    """Initialize a GenAIEvalProfile for LLM evaluation and drift detection.

    Creates a profile that combines assertion tasks and LLM judge tasks into
    a unified evaluation framework. LLM judge tasks are automatically compiled
    into an internal Workflow for execution on the Scouter server.

    Args:
        config (GenAIEvalConfig):
            Configuration for the GenAI drift profile containing space, name,
            version, sample rate, and alert settings.
        tasks (List[Union[AssertionTask, LLMJudgeTask]]):
            List of evaluation tasks to include in the profile. Can contain
            both AssertionTask and LLMJudgeTask instances. At least one task
            (assertion or LLM judge) is required.

    Returns:
        GenAIEvalProfile: Configured profile ready for GenAI drift monitoring.

    Raises:
        ProfileError: If validation fails due to:
            - Empty task lists (both assertion_tasks and llm_judge_tasks are None/empty)
            - Circular dependencies in task dependency graph
            - Invalid task configurations (malformed prompts, missing fields, etc.)

    Examples:
        Assertion-only profile:

        >>> config = GenAIEvalConfig(space="prod", name="bot", version="1.0")
        >>> assertions = [
        ...     AssertionTask(id="length_check", ...),
        ...     AssertionTask(id="confidence_check", ...)
        ... ]
        >>> profile = GenAIEvalProfile(config, tasks=assertions)

        LLM judge-only profile:

        >>> judges = [
        ...     LLMJudgeTask(id="relevance", prompt=..., ...),
        ...     LLMJudgeTask(id="quality", prompt=..., depends_on=["relevance"])
        ... ]
        >>> profile = GenAIEvalProfile(config, tasks=judges)

        Hybrid profile:

        >>> profile = GenAIEvalProfile(
        ...     config=config,
        ...     tasks=assertions + judges
        ... )
    """

assertion_tasks property

assertion_tasks: List[AssertionTask]

List of assertion tasks for deterministic validation.

Assertions execute without additional LLM calls, providing fast, cost-effective validation of structural properties, thresholds, and patterns.

config property

config: GenAIEvalConfig

Configuration for the drift profile.

Contains space, name, version, sample rate, and alert settings.

llm_judge_tasks property

llm_judge_tasks: List[LLMJudgeTask]

List of LLM judge tasks for reasoning-based evaluation.

LLM judges use additional LLM calls to assess complex criteria like semantic similarity, quality, and factuality. Automatically compiled into an internal Workflow for execution.

scouter_version property

scouter_version: str

Scouter version used to create this profile.

Used for compatibility tracking and migration support.

uid property writable

uid: str

Unique identifier for the drift profile.

Derived from the config's space, name, and version. Used for tracking and querying evaluation results.

from_file staticmethod

from_file(path: Path) -> GenAIEvalProfile

Load profile from JSON file.

Parameters:

Name Type Description Default
path Path

Path to the JSON file containing the profile.

required

Returns:

Name Type Description
GenAIEvalProfile GenAIEvalProfile

Loaded profile instance.

Raises:

Type Description
ProfileError

If file doesn't exist, is malformed, or invalid.

Example

profile = GenAIEvalProfile.from_file(Path("my_profile.json"))

Source code in python/scouter/stubs.pyi
@staticmethod
def from_file(path: Path) -> "GenAIEvalProfile":
    """Load profile from JSON file.

    Args:
        path (Path):
            Path to the JSON file containing the profile.

    Returns:
        GenAIEvalProfile: Loaded profile instance.

    Raises:
        ProfileError: If file doesn't exist, is malformed, or invalid.

    Example:
        >>> profile = GenAIEvalProfile.from_file(Path("my_profile.json"))
    """

get_execution_plan

get_execution_plan() -> List[List[str]]

Get the execution plan for all tasks.

Returns task IDs grouped by execution level based on dependency graph. Tasks at the same level can execute in parallel. Each subsequent level depends on completion of all previous levels.

Uses topological sort to determine optimal execution order while respecting task dependencies.

Returns:

Type Description
List[List[str]]

List[List[str]]: Nested list where each inner list contains task IDs for that execution level. Level 0 contains tasks with no dependencies, Level 1 contains tasks depending only on Level 0, etc.

Raises:

Type Description
ProfileError

If circular dependencies are detected in the task graph.

Example

plan = profile.get_execution_plan()
print(f"Level 0 (parallel): {plan[0]}")
print(f"Level 1 (after L0): {plan[1]}")
print(f"Total levels: {len(plan)}")

Output:
Level 0 (parallel): ['assertion_a', 'assertion_b', 'judge_x']
Level 1 (after L0): ['judge_y', 'assertion_c']
Total levels: 2

Source code in python/scouter/stubs.pyi
def get_execution_plan(self) -> List[List[str]]:
    """Get the execution plan for all tasks.

    Returns task IDs grouped by execution level based on dependency graph.
    Tasks at the same level can execute in parallel. Each subsequent level
    depends on completion of all previous levels.

    Uses topological sort to determine optimal execution order while
    respecting task dependencies.

    Returns:
        List[List[str]]: Nested list where each inner list contains task IDs
            for that execution level. Level 0 contains tasks with no dependencies,
            Level 1 contains tasks depending only on Level 0, etc.

    Raises:
        ProfileError: If circular dependencies are detected in the task graph.

    Example:
        >>> plan = profile.get_execution_plan()
        >>> print(f"Level 0 (parallel): {plan[0]}")
        >>> print(f"Level 1 (after L0): {plan[1]}")
        >>> print(f"Total levels: {len(plan)}")

        Output:
        Level 0 (parallel): ['assertion_a', 'assertion_b', 'judge_x']
        Level 1 (after L0): ['judge_y', 'assertion_c']
        Total levels: 2
    """

has_assertions

has_assertions() -> bool

Check if profile contains assertion tasks.

Returns:

Name Type Description
bool bool

True if assertion_tasks is non-empty, False otherwise.

Example

if profile.has_assertions():
    print("Profile includes fast assertion checks")

Source code in python/scouter/stubs.pyi
def has_assertions(self) -> bool:
    """Check if profile contains assertion tasks.

    Returns:
        bool: True if assertion_tasks is non-empty, False otherwise.

    Example:
        >>> if profile.has_assertions():
        ...     print("Profile includes fast assertion checks")
    """

has_llm_tasks

has_llm_tasks() -> bool

Check if profile contains LLM judge tasks.

Returns:

Name Type Description
bool bool

True if llm_judge_tasks is non-empty, False otherwise.

Example

if profile.has_llm_tasks():
    print("Profile uses LLM judges (additional cost/latency)")

Source code in python/scouter/stubs.pyi
def has_llm_tasks(self) -> bool:
    """Check if profile contains LLM judge tasks.

    Returns:
        bool: True if llm_judge_tasks is non-empty, False otherwise.

    Example:
        >>> if profile.has_llm_tasks():
        ...     print("Profile uses LLM judges (additional cost/latency)")
    """

model_dump

model_dump() -> Dict[str, Any]

Serialize profile to dictionary.

Returns:

Type Description
Dict[str, Any]

Dict[str, Any]: Dictionary representation of the profile.

Example

data = profile.model_dump()
print(data["config"]["space"])
print(f"Task count: {len(data['assertion_tasks'])}")

Source code in python/scouter/stubs.pyi
def model_dump(self) -> Dict[str, Any]:
    """Serialize profile to dictionary.

    Returns:
        Dict[str, Any]: Dictionary representation of the profile.

    Example:
        >>> data = profile.model_dump()
        >>> print(data["config"]["space"])
        >>> print(f"Task count: {len(data['assertion_tasks'])}")
    """

model_dump_json

model_dump_json() -> str

Serialize profile to JSON string.

Returns:

Name Type Description
str str

JSON string representation of the profile including config, tasks, workflow (if present), and metadata.

Example

json_str = profile.model_dump_json()
# Save to file, send to API, etc.

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Serialize profile to JSON string.

    Returns:
        str: JSON string representation of the profile including config,
            tasks, workflow (if present), and metadata.

    Example:
        >>> json_str = profile.model_dump_json()
        >>> # Save to file, send to API, etc.
    """

model_validate staticmethod

model_validate(data: Dict[str, Any]) -> GenAIEvalProfile

Load profile from dictionary.

Parameters:

Name Type Description Default
data Dict[str, Any]

Dictionary representation of the profile.

required

Returns:

Name Type Description
GenAIEvalProfile GenAIEvalProfile

Reconstructed profile instance.

Raises:

Type Description
ProfileError

If dictionary structure is invalid or missing required fields.

Example

data = {"config": {...}, "assertion_tasks": [...]} profile = GenAIEvalProfile.model_validate(data)

Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate(data: Dict[str, Any]) -> "GenAIEvalProfile":
    """Load profile from dictionary.

    Args:
        data (Dict[str, Any]):
            Dictionary representation of the profile.

    Returns:
        GenAIEvalProfile: Reconstructed profile instance.

    Raises:
        ProfileError: If dictionary structure is invalid or missing required fields.

    Example:
        >>> data = {"config": {...}, "assertion_tasks": [...]}
        >>> profile = GenAIEvalProfile.model_validate(data)
    """

model_validate_json staticmethod

model_validate_json(json_string: str) -> GenAIEvalProfile

Load profile from JSON string.

Parameters:

Name Type Description Default
json_string str

JSON string representation of the profile.

required

Returns:

Name Type Description
GenAIEvalProfile GenAIEvalProfile

Reconstructed profile instance.

Raises:

Type Description
ProfileError

If JSON is malformed or invalid.

Example

json_str = '{"config": {...}, "assertion_tasks": [...]}'
profile = GenAIEvalProfile.model_validate_json(json_str)

Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate_json(json_string: str) -> "GenAIEvalProfile":
    """Load profile from JSON string.

    Args:
        json_string (str):
            JSON string representation of the profile.

    Returns:
        GenAIEvalProfile: Reconstructed profile instance.

    Raises:
        ProfileError: If JSON is malformed or invalid.

    Example:
        >>> json_str = '{"config": {...}, "assertion_tasks": [...]}'
        >>> profile = GenAIEvalProfile.model_validate_json(json_str)
    """

print_execution_plan

print_execution_plan() -> None

Print the execution plan for all tasks.

Source code in python/scouter/stubs.pyi
def print_execution_plan(self) -> None:
    """Print the execution plan for all tasks."""

save_to_json

save_to_json(path: Optional[Path] = None) -> Path

Save profile to JSON file.

Parameters:

Name Type Description Default
path Optional[Path]

Optional path to save the profile. If None, saves to "genai_eval_profile.json" in the current directory.

None

Returns:

Name Type Description
Path Path

Path where the profile was saved.

Example

path = profile.save_to_json(Path("my_profile.json"))
print(f"Saved to: {path}")

Source code in python/scouter/stubs.pyi
def save_to_json(self, path: Optional[Path] = None) -> Path:
    """Save profile to JSON file.

    Args:
        path (Optional[Path]):
            Optional path to save the profile. If None, saves to
            "genai_eval_profile.json" in the current directory.

    Returns:
        Path: Path where the profile was saved.

    Example:
        >>> path = profile.save_to_json(Path("my_profile.json"))
        >>> print(f"Saved to: {path}")
    """

update_config_args

update_config_args(
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    uid: Optional[str] = None,
    alert_config: Optional[GenAIAlertConfig] = None,
) -> None

Update profile configuration in-place.

Modifies the profile's config without recreating the entire profile. Useful for adjusting space/name/version after initial creation or updating alert settings.

Parameters:

Name Type Description Default
space Optional[str]

New model space. If None, keeps existing value.

None
name Optional[str]

New model name. If None, keeps existing value.

None
version Optional[str]

New model version. If None, keeps existing value.

None
uid Optional[str]

New unique identifier. If None, keeps existing value.

None
alert_config Optional[GenAIAlertConfig]

New alert configuration. If None, keeps existing value.

None
Example

profile.update_config_args(
    space="production",
    alert_config=GenAIAlertConfig(schedule="0 */6 * * *")
)

Source code in python/scouter/stubs.pyi
def update_config_args(
    self,
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    uid: Optional[str] = None,
    alert_config: Optional[GenAIAlertConfig] = None,
) -> None:
    """Update profile configuration in-place.

    Modifies the profile's config without recreating the entire profile.
    Useful for adjusting space/name/version after initial creation or
    updating alert settings.

    Args:
        space (Optional[str]):
            New model space. If None, keeps existing value.
        name (Optional[str]):
            New model name. If None, keeps existing value.
        version (Optional[str]):
            New model version. If None, keeps existing value.
        uid (Optional[str]):
            New unique identifier. If None, keeps existing value.
        alert_config (Optional[GenAIAlertConfig]):
            New alert configuration. If None, keeps existing value.

    Example:
        >>> profile.update_config_args(
        ...     space="production",
        ...     alert_config=GenAIAlertConfig(schedule="0 */6 * * *")
        ... )
    """

GenAIEvalRecord

GenAIEvalRecord(
    context: Context,
    id: Optional[str] = None,
    session_id: Optional[str] = None,
)

LLM record containing context tied to a Large Language Model interaction that is used to evaluate drift in LLM responses.

Examples:

>>> record = GenAIEvalRecord(
...     context={
...         "input": "What is the capital of France?",
...         "response": "Paris is the capital of France."
...     },
... )
>>> print(record.context["input"])
"What is the capital of France?"

The record is sent to the Scouter server via the ScouterQueue and is then used to inject context into the evaluation prompts.

Parameters:

Name Type Description Default
context Dict[str, Any] | BaseModel

Additional context information as a dictionary or a pydantic BaseModel. During evaluation, this will be merged with the input and response data and passed to the assigned evaluation prompts. If your evaluation prompts expect additional context via bound variables (e.g., ${foo}), you can pass it here as key-value pairs.

required
id Optional[str]

Optional unique identifier for the record.

None
session_id Optional[str]

Optional session identifier to group related records.

None

Raises:

Type Description
TypeError

If context is not a dict or a pydantic BaseModel.

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    context: Context,
    id: Optional[str] = None,
    session_id: Optional[str] = None,
) -> None:
    """Creates a new LLM record to associate with an `GenAIEvalProfile`.
    The record is sent to the `Scouter` server via the `ScouterQueue` and is
    then used to inject context into the evaluation prompts.

    Args:
        context (Dict[str, Any] | BaseModel):
            Additional context information as a dictionary or a pydantic BaseModel. During evaluation,
            this will be merged with the input and response data and passed to the assigned
            evaluation prompts. If your evaluation prompts expect additional context via
            bound variables (e.g., `${foo}`), you can pass it here as key-value pairs,
            e.g. `{"foo": "bar"}`.
        id (Optional[str], optional):
            Optional unique identifier for the record.
        session_id (Optional[str], optional):
            Optional session identifier to group related records.

    Raises:
        TypeError: If context is not a dict or a pydantic BaseModel.

    """

context property

context: Dict[str, Any]

Get the contextual information.

Returns:

Type Description
Dict[str, Any]

The context data as a Python object (deserialized from JSON).

Raises:

Type Description
TypeError

If the stored JSON cannot be converted to a Python object.

created_at property

created_at: datetime

Get the created at timestamp.

record_id property writable

record_id: Optional[str]

Get the record ID.

session_id property writable

session_id: str

Get the session ID.

uid property

uid: str

Get the unique identifier for the record.

model_dump_json

model_dump_json() -> str

Return the json representation of the record.

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return the json representation of the record."""

update_context_field

update_context_field(key: str, value: Any) -> None

Update a specific field in the context. If the key does not exist, it will be added.

Parameters:

Name Type Description Default
key str

The key of the context field to update.

required
value Any

The new value for the context field.

required
Source code in python/scouter/stubs.pyi
def update_context_field(self, key: str, value: Any) -> None:
    """Update a specific field in the context.
    If the key does not exist, it will be added.

    Args:
        key (str):
            The key of the context field to update.
        value (Any):
            The new value for the context field.
    """

GenAIEvalResultSet

Defines the results of a specific evaluation run

records property

records: List[GenAIEvalSet]

Get the list of evaluation sets in this result set

GenAIEvalResults

Defines the results of an LLM eval metric

errored_tasks property

errored_tasks: List[str]

Get a list of record IDs that had errors during evaluation

failed_count property

failed_count: int

Get the count of failed evaluations

histograms property

histograms: Optional[Dict[str, Histogram]]

Get histograms for all calculated features (metrics, embeddings, similarities)

successful_count property

successful_count: int

Get the count of successful evaluations

as_table

as_table(show_tasks: bool = False) -> str

Pretty print the workflow or task results as a table

Parameters:

Name Type Description Default
show_tasks bool

Whether to show individual task results or just the workflow summary. Default is False meaning only the workflow summary is shown.

False
Source code in python/scouter/stubs.pyi
def as_table(self, show_tasks: bool = False) -> str:
    """Pretty print the workflow or task results as a table

    Args:
        show_tasks (bool):
            Whether to show individual task results or just the workflow summary. Default is False
            meaning only the workflow summary is shown.

    """

compare_to

compare_to(
    baseline: GenAIEvalResults, regression_threshold: float
) -> ComparisonResults

Compare the current evaluation results to a baseline with a regression threshold.

Parameters:

Name Type Description Default
baseline GenAIEvalResults

The baseline evaluation results to compare against.

required
regression_threshold float

The threshold for considering a regression significant.

required

Returns:

Type Description
ComparisonResults

ComparisonResults

Source code in python/scouter/stubs.pyi
def compare_to(self, baseline: "GenAIEvalResults", regression_threshold: float) -> ComparisonResults:
    """Compare the current evaluation results to a baseline with a regression threshold.

    Args:
        baseline (GenAIEvalResults):
            The baseline evaluation results to compare against.
        regression_threshold (float):
            The threshold for considering a regression significant.

    Returns:
        ComparisonResults
    """

model_dump_json

model_dump_json() -> str

Dump the results as a JSON string

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Dump the results as a JSON string"""

model_validate_json staticmethod

model_validate_json(json_string: str) -> GenAIEvalResults

Validate and create a GenAIEvalResults instance from a JSON string

Parameters:

Name Type Description Default
json_string str

JSON string to validate and create the GenAIEvalResults instance from.

required
Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate_json(json_string: str) -> "GenAIEvalResults":
    """Validate and create an GenAIEvalResults instance from a JSON string

    Args:
        json_string (str):
            JSON string to validate and create the GenAIEvalResults instance from.
    """

to_dataframe

to_dataframe(polars: bool = False) -> Any

Convert the results to a Pandas or Polars DataFrame.

Parameters:

Name Type Description Default
polars bool

Whether to return a Polars DataFrame. If False, a Pandas DataFrame will be returned.

False

Returns:

Name Type Description
DataFrame Any

A Pandas or Polars DataFrame containing the results.

Source code in python/scouter/stubs.pyi
def to_dataframe(self, polars: bool = False) -> Any:
    """
    Convert the results to a Pandas or Polars DataFrame.

    Args:
        polars (bool):
            Whether to return a Polars DataFrame. If False, a Pandas DataFrame will be returned.

    Returns:
        DataFrame:
            A Pandas or Polars DataFrame containing the results.
    """

GenAIEvalSet

Evaluation set for a specific evaluation run

created_at property

created_at: datetime

Get the creation timestamp of this evaluation set

duration_ms property

duration_ms: int

Get the duration of the evaluation set in milliseconds

failed_tasks property

failed_tasks: int

Get the number of tasks that failed in this evaluation set

pass_rate property

pass_rate: float

Get the pass rate (percentage of passed tasks) in this evaluation set

passed_tasks property

passed_tasks: int

Get the number of tasks that passed in this evaluation set

record_uid property

record_uid: str

Get the unique identifier for the records in this evaluation set

records property

records: List[GenAIEvalTaskResult]

Get the list of task results in this evaluation set

total_tasks property

total_tasks: int

Get the total number of tasks evaluated in this set
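
The summary properties are related by simple arithmetic. A sketch of how the derived fields fit together, assuming pass_rate is expressed as a percentage of total tasks (the docstring suggests but does not state this precisely):

```python
def summarize(passed_tasks: int, failed_tasks: int) -> dict:
    """Derive the summary fields exposed on GenAIEvalSet from raw counts.
    Assumes pass_rate is a percentage of total tasks."""
    total_tasks = passed_tasks + failed_tasks
    pass_rate = 100.0 * passed_tasks / total_tasks if total_tasks else 0.0
    return {"total_tasks": total_tasks, "pass_rate": pass_rate}

print(summarize(8, 2))  # {'total_tasks': 10, 'pass_rate': 80.0}
```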

as_table

as_table(show_tasks: bool = False) -> str

Pretty print the evaluation workflow or task results as a table

Parameters:

Name Type Description Default
show_tasks bool

Whether to show individual task results or just the summary. Default is False, meaning only the workflow summary is shown.

False
Source code in python/scouter/stubs.pyi
def as_table(self, show_tasks: bool = False) -> str:
    """Pretty print the evaluation workflow or task results as a table

    Args:
        show_tasks (bool):
            Whether to show individual task results or just the summary. Default is False,
            meaning only the workflow summary is shown.
    """

GenAIEvalTaskResult

Individual task result from an LLM evaluation run

actual property

actual: Any

Get the actual value that was evaluated.

Returns:

Type Description
Any

The actual value as a Python object (deserialized from JSON).

created_at property

created_at: datetime

Get the creation timestamp of this task result

expected property

expected: Any

Get the expected value for comparison.

Returns:

Type Description
Any

The expected value as a Python object (deserialized from JSON).

field_path property

field_path: Optional[str]

Get the field path used for value extraction, if any

message property

message: str

Get the evaluation result message

operator property

operator: ComparisonOperator

Get the comparison operator used in the evaluation

passed property

passed: bool

Check if the task evaluation passed

record_uid property

record_uid: str

Get the unique identifier for the record associated with this task result

task_id property

task_id: str

Get the unique identifier for the evaluation task

task_type property

task_type: EvaluationTaskType

Get the type of evaluation task (Assertion, LLMJudge, or HumanValidation)

value property

value: float

Get the evaluated value from the task

model_dump_json

model_dump_json() -> str

Serialize the task result to JSON string

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Serialize the task result to JSON string"""

GenerateContentResponse

Response from content generation.

Complete response including candidates, usage, and feedback.

Examples:

>>> response = GenerateContentResponse(
...     candidates=[Candidate(...)],
...     usage_metadata=UsageMetadata(...),
...     model_version="gemini-1.5-pro-002"
... )

candidates property

candidates: List[Candidate]

Generated candidates.

create_time property

create_time: Optional[str]

Request timestamp.

model_version property

model_version: Optional[str]

Model version used.

prompt_feedback property

prompt_feedback: Optional[PromptFeedback]

Prompt feedback (if blocked).

response_id property

response_id: Optional[str]

Response identifier.

usage_metadata property

usage_metadata: Optional[UsageMetadata]

Token usage metadata.

GenerationConfig

GenerationConfig(
    stop_sequences: Optional[List[str]] = None,
    response_mime_type: Optional[str] = None,
    response_json_schema: Optional[Any] = None,
    response_modalities: Optional[List[Modality]] = None,
    thinking_config: Optional[GeminiThinkingConfig] = None,
    temperature: Optional[float] = None,
    top_p: Optional[float] = None,
    top_k: Optional[int] = None,
    candidate_count: Optional[int] = None,
    max_output_tokens: Optional[int] = None,
    response_logprobs: Optional[bool] = None,
    logprobs: Optional[int] = None,
    presence_penalty: Optional[float] = None,
    frequency_penalty: Optional[float] = None,
    seed: Optional[int] = None,
    audio_timestamp: Optional[bool] = None,
    media_resolution: Optional[MediaResolution] = None,
    speech_config: Optional[SpeechConfig] = None,
    enable_affective_dialog: Optional[bool] = None,
    enable_enhanced_civic_answers: Optional[bool] = None,
    image_config: Optional[ImageConfig] = None,
)

Configuration for content generation behavior.

Controls all aspects of how the model generates responses including sampling parameters, output format, modalities, and more.

Examples:

>>> # Basic text generation
>>> config = GenerationConfig(
...     temperature=0.7,
...     max_output_tokens=1024,
...     top_p=0.95
... )
>>> # Structured JSON output
>>> config = GenerationConfig(
...     response_mime_type="application/json",
...     response_json_schema={"type": "object", ...},
...     temperature=0.3
... )
>>> # Multi-modal with thinking
>>> config = GenerationConfig(
...     response_modalities=[Modality.Text, Modality.Image],
...     thinking_config=ThinkingConfig(
...         include_thoughts=True,
...         thinking_level=ThinkingLevel.High
...     ),
...     temperature=0.5
... )

Parameters:

Name Type Description Default
stop_sequences Optional[List[str]]

Sequences that stop generation

None
response_mime_type Optional[str]

MIME type for response (e.g., "application/json")

None
response_json_schema Optional[Any]

JSON schema for structured output

None
response_modalities Optional[List[Modality]]

Output modalities to include

None
thinking_config Optional[GeminiThinkingConfig]

Configuration for thinking/reasoning

None
temperature Optional[float]

Sampling temperature (0.0-2.0)

None
top_p Optional[float]

Nucleus sampling threshold

None
top_k Optional[int]

Top-k sampling threshold

None
candidate_count Optional[int]

Number of candidates to generate

None
max_output_tokens Optional[int]

Maximum tokens to generate

None
response_logprobs Optional[bool]

Whether to return log probabilities

None
logprobs Optional[int]

Number of top log probabilities to return

None
presence_penalty Optional[float]

Penalty for token presence (-2.0 to 2.0)

None
frequency_penalty Optional[float]

Penalty for token frequency (-2.0 to 2.0)

None
seed Optional[int]

Random seed for deterministic generation

None
audio_timestamp Optional[bool]

Whether to include audio timestamps

None
media_resolution Optional[MediaResolution]

Resolution for media processing

None
speech_config Optional[SpeechConfig]

Configuration for speech synthesis

None
enable_affective_dialog Optional[bool]

Enable emotion detection/adaptation

None
enable_enhanced_civic_answers Optional[bool]

Enable enhanced civic answers

None
image_config Optional[ImageConfig]

Configuration for image generation

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    stop_sequences: Optional[List[str]] = None,
    response_mime_type: Optional[str] = None,
    response_json_schema: Optional[Any] = None,
    response_modalities: Optional[List[Modality]] = None,
    thinking_config: Optional[GeminiThinkingConfig] = None,
    temperature: Optional[float] = None,
    top_p: Optional[float] = None,
    top_k: Optional[int] = None,
    candidate_count: Optional[int] = None,
    max_output_tokens: Optional[int] = None,
    response_logprobs: Optional[bool] = None,
    logprobs: Optional[int] = None,
    presence_penalty: Optional[float] = None,
    frequency_penalty: Optional[float] = None,
    seed: Optional[int] = None,
    audio_timestamp: Optional[bool] = None,
    media_resolution: Optional[MediaResolution] = None,
    speech_config: Optional[SpeechConfig] = None,
    enable_affective_dialog: Optional[bool] = None,
    enable_enhanced_civic_answers: Optional[bool] = None,
    image_config: Optional[ImageConfig] = None,
) -> None:
    """Initialize generation configuration.

    Args:
        stop_sequences (Optional[List[str]]):
            Sequences that stop generation
        response_mime_type (Optional[str]):
            MIME type for response (e.g., "application/json")
        response_json_schema (Optional[Any]):
            JSON schema for structured output
        response_modalities (Optional[List[Modality]]):
            Output modalities to include
        thinking_config (Optional[GeminiThinkingConfig]):
            Configuration for thinking/reasoning
        temperature (Optional[float]):
            Sampling temperature (0.0-2.0)
        top_p (Optional[float]):
            Nucleus sampling threshold
        top_k (Optional[int]):
            Top-k sampling threshold
        candidate_count (Optional[int]):
            Number of candidates to generate
        max_output_tokens (Optional[int]):
            Maximum tokens to generate
        response_logprobs (Optional[bool]):
            Whether to return log probabilities
        logprobs (Optional[int]):
            Number of top log probabilities to return
        presence_penalty (Optional[float]):
            Penalty for token presence (-2.0 to 2.0)
        frequency_penalty (Optional[float]):
            Penalty for token frequency (-2.0 to 2.0)
        seed (Optional[int]):
            Random seed for deterministic generation
        audio_timestamp (Optional[bool]):
            Whether to include audio timestamps
        media_resolution (Optional[MediaResolution]):
            Resolution for media processing
        speech_config (Optional[SpeechConfig]):
            Configuration for speech synthesis
        enable_affective_dialog (Optional[bool]):
            Enable emotion detection/adaptation
        enable_enhanced_civic_answers (Optional[bool]):
            Enable enhanced civic answers
        image_config (Optional[ImageConfig]):
            Configuration for image generation
    """

audio_timestamp property

audio_timestamp: Optional[bool]

Whether to include audio timestamps.

candidate_count property

candidate_count: Optional[int]

Number of candidates to generate.

enable_affective_dialog property

enable_affective_dialog: Optional[bool]

Whether affective dialog is enabled.

frequency_penalty property

frequency_penalty: Optional[float]

Frequency penalty.

image_config property

image_config: Optional[ImageConfig]

Image configuration.

logprobs property

logprobs: Optional[int]

Number of top log probabilities.

max_output_tokens property

max_output_tokens: Optional[int]

Maximum output tokens.

media_resolution property

media_resolution: Optional[MediaResolution]

Media resolution.

presence_penalty property

presence_penalty: Optional[float]

Presence penalty.

response_json_schema property

response_json_schema: Optional[Any]

JSON schema for structured output.

response_logprobs property

response_logprobs: Optional[bool]

Whether to return log probabilities.

response_mime_type property

response_mime_type: Optional[str]

The response MIME type.

response_modalities property

response_modalities: Optional[List[Modality]]

Output modalities.

seed property

seed: Optional[int]

Random seed.

speech_config property

speech_config: Optional[SpeechConfig]

Speech configuration.

stop_sequences property

stop_sequences: Optional[List[str]]

Stop sequences that halt generation.

temperature property

temperature: Optional[float]

Sampling temperature.

thinking_config property

thinking_config: Optional[GeminiThinkingConfig]

Thinking configuration.

top_k property

top_k: Optional[int]

Top-k sampling threshold.

top_p property

top_p: Optional[float]

Nucleus sampling threshold.

GetProfileRequest

GetProfileRequest(
    name: str,
    space: str,
    version: str,
    drift_type: DriftType,
)

Parameters:

Name Type Description Default
name str

Profile name

required
space str

Profile space

required
version str

Profile version

required
drift_type DriftType

Profile drift type. (A space/name/version can be associated with more than one drift type.)

required
Source code in python/scouter/stubs.pyi
def __init__(self, name: str, space: str, version: str, drift_type: DriftType) -> None:
    """Initialize get profile request

    Args:
        name:
            Profile name
        space:
            Profile space
        version:
            Profile version
        drift_type:
            Profile drift type. (A space/name/version can be associated with more than one drift type.)
    """

GoogleDate

Date representation.

Simple date with year, month, and day.

Examples:

>>> date = GoogleDate(year=2024, month=12, day=25)

day property

day: Optional[int]

Day of month.

month property

month: Optional[int]

Month (1-12).

year property

year: Optional[int]

Year.

GoogleMaps

GoogleMaps(enable_widget: bool = False)

Google Maps tool configuration.

Configures Google Maps integration.

Examples:

>>> maps = GoogleMaps(enable_widget=True)

Parameters:

Name Type Description Default
enable_widget bool

Whether to enable widget context token

False
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    enable_widget: bool = False,
) -> None:
    """Initialize Google Maps configuration.

    Args:
        enable_widget (bool):
            Whether to enable widget context token
    """

enable_widget property

enable_widget: bool

Whether widget is enabled.

GoogleSearch

Google Search tool configuration (Gemini API).

Configures Google Search with time range filtering.

Examples:

>>> search = GoogleSearch(
...     time_range_filter=Interval(
...         start_time="2024-01-01T00:00:00Z",
...         end_time="2024-12-31T23:59:59Z"
...     )
... )

time_range_filter property

time_range_filter: Interval

The time range filter.

GoogleSearchNum

GoogleSearchNum(
    gemini_search: Optional[GoogleSearch] = None,
    vertex_search: Optional[VertexGoogleSearch] = None,
)

Union type for Google Search configurations.

Represents either Gemini or Vertex Google Search configuration.

Examples:

>>> # Gemini search
>>> search = GoogleSearchNum(
...     gemini_search=GoogleSearch(...)
... )
>>> # Vertex search
>>> search = GoogleSearchNum(
...     vertex_search=VertexGoogleSearch(...)
... )

Exactly one of gemini_search or vertex_search must be provided.

Parameters:

Name Type Description Default
gemini_search Optional[GoogleSearch]

Gemini API search configuration

None
vertex_search Optional[VertexGoogleSearch]

Vertex API search configuration

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    gemini_search: Optional[GoogleSearch] = None,
    vertex_search: Optional[VertexGoogleSearch] = None,
) -> None:
    """Initialize Google Search configuration.

    Exactly one of gemini_search or vertex_search must be provided.

    Args:
        gemini_search (Optional[GoogleSearch]):
            Gemini API search configuration
        vertex_search (Optional[VertexGoogleSearch]):
            Vertex API search configuration
    """

GoogleSearchRetrieval

GoogleSearchRetrieval(
    dynamic_retrieval_config: Optional[
        DynamicRetrievalConfig
    ] = None,
)

Google Search retrieval tool configuration.

Configures Google Search with dynamic retrieval.

Examples:

>>> retrieval = GoogleSearchRetrieval(
...     dynamic_retrieval_config=DynamicRetrievalConfig(
...         mode=DynamicRetrievalMode.ModeDynamic
...     )
... )

Parameters:

Name Type Description Default
dynamic_retrieval_config Optional[DynamicRetrievalConfig]

Dynamic retrieval configuration

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    dynamic_retrieval_config: Optional[DynamicRetrievalConfig] = None,
) -> None:
    """Initialize Google Search retrieval configuration.

    Args:
        dynamic_retrieval_config (Optional[DynamicRetrievalConfig]):
            Dynamic retrieval configuration
    """

dynamic_retrieval_config property

dynamic_retrieval_config: Optional[DynamicRetrievalConfig]

The dynamic retrieval configuration.

GoogleServiceAccountConfig

GoogleServiceAccountConfig(
    service_account: Optional[str] = None,
)

Google Service Account authentication configuration.

Configures service account authentication.

Examples:

>>> config = GoogleServiceAccountConfig(
...     service_account="my-service-account@my-project.iam.gserviceaccount.com"
... )

Parameters:

Name Type Description Default
service_account Optional[str]

Service account email

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    service_account: Optional[str] = None,
) -> None:
    """Initialize service account configuration.

    Args:
        service_account (Optional[str]):
            Service account email
    """

service_account property

service_account: Optional[str]

The service account email.

Grammar

Grammar(definition: str, syntax: str)

Grammar definition for structured custom tool outputs.

This class defines a grammar that constrains custom tool outputs to follow specific syntax rules.

Examples:

>>> grammar = Grammar(
...     definition="number: /[0-9]+/",
...     syntax="lark"
... )
>>> grammar.syntax
'lark'

Parameters:

Name Type Description Default
definition str

The grammar definition

required
syntax str

Grammar syntax type ("lark" or "regex")

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    definition: str,
    syntax: str,
) -> None:
    """Initialize grammar definition.

    Args:
        definition (str):
            The grammar definition
        syntax (str):
            Grammar syntax type ("lark" or "regex")
    """

definition property

definition: str

The grammar definition.

syntax property

syntax: str

The grammar syntax type.

GrammarFormat

GrammarFormat(grammar: Grammar, type: str)

Grammar-based format for custom tool outputs.

This class wraps a grammar definition to create a structured output format for custom tools.

Examples:

>>> grammar = Grammar(definition="...", syntax="lark")
>>> format = GrammarFormat(grammar=grammar, type="grammar")

Parameters:

Name Type Description Default
grammar Grammar

The grammar definition

required
type str

Format type (typically "grammar")

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    grammar: Grammar,
    type: str,
) -> None:
    """Initialize grammar format.

    Args:
        grammar (Grammar):
            The grammar definition
        type (str):
            Format type (typically "grammar")
    """

grammar property

grammar: Grammar

The grammar definition.

type property

type: str

The format type.

GroundingChunk

Grounding chunk wrapper.

Wraps a grounding chunk source.

Examples:

>>> chunk = GroundingChunk(
...     chunk_type=GroundingChunkType.Web(Web(...))
... )

chunk_type property

chunk_type: GroundingChunkType

The chunk type.

GroundingChunkType

Union type for grounding chunk sources.

Represents different types of grounding sources.

Examples:

>>> # Web source
>>> chunk = GroundingChunkType.Web(Web(...))
>>> # Retrieved context
>>> chunk = GroundingChunkType.RetrievedContext(RetrievedContext(...))
>>> # Maps source
>>> chunk = GroundingChunkType.Maps(Maps(...))

GroundingMetadata

Grounding metadata for a response.

Contains all grounding information including sources, supports, and search queries.

Examples:

>>> metadata = GroundingMetadata(
...     web_search_queries=["query1", "query2"],
...     grounding_chunks=[GroundingChunk(...)],
...     grounding_supports=[GroundingSupport(...)]
... )

google_maps_widget_context_token property

google_maps_widget_context_token: Optional[str]

Maps widget context token.

grounding_chunks property

grounding_chunks: Optional[List[GroundingChunk]]

Grounding source chunks.

grounding_supports property

grounding_supports: Optional[List[GroundingSupport]]

Grounding support information.

retrieval_metadata property

retrieval_metadata: Optional[RetrievalMetadata]

Retrieval metadata.

search_entry_point property

search_entry_point: Optional[SearchEntryPoint]

Search entry point.

source_flagging_uris property

source_flagging_uris: Optional[List[SourceFlaggingUri]]

Flagged source URIs.

web_search_queries property

web_search_queries: Optional[List[str]]

Web search queries used.

GroundingSupport

Grounding support information.

Links generated content to source materials with confidence scores.

Examples:

>>> support = GroundingSupport(
...     grounding_chunk_indices=[0, 1, 2],
...     confidence_scores=[0.9, 0.85, 0.8],
...     segment=Segment(...)
... )

confidence_scores property

confidence_scores: Optional[List[float]]

Confidence scores for citations.

grounding_chunk_indices property

grounding_chunk_indices: Optional[List[int]]

Indices into grounding chunks.

segment property

segment: Optional[Segment]

Content segment being supported.

GrpcConfig

GrpcConfig(
    server_uri: Optional[str] = None,
    username: Optional[str] = None,
    password: Optional[str] = None,
)

Parameters:

Name Type Description Default
server_uri Optional[str]

URL of the gRPC server to publish messages to. If not provided, the value of the SCOUTER_GRPC_URI environment variable is used.

None
username Optional[str]

Username for basic authentication. If not provided, the value of the SCOUTER_USERNAME environment variable is used.

None
password Optional[str]

Password for basic authentication. If not provided, the value of the SCOUTER_PASSWORD environment variable is used.

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    server_uri: Optional[str] = None,
    username: Optional[str] = None,
    password: Optional[str] = None,
) -> None:
    """gRPC configuration to use with the GrpcProducer.

    Args:
        server_uri:
            URL of the gRPC server to publish messages to.
            If not provided, the value of the SCOUTER_GRPC_URI environment variable is used.

        username:
            Username for basic authentication.
            If not provided, the value of the SCOUTER_USERNAME environment variable is used.

        password:
            Password for basic authentication.
            If not provided, the value of the SCOUTER_PASSWORD environment variable is used.
    """

GrpcSpanExporter

GrpcSpanExporter(
    batch_export: bool = True,
    export_config: Optional[OtelExportConfig] = None,
    sample_ratio: Optional[float] = None,
)

Exporter that sends spans to a gRPC endpoint.

Parameters:

Name Type Description Default
batch_export bool

Whether to use batch exporting. Defaults to True.

True
export_config Optional[OtelExportConfig]

Configuration for exporting spans.

None
sample_ratio Optional[float]

The sampling ratio for traces. If None, defaults to always sample.

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    batch_export: bool = True,
    export_config: Optional[OtelExportConfig] = None,
    sample_ratio: Optional[float] = None,
) -> None:
    """Initialize the GrpcSpanExporter.

    Args:
        batch_export (bool):
            Whether to use batch exporting. Defaults to True.
        export_config (Optional[OtelExportConfig]):
            Configuration for exporting spans.
        sample_ratio (Optional[float]):
            The sampling ratio for traces. If None, defaults to always sample.
    """

batch_export property

batch_export: bool

Get whether batch exporting is enabled.

compression property

compression: Optional[CompressionType]

Get the compression type used for exporting spans.

endpoint property

endpoint: Optional[str]

Get the gRPC endpoint for exporting spans.

protocol property

protocol: OtelProtocol

Get the protocol used for exporting spans.

sample_ratio property

sample_ratio: Optional[float]

Get the sampling ratio.

timeout property

timeout: Optional[int]

Get the timeout for gRPC requests in seconds.
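
The sample_ratio semantics (None means always sample) can be sketched as a head-based sampling decision. The use of a random draw here is an illustrative assumption about how ratio sampling is typically implemented, not the exporter's actual mechanism:

```python
import random
from typing import Optional

def should_sample(sample_ratio: Optional[float], rng: random.Random) -> bool:
    """None -> always sample; otherwise sample with the given probability."""
    if sample_ratio is None:
        return True
    return rng.random() < sample_ratio

rng = random.Random(0)
print(should_sample(None, rng))  # True: no ratio means always sample
print(should_sample(0.0, rng))   # False: ratio 0 never samples
print(should_sample(1.0, rng))   # True: ratio 1 always samples
```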

HarmBlockMethod

Method for blocking harmful content.

Specifies whether blocking decisions use probability or severity scores.

Examples:

>>> method = HarmBlockMethod.Probability
>>> method.value
'PROBABILITY'

HarmBlockMethodUnspecified class-attribute instance-attribute

HarmBlockMethodUnspecified = 'HarmBlockMethod'

Unspecified blocking method

Probability class-attribute instance-attribute

Probability = 'HarmBlockMethod'

Use probability scores for blocking decisions

Severity class-attribute instance-attribute

Severity = 'HarmBlockMethod'

Use severity scores for blocking decisions

HarmBlockThreshold

Thresholds for blocking harmful content.

Defines sensitivity levels for blocking content based on harm probability.

Examples:

>>> threshold = HarmBlockThreshold.BlockMediumAndAbove
>>> threshold.value
'BLOCK_MEDIUM_AND_ABOVE'

BlockLowAndAbove class-attribute instance-attribute

BlockLowAndAbove = 'HarmBlockThreshold'

Block content with low or higher harm probability

BlockMediumAndAbove class-attribute instance-attribute

BlockMediumAndAbove = 'HarmBlockThreshold'

Block content with medium or higher harm probability

BlockNone class-attribute instance-attribute

BlockNone = 'HarmBlockThreshold'

Do not block any content

BlockOnlyHigh class-attribute instance-attribute

BlockOnlyHigh = 'HarmBlockThreshold'

Block only high harm probability content

HarmBlockThresholdUnspecified class-attribute instance-attribute

HarmBlockThresholdUnspecified = 'HarmBlockThreshold'

Unspecified threshold

Off class-attribute instance-attribute

Off = 'HarmBlockThreshold'

Turn off safety filtering entirely
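
Combined with HarmProbability, each threshold induces a cutoff over the ordered probability levels. A sketch of how a blocking decision could be derived from the two enums — the rank values are illustrative assumptions, not the API's internal scoring:

```python
# Illustrative ranks for probability levels, lowest to highest.
PROBABILITY_RANK = {"NEGLIGIBLE": 0, "LOW": 1, "MEDIUM": 2, "HIGH": 3}

# Minimum rank at which each threshold blocks; None means never block.
THRESHOLD_MIN_RANK = {
    "BLOCK_LOW_AND_ABOVE": 1,
    "BLOCK_MEDIUM_AND_ABOVE": 2,
    "BLOCK_ONLY_HIGH": 3,
    "BLOCK_NONE": None,
}

def blocks(threshold: str, probability: str) -> bool:
    """True when content at the given probability level would be blocked."""
    min_rank = THRESHOLD_MIN_RANK[threshold]
    return min_rank is not None and PROBABILITY_RANK[probability] >= min_rank

print(blocks("BLOCK_MEDIUM_AND_ABOVE", "HIGH"))  # True
print(blocks("BLOCK_MEDIUM_AND_ABOVE", "LOW"))   # False
print(blocks("BLOCK_NONE", "HIGH"))              # False
```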

HarmCategory

Harm categories for safety filtering in Google/Gemini API.

Defines categories of potentially harmful content that can be detected and filtered by the model's safety systems.

Examples:

>>> category = HarmCategory.HarmCategoryHateSpeech
>>> category.value
'HARM_CATEGORY_HATE_SPEECH'

HarmCategoryDangerous class-attribute instance-attribute

HarmCategoryDangerous = 'HarmCategory'

Dangerous content

HarmCategoryDangerousContent class-attribute instance-attribute

HarmCategoryDangerousContent = 'HarmCategory'

Dangerous content (alternative)

HarmCategoryDerogatory class-attribute instance-attribute

HarmCategoryDerogatory = 'HarmCategory'

Derogatory content

HarmCategoryHarassment class-attribute instance-attribute

HarmCategoryHarassment = 'HarmCategory'

Harassment content

HarmCategoryHateSpeech class-attribute instance-attribute

HarmCategoryHateSpeech = 'HarmCategory'

Hate speech content

HarmCategoryMedical class-attribute instance-attribute

HarmCategoryMedical = 'HarmCategory'

Medical misinformation

HarmCategorySexual class-attribute instance-attribute

HarmCategorySexual = 'HarmCategory'

Sexual content

HarmCategorySexuallyExplicit class-attribute instance-attribute

HarmCategorySexuallyExplicit = 'HarmCategory'

Sexually explicit content

HarmCategoryToxicity class-attribute instance-attribute

HarmCategoryToxicity = 'HarmCategory'

Toxic content

HarmCategoryUnspecified class-attribute instance-attribute

HarmCategoryUnspecified = 'HarmCategory'

Unspecified harm category

HarmCategoryViolence class-attribute instance-attribute

HarmCategoryViolence = 'HarmCategory'

Violent content

HarmProbability

Probability level of harmful content.

Indicates the likelihood that content contains harmful material.

Examples:

>>> prob = HarmProbability.Medium
>>> prob.value
'MEDIUM'

HarmProbabilityUnspecified class-attribute instance-attribute

HarmProbabilityUnspecified = 'HarmProbability'

Unspecified probability

High class-attribute instance-attribute

High = 'HarmProbability'

High harm probability

Low class-attribute instance-attribute

Low = 'HarmProbability'

Low harm probability

Medium class-attribute instance-attribute

Medium = 'HarmProbability'

Medium harm probability

Negligible class-attribute instance-attribute

Negligible = 'HarmProbability'

Negligible harm probability

HarmSeverity

Severity level of harmful content.

Indicates the severity of potentially harmful content.

Examples:

>>> severity = HarmSeverity.HarmSeverityMedium
>>> severity.value
'HARM_SEVERITY_MEDIUM'

HarmSeverityHigh class-attribute instance-attribute

HarmSeverityHigh = 'HarmSeverity'

High severity

HarmSeverityLow class-attribute instance-attribute

HarmSeverityLow = 'HarmSeverity'

Low severity

HarmSeverityMedium class-attribute instance-attribute

HarmSeverityMedium = 'HarmSeverity'

Medium severity

HarmSeverityNegligible class-attribute instance-attribute

HarmSeverityNegligible = 'HarmSeverity'

Negligible severity

HarmSeverityUnspecified class-attribute instance-attribute

HarmSeverityUnspecified = 'HarmSeverity'

Unspecified severity

Histogram

bin_counts property

bin_counts: List[int]

Bin counts

bins property

bins: List[float]

Bin values
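
The relationship between bins and bin_counts is not stated in the stub; a common convention is that bins holds the left edges of equal-width intervals, one per count. A sketch of building such a pair from raw values — the edge convention here is an assumption:

```python
def histogram(values: list, num_bins: int) -> tuple:
    """Equal-width histogram: returns (left bin edges, counts), one count per bin."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / num_bins or 1.0  # avoid zero width when all values equal
    bin_counts = [0] * num_bins
    for v in values:
        idx = min(int((v - lo) / width), num_bins - 1)  # clamp max value into last bin
        bin_counts[idx] += 1
    bins = [lo + i * width for i in range(num_bins)]
    return bins, bin_counts

bins, counts = histogram([1.0, 2.0, 2.5, 4.0], 2)
print(bins, counts)  # [1.0, 2.5] [2, 2]
```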

HttpBasicAuthConfig

HttpBasicAuthConfig(credential_secret: str)

HTTP Basic authentication configuration.

Configures HTTP Basic authentication for external APIs.

Examples:

>>> config = HttpBasicAuthConfig(
...     credential_secret="projects/my-project/secrets/credentials"
... )

Parameters:

Name Type Description Default
credential_secret str

Secret manager resource name for credentials

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    credential_secret: str,
) -> None:
    """Initialize HTTP Basic auth configuration.

    Args:
        credential_secret (str):
            Secret manager resource name for credentials
    """

credential_secret property

credential_secret: str

The credential secret resource name.

HttpConfig

HttpConfig(
    server_uri: Optional[str] = None,
    username: Optional[str] = None,
    password: Optional[str] = None,
    auth_token: Optional[str] = None,
)

Parameters:

Name Type Description Default
server_uri Optional[str]

URL of the HTTP server to publish messages to. If not provided, the value of the HTTP_server_uri environment variable is used.

None
username Optional[str]

Username for basic authentication.

None
password Optional[str]

Password for basic authentication.

None
auth_token Optional[str]

Authorization token to use for authentication.

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    server_uri: Optional[str] = None,
    username: Optional[str] = None,
    password: Optional[str] = None,
    auth_token: Optional[str] = None,
) -> None:
    """HTTP configuration to use with the HTTPProducer.

    Args:
        server_uri:
            URL of the HTTP server to publish messages to.
            If not provided, the value of the HTTP_server_uri environment variable is used.

        username:
            Username for basic authentication.

        password:
            Password for basic authentication.

        auth_token:
            Authorization token to use for authentication.

    """

HttpElementLocation

Location of HTTP authentication element.

Specifies where authentication information appears in HTTP requests.

Examples:

>>> location = HttpElementLocation.HttpInHeader
>>> location.value
'HTTP_IN_HEADER'

HttpInBody class-attribute instance-attribute

HttpInBody = 'HttpElementLocation'

In request body

HttpInCookie class-attribute instance-attribute

HttpInCookie = 'HttpElementLocation'

In cookies

HttpInHeader class-attribute instance-attribute

HttpInHeader = 'HttpElementLocation'

In HTTP headers

HttpInPath class-attribute instance-attribute

HttpInPath = 'HttpElementLocation'

In URL path

HttpInQuery class-attribute instance-attribute

HttpInQuery = 'HttpElementLocation'

In query parameters

HttpInUnspecified class-attribute instance-attribute

HttpInUnspecified = 'HttpElementLocation'

Unspecified location

HttpSpanExporter

HttpSpanExporter(
    batch_export: bool = True,
    export_config: Optional[OtelExportConfig] = None,
    sample_ratio: Optional[float] = None,
)

Exporter that sends spans to an HTTP endpoint.

Parameters:

Name Type Description Default
batch_export bool

Whether to use batch exporting. Defaults to True.

True
export_config Optional[OtelExportConfig]

Configuration for exporting spans.

None
sample_ratio Optional[float]

The sampling ratio for traces. If None, defaults to always sample.

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    batch_export: bool = True,
    export_config: Optional[OtelExportConfig] = None,
    sample_ratio: Optional[float] = None,
) -> None:
    """Initialize the HttpSpanExporter.

    Args:
        batch_export (bool):
            Whether to use batch exporting. Defaults to True.
        export_config (Optional[OtelExportConfig]):
            Configuration for exporting spans.
        sample_ratio (Optional[float]):
            The sampling ratio for traces. If None, defaults to always sample.
    """

batch_export property

batch_export: bool

Get whether batch exporting is enabled.

compression property

compression: Optional[CompressionType]

Get the compression type used for exporting spans.

endpoint property

endpoint: Optional[str]

Get the HTTP endpoint for exporting spans.

headers property

headers: Optional[dict[str, str]]

Get the HTTP headers used for exporting spans.

protocol property

protocol: OtelProtocol

Get the protocol used for exporting spans.

sample_ratio property

sample_ratio: Optional[float]

Get the sampling ratio.

timeout property

timeout: Optional[int]

Get the timeout for HTTP requests in seconds.
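
The `sample_ratio` semantics above ("If None, defaults to always sample") amount to a head-sampling decision per trace. A minimal pure-Python sketch of that decision (an illustration only, not the exporter's actual sampler; `should_sample` and `draw` are hypothetical names):

```python
from typing import Optional


def should_sample(sample_ratio: Optional[float], draw: float) -> bool:
    """Head-sampling sketch: `draw` is a uniform value in [0, 1)
    (e.g. derived from the trace ID). A ratio of None means
    always sample, matching the documented default."""
    if sample_ratio is None:
        return True
    return draw < sample_ratio


should_sample(None, 0.99)  # always sampled when ratio is None
should_sample(0.25, 0.10)  # sampled: draw falls below the ratio
should_sample(0.25, 0.90)  # dropped
```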

ImageBlockParam

ImageBlockParam(
    source: Any,
    cache_control: Optional[CacheControl] = None,
)

Image content block parameter.

Image content with source and optional cache control.

Examples:

>>> # Base64 image
>>> source = Base64ImageSource(media_type="image/jpeg", data="...")
>>> block = ImageBlockParam(source=source, cache_control=None)
>>>
>>> # URL image
>>> source = UrlImageSource(url="https://example.com/image.jpg")
>>> block = ImageBlockParam(source=source, cache_control=None)

Parameters:

Name Type Description Default
source Any

Image source (Base64ImageSource or UrlImageSource)

required
cache_control Optional[CacheControl]

Cache control settings

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    source: Any,
    cache_control: Optional["CacheControl"] = None,
) -> None:
    """Initialize image block parameter.

    Args:
        source (Any):
            Image source (Base64ImageSource or UrlImageSource)
        cache_control (Optional[CacheControl]):
            Cache control settings
    """

cache_control property

cache_control: Optional[CacheControl]

Cache control settings.

source property

source: Any

Image source.

type property

type: str

Content type (always 'image').

ImageConfig

ImageConfig(
    aspect_ratio: Optional[str] = None,
    image_size: Optional[str] = None,
)

Configuration for image generation features.

Controls aspect ratio and size for generated images.

Examples:

>>> # Generate widescreen 4K image
>>> config = ImageConfig(
...     aspect_ratio="16:9",
...     image_size="4K"
... )
>>> # Generate square 1K image
>>> config = ImageConfig(
...     aspect_ratio="1:1",
...     image_size="1K"
... )

Parameters:

Name Type Description Default
aspect_ratio Optional[str]

Desired aspect ratio (e.g., "16:9", "1:1")

None
image_size Optional[str]

Image size ("1K", "2K", "4K")

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    aspect_ratio: Optional[str] = None,
    image_size: Optional[str] = None,
) -> None:
    """Initialize image configuration.

    Args:
        aspect_ratio (Optional[str]):
            Desired aspect ratio (e.g., "16:9", "1:1")
        image_size (Optional[str]):
            Image size ("1K", "2K", "4K")
    """

aspect_ratio property

aspect_ratio: Optional[str]

The image aspect ratio.

image_size property

image_size: Optional[str]

The image size.

ImageContentPart

ImageContentPart(url: str, detail: Optional[str] = None)

Image content part for OpenAI chat messages.

This class represents an image as part of a message's content.

Examples:

>>> image_part = ImageContentPart(
...     url="https://example.com/image.jpg",
...     detail="high"
... )
>>> image_part.type
'image_url'

Parameters:

Name Type Description Default
url str

Image URL (can be HTTP URL or data URL)

required
detail Optional[str]

Detail level ("low", "high", or "auto")

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    url: str,
    detail: Optional[str] = None,
) -> None:
    """Initialize image content part.

    Args:
        url (str):
            Image URL (can be HTTP URL or data URL)
        detail (Optional[str]):
            Detail level ("low", "high", or "auto")
    """

image_url property

image_url: ImageUrl

The image URL reference.

type property

type: str

The content part type (always 'image_url').

ImageUrl

ImageUrl(url: str, detail: Optional[str] = None)

Image URL reference for OpenAI chat messages.

This class represents an image by URL with optional detail level.

Examples:

>>> # Standard detail
>>> image = ImageUrl(url="https://example.com/image.jpg")
>>>
>>> # High detail
>>> image = ImageUrl(
...     url="https://example.com/image.jpg",
...     detail="high"
... )

Parameters:

Name Type Description Default
url str

Image URL (can be HTTP URL or data URL)

required
detail Optional[str]

Detail level ("low", "high", or "auto")

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    url: str,
    detail: Optional[str] = None,
) -> None:
    """Initialize image URL.

    Args:
        url (str):
            Image URL (can be HTTP URL or data URL)
        detail (Optional[str]):
            Detail level ("low", "high", or "auto")
    """

detail property

detail: Optional[str]

The detail level.

url property

url: str

The image URL.

InnerAllowedTools

Inner configuration for allowed tools.

This class contains the actual list of allowed tools and the mode.

Examples:

>>> tools = [ToolDefinition("get_weather")]
>>> inner = InnerAllowedTools(mode=AllowedToolsMode.Auto, tools=tools)

mode property

mode: AllowedToolsMode

The mode for allowed tools.

tools property

tools: List[ToolDefinition]

The list of allowed tools.

InputAudioContentPart

InputAudioContentPart(data: str, format: str)

Audio content part for OpenAI chat messages.

This class represents audio input as part of a message's content.

Examples:

>>> audio_part = InputAudioContentPart(
...     data="base64_encoded_audio",
...     format="wav"
... )
>>> audio_part.type
'input_audio'

Parameters:

Name Type Description Default
data str

Base64 encoded audio data

required
format str

Audio format (e.g., "wav", "mp3")

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    data: str,
    format: str,
) -> None:
    """Initialize audio content part.

    Args:
        data (str):
            Base64 encoded audio data
        format (str):
            Audio format (e.g., "wav", "mp3")
    """

input_audio property

input_audio: InputAudioData

The audio data.

type property

type: str

The content part type (always 'input_audio').

InputAudioData

InputAudioData(data: str, format: str)

Audio data for input in OpenAI chat messages.

This class represents audio input data with format specification.

Examples:

>>> audio_data = InputAudioData(
...     data="base64_encoded_audio",
...     format="wav"
... )
>>> audio_data.format
'wav'

Parameters:

Name Type Description Default
data str

Base64 encoded audio data

required
format str

Audio format (e.g., "wav", "mp3")

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    data: str,
    format: str,
) -> None:
    """Initialize input audio data.

    Args:
        data (str):
            Base64 encoded audio data
        format (str):
            Audio format (e.g., "wav", "mp3")
    """

data property

data: str

The base64 encoded audio data.

format property

format: str

The audio format.

Interval

Time interval specification.

Represents a time range with start and end times.

Examples:

>>> interval = Interval(
...     start_time="2024-01-01T00:00:00Z",
...     end_time="2024-12-31T23:59:59Z"
... )

end_time property

end_time: str

The end time.

start_time property

start_time: str

The start time.

KafkaConfig

KafkaConfig(
    username: Optional[str] = None,
    password: Optional[str] = None,
    brokers: Optional[str] = None,
    topic: Optional[str] = None,
    compression_type: Optional[str] = None,
    message_timeout_ms: int = 600000,
    message_max_bytes: int = 2097164,
    log_level: LogLevel = LogLevel.Info,
    config: Dict[str, str] = {},
    max_retries: int = 3,
)

Kafka configuration for connecting to and publishing messages to Kafka brokers.

This configuration supports both authenticated (SASL) and unauthenticated connections. When credentials are provided, SASL authentication is automatically enabled with secure defaults.

Authentication Priority (first match wins):
  1. Direct parameters (username/password)
  2. Environment variables (KAFKA_USERNAME/KAFKA_PASSWORD)
  3. Configuration dictionary (sasl.username/sasl.password)

SASL Security Defaults
  • security.protocol: "SASL_SSL" (override via KAFKA_SECURITY_PROTOCOL env var)
  • sasl.mechanism: "PLAIN" (override via KAFKA_SASL_MECHANISM env var)
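
The "first match wins" resolution above can be sketched in plain Python (an illustration of the documented priority order, not Scouter's actual implementation; `resolve_sasl_credentials` is a hypothetical name):

```python
import os
from typing import Dict, Optional, Tuple


def resolve_sasl_credentials(
    username: Optional[str],
    password: Optional[str],
    config: Dict[str, str],
) -> Optional[Tuple[str, str]]:
    """Sketch of the documented priority: direct parameters, then
    environment variables, then the config dictionary. Returns
    None if no credentials are found (unauthenticated)."""
    # 1. Direct parameters win outright.
    if username is not None and password is not None:
        return username, password
    # 2. Fall back to environment variables.
    env_user = os.environ.get("KAFKA_USERNAME")
    env_pass = os.environ.get("KAFKA_PASSWORD")
    if env_user and env_pass:
        return env_user, env_pass
    # 3. Finally, check the config dictionary.
    cfg_user = config.get("sasl.username")
    cfg_pass = config.get("sasl.password")
    if cfg_user and cfg_pass:
        return cfg_user, cfg_pass
    return None


# Direct parameters take precedence over config dictionary values:
creds = resolve_sasl_credentials("my_user", "my_password", {"sasl.username": "other"})
```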

Parameters:

Name Type Description Default
username Optional[str]

SASL username for authentication. Fallback: KAFKA_USERNAME environment variable.

None
password Optional[str]

SASL password for authentication. Fallback: KAFKA_PASSWORD environment variable.

None
brokers Optional[str]

Comma-separated list of Kafka broker addresses (host:port). Fallback: KAFKA_BROKERS environment variable. Default: "localhost:9092"

None
topic Optional[str]

Target Kafka topic for message publishing. Fallback: KAFKA_TOPIC environment variable. Default: "scouter_monitoring"

None
compression_type Optional[str]

Message compression algorithm. Options: "none", "gzip", "snappy", "lz4", "zstd" Default: "gzip"

None
message_timeout_ms int

Maximum time to wait for message delivery (milliseconds). Default: 600000 (10 minutes)

600000
message_max_bytes int

Maximum message size in bytes. Default: 2097164 (~2MB)

2097164
log_level LogLevel

Logging verbosity for the Kafka producer. Default: LogLevel.Info

Info
config Dict[str, str]

Additional Kafka producer configuration parameters. See: https://kafka.apache.org/documentation/#producerconfigs Note: Direct parameters take precedence over config dictionary values.

{}
max_retries int

Maximum number of retry attempts for failed message deliveries. Default: 3

3

Examples:

Basic usage (unauthenticated):

config = KafkaConfig(
    brokers="kafka1:9092,kafka2:9092",
    topic="my_topic"
)

SASL authentication:

config = KafkaConfig(
    username="my_user",
    password="my_password",
    brokers="secure-kafka:9093",
    topic="secure_topic"
)

Advanced configuration:

config = KafkaConfig(
    brokers="kafka:9092",
    compression_type="lz4",
    config={
        "acks": "all",
        "batch.size": "32768",
        "linger.ms": "10"
    }
)

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    username: Optional[str] = None,
    password: Optional[str] = None,
    brokers: Optional[str] = None,
    topic: Optional[str] = None,
    compression_type: Optional[str] = None,
    message_timeout_ms: int = 600_000,
    message_max_bytes: int = 2097164,
    log_level: LogLevel = LogLevel.Info,
    config: Dict[str, str] = {},
    max_retries: int = 3,
) -> None:
    """Kafka configuration for connecting to and publishing messages to Kafka brokers.

    This configuration supports both authenticated (SASL) and unauthenticated connections.
    When credentials are provided, SASL authentication is automatically enabled with
    secure defaults.

    Authentication Priority (first match wins):
        1. Direct parameters (username/password)
        2. Environment variables (KAFKA_USERNAME/KAFKA_PASSWORD)
        3. Configuration dictionary (sasl.username/sasl.password)

    SASL Security Defaults:
        - security.protocol: "SASL_SSL" (override via KAFKA_SECURITY_PROTOCOL env var)
        - sasl.mechanism: "PLAIN" (override via KAFKA_SASL_MECHANISM env var)

    Args:
        username:
            SASL username for authentication.
            Fallback: KAFKA_USERNAME environment variable.
        password:
            SASL password for authentication.
            Fallback: KAFKA_PASSWORD environment variable.
        brokers:
            Comma-separated list of Kafka broker addresses (host:port).
            Fallback: KAFKA_BROKERS environment variable.
            Default: "localhost:9092"
        topic:
            Target Kafka topic for message publishing.
            Fallback: KAFKA_TOPIC environment variable.
            Default: "scouter_monitoring"
        compression_type:
            Message compression algorithm.
            Options: "none", "gzip", "snappy", "lz4", "zstd"
            Default: "gzip"
        message_timeout_ms:
            Maximum time to wait for message delivery (milliseconds).
            Default: 600000 (10 minutes)
        message_max_bytes:
            Maximum message size in bytes.
            Default: 2097164 (~2MB)
        log_level:
            Logging verbosity for the Kafka producer.
            Default: LogLevel.Info
        config:
            Additional Kafka producer configuration parameters.
            See: https://kafka.apache.org/documentation/#producerconfigs
            Note: Direct parameters take precedence over config dictionary values.
        max_retries:
            Maximum number of retry attempts for failed message deliveries.
            Default: 3

    Examples:
        Basic usage (unauthenticated):
        ```python
        config = KafkaConfig(
            brokers="kafka1:9092,kafka2:9092",
            topic="my_topic"
        )
        ```

        SASL authentication:
        ```python
        config = KafkaConfig(
            username="my_user",
            password="my_password",
            brokers="secure-kafka:9093",
            topic="secure_topic"
        )
        ```

        Advanced configuration:
        ```python
        config = KafkaConfig(
            brokers="kafka:9092",
            compression_type="lz4",
            config={
                "acks": "all",
                "batch.size": "32768",
                "linger.ms": "10"
            }
        )
        ```
    """

LLMJudgeTask

LLMJudgeTask(
    id: str,
    prompt: Prompt,
    expected_value: Any,
    field_path: Optional[str],
    operator: ComparisonOperator,
    description: Optional[str] = None,
    depends_on: Optional[List[str]] = None,
    max_retries: Optional[int] = None,
    condition: bool = False,
)

LLM-powered evaluation task for complex assessments.

Uses an additional LLM call to evaluate responses based on sophisticated criteria that require reasoning, context understanding, or subjective judgment. LLM judges are ideal for evaluations that cannot be captured by deterministic rules, such as semantic similarity, quality assessment, or nuanced criteria.

Unlike AssertionTask, which provides efficient, deterministic rule-based evaluation, LLMJudgeTask leverages an LLM's reasoning capabilities for:
  • Semantic similarity and relevance assessment
  • Quality, coherence, and fluency evaluation
  • Factual accuracy and hallucination detection
  • Tone, sentiment, and style analysis
  • Custom evaluation criteria requiring judgment
  • Complex reasoning over multiple context elements

The LLM judge executes a prompt that receives context (either raw or from dependencies) and returns a response that is then compared against the expected value using the specified operator.

Common Use Cases
  • Evaluate semantic similarity between generated and reference answers
  • Assess response quality on subjective criteria (helpfulness, clarity)
  • Detect factual inconsistencies or hallucinations
  • Score tone appropriateness for different audiences
  • Judge whether responses meet complex, nuanced requirements

Examples:

Basic relevance check using LLM judge:

>>> # Define a prompt that evaluates relevance
>>> relevance_prompt = Prompt(
...     system_instructions="Evaluate if the response is relevant to the query",
...     messages="Given the query '{{query}}' and response '{{response}}', rate the relevance from 0 to 10 as an integer.",
...     model="gpt-4",
...     provider= Provider.OpenAI,
...     output_type=Score # returns a structured output with schema {"score": float, "reason": str}
... )
>>> # Context at runtime: {"query": "What is AI?", "response": "AI is..."}
>>> task = LLMJudgeTask(
...     id="relevance_judge",
...     prompt=relevance_prompt,
...     expected_value=8,
...     field_path="score",
...     operator=ComparisonOperator.GreaterThanOrEqual,
...     description="Ensure response relevance score >= 8"
... )

Factuality check with structured output:

>>> # Prompt returns a Pydantic model with factuality assessment
>>> from pydantic import BaseModel
>>> class FactCheckResult(BaseModel):
...     is_factual: bool
...     confidence: float
>>> fact_check_prompt = Prompt(
...     system_instructions="Verify factual claims in the response",
...     messages="Assess the factual accuracy of the response: '{{response}}'. Provide a JSON with fields 'is_factual' (bool) and 'confidence' (float).", # pylint: disable=line-too-long
...     model="gpt-4",
...     provider= Provider.OpenAI,
...     output_type=FactCheckResult
... )
>>> # Context: {"response": "Paris is the capital of France"}
>>> task = LLMJudgeTask(
...     id="fact_checker",
...     prompt=fact_check_prompt,
...     expected_value={"is_factual": True, "confidence": 0.95},
...     field_path="response",
...     operator=ComparisonOperator.Contains
... )

Quality assessment with dependencies:

>>> # This judge depends on previous relevance check
>>> quality_prompt = Prompt(
...     system_instructions="Assess the overall quality of the response",
...     messages="Given the response '{{response}}', rate its quality from 0 to 5",
...     model="gemini-3.0-flash",
...     provider= Provider.Google,
...     output_type=Score
... )
>>> task = LLMJudgeTask(
...     id="quality_judge",
...     prompt=quality_prompt,
...     expected_value=0.7,
...     field_path=None,
...     operator=ComparisonOperator.GreaterThan,
...     depends_on=["relevance_judge"],
...     description="Evaluate overall quality after relevance check"
... )

Note:
  • LLM judge tasks incur additional latency and cost vs assertions
  • Scouter does not auto-inject any additional prompts or context apart from what is defined in the Prompt object
  • For tasks that contain dependencies, upstream results are passed as context to downstream tasks
  • Use dependencies to chain evaluations and pass results between tasks
  • max_retries helps handle transient LLM failures (defaults to 3)
  • Field paths work the same as AssertionTask (dot-notation for nested access)
  • Consider cost/latency tradeoffs when designing judge evaluations

Creates an evaluation task that uses an LLM to assess responses based on sophisticated criteria requiring reasoning or subjective judgment. The LLM receives context (raw or from dependencies) and returns a response that is compared against the expected value.

Parameters:

Name Type Description Default
id str

Unique identifier for the task. Will be converted to lowercase. Used to reference this task in dependencies and results.

required
prompt Prompt

Prompt configuration defining the LLM evaluation task.

required
expected_value Any

The expected value to compare against the LLM's response. Type depends on prompt response type. Can be any JSON-serializable type: str, int, float, bool, list, dict, or None.

required
field_path Optional[str]

Optional dot-notation path to extract value from context before passing to the LLM prompt (e.g., "response.text"). If None, the entire response will be evaluated.

required
operator ComparisonOperator

Comparison operator to apply between the LLM response and expected_value.

required
description Optional[str]

Optional human-readable description of what this judge evaluates.

None
depends_on Optional[List[str]]

Optional list of task IDs that must complete successfully before this task executes. Results from dependencies are passed to the LLM prompt as additional context parameters. Empty list if not provided.

None
max_retries Optional[int]

Optional maximum number of retry attempts if the LLM call fails (network errors, rate limits, etc.). Defaults to 3 if not provided. Set to 0 to disable retries.

None
condition bool

If True, this judge task acts as a condition for subsequent tasks. If the judge fails, dependent tasks will be skipped and this task will be excluded from final results.

False
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    id: str,
    prompt: Prompt,
    expected_value: Any,
    field_path: Optional[str],
    operator: ComparisonOperator,
    description: Optional[str] = None,
    depends_on: Optional[List[str]] = None,
    max_retries: Optional[int] = None,
    condition: bool = False,
):
    """Initialize an LLM judge task for advanced evaluation.

    Creates an evaluation task that uses an LLM to assess responses based on
    sophisticated criteria requiring reasoning or subjective judgment. The LLM
    receives context (raw or from dependencies) and returns a response that
    is compared against the expected value.

    Args:
        id (str):
            Unique identifier for the task. Will be converted to lowercase.
            Used to reference this task in dependencies and results.
        prompt (Prompt):
            Prompt configuration defining the LLM evaluation task.
        expected_value (Any):
            The expected value to compare against the LLM's response. Type depends
            on prompt response type. Can be any JSON-serializable type: str, int,
            float, bool, list, dict, or None.
        field_path (Optional[str]):
            Optional dot-notation path to extract value from context before passing
            to the LLM prompt (e.g., "response.text"). If None, the entire response
            will be evaluated.
        operator (ComparisonOperator):
            Comparison operator to apply between the LLM response and expected_value.
        description (Optional[str]):
            Optional human-readable description of what this judge evaluates.
        depends_on (Optional[List[str]]):
            Optional list of task IDs that must complete successfully before this
            task executes. Results from dependencies are passed to the LLM prompt
            as additional context parameters. Empty list if not provided.
        max_retries (Optional[int]):
            Optional maximum number of retry attempts if the LLM call fails
            (network errors, rate limits, etc.). Defaults to 3 if not provided.
            Set to 0 to disable retries.
        condition (bool):
            If True, this judge task acts as a condition for subsequent tasks.
            If the judge fails, dependent tasks will be skipped and this task
            will be excluded from final results.
    """

depends_on property writable

depends_on: List[str]

List of task IDs this task depends on.

Dependency results are passed to the LLM prompt as additional context parameters, enabling chained evaluations.

expected_value property

expected_value: Any

Expected value to compare against LLM response.

Returns:

Type Description
Any

The expected value as a Python object (deserialized from internal

Any

JSON representation).

field_path property

field_path: Optional[str]

Dot-notation path to extract value from context before LLM evaluation.

If specified, extracts nested value from context (e.g., "response.text") and passes it to the LLM prompt. If None, the entire context or dependency results are passed.
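
The dot-notation walk can be pictured with a small sketch (`extract_field` is a hypothetical helper for illustration, not Scouter's extractor):

```python
from typing import Any


def extract_field(context: Any, field_path: str) -> Any:
    """Walk a dot-notation path (e.g. "response.text") through
    nested dictionaries, one key per path segment."""
    value = context
    for key in field_path.split("."):
        value = value[key]
    return value


context = {"response": {"text": "Paris is the capital of France", "score": 9}}
extract_field(context, "response.text")   # -> "Paris is the capital of France"
extract_field(context, "response.score")  # -> 9
```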

id property writable

id: str

Unique task identifier (lowercase).

max_retries property writable

max_retries: Optional[int]

Maximum number of retry attempts for LLM call failures.

Handles transient failures like network errors or rate limits. Defaults to 3 if not specified during initialization.

operator property

operator: ComparisonOperator

Comparison operator for evaluating LLM response against expected value.

For Score responses: use numeric operators (GreaterThan, Equals, etc.) For Pydantic responses: use structural operators (Contains, Equals, etc.)

prompt property

prompt: Prompt

Prompt configuration for the LLM evaluation task.

Defines the LLM model, evaluation instructions, and response format. The prompt must have response_type of Score or Pydantic.

LLMTestServer

LLMTestServer()

Mock server that simulates the OpenAI API for testing purposes.

Source code in python/scouter/stubs.pyi
def __init__(self): ...

Language

Programming language for executable code.

Specifies the language used when the model generates executable code.

Examples:

>>> lang = Language.Python
>>> lang.value
'PYTHON'

LanguageUnspecified class-attribute instance-attribute

LanguageUnspecified = 'Language'

Unspecified language

Python class-attribute instance-attribute

Python = 'Language'

Python programming language

LatLng

LatLng(latitude: float, longitude: float)

Geographic coordinates.

Represents a latitude/longitude pair for location-based features.

Examples:

>>> # New York City coordinates
>>> coords = LatLng(latitude=40.7128, longitude=-74.0060)

Parameters:

Name Type Description Default
latitude float

Latitude in degrees

required
longitude float

Longitude in degrees

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    latitude: float,
    longitude: float,
) -> None:
    """Initialize coordinates.

    Args:
        latitude (float):
            Latitude in degrees
        longitude (float):
            Longitude in degrees
    """

latitude property

latitude: float

The latitude.

longitude property

longitude: float

The longitude.

LatencyMetrics

p25 property

p25: float

25th percentile

p5 property

p5: float

5th percentile

p50 property

p50: float

50th percentile

p95 property

p95: float

95th percentile

p99 property

p99: float

99th percentile
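
As a point of reference, a nearest-rank percentile over sample latencies can be computed like this (a pure-Python illustration of what p50/p95 report, not how Scouter aggregates them):

```python
import math
from typing import List


def percentile(samples: List[float], p: float) -> float:
    """Nearest-rank percentile: the smallest sample value such
    that at least p percent of samples are <= it."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[max(rank - 1, 0)]


latencies_ms = [12.0, 15.0, 11.0, 120.0, 14.0, 13.0, 16.0, 18.0, 10.0, 300.0]
percentile(latencies_ms, 50)  # typical request latency
percentile(latencies_ms, 95)  # tail latency, dominated by outliers
```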

LlmRanker

LlmRanker(model_name: Optional[str] = None)

LLM-based ranker configuration.

Uses an LLM to rank RAG results.

Examples:

>>> ranker = LlmRanker(model_name="gemini-1.5-flash")

Parameters:

Name Type Description Default
model_name Optional[str]

Model name for ranking

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    model_name: Optional[str] = None,
) -> None:
    """Initialize LLM ranker.

    Args:
        model_name (Optional[str]):
            Model name for ranking
    """

model_name property

model_name: Optional[str]

The ranking model name.

LogContent

Log probability content for a single token.

This class contains detailed probability information for a token generated by the model.

Examples:

>>> # Analyzing token probabilities
>>> choice = response.choices[0]
>>> if choice.logprobs and choice.logprobs.content:
...     for log_content in choice.logprobs.content:
...         print(f"Token: {log_content.token}")
...         print(f"Log prob: {log_content.logprob}")

bytes property

bytes: Optional[List[int]]

UTF-8 bytes of the token.

logprob property

logprob: float

Log probability of the token.

token property

token: str

The token.

top_logprobs property

top_logprobs: Optional[List[TopLogProbs]]

Top alternative tokens.

LogProbs

Log probability information for OpenAI responses.

This class contains log probability data for both generated content and refusals.

Examples:

>>> # Checking log probabilities
>>> choice = response.choices[0]
>>> if choice.logprobs:
...     if choice.logprobs.content:
...         print(f"Content tokens: {len(choice.logprobs.content)}")
...     if choice.logprobs.refusal:
...         print("Refusal log probs available")

content property

content: Optional[List[LogContent]]

Log probabilities for content tokens.

refusal property

refusal: Optional[List[LogContent]]

Log probabilities for refusal tokens.

LoggingConfig

LoggingConfig(
    show_threads: bool = True,
    log_level: LogLevel = LogLevel.Info,
    write_level: WriteLevel = WriteLevel.Stdout,
    use_json: bool = False,
)

Logging configuration options.

Parameters:

Name Type Description Default
show_threads bool

Whether to include thread information in log messages. Default is True.

True
log_level LogLevel

Log level for the logger. Default is LogLevel.Info.

Info
write_level WriteLevel

Write level for the logger. Default is WriteLevel.Stdout.

Stdout
use_json bool

Whether to write log messages in JSON format. Default is False.

False
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    show_threads: bool = True,
    log_level: LogLevel = LogLevel.Info,
    write_level: WriteLevel = WriteLevel.Stdout,
    use_json: bool = False,
) -> None:
    """
    Logging configuration options.

    Args:
        show_threads:
            Whether to include thread information in log messages.
            Default is True.

        log_level:
            Log level for the logger.
            Default is LogLevel.Info.

        write_level:
            Write level for the logger.
            Default is WriteLevel.Stdout.

        use_json:
            Whether to write log messages in JSON format.
            Default is False.
    """

LogprobsCandidate

Log probability information for a token.

Contains token string, ID, and log probability.

Examples:

>>> candidate = LogprobsCandidate(
...     token="hello",
...     token_id=12345,
...     log_probability=-0.5
... )

log_probability property

log_probability: Optional[float]

Log probability.

token property

token: Optional[str]

Token string.

token_id property

token_id: Optional[int]

Token ID.

LogprobsResult

Complete log probability result.

Contains both top candidates and chosen tokens with probabilities.

Examples:

>>> result = LogprobsResult(
...     top_candidates=[TopCandidates(...)],
...     chosen_candidates=[LogprobsCandidate(...)]
... )

chosen_candidates property

chosen_candidates: Optional[List[LogprobsCandidate]]

Actually chosen tokens.

top_candidates property

top_candidates: Optional[List[TopCandidates]]

Top candidates per step.

Manual

Manual(num_bins: int)

Manual equal-width binning strategy.

Divides the feature range into a fixed number of equally sized bins.

Parameters:

Name Type Description Default
num_bins int

The exact number of bins to create.

required
Source code in python/scouter/stubs.pyi
def __init__(self, num_bins: int):
    """Manual equal-width binning strategy.

    Divides the feature range into a fixed number of equally sized bins.

    Args:
        num_bins:
            The exact number of bins to create.
    """

num_bins property writable

num_bins: int

The number of bins to create.
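
Equal-width binning as described above can be sketched in plain Python (an illustration of the strategy, not Scouter's implementation; `equal_width_edges` is a hypothetical name):

```python
from typing import List


def equal_width_edges(min_val: float, max_val: float, num_bins: int) -> List[float]:
    """Return num_bins + 1 edges splitting [min_val, max_val]
    into equally sized bins."""
    width = (max_val - min_val) / num_bins
    return [min_val + i * width for i in range(num_bins + 1)]


equal_width_edges(0.0, 10.0, 5)  # [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
```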

ManualRoutingMode

ManualRoutingMode(model_name: str)

Configuration for manual model routing.

Explicitly specifies which model to use instead of automatic selection.

Examples:

>>> mode = ManualRoutingMode(model_name="gemini-2.0-flash-exp")

Parameters:

Name Type Description Default
model_name str

Name of the model to use

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    model_name: str,
) -> None:
    """Initialize manual routing configuration.

    Args:
        model_name (str):
            Name of the model to use
    """

model_name property

model_name: str

The model name.

Maps

Google Maps source information.

Information about a Maps location used for grounding.

Examples:

>>> maps = Maps(
...     uri="https://maps.google.com/...",
...     title="Statue of Liberty",
...     place_id="ChIJPTacEpBQwokRKwIlDbbNLlE"
... )

place_id property

place_id: Optional[str]

Google Maps place ID.

text property

text: Optional[str]

Location description.

title property

title: Optional[str]

Location title.

uri property

uri: Optional[str]

Maps URI.

MediaResolution

Media resolution levels for input processing.

Controls the token resolution at which media content is sampled, affecting quality and token usage.

Examples:

>>> resolution = MediaResolution.MediaResolutionHigh
>>> resolution.value
'MEDIA_RESOLUTION_HIGH'

MediaResolutionHigh class-attribute instance-attribute

MediaResolutionHigh = 'MediaResolution'

High resolution with zoomed reframing (256 tokens)

MediaResolutionLow class-attribute instance-attribute

MediaResolutionLow = 'MediaResolution'

Low resolution (64 tokens)

MediaResolutionMedium class-attribute instance-attribute

MediaResolutionMedium = 'MediaResolution'

Medium resolution (256 tokens)

MediaResolutionUnspecified class-attribute instance-attribute

MediaResolutionUnspecified = 'MediaResolution'

Unspecified resolution

MessageParam

MessageParam(content: _ContentType, role: str)

Message parameter for chat completion requests.

Input message with role and content.

Examples:

>>> # Simple text message
>>> msg = MessageParam(content="Hello, Claude!", role="user")
>>>
>>> # Message with mixed content
>>> text_block = TextBlockParam(text="Describe this:", cache_control=None, citations=None)
>>> image_source = UrlImageSource(url="https://example.com/image.jpg")
>>> image_block = ImageBlockParam(source=image_source, cache_control=None)
>>> msg = MessageParam(content=[text_block, image_block], role="user")

Parameters:

Name Type Description Default
content _ContentType

Message content (string, content block or list of content blocks)

required
role str

Message role ("user" or "assistant")

required
Source code in python/scouter/stubs.pyi
def __init__(self, content: _ContentType, role: str) -> None:
    """Initialize message parameter.

    Args:
        content (_ContentType):
            Message content (string, content block or list of content blocks)
        role (str):
            Message role ("user" or "assistant")
    """

content property

content: List[_ParamType]

Message content blocks.

role property

role: str

Message role.

bind

bind(
    name: Optional[str] = None,
    value: Optional[str | int | float | bool | list] = None,
) -> MessageParam

Bind variables to the message content.

Parameters:

Name Type Description Default
name Optional[str]

The variable name to bind.

None
value Optional[str | int | float | bool | list]

The variable value to bind.

None

Returns:

Name Type Description
MessageParam MessageParam

A new MessageParam instance with bound variables.

Source code in python/scouter/stubs.pyi
def bind(
    self,
    name: Optional[str] = None,
    value: Optional[str | int | float | bool | list] = None,
) -> "MessageParam":
    """Bind variables to the message content.
    Args:
        name (Optional[str]):
            The variable name to bind.
        value (Optional[Union[str, int, float, bool, list]]):
            The variable value to bind.
    Returns:
        MessageParam: A new MessageParam instance with bound variables.
    """

bind_mut

bind_mut(
    name: Optional[str] = None,
    value: Optional[str | int | float | bool | list] = None,
) -> None

Bind variables to the message content in place. Returns None.

Parameters:

Name Type Description Default
name Optional[str]

The variable name to bind.

None
value Optional[str | int | float | bool | list]

The variable value to bind.

None

Source code in python/scouter/stubs.pyi
def bind_mut(
    self,
    name: Optional[str] = None,
    value: Optional[str | int | float | bool | list] = None,
) -> None:
    """Bind variables to the message content in place.
    Args:
        name (Optional[str]):
            The variable name to bind.
        value (Optional[Union[str, int, float, bool, list]]):
            The variable value to bind.
    Returns:
        None
    """

model_dump

model_dump() -> dict

Dump the message to a dictionary.

Source code in python/scouter/stubs.pyi
def model_dump(self) -> dict:
    """Dump the message to a dictionary."""

text

text() -> str

Get the text content of the first part, if available. Returns an empty string if the first part is not text. This is meant for convenience when working with simple text messages.

Source code in python/scouter/stubs.pyi
def text(self) -> str:
    """Get the text content of the first part, if available. Returns
    an empty string if the first part is not text.
    This is meant for convenience when working with simple text messages.
    """

Metadata

Metadata(user_id: Optional[str] = None)

Request metadata.

Metadata associated with the API request.

Examples:

>>> metadata = Metadata(user_id="user_123")

Parameters:

Name Type Description Default
user_id Optional[str]

External user identifier

None
Source code in python/scouter/stubs.pyi
def __init__(self, user_id: Optional[str] = None) -> None:
    """Initialize metadata.

    Args:
        user_id (Optional[str]):
            External user identifier
    """

Metric

Metric(name: str, value: float | int)

Parameters:

Name Type Description Default
name str

Name of the metric

required
value float | int

Value to assign to the metric. Can be an int or float but will be converted to float.

required
Source code in python/scouter/stubs.pyi
def __init__(self, name: str, value: float | int) -> None:
    """Initialize metric

    Args:
        name:
            Name of the metric
        value:
            Value to assign to the metric. Can be an int or float but will be converted to float.
    """

Metrics

Metrics(
    metrics: List[Metric] | Dict[str, Union[int, float]]
)

Parameters:

Name Type Description Default
metrics List[Metric] | Dict[str, Union[int, float]]

List of metrics or a dictionary of key-value pairs. If a list, each item must be an instance of Metric. If a dictionary, each key is the metric name and each value is the metric value.

required
Example

```python
# Passing a list of metrics
metrics = Metrics(
    metrics=[
        Metric("metric_1", 1.0),
        Metric("metric_2", 2.5),
        Metric("metric_3", 3),
    ]
)

# Passing a dictionary (pydantic model) of metrics
class MyMetrics(BaseModel):
    metric1: float
    metric2: int

my_metrics = MyMetrics(
    metric1=1.0,
    metric2=2,
)

metrics = Metrics(my_metrics.model_dump())
```

Source code in python/scouter/stubs.pyi
def __init__(self, metrics: List[Metric] | Dict[str, Union[int, float]]) -> None:
    """Initialize metrics

    Args:
        metrics:
            List of metrics or a dictionary of key-value pairs.
            If a list, each item must be an instance of Metric.
            If a dictionary, each key is the metric name and each value is the metric value.


    Example:
        ```python

        # Passing a list of metrics
        metrics = Metrics(
            metrics=[
                Metric("metric_1", 1.0),
                Metric("metric_2", 2.5),
                Metric("metric_3", 3),
            ]
        )

        # Passing a dictionary (pydantic model) of metrics
        class MyMetrics(BaseModel):
            metric1: float
            metric2: int

        my_metrics = MyMetrics(
            metric1=1.0,
            metric2=2,
        )

        metrics = Metrics(my_metrics.model_dump())
        ```
    """

entity_type property

entity_type: EntityType

Return the entity type

metrics property

metrics: List[Metric]

Return the list of metrics

MissingTask

Represents a task that exists in only one of the compared evaluations

present_in property

present_in: str

Get which evaluation contains this task ('baseline_only' or 'comparison_only')

task_id property

task_id: str

Get the task identifier

MockConfig

MockConfig(**kwargs)

Mock configuration for the ScouterQueue.

Parameters:

Name Type Description Default
**kwargs

Arbitrary keyword arguments to set as attributes.

{}
Source code in python/scouter/stubs.pyi
def __init__(self, **kwargs) -> None:
    """Mock configuration for the ScouterQueue

    Args:
        **kwargs: Arbitrary keyword arguments to set as attributes.
    """

Modality

Content modality types supported by the model.

Defines the types of content (text, image, audio, etc.) that can be included in requests and responses.

Examples:

>>> modality = Modality.Text
>>> modality.value
'TEXT'

Audio class-attribute instance-attribute

Audio = 'Modality'

Audio content

Document class-attribute instance-attribute

Document = 'Modality'

Document content

Image class-attribute instance-attribute

Image = 'Modality'

Image content

ModalityUnspecified class-attribute instance-attribute

ModalityUnspecified = 'Modality'

Unspecified modality

Text class-attribute instance-attribute

Text = 'Modality'

Text content

Video class-attribute instance-attribute

Video = 'Modality'

Video content

ModalityTokenCount

Token count by modality.

Breaks down token usage by content type (text, image, audio, etc.).

Examples:

>>> count = ModalityTokenCount(
...     modality=Modality.Text,
...     token_count=150
... )

modality property

modality: Optional[Modality]

The content modality.

token_count property

token_count: Optional[int]

Token count for this modality.

Mode

Function calling mode for tool usage.

Controls how the model handles function/tool calls during generation.

Examples:

>>> mode = Mode.Auto
>>> mode.value
'AUTO'

Any class-attribute instance-attribute

Any = 'Mode'

Model must call a function

Auto class-attribute instance-attribute

Auto = 'Mode'

Model decides whether to call functions or respond naturally

ModeUnspecified class-attribute instance-attribute

ModeUnspecified = 'Mode'

Unspecified mode

None_Mode class-attribute instance-attribute

None_Mode = 'Mode'

Model will not call any functions

Validated class-attribute instance-attribute

Validated = 'Mode'

Model may call functions or respond naturally, validated

ModelArmorConfig

ModelArmorConfig(
    prompt_template_name: Optional[str] = None,
    response_template_name: Optional[str] = None,
)

Configuration for Model Armor security filtering.

Model Armor provides safety and security filtering for prompts and responses using customized templates.

Examples:

>>> config = ModelArmorConfig(
...     prompt_template_name="projects/my-project/locations/us/templates/strict",
...     response_template_name="projects/my-project/locations/us/templates/moderate"
... )

Parameters:

Name Type Description Default
prompt_template_name Optional[str]

Template for prompt screening

None
response_template_name Optional[str]

Template for response screening

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    prompt_template_name: Optional[str] = None,
    response_template_name: Optional[str] = None,
) -> None:
    """Initialize Model Armor configuration.

    Args:
        prompt_template_name (Optional[str]):
            Template for prompt screening
        response_template_name (Optional[str]):
            Template for response screening
    """

prompt_template_name property

prompt_template_name: Optional[str]

The prompt template name.

response_template_name property

response_template_name: Optional[str]

The response template name.

ModelRoutingPreference

Preference for automatic model routing.

Controls how models are selected when using automatic routing, balancing quality, cost, and performance.

Examples:

>>> preference = ModelRoutingPreference.Balanced
>>> preference.value
'BALANCED'

Balanced class-attribute instance-attribute

Balanced = 'ModelRoutingPreference'

Balance quality and cost

PrioritizeCost class-attribute instance-attribute

PrioritizeCost = 'ModelRoutingPreference'

Prioritize lower cost

PrioritizeQuality class-attribute instance-attribute

PrioritizeQuality = 'ModelRoutingPreference'

Prioritize response quality

Unknown class-attribute instance-attribute

Unknown = 'ModelRoutingPreference'

Unknown preference

ModelSettings

Bases: Generic[T]

Configuration settings for LLM models.

Unified interface for provider-specific model settings.

Examples:

>>> from potato_head.openai import OpenAIChatSettings
>>> settings = OpenAIChatSettings(temperature=0.7, max_tokens=1000)
>>> model_settings = ModelSettings(settings)
>>>
>>> # Or extract from existing settings
>>> openai_settings = model_settings.settings

settings property

settings: T

Provider-specific settings object.

model_dump

model_dump() -> Dict[str, Any]

Serialize settings to dictionary.

Source code in python/scouter/stubs.pyi
def model_dump(self) -> Dict[str, Any]:
    """Serialize settings to dictionary."""

model_dump_json

model_dump_json() -> str

Serialize settings to JSON string.

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Serialize settings to JSON string."""

settings_type

settings_type() -> Any

Return the settings type.

Source code in python/scouter/stubs.pyi
def settings_type(self) -> Any:
    """Return the settings type."""

MultiSpeakerVoiceConfig

MultiSpeakerVoiceConfig(
    speaker_voice_configs: List[SpeakerVoiceConfig],
)

Configuration for multi-speaker text-to-speech.

Configures voices for multiple speakers in a conversation or dialogue.

Examples:

>>> config = MultiSpeakerVoiceConfig(
...     speaker_voice_configs=[
...         SpeakerVoiceConfig(
...             speaker="Alice",
...             voice_config=VoiceConfig(
...                 prebuilt_voice_config=PrebuiltVoiceConfig(voice_name="Puck")
...             )
...         ),
...         SpeakerVoiceConfig(
...             speaker="Bob",
...             voice_config=VoiceConfig(
...                 prebuilt_voice_config=PrebuiltVoiceConfig(voice_name="Charon")
...             )
...         )
...     ]
... )

Parameters:

Name Type Description Default
speaker_voice_configs List[SpeakerVoiceConfig]

List of speaker voice configurations

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    speaker_voice_configs: List[SpeakerVoiceConfig],
) -> None:
    """Initialize multi-speaker configuration.

    Args:
        speaker_voice_configs (List[SpeakerVoiceConfig]):
            List of speaker voice configurations
    """

speaker_voice_configs property

speaker_voice_configs: List[SpeakerVoiceConfig]

The speaker voice configurations.

NumericStats

distinct property

distinct: Distinct

Distinct value counts

histogram property

histogram: Histogram

Value histograms

max property

max: float

Return the max.

mean property

mean: float

Return the mean.

min property

min: float

Return the min.

quantiles property

quantiles: Quantiles

Value quantiles

stddev property

stddev: float

Return the stddev.

OauthConfig

OauthConfig(
    access_token: Optional[str] = None,
    service_account: Optional[str] = None,
)

OAuth authentication configuration.

Configures OAuth authentication for external APIs.

Examples:

>>> config = OauthConfig(access_token="ya29....")

Parameters:

Name Type Description Default
access_token Optional[str]

OAuth access token

None
service_account Optional[str]

Service account email

None

Raises:

Type Description
TypeError

If configuration is invalid

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    access_token: Optional[str] = None,
    service_account: Optional[str] = None,
) -> None:
    """Initialize OAuth configuration.

    Args:
        access_token (Optional[str]):
            OAuth access token
        service_account (Optional[str]):
            Service account email

    Raises:
        TypeError: If configuration is invalid
    """

oauth_config property

oauth_config: OauthConfigValue

The OAuth configuration value.

OauthConfigValue

OauthConfigValue(
    access_token: Optional[str] = None,
    service_account: Optional[str] = None,
)

Union type for OAuth configuration.

Represents either an access token or service account OAuth configuration.

Examples:

>>> # Using access token
>>> config = OauthConfigValue(access_token="ya29....")
>>> # Using service account
>>> config = OauthConfigValue(
...     service_account="[email protected]"
... )

Exactly one of access_token or service_account must be provided.

Parameters:

Name Type Description Default
access_token Optional[str]

OAuth access token

None
service_account Optional[str]

Service account email

None

Raises:

Type Description
TypeError

If both or neither are provided

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    access_token: Optional[str] = None,
    service_account: Optional[str] = None,
) -> None:
    """Initialize OAuth configuration value.

    Exactly one of access_token or service_account must be provided.

    Args:
        access_token (Optional[str]):
            OAuth access token
        service_account (Optional[str]):
            Service account email

    Raises:
        TypeError: If both or neither are provided
    """

ObservabilityMetrics

error_count property

error_count: int

Error count

name property

name: str

Return the name

request_count property

request_count: int

Request count

route_metrics property

route_metrics: List[RouteMetrics]

Route metrics object

space property

space: str

Return the space

version property

version: str

Return the version

model_dump_json

model_dump_json() -> str

Return the json representation of the observability metrics

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return the json representation of the observability metrics"""

Observer

Observer(uid: str)

An API metric observer.

Parameters:

Name Type Description Default
uid str

Unique identifier for the observer

required
Source code in python/scouter/stubs.pyi
def __init__(self, uid: str) -> None:
    """Initializes an api metric observer

    Args:
        uid:
            Unique identifier for the observer
    """

collect_metrics

collect_metrics() -> Optional[ServerRecords]

Collect metrics from observer

Source code in python/scouter/stubs.pyi
def collect_metrics(self) -> Optional[ServerRecords]:
    """Collect metrics from observer"""

increment

increment(
    route: str, latency: float, status_code: int
) -> None

Increment the feature value

Parameters:

Name Type Description Default
route str

Route name

required
latency float

Latency of request

required
status_code int

Status code of request

required
Source code in python/scouter/stubs.pyi
def increment(self, route: str, latency: float, status_code: int) -> None:
    """Increment the feature value

    Args:
        route:
            Route name
        latency:
            Latency of request
        status_code:
            Status code of request
    """

reset_metrics

reset_metrics() -> None

Reset the observer metrics

Source code in python/scouter/stubs.pyi
def reset_metrics(self) -> None:
    """Reset the observer metrics"""

OidcConfig

OidcConfig(
    id_token: Optional[str] = None,
    service_account: Optional[str] = None,
)

OIDC authentication configuration.

Configures OIDC authentication for external APIs.

Examples:

>>> config = OidcConfig(id_token="eyJhbGc...")

Parameters:

Name Type Description Default
id_token Optional[str]

OIDC ID token

None
service_account Optional[str]

Service account email

None

Raises:

Type Description
TypeError

If configuration is invalid

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    id_token: Optional[str] = None,
    service_account: Optional[str] = None,
) -> None:
    """Initialize OIDC configuration.

    Args:
        id_token (Optional[str]):
            OIDC ID token
        service_account (Optional[str]):
            Service account email

    Raises:
        TypeError: If configuration is invalid
    """

oidc_config property

oidc_config: Any

The OIDC configuration value.

OpenAIChatResponse

Response from OpenAI chat completion API.

This class represents a complete response from the chat completion API, including all choices, usage statistics, and metadata.

Examples:

>>> # Basic usage
>>> response = OpenAIChatResponse(...)
>>> print(response.choices[0].message.content)
>>>
>>> # Accessing metadata
>>> print(f"Model: {response.model}")
>>> print(f"ID: {response.id}")
>>> print(f"Created: {response.created}")
>>>
>>> # Usage statistics
>>> print(f"Total tokens: {response.usage.total_tokens}")

choices property

choices: List[Choice]

List of completion choices.

created property

created: int

Unix timestamp of creation.

id property

id: str

Unique completion ID.

model property

model: str

Model used for completion.

object property

object: str

Object type (always 'chat.completion').

service_tier property

service_tier: Optional[str]

Service tier used.

system_fingerprint property

system_fingerprint: Optional[str]

System fingerprint for backend configuration.

usage property

usage: Usage

Token usage statistics.

OpenAIChatSettings

OpenAIChatSettings(
    max_completion_tokens: Optional[int] = None,
    temperature: Optional[float] = None,
    top_p: Optional[float] = None,
    top_k: Optional[int] = None,
    frequency_penalty: Optional[float] = None,
    timeout: Optional[float] = None,
    parallel_tool_calls: Optional[bool] = None,
    seed: Optional[int] = None,
    logit_bias: Optional[Dict[str, int]] = None,
    stop_sequences: Optional[List[str]] = None,
    logprobs: Optional[bool] = None,
    audio: Optional[AudioParam] = None,
    metadata: Optional[Dict[str, str]] = None,
    modalities: Optional[List[str]] = None,
    n: Optional[int] = None,
    prediction: Optional[Prediction] = None,
    presence_penalty: Optional[float] = None,
    prompt_cache_key: Optional[str] = None,
    reasoning_effort: Optional[str] = None,
    safety_identifier: Optional[str] = None,
    service_tier: Optional[str] = None,
    store: Optional[bool] = None,
    stream: Optional[bool] = None,
    stream_options: Optional[StreamOptions] = None,
    tool_choice: Optional[OpenAIToolChoice] = None,
    tools: Optional[List[OpenAITool]] = None,
    top_logprobs: Optional[int] = None,
    verbosity: Optional[str] = None,
    extra_body: Optional[Any] = None,
)

Settings for OpenAI chat completion requests.

This class provides comprehensive configuration options for OpenAI chat completions, including sampling parameters, tool usage, audio output, caching, and more.

Examples:

>>> # Basic settings
>>> settings = OpenAIChatSettings(
...     max_completion_tokens=1000,
...     temperature=0.7,
...     top_p=0.9
... )
>>>
>>> # With tools
>>> func = FunctionDefinition(name="get_weather")
>>> tool = Tool(function=FunctionTool(function=func, type="function"))
>>> settings = OpenAIChatSettings(
...     tools=[tool],
...     tool_choice=ToolChoice.from_mode(ToolChoiceMode.Auto)
... )
>>>
>>> # With audio output
>>> audio = AudioParam(format="mp3", voice="alloy")
>>> settings = OpenAIChatSettings(
...     audio=audio,
...     modalities=["text", "audio"]
... )

Parameters:

Name Type Description Default
max_completion_tokens Optional[int]

Maximum tokens for completion (including reasoning tokens)

None
temperature Optional[float]

Sampling temperature (0.0 to 2.0)

None
top_p Optional[float]

Nucleus sampling parameter (0.0 to 1.0)

None
top_k Optional[int]

Top-k sampling parameter

None
frequency_penalty Optional[float]

Frequency penalty (-2.0 to 2.0)

None
timeout Optional[float]

Request timeout in seconds

None
parallel_tool_calls Optional[bool]

Enable parallel function calling

None
seed Optional[int]

Random seed for deterministic sampling

None
logit_bias Optional[Dict[str, int]]

Token bias map (-100 to 100)

None
stop_sequences Optional[List[str]]

Stop sequences (max 4)

None
logprobs Optional[bool]

Return log probabilities

None
audio Optional[AudioParam]

Audio output configuration

None
metadata Optional[Dict[str, str]]

Request metadata (max 16 key-value pairs)

None
modalities Optional[List[str]]

Output modalities (e.g., ["text", "audio"])

None
n Optional[int]

Number of completions to generate

None
prediction Optional[Prediction]

Predicted output configuration

None
presence_penalty Optional[float]

Presence penalty (-2.0 to 2.0)

None
prompt_cache_key Optional[str]

Cache key for prompt caching

None
reasoning_effort Optional[str]

Reasoning effort level (e.g., "low", "medium", "high")

None
safety_identifier Optional[str]

User identifier for safety checks

None
service_tier Optional[str]

Service tier ("auto", "default", "flex", "priority")

None
store Optional[bool]

Store completion for later retrieval

None
stream Optional[bool]

Stream response with SSE

None
stream_options Optional[StreamOptions]

Streaming configuration

None
tool_choice Optional[OpenAIToolChoice]

Tool choice configuration

None
tools Optional[List[OpenAITool]]

Available tools

None
top_logprobs Optional[int]

Number of top log probs to return (0-20)

None
verbosity Optional[str]

Response verbosity ("low", "medium", "high")

None
extra_body Optional[Any]

Additional request parameters

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    max_completion_tokens: Optional[int] = None,
    temperature: Optional[float] = None,
    top_p: Optional[float] = None,
    top_k: Optional[int] = None,
    frequency_penalty: Optional[float] = None,
    timeout: Optional[float] = None,
    parallel_tool_calls: Optional[bool] = None,
    seed: Optional[int] = None,
    logit_bias: Optional[Dict[str, int]] = None,
    stop_sequences: Optional[List[str]] = None,
    logprobs: Optional[bool] = None,
    audio: Optional[AudioParam] = None,
    metadata: Optional[Dict[str, str]] = None,
    modalities: Optional[List[str]] = None,
    n: Optional[int] = None,
    prediction: Optional[Prediction] = None,
    presence_penalty: Optional[float] = None,
    prompt_cache_key: Optional[str] = None,
    reasoning_effort: Optional[str] = None,
    safety_identifier: Optional[str] = None,
    service_tier: Optional[str] = None,
    store: Optional[bool] = None,
    stream: Optional[bool] = None,
    stream_options: Optional[StreamOptions] = None,
    tool_choice: Optional[OpenAIToolChoice] = None,
    tools: Optional[List[OpenAITool]] = None,
    top_logprobs: Optional[int] = None,
    verbosity: Optional[str] = None,
    extra_body: Optional[Any] = None,
) -> None:
    """Initialize OpenAI chat settings.

    Args:
        max_completion_tokens (Optional[int]):
            Maximum tokens for completion (including reasoning tokens)
        temperature (Optional[float]):
            Sampling temperature (0.0 to 2.0)
        top_p (Optional[float]):
            Nucleus sampling parameter (0.0 to 1.0)
        top_k (Optional[int]):
            Top-k sampling parameter
        frequency_penalty (Optional[float]):
            Frequency penalty (-2.0 to 2.0)
        timeout (Optional[float]):
            Request timeout in seconds
        parallel_tool_calls (Optional[bool]):
            Enable parallel function calling
        seed (Optional[int]):
            Random seed for deterministic sampling
        logit_bias (Optional[Dict[str, int]]):
            Token bias map (-100 to 100)
        stop_sequences (Optional[List[str]]):
            Stop sequences (max 4)
        logprobs (Optional[bool]):
            Return log probabilities
        audio (Optional[AudioParam]):
            Audio output configuration
        metadata (Optional[Dict[str, str]]):
            Request metadata (max 16 key-value pairs)
        modalities (Optional[List[str]]):
            Output modalities (e.g., ["text", "audio"])
        n (Optional[int]):
            Number of completions to generate
        prediction (Optional[Prediction]):
            Predicted output configuration
        presence_penalty (Optional[float]):
            Presence penalty (-2.0 to 2.0)
        prompt_cache_key (Optional[str]):
            Cache key for prompt caching
        reasoning_effort (Optional[str]):
            Reasoning effort level (e.g., "low", "medium", "high")
        safety_identifier (Optional[str]):
            User identifier for safety checks
        service_tier (Optional[str]):
            Service tier ("auto", "default", "flex", "priority")
        store (Optional[bool]):
            Store completion for later retrieval
        stream (Optional[bool]):
            Stream response with SSE
        stream_options (Optional[StreamOptions]):
            Streaming configuration
        tool_choice (Optional[OpenAIToolChoice]):
            Tool choice configuration
        tools (Optional[List[OpenAITool]]):
            Available tools
        top_logprobs (Optional[int]):
            Number of top log probs to return (0-20)
        verbosity (Optional[str]):
            Response verbosity ("low", "medium", "high")
        extra_body (Optional[Any]):
            Additional request parameters
    """

audio property

audio: Optional[AudioParam]

Audio output configuration.

extra_body property

extra_body: Optional[Dict[str, Any]]

Additional request parameters.

frequency_penalty property

frequency_penalty: Optional[float]

Frequency penalty.

logit_bias property

logit_bias: Optional[Dict[str, int]]

Token bias map.

logprobs property

logprobs: Optional[bool]

Whether to return log probabilities.

max_completion_tokens property

max_completion_tokens: Optional[int]

Maximum completion tokens.

metadata property

metadata: Optional[Dict[str, str]]

Request metadata.

modalities property

modalities: Optional[List[str]]

Output modalities.

n property

n: Optional[int]

Number of completions.

parallel_tool_calls property

parallel_tool_calls: Optional[bool]

Whether parallel tool calls are enabled.

prediction property

prediction: Optional[Prediction]

Predicted output configuration.

presence_penalty property

presence_penalty: Optional[float]

Presence penalty.

prompt_cache_key property

prompt_cache_key: Optional[str]

Prompt cache key.

reasoning_effort property

reasoning_effort: Optional[str]

Reasoning effort level.

safety_identifier property

safety_identifier: Optional[str]

Safety identifier.

seed property

seed: Optional[int]

Random seed.

service_tier property

service_tier: Optional[str]

Service tier.

stop_sequences property

stop_sequences: Optional[List[str]]

Stop sequences.

store property

store: Optional[bool]

Whether to store completion.

stream property

stream: Optional[bool]

Whether to stream response.

stream_options property

stream_options: Optional[StreamOptions]

Stream options.

temperature property

temperature: Optional[float]

Sampling temperature.

timeout property

timeout: Optional[float]

Request timeout.

tool_choice property

tool_choice: Optional[OpenAIToolChoice]

Tool choice configuration.

tools property

tools: Optional[List[OpenAITool]]

Available tools.

top_k property

top_k: Optional[int]

Top-k sampling parameter.

top_logprobs property

top_logprobs: Optional[int]

Number of top log probabilities.

top_p property

top_p: Optional[float]

Nucleus sampling parameter.

verbosity property

verbosity: Optional[str]

Response verbosity.

model_dump

model_dump() -> Dict[str, Any]

Convert settings to a dictionary.

Returns:

Type Description
Dict[str, Any]

Dict[str, Any]: Dictionary representation of settings

Source code in python/scouter/stubs.pyi
def model_dump(self) -> Dict[str, Any]:
    """Convert settings to a dictionary.

    Returns:
        Dict[str, Any]: Dictionary representation of settings
    """

settings_type

settings_type() -> str

Get the settings type identifier.

Returns:

Name Type Description
str str

Settings type ("OpenAIChat")

Source code in python/scouter/stubs.pyi
def settings_type(self) -> str:
    """Get the settings type identifier.

    Returns:
        str: Settings type ("OpenAIChat")
    """

OpenAIEmbeddingConfig

OpenAIEmbeddingConfig(
    model: str,
    dimensions: Optional[int] = None,
    encoding_format: Optional[str] = None,
    user: Optional[str] = None,
)

Configuration for OpenAI embedding requests.

This class provides settings for embedding generation, including model selection, dimensions, and encoding format.

Examples:

>>> # Standard configuration
>>> config = OpenAIEmbeddingConfig(
...     model="text-embedding-3-small"
... )
>>>
>>> # Custom dimensions
>>> config = OpenAIEmbeddingConfig(
...     model="text-embedding-3-large",
...     dimensions=512
... )

Parameters:

Name Type Description Default
model str

Model ID for embeddings

required
dimensions Optional[int]

Number of dimensions for output embeddings

None
encoding_format Optional[str]

Format for embeddings ("float" or "base64")

None
user Optional[str]

User identifier for tracking

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    model: str,
    dimensions: Optional[int] = None,
    encoding_format: Optional[str] = None,
    user: Optional[str] = None,
) -> None:
    """Initialize embedding configuration.

    Args:
        model (str):
            Model ID for embeddings
        dimensions (Optional[int]):
            Number of dimensions for output embeddings
        encoding_format (Optional[str]):
            Format for embeddings ("float" or "base64")
        user (Optional[str]):
            User identifier for tracking
    """

dimensions property

dimensions: Optional[int]

Number of embedding dimensions.

encoding_format property

encoding_format: Optional[str]

Encoding format for embeddings.

model property

model: str

The embedding model ID.

user property

user: Optional[str]

User identifier.

OpenAIEmbeddingResponse

Response from OpenAI embedding API.

This class represents a complete response from the embedding API, including all generated embeddings and usage statistics.

Examples:

>>> # Accessing embeddings
>>> response = OpenAIEmbeddingResponse(...)
>>> for embedding in response.data:
...     vector = embedding.embedding
...     # Use embedding vector
>>>
>>> # Usage information
>>> print(f"Tokens used: {response.usage.total_tokens}")

data property

data: List[EmbeddingObject]

List of embedding objects.

model property

model: str

Model used for embeddings.

object property

object: str

Object type (always 'list').

usage property

usage: UsageObject

Token usage statistics.

OpenAITool

OpenAITool(
    function: Optional[FunctionTool] = None,
    custom: Optional[CustomTool] = None,
)

Tool for OpenAI chat completions.

This class represents either a function tool or custom tool that can be called by the model.

Examples:

>>> # Function tool
>>> func = FunctionDefinition(name="get_weather")
>>> func_tool = FunctionTool(function=func, type="function")
>>> tool = OpenAITool(function=func_tool)
>>>
>>> # Custom tool
>>> custom = CustomDefinition(name="analyzer")
>>> custom_tool = CustomTool(custom=custom, type="custom")
>>> tool = OpenAITool(custom=custom_tool)

Parameters:

Name Type Description Default
function Optional[FunctionTool]

Function tool definition

None
custom Optional[CustomTool]

Custom tool definition

None

Raises:

Type Description
TypeError

If both or neither tool types are provided

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    function: Optional[FunctionTool] = None,
    custom: Optional[CustomTool] = None,
) -> None:
    """Initialize tool.

    Args:
        function (Optional[FunctionTool]):
            Function tool definition
        custom (Optional[CustomTool]):
            Custom tool definition

    Raises:
        TypeError: If both or neither tool types are provided
    """

OpenAIToolChoice

Tool choice configuration for chat completions.

This class configures how the model should handle tool calling, supporting multiple modes including simple mode selection, specific tool choice, and allowed tools constraints.

Examples:

>>> # Simple mode
>>> choice = OpenAIToolChoice.from_mode(ToolChoiceMode.Auto)
>>>
>>> # Specific function
>>> choice = OpenAIToolChoice.from_function("get_weather")
>>>
>>> # Custom tool
>>> choice = OpenAIToolChoice.from_custom("custom_tool")
>>>
>>> # Allowed tools
>>> allowed = AllowedTools.from_function_names(
...     AllowedToolsMode.Auto,
...     ["get_weather"]
... )
>>> choice = OpenAIToolChoice.from_allowed_tools(allowed)

from_allowed_tools staticmethod

from_allowed_tools(
    allowed_tools: AllowedTools,
) -> OpenAIToolChoice

Create tool choice from allowed tools.

Parameters:

Name Type Description Default
allowed_tools AllowedTools

Allowed tools configuration

required

Returns:

Name Type Description
ToolChoice OpenAIToolChoice

Tool choice configured with allowed tools

Source code in python/scouter/stubs.pyi
@staticmethod
def from_allowed_tools(allowed_tools: AllowedTools) -> "OpenAIToolChoice":
    """Create tool choice from allowed tools.

    Args:
        allowed_tools (AllowedTools):
            Allowed tools configuration

    Returns:
        ToolChoice: Tool choice configured with allowed tools
    """

from_custom staticmethod

from_custom(custom_name: str) -> OpenAIToolChoice

Create tool choice for custom tool.

Parameters:

Name Type Description Default
custom_name str

Name of the custom tool to call

required

Returns:

Name Type Description
ToolChoice OpenAIToolChoice

Tool choice configured for custom tool

Source code in python/scouter/stubs.pyi
@staticmethod
def from_custom(custom_name: str) -> "OpenAIToolChoice":
    """Create tool choice for custom tool.

    Args:
        custom_name (str):
            Name of the custom tool to call

    Returns:
        ToolChoice: Tool choice configured for custom tool
    """

from_function staticmethod

from_function(function_name: str) -> OpenAIToolChoice

Create tool choice for specific function.

Parameters:

Name Type Description Default
function_name str

Name of the function to call

required

Returns:

Name Type Description
ToolChoice OpenAIToolChoice

Tool choice configured for function

Source code in python/scouter/stubs.pyi
@staticmethod
def from_function(function_name: str) -> "OpenAIToolChoice":
    """Create tool choice for specific function.

    Args:
        function_name (str):
            Name of the function to call

    Returns:
        ToolChoice: Tool choice configured for function
    """

from_mode staticmethod

from_mode(mode: ToolChoiceMode) -> OpenAIToolChoice

Create tool choice from mode.

Parameters:

Name Type Description Default
mode ToolChoiceMode

The tool choice mode

required

Returns:

Name Type Description
ToolChoice OpenAIToolChoice

Tool choice configured with mode

Source code in python/scouter/stubs.pyi
@staticmethod
def from_mode(mode: ToolChoiceMode) -> "OpenAIToolChoice":
    """Create tool choice from mode.

    Args:
        mode (ToolChoiceMode):
            The tool choice mode

    Returns:
        ToolChoice: Tool choice configured with mode
    """

OpsGenieDispatchConfig

OpsGenieDispatchConfig(team: str)

Parameters:

Name Type Description Default
team str

Opsgenie team to be notified in the event of drift

required
Source code in python/scouter/stubs.pyi
def __init__(self, team: str):
    """Initialize alert config

    Args:
        team:
            Opsgenie team to be notified in the event of drift
    """

team property writable

team: str

Return the Opsgenie team name.

OtelExportConfig

OtelExportConfig(
    endpoint: Optional[str],
    protocol: OtelProtocol = OtelProtocol.HttpBinary,
    timeout: Optional[int] = None,
    compression: Optional[CompressionType] = None,
    headers: Optional[dict[str, str]] = None,
)

Configuration for exporting spans.

Parameters:

Name Type Description Default
endpoint Optional[str]

The endpoint for exporting spans. Can be either an HTTP or gRPC endpoint.

required
protocol Protocol

The protocol to use for exporting spans. Defaults to HttpBinary.

HttpBinary
timeout Optional[int]

The timeout for requests in seconds.

None
compression Optional[CompressionType]

The compression type for requests.

None
headers Optional[dict[str, str]]

Optional HTTP headers to include in requests.

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    endpoint: Optional[str],
    protocol: OtelProtocol = OtelProtocol.HttpBinary,
    timeout: Optional[int] = None,
    compression: Optional[CompressionType] = None,
    headers: Optional[dict[str, str]] = None,
) -> None:
    """Initialize the ExportConfig.

    Args:
        endpoint (Optional[str]):
            The endpoint for exporting spans. Can be either an HTTP or gRPC endpoint.
        protocol (Protocol):
            The protocol to use for exporting spans. Defaults to HttpBinary.
        timeout (Optional[int]):
            The timeout for requests in seconds.
        compression (Optional[CompressionType]):
            The compression type for requests.
        headers (Optional[dict[str, str]]):
            Optional HTTP headers to include in requests.
    """

compression property

compression: Optional[CompressionType]

Get the compression type used for exporting spans.

endpoint property

endpoint: Optional[str]

Get the HTTP endpoint for exporting spans.

headers property

headers: Optional[dict[str, str]]

Get the HTTP headers used for exporting spans.

protocol property

protocol: OtelProtocol

Get the protocol used for exporting spans.

timeout property

timeout: Optional[int]

Get the timeout for requests in seconds.

OtelProtocol

Enumeration of protocols for HTTP exporting.

Outcome

Code execution outcome status.

Indicates the result of executing generated code.

Examples:

>>> outcome = Outcome.OutcomeOk
>>> outcome.value
'OUTCOME_OK'

OutcomeDeadlineExceeded class-attribute instance-attribute

OutcomeDeadlineExceeded = 'Outcome'

Execution exceeded time limit

OutcomeFailed class-attribute instance-attribute

OutcomeFailed = 'Outcome'

Execution failed

OutcomeOk class-attribute instance-attribute

OutcomeOk = 'Outcome'

Execution completed successfully

OutcomeUnspecified class-attribute instance-attribute

OutcomeUnspecified = 'Outcome'

Unspecified outcome

PageSpan

Page range in a document.

Specifies a range of pages in a document.

Examples:

>>> span = PageSpan(first_page=1, last_page=5)

first_page property

first_page: int

First page number.

last_page property

last_page: int

Last page number.

ParallelAiSearch

ParallelAiSearch(
    api_key: Optional[str] = None,
    custom_configs: Optional[Dict[str, Any]] = None,
)

Parallel.ai search tool configuration.

Configures search using the Parallel.ai engine.

Examples:

>>> search = ParallelAiSearch(
...     api_key="my-api-key",
...     custom_configs={
...         "source_policy": {"include_domains": ["google.com"]},
...         "maxResults": 10
...     }
... )

Parameters:

Name Type Description Default
api_key Optional[str]

Parallel.ai API key

None
custom_configs Optional[Dict[str, Any]]

Custom configuration parameters

None

Raises:

Type Description
TypeError

If configuration is invalid

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    api_key: Optional[str] = None,
    custom_configs: Optional[Dict[str, Any]] = None,
) -> None:
    """Initialize Parallel.ai search configuration.

    Args:
        api_key (Optional[str]):
            Parallel.ai API key
        custom_configs (Optional[Dict[str, Any]]):
            Custom configuration parameters

    Raises:
        TypeError: If configuration is invalid
    """

api_key property

api_key: Optional[str]

The API key.

custom_configs property

custom_configs: Optional[Dict[str, Any]]

Custom configuration parameters.

Part

Part(
    data: Union[
        str,
        Blob,
        FileData,
        FunctionCall,
        FunctionResponse,
        ExecutableCode,
        CodeExecutionResult,
    ],
    thought: Optional[bool] = None,
    thought_signature: Optional[str] = None,
    part_metadata: Optional[PartMetadata] = None,
    media_resolution: Optional[MediaResolution] = None,
    video_metadata: Optional[VideoMetadata] = None,
)

A part of a multi-part message.

Represents a single piece of content which can be text, media, function calls, or other data types.

Examples:

>>> # Text part
>>> part = Part(data="Hello, world!")
>>> # Image part
>>> part = Part(
...     data=Blob(
...         mime_type="image/png",
...         data=base64_encoded_data
...     )
... )
>>> # Function call part
>>> part = Part(
...     data=FunctionCall(
...         name="get_weather",
...         args={"location": "NYC"}
...     )
... )
>>> # Part with metadata
>>> part = Part(
...     data="Analyze this carefully",
...     thought=True,
...     media_resolution=MediaResolution.MediaResolutionHigh
... )

Parameters:

Name Type Description Default
data Union[str, Blob, FileData, FunctionCall, FunctionResponse, ExecutableCode, CodeExecutionResult]

The content data (text, blob, function call, etc.)

required
thought Optional[bool]

Whether this is part of the model's reasoning

None
thought_signature Optional[str]

Signature for reusing thoughts

None
part_metadata Optional[PartMetadata]

Custom metadata

None
media_resolution Optional[MediaResolution]

Media resolution level

None
video_metadata Optional[VideoMetadata]

Video-specific metadata

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    data: Union[
        str,
        Blob,
        FileData,
        FunctionCall,
        FunctionResponse,
        ExecutableCode,
        CodeExecutionResult,
    ],
    thought: Optional[bool] = None,
    thought_signature: Optional[str] = None,
    part_metadata: Optional[PartMetadata] = None,
    media_resolution: Optional[MediaResolution] = None,
    video_metadata: Optional[VideoMetadata] = None,
) -> None:
    """Initialize a content part.

    Args:
        data (Union[str, Blob, FileData, FunctionCall, FunctionResponse, ExecutableCode, CodeExecutionResult]):
            The content data (text, blob, function call, etc.)
        thought (Optional[bool]):
            Whether this is part of the model's reasoning
        thought_signature (Optional[str]):
            Signature for reusing thoughts
        part_metadata (Optional[PartMetadata]):
            Custom metadata
        media_resolution (Optional[MediaResolution]):
            Media resolution level
        video_metadata (Optional[VideoMetadata]):
            Video-specific metadata
    """

data property

data: Union[
    str,
    Blob,
    FileData,
    FunctionCall,
    FunctionResponse,
    ExecutableCode,
    CodeExecutionResult,
]

The content data.

media_resolution property

media_resolution: Optional[MediaResolution]

Media resolution.

part_metadata property

part_metadata: Optional[PartMetadata]

Custom metadata.

thought property

thought: Optional[bool]

Whether this is a thought/reasoning part.

thought_signature property

thought_signature: Optional[str]

The thought signature.

video_metadata property

video_metadata: Optional[VideoMetadata]

Video metadata.

PartMetadata

PartMetadata(struct_: Optional[Dict[str, Any]] = None)

Custom metadata for content parts.

Allows arbitrary structured metadata to be attached to parts.

Examples:

>>> metadata = PartMetadata(
...     struct_={"custom_field": "value", "priority": 1}
... )

Parameters:

Name Type Description Default
struct_ Optional[Dict[str, Any]]

Arbitrary metadata dictionary

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    struct_: Optional[Dict[str, Any]] = None,
) -> None:
    """Initialize part metadata.

    Args:
        struct_: Arbitrary metadata dictionary
    """

PartialArgs

PartialArgs(
    json_path: str,
    will_continue: Optional[bool] = None,
    null_value: Optional[bool] = None,
    number_value: Optional[float] = None,
    string_value: Optional[str] = None,
    bool_value: Optional[bool] = None,
)

Partial function call arguments for streaming.

Represents incrementally streamed function call arguments.

Examples:

>>> args = PartialArgs(
...     json_path="$.location",
...     string_value="New York",
...     will_continue=True
... )

Parameters:

Name Type Description Default
json_path str

JSON Path (RFC 9535) to the argument

required
will_continue Optional[bool]

Whether more parts follow for this path

None
null_value Optional[bool]

Null value

None
number_value Optional[float]

Numeric value

None
string_value Optional[str]

String value

None
bool_value Optional[bool]

Boolean value

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    json_path: str,
    will_continue: Optional[bool] = None,
    null_value: Optional[bool] = None,
    number_value: Optional[float] = None,
    string_value: Optional[str] = None,
    bool_value: Optional[bool] = None,
) -> None:
    """Initialize partial arguments.

    Args:
        json_path (str):
            JSON Path (RFC 9535) to the argument
        will_continue (Optional[bool]):
            Whether more parts follow for this path
        null_value (Optional[bool]):
            Null value
        number_value (Optional[float]):
            Numeric value
        string_value (Optional[str]):
            String value
        bool_value (Optional[bool]):
            Boolean value
    """

bool_value property

bool_value: Optional[bool]

Boolean value.

json_path property

json_path: str

The JSON path.

null_value property

null_value: Optional[bool]

Null value indicator.

number_value property

number_value: Optional[float]

Numeric value.

string_value property

string_value: Optional[str]

String value.

will_continue property

will_continue: Optional[bool]

Whether more parts follow.
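The streaming assembly these fields support can be sketched in plain Python. This is an illustrative approximation: the `assemble_args` helper and the tuple encoding are hypothetical, and only simple top-level `$.field` paths are handled, whereas real RFC 9535 JSON Paths may be nested:

```python
# Sketch: assembling streamed partial-argument chunks into a complete
# arguments dict. Each chunk is (json_path, string_value, will_continue);
# string fragments for the same path are concatenated in arrival order.
from typing import Any, Dict, List, Optional, Tuple


def assemble_args(chunks: List[Tuple[str, Optional[str], bool]]) -> Dict[str, Any]:
    args: Dict[str, Any] = {}
    for json_path, string_value, will_continue in chunks:
        key = json_path.removeprefix("$.")  # "$.location" -> "location"
        if string_value is not None:
            # will_continue signals more fragments follow for this path;
            # concatenation handles both cases uniformly here.
            args[key] = args.get(key, "") + string_value
    return args


chunks = [
    ("$.location", "New ", True),
    ("$.location", "York", False),
    ("$.unit", "celsius", False),
]
print(assemble_args(chunks))  # {'location': 'New York', 'unit': 'celsius'}
```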

PhishBlockThreshold

Phishing/malicious URL blocking threshold.

Controls the confidence level required to block potentially malicious URLs.

Examples:

>>> threshold = PhishBlockThreshold.BlockMediumAndAbove
>>> threshold.value
'BLOCK_MEDIUM_AND_ABOVE'

BlockHighAndAbove class-attribute instance-attribute

BlockHighAndAbove = 'PhishBlockThreshold'

Block high confidence and above

BlockHigherAndAbove class-attribute instance-attribute

BlockHigherAndAbove = 'PhishBlockThreshold'

Block higher confidence and above

BlockLowAndAbove class-attribute instance-attribute

BlockLowAndAbove = 'PhishBlockThreshold'

Block low confidence and above

BlockMediumAndAbove class-attribute instance-attribute

BlockMediumAndAbove = 'PhishBlockThreshold'

Block medium confidence and above

BlockOnlyExtremelyHigh class-attribute instance-attribute

BlockOnlyExtremelyHigh = 'PhishBlockThreshold'

Block only extremely high confidence

BlockVeryHighAndAbove class-attribute instance-attribute

BlockVeryHighAndAbove = 'PhishBlockThreshold'

Block very high confidence and above

PhishBlockThresholdUnspecified class-attribute instance-attribute

PhishBlockThresholdUnspecified = 'PhishBlockThreshold'

Unspecified threshold

PlainTextSource

PlainTextSource(data: str)

Plain text document source.

Plain text document data.

Examples:

>>> source = PlainTextSource(data="Plain text content")

Parameters:

Name Type Description Default
data str

Plain text content

required
Source code in python/scouter/stubs.pyi
def __init__(self, data: str) -> None:
    """Initialize plain text source.

    Args:
        data (str):
            Plain text content
    """

data property

data: str

Text content.

media_type property

media_type: str

Media type (always 'text/plain').

type property

type: str

Source type (always 'text').

PrebuiltVoiceConfig

PrebuiltVoiceConfig(voice_name: str)

Configuration for prebuilt voice selection.

Selects a prebuilt voice for text-to-speech generation.

Examples:

>>> config = PrebuiltVoiceConfig(voice_name="Puck")

Parameters:

Name Type Description Default
voice_name str

Name of the prebuilt voice

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    voice_name: str,
) -> None:
    """Initialize prebuilt voice configuration.

    Args:
        voice_name (str):
            Name of the prebuilt voice
    """

voice_name property

voice_name: str

The voice name.

PredictRequest

PredictRequest(
    instances: Any, parameters: Optional[Any] = None
)

Prediction API request.

Generic prediction request for embedding and other endpoints.

Examples:

>>> request = PredictRequest(
...     instances=[{"content": {"parts": [{"text": "Hello"}]}}],
...     parameters={"outputDimensionality": 768}
... )

Parameters:

Name Type Description Default
instances Any

Input instances

required
parameters Optional[Any]

Request parameters

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    instances: Any,
    parameters: Optional[Any] = None,
) -> None:
    """Initialize prediction request.

    Args:
        instances (Any):
            Input instances
        parameters (Optional[Any]):
            Request parameters
    """

instances property

instances: Any

Input instances.

parameters property

parameters: Any

Request parameters.

PredictResponse

Prediction API response.

Generic prediction response containing predictions and metadata.

Examples:

>>> response = PredictResponse(
...     predictions=[{"embedding": {"values": [0.1, 0.2, ...]}}],
...     deployed_model_id="12345",
...     model="embedding-001"
... )

deployed_model_id property

deployed_model_id: str

Deployed model ID.

metadata property

metadata: Any

Response metadata.

model property

model: str

Model name.

model_display_name property

model_display_name: str

Model display name.

model_version_id property

model_version_id: str

Model version ID.

predictions property

predictions: Any

Predictions.

Prediction

Prediction(type: str, content: Content)

Configuration for predicted outputs in OpenAI requests.

This class provides configuration for predicted outputs, which can greatly improve response times when large parts of the model response are known ahead of time.

Examples:

>>> content = Content(text="Expected response")
>>> prediction = Prediction(type="content", content=content)
>>> prediction.type
'content'

Parameters:

Name Type Description Default
type str

Type of prediction (typically "content")

required
content Content

The predicted content

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    type: str,
    content: Content,
) -> None:
    """Initialize prediction configuration.

    Args:
        type (str):
            Type of prediction (typically "content")
        content (Content):
            The predicted content
    """

content property

content: Content

The predicted content.

type property

type: str

The prediction type.

PredictionContentPart

PredictionContentPart(type: str, text: str)

Content part for predicted outputs in OpenAI requests.

This class represents a single content part within a predicted output, used to improve response times when large parts of the response are known.

Examples:

>>> part = PredictionContentPart(type="text", text="Hello, world!")
>>> part.type
'text'
>>> part.text
'Hello, world!'

Parameters:

Name Type Description Default
type str

Type of content (typically "text")

required
text str

The predicted text content

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    type: str,
    text: str,
) -> None:
    """Initialize prediction content part.

    Args:
        type (str):
            Type of content (typically "text")
        text (str):
            The predicted text content
    """

text property

text: str

The predicted text content.

type property

type: str

The content type.

model_dump

model_dump() -> Dict[str, Any]

Convert content part to a dictionary.

Returns:

Type Description
Dict[str, Any]

Dict[str, Any]: Dictionary representation of content part

Source code in python/scouter/stubs.pyi
def model_dump(self) -> Dict[str, Any]:
    """Convert content part to a dictionary.

    Returns:
        Dict[str, Any]: Dictionary representation of content part
    """

ProfileStatusRequest

ProfileStatusRequest(
    name: str,
    space: str,
    version: str,
    drift_type: DriftType,
    active: bool,
)

Parameters:

Name Type Description Default
name str

Model name

required
space str

Model space

required
version str

Model version

required
drift_type DriftType

Profile drift type. (A repo/name/version can be associated with more than one drift type.)

required
active bool

Whether to set the profile as active or inactive

required
Source code in python/scouter/stubs.pyi
def __init__(self, name: str, space: str, version: str, drift_type: DriftType, active: bool) -> None:
    """Initialize profile status request

    Args:
        name:
            Model name
        space:
            Model space
        version:
            Model version
        drift_type:
            Profile drift type. (A repo/name/version can be associated with more than one drift type.)
        active:
            Whether to set the profile as active or inactive
    """

Prompt

Prompt(
    messages: PromptMessage,
    model: str,
    provider: Provider | str,
    system_instructions: Optional[PromptMessage] = None,
    model_settings: Optional[
        ModelSettings
        | OpenAIChatSettings
        | GeminiSettings
        | AnthropicSettings
    ] = None,
    output_type: Optional[Any] = None,
)

Prompt for interacting with an LLM API.

The Prompt class handles message parsing, provider-specific formatting, and structured output configuration for LLM interactions.

Main parsing logic:

1. Extract model settings if provided, otherwise use provider default settings
2. Messages and system instructions are parsed into provider-specific formats (OpenAIChatMessage, AnthropicMessage, or GeminiContent)
3. String messages are automatically converted to appropriate message types based on provider
4. Lists of messages are parsed with each item checked and converted accordingly
5. After parsing, a complete provider request structure is built

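Step 3 above can be sketched in plain Python. The `to_provider_message` helper below is hypothetical and the internal representations are assumptions; the shapes simply mirror each provider's public chat formats:

```python
# Illustrative sketch of converting a bare string into a provider-specific
# message shape. OpenAI and Anthropic use {"role", "content"} messages;
# Gemini uses {"role", "parts"} content objects.
def to_provider_message(text: str, provider: str) -> dict:
    if provider in ("openai", "anthropic"):
        return {"role": "user", "content": text}
    if provider in ("google", "gemini"):
        return {"role": "user", "parts": [{"text": text}]}
    raise ValueError(f"unknown provider: {provider}")


msg = to_provider_message("Hello, world!", "openai")
# {'role': 'user', 'content': 'Hello, world!'}
```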
Parameters:

Name Type Description Default
messages PromptMessage

The user message(s) to use in the prompt

required
model str

The model identifier to use (e.g., "gpt-4o", "claude-3-5-sonnet-20241022")

required
provider Provider | str

The provider to use for the prompt (e.g., "openai", "anthropic", "google")

required
system_instructions Optional[PromptMessage]

Optional system instruction(s).

None
model_settings Optional[ModelSettings | OpenAIChatSettings | GeminiSettings | AnthropicSettings]

Optional model-specific settings (temperature, max_tokens, etc.) If None, provider default settings will be used

None
output_type Optional[Pydantic BaseModel | Score]

Optional structured output format. The provided format will be parsed into a JSON schema for structured outputs.

None

Raises:

Type Description
TypeError

If message types are invalid or incompatible with the provider

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    messages: PromptMessage,
    model: str,
    provider: Provider | str,
    system_instructions: Optional[PromptMessage] = None,
    model_settings: Optional[ModelSettings | OpenAIChatSettings | GeminiSettings | AnthropicSettings] = None,
    output_type: Optional[Any] = None,
) -> None:
    """Initialize a Prompt object.

    Main parsing logic:
    1. Extract model settings if provided, otherwise use provider default settings
    2. Messages and system instructions are parsed into provider-specific formats
       (OpenAIChatMessage, AnthropicMessage, or GeminiContent)
    3. String messages are automatically converted to appropriate message types based on provider
    4. Lists of messages are parsed with each item checked and converted accordingly
    5. After parsing, a complete provider request structure is built

    Args:
        messages (PromptMessage):
            The user message(s) to use in the prompt
        model (str):
            The model identifier to use (e.g., "gpt-4o", "claude-3-5-sonnet-20241022")
        provider (Provider | str):
            The provider to use for the prompt (e.g., "openai", "anthropic", "google")
        system_instructions (Optional[PromptMessage]):
            Optional system instruction(s).
        model_settings (Optional[ModelSettings | OpenAIChatSettings | GeminiSettings | AnthropicSettings]):
            Optional model-specific settings (temperature, max_tokens, etc.)
            If None, provider default settings will be used
        output_type (Optional[Pydantic BaseModel | Score]):
            Optional structured output format. The provided format will be parsed into a JSON schema for structured outputs.

    Raises:
        TypeError: If message types are invalid or incompatible with the provider
    """

all_messages property

all_messages: List[
    ChatMessage | MessageParam | GeminiContent
]

All messages in the prompt, including system instructions, user messages, tools, etc. This is helpful for accessing the complete set of messages in the prompt.

anthropic_message property

anthropic_message: MessageParam

The last user message converted to Anthropic MessageParam format. Raises TypeError if the provider is not Anthropic.

anthropic_messages property

anthropic_messages: List[MessageParam]

The user messages converted to Anthropic MessageParam format. Raises TypeError if the provider is not Anthropic.

gemini_message property

gemini_message: GeminiContent

The last user message converted to Google GeminiContent format. Raises TypeError if the provider is not Google/Gemini.

gemini_messages property

gemini_messages: List[GeminiContent]

The user messages converted to Google GeminiContent format. Raises TypeError if the provider is not Google/Gemini.

message property

message: ChatMessage | MessageParam | GeminiContent

The last user message in the prompt.

Returns the provider-specific message object for the last user message, parsed from the input during initialization.

messages property

messages: List[ChatMessage | MessageParam | GeminiContent]

The user message(s) in the prompt.

Returns a list of provider-specific message objects that were parsed from the input during initialization.

model property

model: str

The model identifier to use for the prompt (e.g., "gpt-4o").

model_identifier property

model_identifier: str

Concatenation of provider and model for identifying the model.

This is commonly used with frameworks like pydantic_ai to identify which model to use for an agent.

Returns:

Name Type Description
str str

Format is "{provider}:{model}" (e.g., "openai:gpt-4o")

Example
prompt = Prompt(
    model="gpt-4o",
    messages="My prompt variable is ${variable}",
    system_instructions="You are a helpful assistant",
    provider="openai",
)

# Use with pydantic_ai
agent = Agent(
    prompt.model_identifier,  # "openai:gpt-4o"
    system_prompt=prompt.system_instructions[0].content,
)

model_settings property

model_settings: ModelSettings

The model settings used for the prompt.

Returns the provider-specific settings (OpenAIChatSettings, GeminiSettings, or AnthropicSettings) wrapped in a ModelSettings union type.

openai_message property

openai_message: ChatMessage

The last user message converted to OpenAI ChatMessage format. Raises TypeError if the provider is not OpenAI.

openai_messages property

openai_messages: List[ChatMessage]

The user messages converted to OpenAI ChatMessage format. Raises TypeError if the provider is not OpenAI.

parameters property

parameters: List[str]

Extracted named parameters from the prompt messages.

Returns a list of all variable placeholders found in the prompt using the ${variable_name} syntax. These can be bound to values using the bind() or bind_mut() methods.

Example
prompt = Prompt(
    messages="Hello ${name}, your score is ${score}",
    model="gpt-4o",
    provider="openai",
)
print(prompt.parameters)  # ["name", "score"]
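The extraction can be approximated in plain Python. The library's parser is internal; this regex sketch assumes `${name}` placeholders with word-character names:

```python
import re


def extract_parameters(message: str) -> list:
    # Find ${name} placeholders, preserving first-seen order, no duplicates.
    seen = []
    for name in re.findall(r"\$\{(\w+)\}", message):
        if name not in seen:
            seen.append(name)
    return seen


print(extract_parameters("Hello ${name}, your score is ${score}"))
# ['name', 'score']
```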

provider property

provider: Provider

The provider to use for the prompt (e.g., Provider.OpenAI).

response_json_schema property

response_json_schema: Optional[str]

The JSON schema for structured output responses if provided.

Returns the raw JSON schema string that was generated from the output_type parameter during initialization. Returns None if no response format was specified.

response_json_schema_pretty property

response_json_schema_pretty: Optional[str]

The pretty-printed JSON schema for structured output responses if provided.

system_instructions property

system_instructions: List[
    ChatMessage | GeminiContent | MessageParam
]

The system instruction message(s) in the prompt.

Returns a list of provider-specific message objects for system instructions. Returns an empty list if no system instructions were provided.

bind

bind(
    name: Optional[str] = None,
    value: Optional[str | int | float | bool | list] = None,
    **kwargs: Any
) -> Prompt

Bind variables in the prompt (immutable operation).

Creates a new Prompt object with variables bound to values. This iterates over all user messages and replaces ${variable_name} placeholders with the provided values.

Parameters:

Name Type Description Default
name Optional[str]

The name of a single variable to bind (without ${} syntax)

None
value Optional[str | int | float | bool | list]

The value to bind the variable to. Must be JSON serializable.

None
**kwargs Any

Additional variables to bind. Keys are variable names, values are the values to bind.

{}

Returns:

Name Type Description
Prompt Prompt

A new Prompt object with variables bound.

Raises:

Type Description
TypeError

If no binding arguments are provided or if values are not JSON serializable.

Example
prompt = Prompt(
    messages="Hello ${name}, you scored ${score}/100",
    model="gpt-4o",
    provider="openai",
)

# Single variable binding
bound = prompt.bind("name", "Alice")

# Multiple variable binding
bound = prompt.bind(name="Alice", score=95)

# Original prompt is unchanged
print(prompt.parameters)  # ["name", "score"]
print(bound.parameters)   # []
Source code in python/scouter/stubs.pyi
def bind(
    self,
    name: Optional[str] = None,
    value: Optional[str | int | float | bool | list] = None,
    **kwargs: Any,
) -> "Prompt":
    """Bind variables in the prompt (immutable operation).

    Creates a new Prompt object with variables bound to values. This iterates
    over all user messages and replaces ${variable_name} placeholders with
    the provided values.

    Args:
        name (Optional[str]):
            The name of a single variable to bind (without ${} syntax)
        value (Optional[str | int | float | bool | list]):
            The value to bind the variable to. Must be JSON serializable.
        **kwargs:
            Additional variables to bind. Keys are variable names,
            values are the values to bind.

    Returns:
        Prompt: A new Prompt object with variables bound.

    Raises:
        TypeError: If no binding arguments are provided or if values are not
            JSON serializable.

    Example:
        ```python
        prompt = Prompt(
            messages="Hello ${name}, you scored ${score}/100",
            model="gpt-4o",
            provider="openai",
        )

        # Single variable binding
        bound = prompt.bind("name", "Alice")

        # Multiple variable binding
        bound = prompt.bind(name="Alice", score=95)

        # Original prompt is unchanged
        print(prompt.parameters)  # ["name", "score"]
        print(bound.parameters)   # []
        ```
    """

bind_mut

bind_mut(
    name: Optional[str] = None,
    value: Optional[str | int | float | bool | list] = None,
    **kwargs: Any
) -> None

Bind variables in the prompt (mutable operation).

Modifies the current Prompt object by binding variables to values. This iterates over all user messages and replaces ${variable_name} placeholders with the provided values.

Parameters:

Name Type Description Default
name Optional[str]

The name of a single variable to bind (without ${} syntax)

None
value Optional[str | int | float | bool | list]

The value to bind the variable to. Must be JSON serializable.

None
**kwargs Any

Additional variables to bind. Keys are variable names, values are the values to bind.

{}

Raises:

Type Description
TypeError

If no binding arguments are provided or if values are not JSON serializable.

Example
prompt = Prompt(
    messages="Hello ${name}, you scored ${score}/100",
    model="gpt-4o",
    provider="openai",
)

# Mutate in place
prompt.bind_mut(name="Bob", score=87)

# Prompt is now modified
print(prompt.parameters)  # []
Source code in python/scouter/stubs.pyi
def bind_mut(
    self,
    name: Optional[str] = None,
    value: Optional[str | int | float | bool | list] = None,
    **kwargs: Any,
) -> None:
    """Bind variables in the prompt (mutable operation).

    Modifies the current Prompt object by binding variables to values. This
    iterates over all user messages and replaces ${variable_name} placeholders
    with the provided values.

    Args:
        name (Optional[str]):
            The name of a single variable to bind (without ${} syntax)
        value (Optional[str | int | float | bool | list]):
            The value to bind the variable to. Must be JSON serializable.
        **kwargs:
            Additional variables to bind. Keys are variable names,
            values are the values to bind.

    Raises:
        TypeError: If no binding arguments are provided or if values are not
            JSON serializable.

    Example:
        ```python
        prompt = Prompt(
            messages="Hello ${name}, you scored ${score}/100",
            model="gpt-4o",
            provider="openai",
        )

        # Mutate in place
        prompt.bind_mut(name="Bob", score=87)

        # Prompt is now modified
        print(prompt.parameters)  # []
        ```
    """

from_path staticmethod

from_path(path: Path) -> Prompt

Load a prompt from a JSON file.

Parameters:

Name Type Description Default
path Path

The path to the prompt JSON file.

required

Returns:

Name Type Description
Prompt Prompt

The loaded prompt object.

Raises:

Type Description
IOError

If the file cannot be read

ValueError

If the JSON is invalid or cannot be parsed into a Prompt

Example
prompt = Prompt.from_path(Path("my_prompt.json"))
Source code in python/scouter/stubs.pyi
@staticmethod
def from_path(path: Path) -> "Prompt":
    """Load a prompt from a JSON file.

    Args:
        path (Path):
            The path to the prompt JSON file.

    Returns:
        Prompt: The loaded prompt object.

    Raises:
        IOError: If the file cannot be read
        ValueError: If the JSON is invalid or cannot be parsed into a Prompt

    Example:
        ```python
        prompt = Prompt.from_path(Path("my_prompt.json"))
        ```
    """

model_dump

model_dump() -> Dict[str, Any]

Returns the Prompt request object as a dictionary. For instance, if Provider is OpenAI, this will return the OpenAIChatRequest as a dict that can be passed to the OpenAI SDK.
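Example

A hedged sketch of sending the dumped request with the OpenAI Python SDK (the openai client setup is an assumption, not part of this API):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
request = prompt.model_dump()  # provider-specific request as a dict
response = client.chat.completions.create(**request)
```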

Source code in python/scouter/stubs.pyi
def model_dump(self) -> Dict[str, Any]:
    """Returns the Prompt request object as a dictionary.
    For instance, if Provider is OpenAI, this will return the OpenAIChatRequest as a dict
    that can be passed to the OpenAI SDK.
    """

model_dump_json

model_dump_json() -> str

Serialize the Prompt to a JSON string.

Returns:

Name Type Description
str str

JSON string representation of the Prompt.

Example
prompt = Prompt(messages="Hello!", model="gpt-4o", provider="openai")
json_str = prompt.model_dump_json()
Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Serialize the Prompt to a JSON string.

    Returns:
        str: JSON string representation of the Prompt.

    Example:
        ```python
        prompt = Prompt(messages="Hello!", model="gpt-4o", provider="openai")
        json_str = prompt.model_dump_json()
        ```
    """

model_validate_json staticmethod

model_validate_json(json_string: str) -> Prompt

Validate and parse a Prompt from a JSON string.

Parameters:

Name Type Description Default
json_string str

A JSON string representation of a Prompt object.

required

Returns:

Name Type Description
Prompt Prompt

The parsed Prompt object.

Raises:

Type Description
ValueError

If the JSON is invalid or cannot be parsed into a Prompt

Example
json_str = '{"model": "gpt-4o", "provider": "openai", ...}'
prompt = Prompt.model_validate_json(json_str)
Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate_json(json_string: str) -> "Prompt":
    """Validate and parse a Prompt from a JSON string.

    Args:
        json_string (str):
            A JSON string representation of a Prompt object.

    Returns:
        Prompt:
            The parsed Prompt object.

    Raises:
        ValueError: If the JSON is invalid or cannot be parsed into a Prompt

    Example:
        ```python
        json_str = '{"model": "gpt-4o", "provider": "openai", ...}'
        prompt = Prompt.model_validate_json(json_str)
        ```
    """

save_prompt

save_prompt(path: Optional[Path] = None) -> Path

Save the prompt to a JSON file.

Parameters:

Name Type Description Default
path Optional[Path]

The path to save the prompt to. If None, saves to the current working directory with default filename "prompt.json".

None

Returns:

Name Type Description
Path Path

The path where the prompt was saved.

Example
prompt = Prompt(messages="Hello!", model="gpt-4o", provider="openai")
saved_path = prompt.save_prompt(Path("my_prompt.json"))
Source code in python/scouter/stubs.pyi
def save_prompt(self, path: Optional[Path] = None) -> Path:
    """Save the prompt to a JSON file.

    Args:
        path (Optional[Path]):
            The path to save the prompt to. If None, saves to the current
            working directory with default filename "prompt.json".

    Returns:
        Path: The path where the prompt was saved.

    Example:
        ```python
        prompt = Prompt(messages="Hello!", model="gpt-4o", provider="openai")
        saved_path = prompt.save_prompt(Path("my_prompt.json"))
        ```
    """

PromptFeedback

Feedback about prompt blocking.

Indicates why a prompt was blocked by content filters.

Examples:

>>> feedback = PromptFeedback(
...     block_reason=BlockedReason.Safety,
...     safety_ratings=[...],
...     block_reason_message="Prompt contains unsafe content"
... )

block_reason property

block_reason: Optional[BlockedReason]

Why the prompt was blocked.

block_reason_message property

block_reason_message: Optional[str]

Human-readable block reason.

safety_ratings property

safety_ratings: Optional[List[SafetyRating]]

Safety ratings for the prompt.

PromptTokenDetails

Detailed token usage for input prompt.

This class provides information about tokens used in the prompt, including cached tokens and audio tokens.

Examples:

>>> # Accessing prompt token details
>>> usage = response.usage
>>> details = usage.prompt_tokens_details
>>> print(f"Cached tokens: {details.cached_tokens}")
>>> print(f"Audio tokens: {details.audio_tokens}")

audio_tokens property

audio_tokens: int

Number of audio tokens.

cached_tokens property

cached_tokens: int

Number of cached tokens.

Provider

Provider enumeration for LLM services.

Specifies which LLM provider to use for prompts, agents, and workflows.

Examples:

>>> provider = Provider.OpenAI
>>> agent = Agent(provider=provider)

Anthropic instance-attribute

Anthropic: Provider

Anthropic provider

Gemini instance-attribute

Gemini: Provider

Google Gemini provider

Google instance-attribute

Google: Provider

Google provider (alias for Gemini)

OpenAI instance-attribute

OpenAI: Provider

OpenAI provider

Undefined instance-attribute

Undefined: Provider

Undefined provider

Vertex instance-attribute

Vertex: Provider

Google Vertex AI provider

PsiAlertConfig

PsiAlertConfig(
    dispatch_config: Optional[
        SlackDispatchConfig | OpsGenieDispatchConfig
    ] = None,
    schedule: Optional[str | CommonCrons] = None,
    features_to_monitor: List[str] = [],
    threshold: Optional[
        PsiThresholdType
    ] = PsiChiSquareThreshold(),
)

Parameters:

Name Type Description Default
dispatch_config Optional[SlackDispatchConfig | OpsGenieDispatchConfig]

Alert dispatch configuration to use. Defaults to an internal "Console" type where the alerts will be logged to the console

None
schedule Optional[str | CommonCrons]

Schedule to run monitor. Defaults to daily at midnight

None
features_to_monitor List[str]

List of features to monitor. Defaults to empty list, which means all features

[]
threshold Optional[PsiThresholdType]

Configuration that helps determine how to calculate PSI critical values. Defaults to PsiChiSquareThreshold, which uses the chi-square distribution.

PsiChiSquareThreshold()
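Example

A hedged sketch of a custom alert configuration; the SlackDispatchConfig arguments shown are assumptions:

```python
alert_config = PsiAlertConfig(
    dispatch_config=SlackDispatchConfig(channel="#ml-alerts"),  # args assumed
    schedule="0 0 * * *",  # daily at midnight (cron syntax)
    features_to_monitor=["feature_1", "feature_2"],
    threshold=PsiChiSquareThreshold(alpha=0.01),  # 99% confidence
)
```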
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    dispatch_config: Optional[SlackDispatchConfig | OpsGenieDispatchConfig] = None,
    schedule: Optional[str | CommonCrons] = None,
    features_to_monitor: List[str] = [],
    threshold: Optional[PsiThresholdType] = PsiChiSquareThreshold(),
):
    """Initialize alert config

    Args:
        dispatch_config:
            Alert dispatch configuration to use. Defaults to an internal "Console" type where
            the alerts will be logged to the console
        schedule:
            Schedule to run monitor. Defaults to daily at midnight
        features_to_monitor:
            List of features to monitor. Defaults to empty list, which means all features
        threshold:
            Configuration that helps determine how to calculate PSI critical values.
            Defaults to PsiChiSquareThreshold, which uses the chi-square distribution.
    """

dispatch_config property

dispatch_config: DispatchConfigType

Return the dispatch config

dispatch_type property

dispatch_type: AlertDispatchType

Return the alert dispatch type

features_to_monitor property writable

features_to_monitor: List[str]

Return the features to monitor

schedule property writable

schedule: str

Return the schedule

threshold property

threshold: PsiThresholdType

Return the threshold config

PsiChiSquareThreshold

PsiChiSquareThreshold(alpha: float = 0.05)

Uses the asymptotic chi-square distribution of PSI.

The chi-square method is generally more statistically rigorous than normal approximation, especially for smaller sample sizes.

Parameters:

Name Type Description Default
alpha float

Significance level (0.0 to 1.0, exclusive). Common values: 0.05 (95% confidence), 0.01 (99% confidence)

0.05

Raises:

Type Description
ValueError

If alpha not in range (0.0, 1.0)

Source code in python/scouter/stubs.pyi
def __init__(self, alpha: float = 0.05):
    """Initialize PSI threshold using chi-square approximation.

    Uses the asymptotic chi-square distribution of PSI.

    The chi-square method is generally more statistically rigorous than
    normal approximation, especially for smaller sample sizes.

    Args:
        alpha: Significance level (0.0 to 1.0, exclusive). Common values:
               0.05 (95% confidence), 0.01 (99% confidence)

    Raises:
        ValueError: If alpha not in range (0.0, 1.0)
    """

alpha property writable

alpha: float

Statistical significance level for drift detection.

PsiDriftConfig

PsiDriftConfig(
    space: str = "__missing__",
    name: str = "__missing__",
    version: str = "0.1.0",
    alert_config: PsiAlertConfig = PsiAlertConfig(),
    config_path: Optional[Path] = None,
    categorical_features: Optional[list[str]] = None,
    binning_strategy: (
        QuantileBinning | EqualWidthBinning
    ) = QuantileBinning(num_bins=10),
)

Parameters:

Name Type Description Default
space str

Model space

'__missing__'
name str

Model name

'__missing__'
version str

Model version. Defaults to 0.1.0

'0.1.0'
alert_config PsiAlertConfig

Alert configuration

PsiAlertConfig()
config_path Optional[Path]

Optional path to load config from.

None
categorical_features Optional[list[str]]

List of features to treat as categorical for PSI calculation.

None
binning_strategy QuantileBinning | EqualWidthBinning

Strategy for binning continuous features during PSI calculation. Supports:
- QuantileBinning (R-7 method, Hyndman & Fan Type 7)
- EqualWidthBinning, which divides the range of values into fixed-width bins
Default is QuantileBinning with 10 bins. You can also specify methods like Doane's rule with EqualWidthBinning.

QuantileBinning(num_bins=10)
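Example

A hedged sketch following the signature above; the space, name, and feature names are illustrative:

```python
config = PsiDriftConfig(
    space="ml-platform",
    name="fraud-model",
    version="1.0.0",
    categorical_features=["country", "device_type"],
    binning_strategy=EqualWidthBinning(),  # constructor defaults assumed
)
```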
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    space: str = "__missing__",
    name: str = "__missing__",
    version: str = "0.1.0",
    alert_config: PsiAlertConfig = PsiAlertConfig(),
    config_path: Optional[Path] = None,
    categorical_features: Optional[list[str]] = None,
    binning_strategy: QuantileBinning | EqualWidthBinning = QuantileBinning(num_bins=10),
):
    """Initialize monitor config

    Args:
        space:
            Model space
        name:
            Model name
        version:
            Model version. Defaults to 0.1.0
        alert_config:
            Alert configuration
        config_path:
            Optional path to load config from.
        categorical_features:
            List of features to treat as categorical for PSI calculation.
        binning_strategy:
            Strategy for binning continuous features during PSI calculation.
            Supports:
              - QuantileBinning (R-7 method, Hyndman & Fan Type 7).
              - EqualWidthBinning which divides the range of values into fixed-width bins.
            Default is QuantileBinning with 10 bins. You can also specify methods like Doane's rule with EqualWidthBinning.
    """

alert_config property writable

alert_config: PsiAlertConfig

Alert configuration

binning_strategy property writable

binning_strategy: QuantileBinning | EqualWidthBinning

binning_strategy

categorical_features property writable

categorical_features: list[str]

list of categorical features

drift_type property

drift_type: DriftType

Drift type

feature_map property

feature_map: Optional[FeatureMap]

Feature map

name property writable

name: str

Model Name

space property writable

space: str

Model space

uid property writable

uid: str

Unique identifier for the drift config

version property writable

version: str

Model version

load_from_json_file staticmethod

load_from_json_file(path: Path) -> PsiDriftConfig

Load config from json file

Parameters:

Name Type Description Default
path Path

Path to json file to load config from.

required
Source code in python/scouter/stubs.pyi
@staticmethod
def load_from_json_file(path: Path) -> "PsiDriftConfig":
    """Load config from json file

    Args:
        path:
            Path to json file to load config from.
    """

model_dump_json

model_dump_json() -> str

Return the json representation of the config.

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return the json representation of the config."""

update_config_args

update_config_args(
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    alert_config: Optional[PsiAlertConfig] = None,
    categorical_features: Optional[list[str]] = None,
    binning_strategy: Optional[
        QuantileBinning | EqualWidthBinning
    ] = None,
) -> None

Inplace operation that updates config args

Parameters:

Name Type Description Default
space Optional[str]

Model space

None
name Optional[str]

Model name

None
version Optional[str]

Model version

None
alert_config Optional[PsiAlertConfig]

Alert configuration

None
categorical_features Optional[list[str]]

Categorical features

None
binning_strategy Optional[QuantileBinning | EqualWidthBinning]

Binning strategy

None
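Example

A minimal sketch of an in-place update (assumes an existing config object):

```python
config.update_config_args(
    version="0.2.0",
    alert_config=PsiAlertConfig(features_to_monitor=["feature_1"]),
)
```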
Source code in python/scouter/stubs.pyi
def update_config_args(
    self,
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    alert_config: Optional[PsiAlertConfig] = None,
    categorical_features: Optional[list[str]] = None,
    binning_strategy: Optional[QuantileBinning | EqualWidthBinning] = None,
) -> None:
    """Inplace operation that updates config args

    Args:
        space:
            Model space
        name:
            Model name
        version:
            Model version
        alert_config:
            Alert configuration
        categorical_features:
            Categorical features
        binning_strategy:
            Binning strategy
    """

PsiDriftMap

Drift map of features

features property

features: Dict[str, float]

Returns dictionary of features and their reported drift, if any

name property

name: str

name to associate with drift map

space property

space: str

Space to associate with drift map

version property

version: str

Version to associate with drift map

model_dump_json

model_dump_json() -> str

Return json representation of data drift

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return json representation of data drift"""

model_validate_json staticmethod

model_validate_json(json_string: str) -> PsiDriftMap

Load drift map from json file.

Parameters:

Name Type Description Default
json_string str

JSON string representation of the drift map

required
Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate_json(json_string: str) -> "PsiDriftMap":
    """Load drift map from json file.

    Args:
        json_string:
            JSON string representation of the drift map
    """

save_to_json

save_to_json(path: Optional[Path] = None) -> Path

Save drift map to json file

Parameters:

Name Type Description Default
path Optional[Path]

Optional path to save the drift map. If None, outputs to psi_drift_map.json

None

Returns:

Type Description
Path

Path to the saved json file
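Example

A minimal save/reload round trip (assumes an existing drift_map object):

```python
path = drift_map.save_to_json()  # defaults to psi_drift_map.json
restored = PsiDriftMap.model_validate_json(path.read_text())
```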

Source code in python/scouter/stubs.pyi
def save_to_json(self, path: Optional[Path] = None) -> Path:
    """Save drift map to json file

    Args:
        path:
            Optional path to save the drift map. If None, outputs to `psi_drift_map.json`

    Returns:
        Path to the saved json file

    """

PsiDriftProfile

config property

config: PsiDriftConfig

Return the monitor config.

features property

features: Dict[str, PsiFeatureDriftProfile]

Return the list of features.

scouter_version property

scouter_version: str

Return scouter version used to create DriftProfile

uid property

uid: str

Return the unique identifier for the drift profile

from_file staticmethod

from_file(path: Path) -> PsiDriftProfile

Load drift profile from file

Parameters:

Name Type Description Default
path Path

Path to the file

required
Source code in python/scouter/stubs.pyi
@staticmethod
def from_file(path: Path) -> "PsiDriftProfile":
    """Load drift profile from file

    Args:
        path: Path to the file
    """

model_dump

model_dump() -> Dict[str, Any]

Return dictionary representation of drift profile

Source code in python/scouter/stubs.pyi
def model_dump(self) -> Dict[str, Any]:
    """Return dictionary representation of drift profile"""

model_dump_json

model_dump_json() -> str

Return json representation of drift profile

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return json representation of drift profile"""

model_validate staticmethod

model_validate(data: Dict[str, Any]) -> PsiDriftProfile

Load drift profile from dictionary

Parameters:

Name Type Description Default
data Dict[str, Any]

DriftProfile dictionary

required
Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate(data: Dict[str, Any]) -> "PsiDriftProfile":
    """Load drift profile from dictionary

    Args:
        data:
            DriftProfile dictionary
    """

model_validate_json staticmethod

model_validate_json(json_string: str) -> PsiDriftProfile

Load drift profile from json

Parameters:

Name Type Description Default
json_string str

JSON string representation of the drift profile

required
Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate_json(json_string: str) -> "PsiDriftProfile":
    """Load drift profile from json

    Args:
        json_string:
            JSON string representation of the drift profile

    """

save_to_json

save_to_json(path: Optional[Path] = None) -> Path

Save drift profile to json file

Parameters:

Name Type Description Default
path Optional[Path]

Optional path to save the drift profile. If None, outputs to psi_drift_profile.json

None

Returns:

Type Description
Path

Path to the saved json file

Source code in python/scouter/stubs.pyi
def save_to_json(self, path: Optional[Path] = None) -> Path:
    """Save drift profile to json file

    Args:
        path:
            Optional path to save the drift profile. If None, outputs to `psi_drift_profile.json`

    Returns:
        Path to the saved json file
    """

update_config_args

update_config_args(
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    alert_config: Optional[PsiAlertConfig] = None,
    categorical_features: Optional[list[str]] = None,
    binning_strategy: Optional[
        QuantileBinning | EqualWidthBinning
    ] = None,
) -> None

Inplace operation that updates config args

Parameters:

Name Type Description Default
name Optional[str]

Model name

None
space Optional[str]

Model space

None
version Optional[str]

Model version

None
alert_config Optional[PsiAlertConfig]

Alert configuration

None
categorical_features Optional[list[str]]

Categorical Features

None
binning_strategy Optional[QuantileBinning | EqualWidthBinning]

Binning strategy

None
Source code in python/scouter/stubs.pyi
def update_config_args(
    self,
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    alert_config: Optional[PsiAlertConfig] = None,
    categorical_features: Optional[list[str]] = None,
    binning_strategy: Optional[QuantileBinning | EqualWidthBinning] = None,
) -> None:
    """Inplace operation that updates config args

    Args:
        name:
            Model name
        space:
            Model space
        version:
            Model version
        alert_config:
            Alert configuration
        categorical_features:
            Categorical Features
        binning_strategy:
            Binning strategy
    """

PsiFeatureDriftProfile

bin_type property

bin_type: BinType

Return the bin type.

bins property

bins: List[Bin]

Return the bins

id property

id: str

Return the feature name

timestamp property

timestamp: str

Return the timestamp.

PsiFixedThreshold

PsiFixedThreshold(threshold: float = 0.25)

Uses a predetermined PSI threshold value, similar to traditional "rule of thumb" approaches (e.g., 0.10 for moderate drift, 0.25 for significant drift).

Parameters:

Name Type Description Default
threshold float

Fixed PSI threshold value (must be positive). Common industry values: 0.10, 0.25

0.25

Raises:

Type Description
ValueError

If threshold is not positive
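Example

The rule-of-thumb cutoffs above apply to the standard PSI statistic, the sum over bins of (actual% − expected%) · ln(actual% / expected%). A minimal pure-Python sketch (independent of scouter) for comparing a computed PSI against a fixed threshold:

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index over per-bin proportions.

    Both inputs are bin proportions that each sum to 1. A small epsilon
    guards against empty bins, which would make the log undefined.
    """
    eps = 1e-6
    total = 0.0
    for e, a in zip(expected, actual):
        e = max(e, eps)
        a = max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

# Identical distributions yield a PSI of 0; a drifted one is positive.
print(psi([0.25, 0.25, 0.25, 0.25], [0.25, 0.25, 0.25, 0.25]))  # 0.0
print(psi([0.25, 0.25, 0.25, 0.25], [0.10, 0.20, 0.30, 0.40]) > 0.10)  # True
```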

Source code in python/scouter/stubs.pyi
def __init__(self, threshold: float = 0.25):
    """Initialize PSI threshold using a fixed value.

    Uses a predetermined PSI threshold value, similar to traditional
    "rule of thumb" approaches (e.g., 0.10 for moderate drift, 0.25
    for significant drift).

    Args:
        threshold: Fixed PSI threshold value (must be positive).
                  Common industry values: 0.10, 0.25

    Raises:
        ValueError: If threshold is not positive
    """

threshold property writable

threshold: float

Fixed PSI threshold value for drift detection.

PsiNormalThreshold

PsiNormalThreshold(alpha: float = 0.05)

Uses the asymptotic normal distribution of PSI to calculate critical values for population drift detection.

Parameters:

Name Type Description Default
alpha float

Significance level (0.0 to 1.0, exclusive). Common values: 0.05 (95% confidence), 0.01 (99% confidence)

0.05

Raises:

Type Description
ValueError

If alpha not in range (0.0, 1.0)

Source code in python/scouter/stubs.pyi
def __init__(self, alpha: float = 0.05):
    """Initialize PSI threshold using normal approximation.

    Uses the asymptotic normal distribution of PSI to calculate critical values
    for population drift detection.

    Args:
        alpha: Significance level (0.0 to 1.0, exclusive). Common values:
               0.05 (95% confidence), 0.01 (99% confidence)

    Raises:
        ValueError: If alpha not in range (0.0, 1.0)
    """

alpha property writable

alpha: float

Statistical significance level for drift detection.

PsiRecord

PsiRecord(
    uid: str, feature: str, bin_id: int, bin_count: int
)

Parameters:

Name Type Description Default
uid str

Unique identifier for the psi record. Must correspond to an existing entity in Scouter.

required
feature str

Feature name

required
bin_id int

Bin ID

required
bin_count int

Bin count

required
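Example

A minimal sketch; the uid shown is a placeholder for a real Scouter entity:

```python
record = PsiRecord(
    uid="profile-uid",  # placeholder; must reference an existing entity
    feature="feature_1",
    bin_id=3,
    bin_count=42,
)
print(record.model_dump_json())
```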
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    uid: str,
    feature: str,
    bin_id: int,
    bin_count: int,
):
    """Initialize spc drift server record

    Args:
        uid:
            Unique identifier for the psi record.
            Must correspond to an existing entity in Scouter.
        feature:
            Feature name
        bin_id:
            Bin ID
        bin_count:
            Bin count
    """

bin_count property

bin_count: int

Return the bin count.

bin_id property

bin_id: int

Return the bin id.

created_at property

created_at: datetime

Return the created at timestamp.

feature property

feature: str

Return the feature.

uid property

uid: str

Returns the unique identifier.

model_dump_json

model_dump_json() -> str

Return the json representation of the record.

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return the json representation of the record."""

to_dict

to_dict() -> Dict[str, str]

Return the dictionary representation of the record.

Source code in python/scouter/stubs.pyi
def to_dict(self) -> Dict[str, str]:
    """Return the dictionary representation of the record."""

QuantileBinning

QuantileBinning(num_bins: int = 10)

This strategy uses the R-7 quantile method (Hyndman & Fan Type 7) to compute bin edges. It is the default quantile method in R and provides continuous, median-unbiased estimates that are approximately unbiased for normal distributions.

The R-7 method defines quantiles using
  • m = 1 - p
  • j = floor(n * p + m)
  • h = n * p + m - j
  • Q(p) = (1 - h) * x[j] + h * x[j+1]
Reference

Hyndman, R. J. & Fan, Y. (1996). "Sample quantiles in statistical packages." The American Statistician, 50(4), pp. 361–365. PDF: https://www.amherst.edu/media/view/129116/original/Sample+Quantiles.pdf

Parameters:

Name Type Description Default
num_bins int

Number of bins to compute using the R-7 quantile method.

10
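Example

The R-7 edge computation can be sketched in pure Python (independent of scouter); this mirrors the formula above but uses 0-based indexing, so the fractional position is (n − 1) · p:

```python
import math

def r7_quantile(sorted_x: list[float], p: float) -> float:
    """R-7 (Hyndman & Fan Type 7) quantile of pre-sorted data, 0-based."""
    n = len(sorted_x)
    pos = (n - 1) * p          # equals n*p + (1 - p) - 1 in the 1-based formula
    j = math.floor(pos)
    h = pos - j
    if j + 1 < n:
        return (1 - h) * sorted_x[j] + h * sorted_x[j + 1]
    return sorted_x[j]

def bin_edges(sorted_x: list[float], num_bins: int = 10) -> list[float]:
    """Quantile bin edges: num_bins + 1 edges at evenly spaced probabilities."""
    return [r7_quantile(sorted_x, i / num_bins) for i in range(num_bins + 1)]

data = [1.0, 2.0, 3.0, 4.0]
print(r7_quantile(data, 0.5))        # 2.5
print(bin_edges(data, num_bins=2))   # [1.0, 2.5, 4.0]
```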
Source code in python/scouter/stubs.pyi
def __init__(self, num_bins: int = 10):
    """Initialize the quantile binning strategy.

    This strategy uses the R-7 quantile method (Hyndman & Fan Type 7) to
    compute bin edges. It is the default quantile method in R and provides
    continuous, median-unbiased estimates that are approximately unbiased
    for normal distributions.

    The R-7 method defines quantiles using:
        - m = 1 - p
        - j = floor(n * p + m)
        - h = n * p + m - j
        - Q(p) = (1 - h) * x[j] + h * x[j+1]

    Reference:
        Hyndman, R. J. & Fan, Y. (1996). "Sample quantiles in statistical packages."
        The American Statistician, 50(4), pp. 361–365.
        PDF: https://www.amherst.edu/media/view/129116/original/Sample+Quantiles.pdf

    Args:
        num_bins:
            Number of bins to compute using the R-7 quantile method.
    """

num_bins property writable

num_bins: int

The number of bins to create using the R-7 quantile method
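The R-7 interpolation defined above can be sketched in pure Python. This is a minimal illustration of the formula, not scouter's internal implementation:

```python
import math

def r7_quantile(data, p):
    """Compute the p-th quantile using the R-7 (Hyndman & Fan Type 7) method."""
    xs = sorted(data)
    n = len(xs)
    m = 1 - p
    j = math.floor(n * p + m)   # 1-based index of the lower sample
    h = n * p + m - j           # interpolation weight between neighbors
    j0 = min(max(j, 1), n) - 1  # clamp and convert to 0-based indexing
    j1 = min(j0 + 1, n - 1)
    return (1 - h) * xs[j0] + h * xs[j1]

print(r7_quantile([1, 3, 5, 7], 0.5))  # interpolates between 3 and 5 -> 4.0
```

This matches Python's `statistics.quantiles(..., method="inclusive")`, which also implements R-7.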

Quantiles

q25 property

q25: float

25th quantile

q50 property

q50: float

50th quantile

q75 property

q75: float

75th quantile

q99 property

q99: float

99th quantile

Queue

Individual queue associated with a drift profile

identifier property

identifier: str

Return the identifier of the queue

insert

insert(
    item: Union[Features, Metrics, GenAIEvalRecord]
) -> None

Insert a record into the queue

Parameters:

Name Type Description Default
item Union[Features, Metrics, GenAIEvalRecord]

Item to insert into the queue. Can be an instance of Features, Metrics, or GenAIEvalRecord.

required
Example
features = Features(
    features=[
        Feature("feature_1", 1),
        Feature("feature_2", 2.0),
        Feature("feature_3", "value"),
    ]
)
queue.insert(features)
Source code in python/scouter/stubs.pyi
def insert(self, item: Union[Features, Metrics, GenAIEvalRecord]) -> None:
    """Insert a record into the queue

    Args:
        item:
            Item to insert into the queue.
            Can be an instance of Features, Metrics, or GenAIEvalRecord.

    Example:
        ```python
        features = Features(
            features=[
                Feature("feature_1", 1),
                Feature("feature_2", 2.0),
                Feature("feature_3", "value"),
            ]
        )
        queue.insert(features)
        ```
    """

QueueFeature

QueueFeature(name: str, value: Any)

Parameters:

Name Type Description Default
name str

Name of the feature

required
value Any

Value of the feature. Can be an int, float, or string.

required
Example
feature = Feature("feature_1", 1) # int feature
feature = Feature("feature_2", 2.0) # float feature
feature = Feature("feature_3", "value") # string feature
Source code in python/scouter/stubs.pyi
def __init__(self, name: str, value: Any) -> None:
    """Initialize feature. Will attempt to convert the value to its corresponding feature type.
    Currently supported types are int, float, and string.

    Args:
        name:
            Name of the feature
        value:
            Value of the feature. Can be an int, float, or string.

    Example:
        ```python
        feature = Feature("feature_1", 1) # int feature
        feature = Feature("feature_2", 2.0) # float feature
        feature = Feature("feature_3", "value") # string feature
        ```
    """

categorical staticmethod

categorical(name: str, value: str) -> QueueFeature

Create a categorical feature

Parameters:

Name Type Description Default
name str

Name of the feature

required
value str

Value of the feature

required
Source code in python/scouter/stubs.pyi
@staticmethod
def categorical(name: str, value: str) -> "QueueFeature":
    """Create a categorical feature

    Args:
        name:
            Name of the feature
        value:
            Value of the feature
    """

float staticmethod

float(name: str, value: float) -> QueueFeature

Create a float feature

Parameters:

Name Type Description Default
name str

Name of the feature

required
value float

Value of the feature

required
Source code in python/scouter/stubs.pyi
@staticmethod
def float(name: str, value: float) -> "QueueFeature":
    """Create a float feature

    Args:
        name:
            Name of the feature
        value:
            Value of the feature
    """

int staticmethod

int(name: str, value: int) -> QueueFeature

Create an integer feature

Parameters:

Name Type Description Default
name str

Name of the feature

required
value int

Value of the feature

required
Source code in python/scouter/stubs.pyi
@staticmethod
def int(name: str, value: int) -> "QueueFeature":
    """Create an integer feature

    Args:
        name:
            Name of the feature
        value:
            Value of the feature
    """

string staticmethod

string(name: str, value: str) -> QueueFeature

Create a string feature

Parameters:

Name Type Description Default
name str

Name of the feature

required
value str

Value of the feature

required
Source code in python/scouter/stubs.pyi
@staticmethod
def string(name: str, value: str) -> "QueueFeature":
    """Create a string feature

    Args:
        name:
            Name of the feature
        value:
            Value of the feature
    """

RabbitMQConfig

RabbitMQConfig(
    host: Optional[str] = None,
    port: Optional[int] = None,
    username: Optional[str] = None,
    password: Optional[str] = None,
    queue: Optional[str] = None,
    max_retries: int = 3,
)

Parameters:

Name Type Description Default
host Optional[str]

RabbitMQ host. If not provided, the value of the RABBITMQ_HOST environment variable is used.

None
port Optional[int]

RabbitMQ port. If not provided, the value of the RABBITMQ_PORT environment variable is used.

None
username Optional[str]

RabbitMQ username. If not provided, the value of the RABBITMQ_USERNAME environment variable is used.

None
password Optional[str]

RabbitMQ password. If not provided, the value of the RABBITMQ_PASSWORD environment variable is used.

None
queue Optional[str]

RabbitMQ queue to publish messages to. If not provided, the value of the RABBITMQ_QUEUE environment variable is used.

None
max_retries int

Maximum number of retries to attempt when publishing messages. Default is 3.

3
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    host: Optional[str] = None,
    port: Optional[int] = None,
    username: Optional[str] = None,
    password: Optional[str] = None,
    queue: Optional[str] = None,
    max_retries: int = 3,
) -> None:
    """RabbitMQ configuration to use with the RabbitMQProducer.

    Args:
        host:
            RabbitMQ host.
            If not provided, the value of the RABBITMQ_HOST environment variable is used.

        port:
            RabbitMQ port.
            If not provided, the value of the RABBITMQ_PORT environment variable is used.

        username:
            RabbitMQ username.
            If not provided, the value of the RABBITMQ_USERNAME environment variable is used.

        password:
            RabbitMQ password.
            If not provided, the value of the RABBITMQ_PASSWORD environment variable is used.

        queue:
            RabbitMQ queue to publish messages to.
            If not provided, the value of the RABBITMQ_QUEUE environment variable is used.

        max_retries:
            Maximum number of retries to attempt when publishing messages.
            Default is 3.
    """
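The argument-then-environment resolution described above can be sketched as follows (`resolve_setting` is a hypothetical helper for illustration, not part of the scouter API):

```python
import os

def resolve_setting(explicit, env_var, default=None):
    # An explicit argument wins; otherwise fall back to the environment, then the default.
    if explicit is not None:
        return explicit
    return os.environ.get(env_var, default)

os.environ["RABBITMQ_HOST"] = "rabbit.internal"
host = resolve_setting(None, "RABBITMQ_HOST")  # taken from the environment
port = resolve_setting(5673, "RABBITMQ_PORT")  # explicit argument wins
```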

RagChunk

RAG chunk information.

Text chunk from RAG retrieval with optional page information.

Examples:

>>> chunk = RagChunk(
...     text="Retrieved text content",
...     page_span=PageSpan(first_page=1, last_page=2)
... )

page_span property

page_span: Optional[PageSpan]

Page range for this chunk.

text property

text: str

The chunk text.

RagResource

RagResource(
    rag_corpus: Optional[str] = None,
    rag_file_ids: Optional[List[str]] = None,
)

RAG corpus and file specification.

Specifies which RAG corpus and optionally which files to use.

Examples:

>>> # Use entire corpus
>>> resource = RagResource(
...     rag_corpus="projects/my-project/locations/us/ragCorpora/my-corpus"
... )
>>> # Use specific files from corpus
>>> resource = RagResource(
...     rag_corpus="projects/my-project/locations/us/ragCorpora/my-corpus",
...     rag_file_ids=["file1", "file2"]
... )

Parameters:

Name Type Description Default
rag_corpus Optional[str]

RAG corpus resource name

None
rag_file_ids Optional[List[str]]

List of file IDs within the corpus

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    rag_corpus: Optional[str] = None,
    rag_file_ids: Optional[List[str]] = None,
) -> None:
    """Initialize RAG resource.

    Args:
        rag_corpus (Optional[str]):
            RAG corpus resource name
        rag_file_ids (Optional[List[str]]):
            List of file IDs within the corpus
    """

rag_corpus property

rag_corpus: Optional[str]

The RAG corpus resource name.

rag_file_ids property

rag_file_ids: Optional[List[str]]

The file IDs.

RagRetrievalConfig

RagRetrievalConfig(
    top_k: Optional[int] = None,
    filter: Optional[Filter] = None,
    ranking: Optional[Ranking] = None,
)

Configuration for RAG retrieval behavior.

Controls filtering, ranking, and other retrieval parameters.

Examples:

>>> config = RagRetrievalConfig(
...     top_k=5,
...     filter=Filter(metadata_filter="category='technical'"),
...     ranking=Ranking(
...         rank_service=RankService(model_name="semantic-ranker-512@latest")
...     )
... )

Parameters:

Name Type Description Default
top_k Optional[int]

Number of top results to retrieve

None
filter Optional[Filter]

Filtering configuration

None
ranking Optional[Ranking]

Ranking configuration

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    top_k: Optional[int] = None,
    filter: Optional[Filter] = None,
    ranking: Optional[Ranking] = None,
) -> None:
    """Initialize RAG retrieval configuration.

    Args:
        top_k (Optional[int]):
            Number of top results to retrieve
        filter (Optional[Filter]):
            Filtering configuration
        ranking (Optional[Ranking]):
            Ranking configuration
    """

filter property

filter: Optional[Filter]

Filter configuration.

ranking property

ranking: Optional[Ranking]

Ranking configuration.

top_k property

top_k: Optional[int]

Number of top results.

RankService

RankService(model_name: Optional[str] = None)

Rank service configuration.

Configures the ranking service for RAG results.

Examples:

>>> service = RankService(model_name="semantic-ranker-512@latest")

Parameters:

Name Type Description Default
model_name Optional[str]

Model name for ranking

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    model_name: Optional[str] = None,
) -> None:
    """Initialize rank service.

    Args:
        model_name (Optional[str]):
            Model name for ranking
    """

model_name property

model_name: Optional[str]

The ranking model name.

Ranking

Ranking(
    rank_service: Optional[RankService] = None,
    llm_ranker: Optional[LlmRanker] = None,
)

Ranking and reranking configuration.

Configures how RAG results are ranked.

Examples:

>>> # Using rank service
>>> ranking = Ranking(
...     rank_service=RankService(model_name="semantic-ranker-512@latest")
... )
>>> # Using LLM ranker
>>> ranking = Ranking(
...     llm_ranker=LlmRanker(model_name="gemini-1.5-flash")
... )

Exactly one of rank_service or llm_ranker must be provided.

Parameters:

Name Type Description Default
rank_service Optional[RankService]

Rank service configuration

None
llm_ranker Optional[LlmRanker]

LLM ranker configuration

None

Raises:

Type Description
TypeError

If both or neither are provided

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    rank_service: Optional[RankService] = None,
    llm_ranker: Optional[LlmRanker] = None,
) -> None:
    """Initialize ranking configuration.

    Exactly one of rank_service or llm_ranker must be provided.

    Args:
        rank_service (Optional[RankService]):
            Rank service configuration
        llm_ranker (Optional[LlmRanker]):
            LLM ranker configuration

    Raises:
        TypeError: If both or neither are provided
    """

ranking_config property

ranking_config: RankingConfig

The ranking configuration.

RankingConfig

Union type for ranking configuration.

Represents either rank service or LLM ranker configuration.

Examples:

>>> # Use rank service
>>> config = RankingConfig.RankService(
...     RankService(model_name="semantic-ranker-512@latest")
... )
>>> # Use LLM ranker
>>> config = RankingConfig.LlmRanker(
...     LlmRanker(model_name="gemini-1.5-flash")
... )

RedactedThinkingBlock

Redacted thinking content block in response.

Redacted version of thinking content.

Examples:

>>> block = response.content[0]
>>> print(block.data)

data property

data: str

Redacted data.

type property

type: str

Block type.

RedactedThinkingBlockParam

RedactedThinkingBlockParam(data: str)

Redacted thinking content block parameter.

Redacted version of Claude's thinking process.

Examples:

>>> block = RedactedThinkingBlockParam(data="[REDACTED]")

Parameters:

Name Type Description Default
data str

Redacted thinking data

required
Source code in python/scouter/stubs.pyi
def __init__(self, data: str) -> None:
    """Initialize redacted thinking block parameter.

    Args:
        data (str):
            Redacted thinking data
    """

data property

data: str

Redacted data.

type property

type: str

Content type (always 'redacted_thinking').

RedisConfig

RedisConfig(
    address: Optional[str] = None,
    chanel: Optional[str] = None,
)

Parameters:

Name Type Description Default
address Optional[str]

Redis address. If not provided, the value of the REDIS_ADDR environment variable is used and defaults to "redis://localhost:6379".

None
chanel Optional[str]

Redis channel to publish messages to. If not provided, the value of the REDIS_CHANNEL environment variable is used and defaults to "scouter_monitoring".

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    address: Optional[str] = None,
    chanel: Optional[str] = None,
) -> None:
    """Redis configuration to use with a Redis producer

    Args:
        address (Optional[str]):
            Redis address.
            If not provided, the value of the REDIS_ADDR environment variable is used and defaults to
            "redis://localhost:6379".

        chanel (Optional[str]):
            Redis channel to publish messages to.
            If not provided, the value of the REDIS_CHANNEL environment variable is used and defaults to
            "scouter_monitoring".
    """

ResponseLogProbs

tokens property

tokens: List[TokenLogProbs]

The log probabilities of the tokens in the response. This is primarily used for debugging and analysis purposes.

ResponseType

Type of structured response.

Indicates the expected response format for structured outputs.

Examples:

>>> response_type = ResponseType.Score
>>> response_type = ResponseType.Pydantic

Null instance-attribute

Null: ResponseType

No structured response type

Pydantic instance-attribute

Pydantic: ResponseType

Pydantic BaseModel response type

Score instance-attribute

Score: ResponseType

Score response type

Retrieval

Retrieval(
    source: RetrievalSource,
    disable_attribution: Optional[bool] = None,
)

Retrieval tool configuration.

Enables the model to retrieve information from external sources.

Examples:

>>> retrieval = Retrieval(
...     source=RetrievalSource(
...         vertex_ai_search=VertexAISearch(
...             datastore="projects/my-project/..."
...         )
...     ),
...     disable_attribution=False
... )

Parameters:

Name Type Description Default
source RetrievalSource

Retrieval source configuration

required
disable_attribution Optional[bool]

Whether to disable attribution

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    source: RetrievalSource,
    disable_attribution: Optional[bool] = None,
) -> None:
    """Initialize retrieval configuration.

    Args:
        source (RetrievalSource):
            Retrieval source configuration
        disable_attribution (Optional[bool]):
            Whether to disable attribution
    """

disable_attribution property

disable_attribution: Optional[bool]

Whether attribution is disabled.

source property

source: RetrievalSource

The retrieval source.

RetrievalConfig

RetrievalConfig(lat_lng: LatLng, language_code: str)

Configuration for retrieval operations.

Provides location and language context for retrieval tools.

Examples:

>>> config = RetrievalConfig(
...     lat_lng=LatLng(latitude=37.7749, longitude=-122.4194),
...     language_code="en-US"
... )

Parameters:

Name Type Description Default
lat_lng LatLng

Geographic coordinates

required
language_code str

Language code

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    lat_lng: LatLng,
    language_code: str,
) -> None:
    """Initialize retrieval configuration.

    Args:
        lat_lng (LatLng):
            Geographic coordinates
        language_code (str):
            Language code
    """

language_code property

language_code: str

The language code.

lat_lng property

lat_lng: LatLng

The geographic coordinates.

RetrievalMetadata

Metadata about retrieval operations.

Contains scores and information about retrieval behavior.

Examples:

>>> metadata = RetrievalMetadata(
...     google_search_dynamic_retrieval_score=0.85
... )

google_search_dynamic_retrieval_score property

google_search_dynamic_retrieval_score: Optional[float]

Score for dynamic retrieval likelihood.

RetrievalSource

RetrievalSource(
    vertex_ai_search: Optional[VertexAISearch] = None,
    vertex_rag_store: Optional[VertexRagStore] = None,
    external_api: Optional[ExternalApi] = None,
)

Union type for retrieval sources.

Represents one of several retrieval source types.

Examples:

>>> # Vertex AI Search
>>> source = RetrievalSource(
...     vertex_ai_search=VertexAISearch(...)
... )
>>> # RAG Store
>>> source = RetrievalSource(
...     vertex_rag_store=VertexRagStore(...)
... )
>>> # External API
>>> source = RetrievalSource(
...     external_api=ExternalApi(...)
... )

Exactly one source type must be provided.

Parameters:

Name Type Description Default
vertex_ai_search Optional[VertexAISearch]

Vertex AI Search configuration

None
vertex_rag_store Optional[VertexRagStore]

Vertex RAG Store configuration

None
external_api Optional[ExternalApi]

External API configuration

None

Raises:

Type Description
TypeError

If configuration is invalid

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    vertex_ai_search: Optional[VertexAISearch] = None,
    vertex_rag_store: Optional[VertexRagStore] = None,
    external_api: Optional[ExternalApi] = None,
) -> None:
    """Initialize retrieval source.

    Exactly one source type must be provided.

    Args:
        vertex_ai_search (Optional[VertexAISearch]):
            Vertex AI Search configuration
        vertex_rag_store (Optional[VertexRagStore]):
            Vertex RAG Store configuration
        external_api (Optional[ExternalApi]):
            External API configuration

    Raises:
        TypeError: If configuration is invalid
    """

RetrievedContext

Retrieved context information.

Context retrieved from a knowledge source.

Examples:

>>> context = RetrievedContext(
...     uri="https://example.com",
...     title="Example",
...     text="Retrieved content",
...     rag_chunk=RagChunk(...)
... )

rag_chunk property

rag_chunk: Optional[RagChunk]

RAG chunk information.

text property

text: Optional[str]

Retrieved text.

title property

title: Optional[str]

Source title.

uri property

uri: Optional[str]

Source URI.

Rice

Rice()

For more information, please see: https://en.wikipedia.org/wiki/Histogram

Source code in python/scouter/stubs.pyi
def __init__(self):
    """Use the Rice equal-width method.

    For more information, please see: https://en.wikipedia.org/wiki/Histogram
    """
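The Rice rule chooses the number of equal-width bins as k = ⌈2·n^(1/3)⌉ for n samples. A quick sketch of that computation (illustrative only; scouter computes this internally):

```python
import math

def rice_bin_count(n_samples):
    # Rice rule: k = ceil(2 * n^(1/3))
    return math.ceil(2 * n_samples ** (1 / 3))

print(rice_bin_count(100))  # 10 bins for 100 samples
```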

Role

Message role in conversation.

Indicates the role of a message sender in a conversation.

Examples:

>>> role = Role.User
>>> role.as_str()
'user'

Assistant instance-attribute

Assistant: Role

Assistant role

Developer instance-attribute

Developer: Role

Developer/system role

Model instance-attribute

Model: Role

Model role

System instance-attribute

System: Role

System role

Tool instance-attribute

Tool: Role

Tool role

User instance-attribute

User: Role

User role

as_str

as_str() -> str

Return string representation of role.

Source code in python/scouter/stubs.pyi
def as_str(self) -> str:
    """Return string representation of role."""

RouteMetrics

error_count property

error_count: int

Error count

error_latency property

error_latency: float

Error latency

metrics property

metrics: LatencyMetrics

Return the metrics

request_count property

request_count: int

Request count

route_name property

route_name: str

Return the route name

status_codes property

status_codes: Dict[int, int]

Dictionary of status codes and counts

RoutingConfig

RoutingConfig(routing_config: RoutingConfigMode)

Model routing configuration wrapper.

Wraps the routing mode configuration.

Examples:

>>> config = RoutingConfig(
...     routing_config=RoutingConfigMode(
...         auto_mode=AutoRoutingMode(
...             model_routing_preference=ModelRoutingPreference.Balanced
...         )
...     )
... )

Parameters:

Name Type Description Default
routing_config RoutingConfigMode

The routing mode configuration

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    routing_config: RoutingConfigMode,
) -> None:
    """Initialize routing configuration.

    Args:
        routing_config (RoutingConfigMode):
            The routing mode configuration
    """

routing_config property

routing_config: RoutingConfigMode

The routing configuration mode.

RoutingConfigMode

RoutingConfigMode(
    auto_mode: Optional[AutoRoutingMode] = None,
    manual_mode: Optional[ManualRoutingMode] = None,
)

Union type for routing configuration modes.

Represents either automatic or manual routing configuration.

Examples:

>>> # Automatic routing
>>> mode = RoutingConfigMode(
...     auto_mode=AutoRoutingMode(
...         model_routing_preference=ModelRoutingPreference.Balanced
...     )
... )
>>> # Manual routing
>>> mode = RoutingConfigMode(
...     manual_mode=ManualRoutingMode(model_name="gemini-2.0-flash-exp")
... )

Exactly one of auto_mode or manual_mode must be provided.

Parameters:

Name Type Description Default
auto_mode Optional[AutoRoutingMode]

Automatic routing configuration

None
manual_mode Optional[ManualRoutingMode]

Manual routing configuration

None

Raises:

Type Description
TypeError

If both or neither modes are provided

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    auto_mode: Optional[AutoRoutingMode] = None,
    manual_mode: Optional[ManualRoutingMode] = None,
) -> None:
    """Initialize routing mode.

    Exactly one of auto_mode or manual_mode must be provided.

    Args:
        auto_mode (Optional[AutoRoutingMode]):
            Automatic routing configuration
        manual_mode (Optional[ManualRoutingMode]):
            Manual routing configuration

    Raises:
        TypeError: If both or neither modes are provided
    """

RustyLogger

debug

debug(message: str, *args: Any) -> None

Log a debug message.

Parameters:

Name Type Description Default
message str

Message to log.

required
args Any

Additional arguments to format the message.

()
Source code in python/scouter/stubs.pyi
def debug(self, message: str, *args: Any) -> None:
    """Log a debug message.

    Args:
        message:
            Message to log.

        args:
            Additional arguments to format the message.
    """

error

error(message: str, *args: Any) -> None

Log an error message.

Parameters:

Name Type Description Default
message str

Message to log.

required
args Any

Additional arguments to format the message.

()
Source code in python/scouter/stubs.pyi
def error(self, message: str, *args: Any) -> None:
    """Log an error message.

    Args:
        message:
            Message to log.

        args:
            Additional arguments to format the message.
    """

get_logger staticmethod

get_logger(
    config: Optional[LoggingConfig] = None,
) -> RustyLogger

Get a logger with the provided configuration.

Parameters:

Name Type Description Default
config Optional[LoggingConfig]

Logging configuration options.

None
Source code in python/scouter/stubs.pyi
@staticmethod
def get_logger(config: Optional[LoggingConfig] = None) -> "RustyLogger":
    """Get a logger with the provided configuration.

    Args:
        config:
            Logging configuration options.
    """

info

info(message: str, *args: Any) -> None

Log an info message.

Parameters:

Name Type Description Default
message str

Message to log.

required
args Any

Additional arguments to format the message.

()
Source code in python/scouter/stubs.pyi
def info(self, message: str, *args: Any) -> None:
    """Log an info message.

    Args:
        message:
            Message to log.

        args:
            Additional arguments to format the message.
    """

setup_logging staticmethod

setup_logging(
    config: Optional[LoggingConfig] = None,
) -> None

Set up logging with the provided configuration.

Parameters:

Name Type Description Default
config Optional[LoggingConfig]

Logging configuration options.

None
Source code in python/scouter/stubs.pyi
@staticmethod
def setup_logging(config: Optional[LoggingConfig] = None) -> None:
    """Set up logging with the provided configuration.

    Args:
        config:
            Logging configuration options.
    """

trace

trace(message: str, *args: Any) -> None

Log a trace message.

Parameters:

Name Type Description Default
message str

Message to log.

required
args Any

Additional arguments to format the message.

()
Source code in python/scouter/stubs.pyi
def trace(self, message: str, *args: Any) -> None:
    """Log a trace message.

    Args:
        message:
            Message to log.

        args:
            Additional arguments to format the message.
    """

warn

warn(message: str, *args: Any) -> None

Log a warning message.

Parameters:

Name Type Description Default
message str

Message to log.

required
args Any

Additional arguments to format the message.

()
Source code in python/scouter/stubs.pyi
def warn(self, message: str, *args: Any) -> None:
    """Log a warning message.

    Args:
        message:
            Message to log.

        args:
            Additional arguments to format the message.
    """

SafetyRating

Safety rating for content.

Provides detailed safety assessment including probability and severity.

Examples:

>>> rating = SafetyRating(
...     category=HarmCategory.HarmCategoryHateSpeech,
...     probability=HarmProbability.Low,
...     probability_score=0.2,
...     severity=HarmSeverity.HarmSeverityLow,
...     severity_score=0.15,
...     blocked=False
... )

blocked property

blocked: Optional[bool]

Whether content was blocked.

category property

category: HarmCategory

Harm category.

overwritten_threshold property

overwritten_threshold: Optional[HarmBlockThreshold]

Overwritten threshold for image output.

probability property

probability: Optional[HarmProbability]

Harm probability level.

probability_score property

probability_score: Optional[float]

Numeric probability score.

severity property

severity: Optional[HarmSeverity]

Harm severity level.

severity_score property

severity_score: Optional[float]

Numeric severity score.

SafetySetting

SafetySetting(
    category: HarmCategory, threshold: HarmBlockThreshold
)

Safety filtering configuration for harmful content.

Controls how the model handles potentially harmful content in specific harm categories. Each setting applies to one harm category.

Examples:

>>> # Block hate speech with medium threshold
>>> setting = SafetySetting(
...     category=HarmCategory.HarmCategoryHateSpeech,
...     threshold=HarmBlockThreshold.BlockMediumAndAbove
... )
>>> # Disable blocking for harassment
>>> setting = SafetySetting(
...     category=HarmCategory.HarmCategoryHarassment,
...     threshold=HarmBlockThreshold.BlockNone
... )

Parameters:

Name Type Description Default
category HarmCategory

The harm category to configure

required
threshold HarmBlockThreshold

The blocking threshold to apply

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    category: HarmCategory,
    threshold: HarmBlockThreshold,
) -> None:
    """Initialize a safety setting.

    Args:
        category (HarmCategory):
            The harm category to configure
        threshold (HarmBlockThreshold):
            The blocking threshold to apply
    """

category property

category: HarmCategory

The harm category.

threshold property

threshold: HarmBlockThreshold

The blocking threshold.

Schema

Schema(
    type: Optional[SchemaType] = None,
    format: Optional[str] = None,
    title: Optional[str] = None,
    description: Optional[str] = None,
    nullable: Optional[bool] = None,
    enum_: Optional[List[str]] = None,
    max_items: Optional[str] = None,
    min_items: Optional[str] = None,
    properties: Optional[Dict[str, Schema]] = None,
    required: Optional[List[str]] = None,
    min_properties: Optional[str] = None,
    max_properties: Optional[str] = None,
    min_length: Optional[str] = None,
    max_length: Optional[str] = None,
    pattern: Optional[str] = None,
    example: Optional[Any] = None,
    any_of: Optional[List[Schema]] = None,
    property_ordering: Optional[List[str]] = None,
    default: Optional[Any] = None,
    items: Optional[Schema] = None,
    minimum: Optional[float] = None,
    maximum: Optional[float] = None,
)

JSON Schema definition for structured outputs and parameters.

Defines the structure, types, and constraints for JSON data used in function parameters and structured outputs. Based on OpenAPI 3.0 schema.

Examples:

>>> # Simple string schema
>>> schema = Schema(
...     type=SchemaType.String,
...     description="User's name",
...     min_length="1",
...     max_length="100"
... )
>>> # Object schema with properties
>>> schema = Schema(
...     type=SchemaType.Object,
...     properties={
...         "name": Schema(type=SchemaType.String),
...         "age": Schema(type=SchemaType.Integer, minimum=0.0)
...     },
...     required=["name"]
... )

Parameters:

Name Type Description Default
type Optional[SchemaType]

The data type (string, number, object, etc.)

None
format Optional[str]

Format hint for the type (e.g., "date-time")

None
title Optional[str]

Human-readable title

None
description Optional[str]

Description of the schema

None
nullable Optional[bool]

Whether null values are allowed

None
enum_ Optional[List[str]]

List of allowed values

None
max_items Optional[str]

Maximum array length (for arrays)

None
min_items Optional[str]

Minimum array length (for arrays)

None
properties Optional[Dict[str, Schema]]

Object properties (for objects)

None
required Optional[List[str]]

Required property names (for objects)

None
min_properties Optional[str]

Minimum number of properties (for objects)

None
max_properties Optional[str]

Maximum number of properties (for objects)

None
min_length Optional[str]

Minimum string length (for strings)

None
max_length Optional[str]

Maximum string length (for strings)

None
pattern Optional[str]

Regular expression pattern (for strings)

None
example Optional[Any]

Example value

None
any_of Optional[List[Schema]]

List of alternative schemas

None
property_ordering Optional[List[str]]

Order of properties

None
default Optional[Any]

Default value

None
items Optional[Schema]

Schema for array items (for arrays)

None
minimum Optional[float]

Minimum numeric value (for numbers)

None
maximum Optional[float]

Maximum numeric value (for numbers)

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    type: Optional[SchemaType] = None,
    format: Optional[str] = None,
    title: Optional[str] = None,
    description: Optional[str] = None,
    nullable: Optional[bool] = None,
    enum_: Optional[List[str]] = None,
    max_items: Optional[str] = None,
    min_items: Optional[str] = None,
    properties: Optional[Dict[str, "Schema"]] = None,
    required: Optional[List[str]] = None,
    min_properties: Optional[str] = None,
    max_properties: Optional[str] = None,
    min_length: Optional[str] = None,
    max_length: Optional[str] = None,
    pattern: Optional[str] = None,
    example: Optional[Any] = None,
    any_of: Optional[List["Schema"]] = None,
    property_ordering: Optional[List[str]] = None,
    default: Optional[Any] = None,
    items: Optional["Schema"] = None,
    minimum: Optional[float] = None,
    maximum: Optional[float] = None,
) -> None:
    """Initialize a schema definition.

    Args:
        type (Optional[SchemaType]):
            The data type (string, number, object, etc.)
        format (Optional[str]):
            Format hint for the type (e.g., "date-time")
        title (Optional[str]):
            Human-readable title
        description (Optional[str]):
            Description of the schema
        nullable (Optional[bool]):
            Whether null values are allowed
        enum_ (Optional[List[str]]):
            List of allowed values
        max_items (Optional[str]):
            Maximum array length (for arrays)
        min_items (Optional[str]):
            Minimum array length (for arrays)
        properties (Optional[Dict[str, "Schema"]]):
            Object properties (for objects)
        required (Optional[List[str]]):
            Required property names (for objects)
        min_properties (Optional[str]):
            Minimum number of properties (for objects)
        max_properties (Optional[str]):
            Maximum number of properties (for objects)
        min_length (Optional[str]):
            Minimum string length (for strings)
        max_length (Optional[str]):
            Maximum string length (for strings)
        pattern (Optional[str]):
            Regular expression pattern (for strings)
        example (Optional[Any]):
            Example value
        any_of (Optional[List["Schema"]]):
            List of alternative schemas
        property_ordering (Optional[List[str]]):
            Order of properties
        default (Optional[Any]):
            Default value
        items (Optional["Schema"]):
            Schema for array items (for arrays)
        minimum (Optional[float]):
            Minimum numeric value (for numbers)
        maximum (Optional[float]):
            Maximum numeric value (for numbers)
    """

SchemaType

Schema type definitions for Google/Gemini API.

Defines the available data types that can be used in schema definitions for structured outputs and function parameters.

Examples:

>>> schema_type = SchemaType.String
>>> schema_type.value
'STRING'

Array class-attribute instance-attribute

Array = 'SchemaType'

Array/list data type

Boolean class-attribute instance-attribute

Boolean = 'SchemaType'

Boolean data type

Integer class-attribute instance-attribute

Integer = 'SchemaType'

Integer data type

Null class-attribute instance-attribute

Null = 'SchemaType'

Null data type

Number class-attribute instance-attribute

Number = 'SchemaType'

Numeric data type (floating point)

Object class-attribute instance-attribute

Object = 'SchemaType'

Object/dictionary data type

String class-attribute instance-attribute

String = 'SchemaType'

String data type

TypeUnspecified class-attribute instance-attribute

TypeUnspecified = 'SchemaType'

Unspecified type

Score

A class representing a score with a score value and a reason. This is typically used as a response type for tasks/prompts that require scoring or evaluation of results.

Example:

    Prompt(
        model="openai:gpt-4o",
        messages="What is the score of this response?",
        system_instructions="system_prompt",
        response_format=Score,
    )

reason property

reason: str

The reason for the score.

score property

score: int

The score value.

model_validate_json staticmethod

model_validate_json(json_string: str) -> Score

Validate the score JSON.

Parameters:

Name Type Description Default
json_string str

The JSON string to validate.

required

Returns:

Name Type Description
Score Score

The score object.

Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate_json(json_string: str) -> "Score":
    """Validate the score JSON.

    Args:
        json_string (str):
            The JSON string to validate.

    Returns:
        Score:
            The score object.
    """

Scott

Scott()

Use the Scott equal-width binning method.

For more information, please see: https://en.wikipedia.org/wiki/Histogram

Source code in python/scouter/stubs.pyi
def __init__(self):
    """Use the Scott equal-width method.

    For more information, please see: https://en.wikipedia.org/wiki/Histogram
    """

ScouterClient

ScouterClient(config: Optional[HttpConfig] = None)

Helper client for interacting with Scouter Server

Parameters:

Name Type Description Default
config Optional[HttpConfig]

HTTP configuration for interacting with the server.

None
Source code in python/scouter/stubs.pyi
def __init__(self, config: Optional[HttpConfig] = None) -> None:
    """Initialize ScouterClient

    Args:
        config:
            HTTP configuration for interacting with the server.
    """

download_profile

download_profile(
    request: GetProfileRequest, path: Optional[Path]
) -> str

Download profile

Parameters:

Name Type Description Default
request GetProfileRequest

GetProfileRequest

required
path Optional[Path]

Path to save profile

required

Returns:

Type Description
str

Path to downloaded profile

Source code in python/scouter/stubs.pyi
def download_profile(self, request: GetProfileRequest, path: Optional[Path]) -> str:
    """Download profile

    Args:
        request:
            GetProfileRequest
        path:
            Path to save profile

    Returns:
        Path to downloaded profile
    """

get_alerts

get_alerts(
    request: DriftAlertPaginationRequest,
) -> DriftAlertPaginationResponse

Get alerts

Parameters:

Name Type Description Default
request DriftAlertPaginationRequest

DriftAlertPaginationRequest

required

Returns:

Type Description
DriftAlertPaginationResponse

DriftAlertPaginationResponse

Source code in python/scouter/stubs.pyi
def get_alerts(self, request: DriftAlertPaginationRequest) -> DriftAlertPaginationResponse:
    """Get alerts

    Args:
        request:
            DriftAlertPaginationRequest

    Returns:
        DriftAlertPaginationResponse
    """

get_binned_drift

get_binned_drift(
    drift_request: DriftRequest, drift_type: DriftType
) -> Any

Get drift map from server

Parameters:

Name Type Description Default
drift_request DriftRequest

DriftRequest object

required
drift_type DriftType

Drift type for request

required

Returns:

Type Description
Any

Drift map of type BinnedMetrics | BinnedPsiFeatureMetrics | BinnedSpcFeatureMetrics

Source code in python/scouter/stubs.pyi
def get_binned_drift(
    self,
    drift_request: DriftRequest,
    drift_type: DriftType,
) -> Any:
    """Get drift map from server

    Args:
        drift_request:
            DriftRequest object
        drift_type:
            Drift type for request

    Returns:
        Drift map of type BinnedMetrics | BinnedPsiFeatureMetrics | BinnedSpcFeatureMetrics
    """

get_genai_task_binned_drift

get_genai_task_binned_drift(
    drift_request: DriftRequest,
) -> Any

Get GenAI task drift map from server

Parameters:

Name Type Description Default
drift_request DriftRequest

DriftRequest object

required

Source code in python/scouter/stubs.pyi
def get_genai_task_binned_drift(self, drift_request: DriftRequest) -> Any:
    """Get GenAI task drift map from server
    Args:
        drift_request:
            DriftRequest object
    """

get_paginated_traces

get_paginated_traces(
    filters: TraceFilters,
) -> TracePaginationResponse

Get paginated traces

Parameters:

Name Type Description Default
filters TraceFilters

TraceFilters object

required

Returns:

Type Description
TracePaginationResponse

TracePaginationResponse

Source code in python/scouter/stubs.pyi
def get_paginated_traces(self, filters: TraceFilters) -> TracePaginationResponse:
    """Get paginated traces
    Args:
        filters:
            TraceFilters object
    Returns:
        TracePaginationResponse
    """

get_tags

get_tags(entity_type: str, entity_id: str) -> TagsResponse

Get tags for an entity

Parameters:

Name Type Description Default
entity_type str

Entity type

required
entity_id str

Entity ID

required

Returns:

Type Description
TagsResponse

TagsResponse

Source code in python/scouter/stubs.pyi
def get_tags(self, entity_type: str, entity_id: str) -> TagsResponse:
    """Get tags for an entity

    Args:
        entity_type:
            Entity type
        entity_id:
            Entity ID

    Returns:
        TagsResponse
    """

get_trace_baggage

get_trace_baggage(trace_id: str) -> TraceBaggageResponse

Get trace baggage

Parameters:

Name Type Description Default
trace_id str

Trace ID

required

Returns:

Type Description
TraceBaggageResponse

TraceBaggageResponse

Source code in python/scouter/stubs.pyi
def get_trace_baggage(self, trace_id: str) -> TraceBaggageResponse:
    """Get trace baggage

    Args:
        trace_id:
            Trace ID

    Returns:
        TraceBaggageResponse
    """

get_trace_metrics

get_trace_metrics(
    request: TraceMetricsRequest,
) -> TraceMetricsResponse

Get trace metrics

Parameters:

Name Type Description Default
request TraceMetricsRequest

TraceMetricsRequest

required

Returns:

Type Description
TraceMetricsResponse

TraceMetricsResponse

Source code in python/scouter/stubs.pyi
def get_trace_metrics(self, request: TraceMetricsRequest) -> TraceMetricsResponse:
    """Get trace metrics

    Args:
        request:
            TraceMetricsRequest

    Returns:
        TraceMetricsResponse
    """

get_trace_spans

get_trace_spans(
    trace_id: str, service_name: Optional[str] = None
) -> TraceSpansResponse

Get trace spans

Parameters:

Name Type Description Default
trace_id str

Trace ID

required
service_name Optional[str]

Service name

None

Returns:

Type Description
TraceSpansResponse

TraceSpansResponse

Source code in python/scouter/stubs.pyi
def get_trace_spans(
    self,
    trace_id: str,
    service_name: Optional[str] = None,
) -> TraceSpansResponse:
    """Get trace spans

    Args:
        trace_id:
            Trace ID
        service_name:
            Service name

    Returns:
        TraceSpansResponse
    """

register_profile

register_profile(
    profile: Any,
    set_active: bool = False,
    deactivate_others: bool = False,
) -> bool

Registers a drift profile with the server

Parameters:

Name Type Description Default
profile Any

Drift profile

required
set_active bool

Whether to set the profile as active or inactive

False
deactivate_others bool

Whether to deactivate other profiles

False

Returns:

Type Description
bool

boolean

Source code in python/scouter/stubs.pyi
def register_profile(self, profile: Any, set_active: bool = False, deactivate_others: bool = False) -> bool:
    """Registers a drift profile with the server

    Args:
        profile:
            Drift profile
        set_active:
            Whether to set the profile as active or inactive
        deactivate_others:
            Whether to deactivate other profiles

    Returns:
        boolean
    """

update_profile_status

update_profile_status(
    request: ProfileStatusRequest,
) -> bool

Update profile status

Parameters:

Name Type Description Default
request ProfileStatusRequest

ProfileStatusRequest

required

Returns:

Type Description
bool

boolean

Source code in python/scouter/stubs.pyi
def update_profile_status(self, request: ProfileStatusRequest) -> bool:
    """Update profile status

    Args:
        request:
            ProfileStatusRequest

    Returns:
        boolean
    """

ScouterQueue

Main queue class for Scouter. Publishes drift records to the configured transport

transport_config property

transport_config: Union[
    KafkaConfig,
    RabbitMQConfig,
    RedisConfig,
    HttpConfig,
    MockConfig,
]

Return the transport configuration used by the queue

from_path staticmethod

from_path(
    path: Dict[str, Path],
    transport_config: Union[
        KafkaConfig,
        RabbitMQConfig,
        RedisConfig,
        HttpConfig,
        GrpcConfig,
    ],
) -> ScouterQueue

Initializes Scouter queue from one or more drift profile paths.

╔══════════════════════════════════════════════════════════════════════════╗
║                    SCOUTER QUEUE ARCHITECTURE                            ║
╠══════════════════════════════════════════════════════════════════════════╣
║                                                                          ║
║  Python Runtime (Client)                                                 ║
║  ┌────────────────────────────────────────────────────────────────────┐  ║
║  │  ScouterQueue.from_path()                                          │  ║
║  │    • Load drift profiles (SPC, PSI, Custom, LLM)                   │  ║
║  │    • Configure transport (Kafka, RabbitMQ, Redis, HTTP, gRPC)      │  ║
║  └───────────────────────────┬────────────────────────────────────────┘  ║
║                              │                                           ║
║                              ▼                                           ║
║  ┌────────────────────────────────────────────────────────────────────┐  ║
║  │  queue["alias"].insert(Features | Metrics | GenAIEvalRecord)       │  ║
║  └───────────────────────────┬────────────────────────────────────────┘  ║
║                              │                                           ║
╚══════════════════════════════╪═══════════════════════════════════════════╝
                               │  Language Boundary
╔══════════════════════════════╪═══════════════════════════════════════════╗
║  Rust Runtime (Producer)     ▼                                           ║
║  ┌────────────────────────────────────────────────────────────────────┐  ║
║  │  Queue<T> (per profile)                                            │  ║
║  │    • Buffer records in memory                                      │  ║
║  │    • Validate against drift profile schema                         │  ║
║  │    • Convert to ServerRecord format                                │  ║
║  └───────────────────────────┬────────────────────────────────────────┘  ║
║                              │                                           ║
║                              ▼                                           ║
║  ┌────────────────────────────────────────────────────────────────────┐  ║
║  │  Transport Producer                                                │  ║
║  │    • KafkaProducer    → Kafka brokers                              │  ║
║  │    • RabbitMQProducer → RabbitMQ exchange                          │  ║
║  │    • RedisProducer    → Redis pub/sub                              │  ║
║  │    • HttpProducer     → HTTP endpoint                              │  ║
║  │    • GrpcProducer     → gRPC server                                │  ║
║  └───────────────────────────┬────────────────────────────────────────┘  ║
║                              │                                           ║
╚══════════════════════════════╪═══════════════════════════════════════════╝
                               │  Network/Message Bus
╔══════════════════════════════╪═══════════════════════════════════════════╗
║  Scouter Server              ▼                                           ║
║  ┌────────────────────────────────────────────────────────────────────┐  ║
║  │  Consumer (Kafka/RabbitMQ/Redis/HTTP/gRPC)                         │  ║
║  │    • Receive drift records                                         │  ║
║  │    • Deserialize & validate                                        │  ║
║  └───────────────────────────┬────────────────────────────────────────┘  ║
║                              │                                           ║
║                              ▼                                           ║
║  ┌────────────────────────────────────────────────────────────────────┐  ║
║  │  Processing Pipeline                                               │  ║
║  │    • Calculate drift metrics (SPC, PSI)                            │  ║
║  │    • Evaluate alert conditions                                     │  ║
║  │    • Store in PostgreSQL                                           │  ║
║  │    • Dispatch alerts (Slack, OpsGenie, Console)                    │  ║
║  └────────────────────────────────────────────────────────────────────┘  ║
║                                                                          ║
╚══════════════════════════════════════════════════════════════════════════╝
Flow Summary:

1. Python Runtime: Initialize queue with drift profiles and transport config
2. Insert Records: Call queue["alias"].insert() with Features/Metrics/GenAIEvalRecord
3. Rust Queue: Buffer and validate records against profile schema
4. Transport Producer: Serialize and publish to configured transport
5. Network: Records travel via Kafka/RabbitMQ/Redis/HTTP/gRPC
6. Scouter Server: Consumer receives, processes, and stores records
7. Alerting: Evaluate drift conditions and dispatch alerts if triggered

Parameters:

Name Type Description Default
path Dict[str, Path]

Dictionary of drift profile paths. Each key is a user-defined alias for accessing a queue.

Supported profile types:

• SpcDriftProfile - Statistical Process Control monitoring
• PsiDriftProfile - Population Stability Index monitoring
• CustomDriftProfile - Custom metric monitoring
• GenAIEvalProfile - LLM evaluation monitoring

required
transport_config Union[KafkaConfig, RabbitMQConfig, RedisConfig, HttpConfig, GrpcConfig]

Transport configuration for the queue publisher.

Available transports:

• KafkaConfig - Apache Kafka message bus
• RabbitMQConfig - RabbitMQ message broker
• RedisConfig - Redis pub/sub
• HttpConfig - Direct HTTP to Scouter server
• GrpcConfig - Direct gRPC to Scouter server

required

Returns:

Name Type Description
ScouterQueue ScouterQueue

Configured queue with Rust-based producers for each drift profile.

Examples:

Basic SPC monitoring with Kafka:

>>> queue = ScouterQueue.from_path(
...     path={"spc": Path("spc_drift_profile.json")},
...     transport_config=KafkaConfig(
...         brokers="localhost:9092",
...         topic="scouter_monitoring",
...     ),
... )
>>> queue["spc"].insert(
...     Features(features=[
...         Feature("feature_1", 1.5),
...         Feature("feature_2", 2.3),
...     ])
... )

Multi-profile monitoring with HTTP:

>>> queue = ScouterQueue.from_path(
...     path={
...         "spc": Path("spc_profile.json"),
...         "psi": Path("psi_profile.json"),
...         "custom": Path("custom_profile.json"),
...     },
...     transport_config=HttpConfig(
...         server_uri="http://scouter-server:8000",
...     ),
... )
>>> queue["psi"].insert(Features(...))
>>> queue["custom"].insert(Metrics(...))

LLM monitoring with gRPC:

>>> queue = ScouterQueue.from_path(
...     path={"genai_eval": Path("genai_profile.json")},
...     transport_config=GrpcConfig(
...         server_uri="http://scouter-server:50051",
...         username="monitoring_user",
...         password="secure_password",
...     ),
... )
>>> queue["genai_eval"].insert(
...     GenAIEvalRecord(context={"input": "...", "response": "..."})
... )

Source code in python/scouter/stubs.pyi
@staticmethod
def from_path(
    path: Dict[str, Path],
    transport_config: Union[
        KafkaConfig,
        RabbitMQConfig,
        RedisConfig,
        HttpConfig,
        GrpcConfig,
    ],
) -> "ScouterQueue":
    """Initializes Scouter queue from one or more drift profile paths.

    ```
    ╔══════════════════════════════════════════════════════════════════════════╗
    ║                    SCOUTER QUEUE ARCHITECTURE                            ║
    ╠══════════════════════════════════════════════════════════════════════════╣
    ║                                                                          ║
    ║  Python Runtime (Client)                                                 ║
    ║  ┌────────────────────────────────────────────────────────────────────┐  ║
    ║  │  ScouterQueue.from_path()                                          │  ║
    ║  │    • Load drift profiles (SPC, PSI, Custom, LLM)                   │  ║
    ║  │    • Configure transport (Kafka, RabbitMQ, Redis, HTTP, gRPC)      │  ║
    ║  └───────────────────────────┬────────────────────────────────────────┘  ║
    ║                              │                                           ║
    ║                              ▼                                           ║
    ║  ┌────────────────────────────────────────────────────────────────────┐  ║
    ║  │  queue["alias"].insert(Features | Metrics | GenAIEvalRecord)       │  ║
    ║  └───────────────────────────┬────────────────────────────────────────┘  ║
    ║                              │                                           ║
    ╚══════════════════════════════╪═══════════════════════════════════════════╝

                                   │  Language Boundary

    ╔══════════════════════════════╪═══════════════════════════════════════════╗
    ║  Rust Runtime (Producer)     ▼                                           ║
    ║  ┌────────────────────────────────────────────────────────────────────┐  ║
    ║  │  Queue<T> (per profile)                                            │  ║
    ║  │    • Buffer records in memory                                      │  ║
    ║  │    • Validate against drift profile schema                         │  ║
    ║  │    • Convert to ServerRecord format                                │  ║
    ║  └───────────────────────────┬────────────────────────────────────────┘  ║
    ║                              │                                           ║
    ║                              ▼                                           ║
    ║  ┌────────────────────────────────────────────────────────────────────┐  ║
    ║  │  Transport Producer                                                │  ║
    ║  │    • KafkaProducer    → Kafka brokers                              │  ║
    ║  │    • RabbitMQProducer → RabbitMQ exchange                          │  ║
    ║  │    • RedisProducer    → Redis pub/sub                              │  ║
    ║  │    • HttpProducer     → HTTP endpoint                              │  ║
    ║  │    • GrpcProducer     → gRPC server                                │  ║
    ║  └───────────────────────────┬────────────────────────────────────────┘  ║
    ║                              │                                           ║
    ╚══════════════════════════════╪═══════════════════════════════════════════╝

                                   │  Network/Message Bus

    ╔══════════════════════════════╪═══════════════════════════════════════════╗
    ║  Scouter Server              ▼                                           ║
    ║  ┌────────────────────────────────────────────────────────────────────┐  ║
    ║  │  Consumer (Kafka/RabbitMQ/Redis/HTTP/gRPC)                         │  ║
    ║  │    • Receive drift records                                         │  ║
    ║  │    • Deserialize & validate                                        │  ║
    ║  └───────────────────────────┬────────────────────────────────────────┘  ║
    ║                              │                                           ║
    ║                              ▼                                           ║
    ║  ┌────────────────────────────────────────────────────────────────────┐  ║
    ║  │  Processing Pipeline                                               │  ║
    ║  │    • Calculate drift metrics (SPC, PSI)                            │  ║
    ║  │    • Evaluate alert conditions                                     │  ║
    ║  │    • Store in PostgreSQL                                           │  ║
    ║  │    • Dispatch alerts (Slack, OpsGenie, Console)                    │  ║
    ║  └────────────────────────────────────────────────────────────────────┘  ║
    ║                                                                          ║
    ╚══════════════════════════════════════════════════════════════════════════╝
    ```
    Flow Summary:
        1. **Python Runtime**: Initialize queue with drift profiles and transport config
        2. **Insert Records**: Call queue["alias"].insert() with Features/Metrics/GenAIEvalRecord
        3. **Rust Queue**: Buffer and validate records against profile schema
        4. **Transport Producer**: Serialize and publish to configured transport
        5. **Network**: Records travel via Kafka/RabbitMQ/Redis/HTTP/gRPC
        6. **Scouter Server**: Consumer receives, processes, and stores records
        7. **Alerting**: Evaluate drift conditions and dispatch alerts if triggered

    Args:
        path (Dict[str, Path]):
            Dictionary of drift profile paths.
            Each key is a user-defined alias for accessing a queue.

            Supported profile types:
                • SpcDriftProfile    - Statistical Process Control monitoring
                • PsiDriftProfile    - Population Stability Index monitoring
                • CustomDriftProfile - Custom metric monitoring
                • GenAIEvalProfile    - LLM evaluation monitoring

        transport_config (Union[KafkaConfig, RabbitMQConfig, RedisConfig, HttpConfig, GrpcConfig]):
            Transport configuration for the queue publisher.

            Available transports:
                • KafkaConfig     - Apache Kafka message bus
                • RabbitMQConfig  - RabbitMQ message broker
                • RedisConfig     - Redis pub/sub
                • HttpConfig      - Direct HTTP to Scouter server
                • GrpcConfig      - Direct gRPC to Scouter server

    Returns:
        ScouterQueue:
            Configured queue with Rust-based producers for each drift profile.

    Examples:
        Basic SPC monitoring with Kafka:
            >>> queue = ScouterQueue.from_path(
            ...     path={"spc": Path("spc_drift_profile.json")},
            ...     transport_config=KafkaConfig(
            ...         brokers="localhost:9092",
            ...         topic="scouter_monitoring",
            ...     ),
            ... )
            >>> queue["spc"].insert(
            ...     Features(features=[
            ...         Feature("feature_1", 1.5),
            ...         Feature("feature_2", 2.3),
            ...     ])
            ... )

        Multi-profile monitoring with HTTP:
            >>> queue = ScouterQueue.from_path(
            ...     path={
            ...         "spc": Path("spc_profile.json"),
            ...         "psi": Path("psi_profile.json"),
            ...         "custom": Path("custom_profile.json"),
            ...     },
            ...     transport_config=HttpConfig(
            ...         server_uri="http://scouter-server:8000",
            ...     ),
            ... )
            >>> queue["psi"].insert(Features(...))
            >>> queue["custom"].insert(Metrics(...))

        LLM monitoring with gRPC:
            >>> queue = ScouterQueue.from_path(
            ...     path={"genai_eval": Path("genai_profile.json")},
            ...     transport_config=GrpcConfig(
            ...         server_uri="http://scouter-server:50051",
            ...         username="monitoring_user",
            ...         password="secure_password",
            ...     ),
            ... )
            >>> queue["genai_eval"].insert(
            ...     GenAIEvalRecord(context={"input": "...", "response": "..."})
            ... )
    """

shutdown

shutdown() -> None

Shutdown the queue. This will close and flush all queues and transports

Source code in python/scouter/stubs.pyi
def shutdown(self) -> None:
    """Shutdown the queue. This will close and flush all queues and transports"""

ScouterTestServer

ScouterTestServer(
    cleanup: bool = True,
    rabbit_mq: bool = False,
    kafka: bool = False,
    openai: bool = False,
    base_path: Optional[Path] = None,
)

When the test server is used as a context manager, it will start the server in a background thread and set the appropriate env vars so that the client can connect to the server. The server will be stopped when the context manager exits and the env vars will be reset.

Parameters:

Name Type Description Default
cleanup bool

Whether to cleanup the server after the test. Defaults to True.

True
rabbit_mq bool

Whether to use RabbitMQ as the transport. Defaults to False.

False
kafka bool

Whether to use Kafka as the transport. Defaults to False.

False
openai bool

Whether to create a mock OpenAITest server. Defaults to False.

False
base_path Optional[Path]

The base path for the server. Defaults to None. Primarily used when testing the loading of attributes from a pyproject.toml file.

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    cleanup: bool = True,
    rabbit_mq: bool = False,
    kafka: bool = False,
    openai: bool = False,
    base_path: Optional[Path] = None,
) -> None:
    """Instantiates the test server.

    When the test server is used as a context manager, it will start the server
    in a background thread and set the appropriate env vars so that the client
    can connect to the server. The server will be stopped when the context manager
    exits and the env vars will be reset.

    Args:
        cleanup (bool, optional):
            Whether to cleanup the server after the test. Defaults to True.
        rabbit_mq (bool, optional):
            Whether to use RabbitMQ as the transport. Defaults to False.
        kafka (bool, optional):
            Whether to use Kafka as the transport. Defaults to False.
        openai (bool, optional):
            Whether to create a mock OpenAITest server. Defaults to False.
        base_path (Optional[Path], optional):
            The base path for the server. Defaults to None. This is primarily
            used for testing loading attributes from a pyproject.toml file.
    """

cleanup staticmethod

cleanup() -> None

Cleans up the test server.

Source code in python/scouter/stubs.pyi
@staticmethod
def cleanup() -> None:
    """Cleans up the test server."""

remove_env_vars_for_client

remove_env_vars_for_client() -> None

Removes the env vars for the client to connect to the server.

Source code in python/scouter/stubs.pyi
def remove_env_vars_for_client(self) -> None:
    """Removes the env vars for the client to connect to the server."""

set_env_vars_for_client

set_env_vars_for_client() -> None

Sets the env vars for the client to connect to the server.

Source code in python/scouter/stubs.pyi
def set_env_vars_for_client(self) -> None:
    """Sets the env vars for the client to connect to the server."""

start_server

start_server() -> None

Starts the test server.

Source code in python/scouter/stubs.pyi
def start_server(self) -> None:
    """Starts the test server."""

stop_server

stop_server() -> None

Stops the test server.

Source code in python/scouter/stubs.pyi
def stop_server(self) -> None:
    """Stops the test server."""

SearchEntryPoint

Search entry point information.

Contains embeddable search widgets and SDK data.

Examples:

>>> entry_point = SearchEntryPoint(
...     rendered_content="<div>...</div>",
...     sdk_blob="base64encodeddata"
... )

rendered_content property

rendered_content: Optional[str]

Embeddable HTML content.

sdk_blob property

sdk_blob: Optional[str]

Base64 encoded SDK data.

SearchResultBlockParam

SearchResultBlockParam(
    content: List[TextBlockParam],
    source: str,
    title: str,
    cache_control: Optional[CacheControl] = None,
    citations: Optional[CitationsConfigParams] = None,
)

Search result content block parameter.

Search result content with text blocks, source, and title.

Examples:

>>> content = [TextBlockParam(text="Result content", cache_control=None, citations=None)]
>>> block = SearchResultBlockParam(
...     content=content,
...     source="https://example.com",
...     title="Search Result",
...     cache_control=None,
...     citations=None
... )

Parameters:

Name Type Description Default
content List[TextBlockParam]

List of text content blocks

required
source str

Source URL or identifier

required
title str

Result title

required
cache_control Optional[CacheControl]

Cache control settings

None
citations Optional[CitationsConfigParams]

Citations configuration

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    content: List[TextBlockParam],
    source: str,
    title: str,
    cache_control: Optional["CacheControl"] = None,
    citations: Optional[CitationsConfigParams] = None,
) -> None:
    """Initialize search result block parameter.

    Args:
        content (List[TextBlockParam]):
            List of text content blocks
        source (str):
            Source URL or identifier
        title (str):
            Result title
        cache_control (Optional[CacheControl]):
            Cache control settings
        citations (Optional[CitationsConfigParams]):
            Citations configuration
    """

cache_control property

cache_control: Optional[CacheControl]

Cache control settings.

citations property

citations: Optional[CitationsConfigParams]

Citations configuration.

content property

content: List[TextBlockParam]

Content blocks.

source property

source: str

Result source.

title property

title: str

Result title.

type property

type: str

Content type (always 'search_result').

Segment

Text segment within content.

Identifies a portion of generated content by part index and byte range.

Examples:

>>> segment = Segment(
...     part_index=0,
...     start_index=10,
...     end_index=50,
...     text="example text"
... )

end_index property

end_index: Optional[int]

End byte index.

part_index property

part_index: Optional[int]

Index of the Part object.

start_index property

start_index: Optional[int]

Start byte index.

text property

text: Optional[str]

The segment text.

ServerRecord

ServerRecord(record: Any)

Parameters:

Name Type Description Default
record Any

Server record to initialize

required
Source code in python/scouter/stubs.pyi
def __init__(self, record: Any) -> None:
    """Initialize server record

    Args:
        record:
            Server record to initialize
    """

record property

record: Union[
    SpcRecord,
    PsiRecord,
    CustomMetricRecord,
    ObservabilityMetrics,
]

Return the drift server record.

ServerRecords

ServerRecords(records: List[ServerRecord])

Parameters:

Name Type Description Default
records List[ServerRecord]

List of server records

required
Source code in python/scouter/stubs.pyi
def __init__(self, records: List[ServerRecord]) -> None:
    """Initialize server records

    Args:
        records:
            List of server records
    """

records property

records: List[ServerRecord]

Return the drift server records.

model_dump_json

model_dump_json() -> str

Return the json representation of the record.

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return the json representation of the record."""

ServerToolUseBlock

Server tool use content block in response.

Represents a server-side tool call from Claude.

Examples:

>>> block = response.content[0]
>>> print(f"Server tool: {block.name}")

id property

id: str

Tool call ID.

name property

name: str

Tool name.

type property

type: str

Block type.

ServerToolUseBlockParam

ServerToolUseBlockParam(
    id: str,
    name: str,
    input: Any,
    cache_control: Optional[CacheControl] = None,
)

Server tool use content block parameter.

Represents a server-side tool call made by the model.

Examples:

>>> block = ServerToolUseBlockParam(
...     id="server_tool_123",
...     name="web_search",
...     input={"query": "latest news"},
...     cache_control=None
... )

Parameters:

Name Type Description Default
id str

Tool call ID

required
name str

Tool name

required
input Any

Tool input parameters

required
cache_control Optional[CacheControl]

Cache control settings

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    id: str,
    name: str,
    input: Any,
    cache_control: Optional["CacheControl"] = None,
) -> None:
    """Initialize server tool use block parameter.

    Args:
        id (str):
            Tool call ID
        name (str):
            Tool name
        input (Any):
            Tool input parameters
        cache_control (Optional[CacheControl]):
            Cache control settings
    """

cache_control property

cache_control: Optional[CacheControl]

Cache control settings.

id property

id: str

Tool call ID.

input property

input: Any

Tool input parameters.

name property

name: str

Tool name.

type property

type: str

Content type (always 'server_tool_use').

SimpleSearchParams

SimpleSearchParams()

Parameters for simple search API.

This type has no configuration fields.

Examples:

>>> params = SimpleSearchParams()
Source code in python/scouter/stubs.pyi
def __init__(self) -> None:
    """Initialize simple search parameters."""

SlackDispatchConfig

SlackDispatchConfig(channel: str)

Parameters:

Name Type Description Default
channel str

Slack channel name where alerts will be reported

required
Source code in python/scouter/stubs.pyi
def __init__(self, channel: str):
    """Initialize Slack dispatch config

    Args:
        channel:
            Slack channel name where alerts will be reported
    """

channel property writable

channel: str

Return the Slack channel name

SourceFlaggingUri

URI flagged as potentially problematic.

Information about a source that was flagged for review.

Examples:

>>> uri = SourceFlaggingUri(
...     source_id="source123",
...     flag_content_uri="https://example.com/flagged"
... )

flag_content_uri property

flag_content_uri: str

URI of flagged content.

source_id property

source_id: str

Source identifier.

SpanEvent

Represents an event within a span.

SpanKind

Enumeration of span kinds.

SpanLink

Represents a link to another span.

SpcAlert

SpcAlert(kind: SpcAlertType, zone: AlertZone)
Source code in python/scouter/stubs.pyi
def __init__(self, kind: SpcAlertType, zone: AlertZone):
    """Initialize alert with its kind and the zone it occurred in."""

kind property

kind: SpcAlertType

Alert kind

zone property

zone: AlertZone

Zone associated with alert

SpcAlertConfig

SpcAlertConfig(
    rule: Optional[SpcAlertRule] = None,
    dispatch_config: Optional[
        SlackDispatchConfig | OpsGenieDispatchConfig
    ] = None,
    schedule: Optional[str | CommonCrons] = None,
    features_to_monitor: List[str] = [],
)

Parameters:

Name Type Description Default
rule Optional[SpcAlertRule]

Alert rule to use. Defaults to Standard

None
dispatch_config Optional[SlackDispatchConfig | OpsGenieDispatchConfig]

Alert dispatch config. Defaults to console

None
schedule Optional[str | CommonCrons]

Schedule to run monitor. Defaults to daily at midnight

None
features_to_monitor List[str]

List of features to monitor. Defaults to empty list, which means all features

[]
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    rule: Optional[SpcAlertRule] = None,
    dispatch_config: Optional[SlackDispatchConfig | OpsGenieDispatchConfig] = None,
    schedule: Optional[str | CommonCrons] = None,
    features_to_monitor: List[str] = [],
):
    """Initialize alert config

    Args:
        rule:
            Alert rule to use. Defaults to Standard
        dispatch_config:
            Alert dispatch config. Defaults to console
        schedule:
            Schedule to run monitor. Defaults to daily at midnight
        features_to_monitor:
            List of features to monitor. Defaults to empty list, which means all features

    """

dispatch_config property

dispatch_config: DispatchConfigType

Return the dispatch config

dispatch_type property

dispatch_type: AlertDispatchType

Return the alert dispatch type

features_to_monitor property writable

features_to_monitor: List[str]

Return the features to monitor

rule property writable

rule: SpcAlertRule

Return the alert rule

schedule property writable

schedule: str

Return the schedule

SpcAlertRule

SpcAlertRule(
    rule: str = "8 16 4 8 2 4 1 1",
    zones_to_monitor: List[AlertZone] = [
        AlertZone.Zone1,
        AlertZone.Zone2,
        AlertZone.Zone3,
        AlertZone.Zone4,
    ],
)

Parameters:

Name Type Description Default
rule str

Rule to use for alerting, given as a string of eight space-separated integers. Defaults to '8 16 4 8 2 4 1 1'

'8 16 4 8 2 4 1 1'
zones_to_monitor List[AlertZone]

List of zones to monitor. Defaults to all zones.

[Zone1, Zone2, Zone3, Zone4]
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    rule: str = "8 16 4 8 2 4 1 1",
    zones_to_monitor: List[AlertZone] = [
        AlertZone.Zone1,
        AlertZone.Zone2,
        AlertZone.Zone3,
        AlertZone.Zone4,
    ],
) -> None:
    """Initialize alert rule

    Args:
        rule:
            Rule to use for alerting, given as a string of eight
            space-separated integers. Defaults to '8 16 4 8 2 4 1 1'
        zones_to_monitor:
            List of zones to monitor. Defaults to all zones.
    """

rule property writable

rule: str

Return the alert rule

zones_to_monitor property writable

zones_to_monitor: List[AlertZone]

Return the zones to monitor
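As a hedged sketch of what the rule string encodes, the default "8 16 4 8 2 4 1 1" splits into eight integer thresholds. The parser below is illustrative only; its name and validation are assumptions, not part of Scouter's API:

```python
# Illustrative only: split an SPC rule string such as the default
# "8 16 4 8 2 4 1 1" into its eight integer thresholds. The function
# name and validation are assumptions, not Scouter's actual parser.

def parse_rule(rule: str) -> list[int]:
    """Split a space-separated rule string into integer thresholds."""
    values = [int(token) for token in rule.split()]
    if len(values) != 8:
        raise ValueError(f"expected 8 thresholds, got {len(values)}")
    return values

thresholds = parse_rule("8 16 4 8 2 4 1 1")
```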

SpcDriftConfig

SpcDriftConfig(
    space: str = "__missing__",
    name: str = "__missing__",
    version: str = "0.1.0",
    sample_size: int = 25,
    alert_config: SpcAlertConfig = SpcAlertConfig(),
    config_path: Optional[Path] = None,
)

Parameters:

Name Type Description Default
space str

Model space

'__missing__'
name str

Model name

'__missing__'
version str

Model version. Defaults to 0.1.0

'0.1.0'
sample_size int

Sample size

25
alert_config SpcAlertConfig

Alert configuration

SpcAlertConfig()
config_path Optional[Path]

Optional path to load config from.

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    space: str = "__missing__",
    name: str = "__missing__",
    version: str = "0.1.0",
    sample_size: int = 25,
    alert_config: SpcAlertConfig = SpcAlertConfig(),
    config_path: Optional[Path] = None,
):
    """Initialize monitor config

    Args:
        space:
            Model space
        name:
            Model name
        version:
            Model version. Defaults to 0.1.0
        sample_size:
            Sample size
        alert_config:
            Alert configuration
        config_path:
            Optional path to load config from.
    """

alert_config property writable

alert_config: SpcAlertConfig

Alert configuration

drift_type property

drift_type: DriftType

Drift type

feature_map property

feature_map: Optional[FeatureMap]

Feature map

name property writable

name: str

Model Name

sample_size property writable

sample_size: int

Return the sample size.

space property writable

space: str

Model space

uid property writable

uid: str

Unique identifier for the drift config

version property writable

version: str

Model version

load_from_json_file staticmethod

load_from_json_file(path: Path) -> SpcDriftConfig

Load config from json file

Parameters:

Name Type Description Default
path Path

Path to json file to load config from.

required
Source code in python/scouter/stubs.pyi
@staticmethod
def load_from_json_file(path: Path) -> "SpcDriftConfig":
    """Load config from json file

    Args:
        path:
            Path to json file to load config from.
    """

model_dump_json

model_dump_json() -> str

Return the json representation of the config.

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return the json representation of the config."""

update_config_args

update_config_args(
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    sample_size: Optional[int] = None,
    alert_config: Optional[SpcAlertConfig] = None,
) -> None

In-place operation that updates config args

Parameters:

Name Type Description Default
space Optional[str]

Model space

None
name Optional[str]

Model name

None
version Optional[str]

Model version

None
sample_size Optional[int]

Sample size

None
alert_config Optional[SpcAlertConfig]

Alert configuration

None
Source code in python/scouter/stubs.pyi
def update_config_args(
    self,
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    sample_size: Optional[int] = None,
    alert_config: Optional[SpcAlertConfig] = None,
) -> None:
    """Inplace operation that updates config args

    Args:
        space:
            Model space
        name:
            Model name
        version:
            Model version
        sample_size:
            Sample size
        alert_config:
            Alert configuration
    """

SpcDriftMap

Drift map of features

features property

features: Dict[str, SpcFeatureDrift]

Returns dictionary of features and their data profiles

name property

name: str

name to associate with drift map

space property

space: str

Space to associate with drift map

version property

version: str

Version to associate with drift map

model_dump_json

model_dump_json() -> str

Return json representation of data drift

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return json representation of data drift"""

model_validate_json staticmethod

model_validate_json(json_string: str) -> SpcDriftMap

Load drift map from json file.

Parameters:

Name Type Description Default
json_string str

JSON string representation of the drift map

required
Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate_json(json_string: str) -> "SpcDriftMap":
    """Load drift map from json file.

    Args:
        json_string:
            JSON string representation of the drift map
    """

save_to_json

save_to_json(path: Optional[Path] = None) -> Path

Save drift map to json file

Parameters:

Name Type Description Default
path Optional[Path]

Optional path to save the drift map. If None, outputs to spc_drift_map.json

None

Returns:

Type Description
Path

Path to the saved json file

Source code in python/scouter/stubs.pyi
def save_to_json(self, path: Optional[Path] = None) -> Path:
    """Save drift map to json file

    Args:
        path:
            Optional path to save the drift map. If None, outputs to `spc_drift_map.json`

    Returns:
        Path to the saved json file

    """

to_numpy

to_numpy() -> Any

Return the drift map as a tuple of sample_array, drift_array, and a list of features

Source code in python/scouter/stubs.pyi
def to_numpy(self) -> Any:
    """Return drift map as a tuple of sample_array, drift_array and list of features"""

SpcDriftProfile

config property

config: SpcDriftConfig

Return the monitor config.

features property

features: Dict[str, SpcFeatureDriftProfile]

Return the list of features.

scouter_version property

scouter_version: str

Return scouter version used to create DriftProfile

uid property

uid: str

Return the unique identifier for the drift profile

from_file staticmethod

from_file(path: Path) -> SpcDriftProfile

Load drift profile from file

Parameters:

Name Type Description Default
path Path

Path to the file

required
Source code in python/scouter/stubs.pyi
@staticmethod
def from_file(path: Path) -> "SpcDriftProfile":
    """Load drift profile from file

    Args:
        path: Path to the file
    """

model_dump

model_dump() -> Dict[str, Any]

Return dictionary representation of drift profile

Source code in python/scouter/stubs.pyi
def model_dump(self) -> Dict[str, Any]:
    """Return dictionary representation of drift profile"""

model_dump_json

model_dump_json() -> str

Return json representation of drift profile

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return json representation of drift profile"""

model_validate staticmethod

model_validate(data: Dict[str, Any]) -> SpcDriftProfile

Load drift profile from dictionary

Parameters:

Name Type Description Default
data Dict[str, Any]

DriftProfile dictionary

required
Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate(data: Dict[str, Any]) -> "SpcDriftProfile":
    """Load drift profile from dictionary

    Args:
        data:
            DriftProfile dictionary
    """

model_validate_json staticmethod

model_validate_json(json_string: str) -> SpcDriftProfile

Load drift profile from json

Parameters:

Name Type Description Default
json_string str

JSON string representation of the drift profile

required
Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate_json(json_string: str) -> "SpcDriftProfile":
    """Load drift profile from json

    Args:
        json_string:
            JSON string representation of the drift profile

    """

save_to_json

save_to_json(path: Optional[Path] = None) -> Path

Save drift profile to json file

Parameters:

Name Type Description Default
path Optional[Path]

Optional path to save the drift profile. If None, outputs to spc_drift_profile.json

None

Returns:

Type Description
Path

Path to the saved json file

Source code in python/scouter/stubs.pyi
def save_to_json(self, path: Optional[Path] = None) -> Path:
    """Save drift profile to json file

    Args:
        path:
            Optional path to save the drift profile. If None, outputs to `spc_drift_profile.json`

    Returns:
        Path to the saved json file
    """

update_config_args

update_config_args(
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    sample_size: Optional[int] = None,
    alert_config: Optional[SpcAlertConfig] = None,
) -> None

In-place operation that updates config args

Parameters:

Name Type Description Default
name Optional[str]

Model name

None
space Optional[str]

Model space

None
version Optional[str]

Model version

None
sample_size Optional[int]

Sample size

None
alert_config Optional[SpcAlertConfig]

Alert configuration

None
Source code in python/scouter/stubs.pyi
def update_config_args(
    self,
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    sample_size: Optional[int] = None,
    alert_config: Optional[SpcAlertConfig] = None,
) -> None:
    """Inplace operation that updates config args

    Args:
        name:
            Model name
        space:
            Model space
        version:
            Model version
        sample_size:
            Sample size
        alert_config:
            Alert configuration
    """

SpcFeatureDrift

drift property

drift: List[float]

Return list of drift values

samples property

samples: List[float]

Return list of samples

SpcFeatureDriftProfile

center property

center: float

Return the center.

id property

id: str

Return the id.

one_lcl property

one_lcl: float

Return the zone 1 lcl.

one_ucl property

one_ucl: float

Return the zone 1 ucl.

three_lcl property

three_lcl: float

Return the zone 3 lcl.

three_ucl property

three_ucl: float

Return the zone 3 ucl.

timestamp property

timestamp: str

Return the timestamp.

two_lcl property

two_lcl: float

Return the zone 2 lcl.

two_ucl property

two_ucl: float

Return the zone 2 ucl.
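The one/two/three UCL and LCL properties above describe nested control-chart zones around the center line. A minimal sketch of zone classification, assuming symmetric limits (this is illustrative, not Scouter's implementation):

```python
# Illustrative control-chart zone classification using the kind of limits
# exposed by SpcFeatureDriftProfile. The function and zone names are
# assumptions for illustration only.

def classify_zone(value, one_lcl, one_ucl, two_lcl, two_ucl, three_lcl, three_ucl):
    """Return the innermost zone containing the value, or 'zone4' if outside."""
    if one_lcl <= value <= one_ucl:
        return "zone1"
    if two_lcl <= value <= two_ucl:
        return "zone2"
    if three_lcl <= value <= three_ucl:
        return "zone3"
    return "zone4"

# With a center of 0 and limits at +/-1, +/-2, +/-3:
zone = classify_zone(2.5, -1.0, 1.0, -2.0, 2.0, -3.0, 3.0)
```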

SpcRecord

SpcRecord(uid: str, feature: str, value: float)

Parameters:

Name Type Description Default
uid str

Unique identifier for the spc record. Must correspond to an existing entity in Scouter.

required
feature str

Feature name

required
value float

Feature value

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    uid: str,
    feature: str,
    value: float,
):
    """Initialize spc drift server record

    Args:
        uid:
            Unique identifier for the spc record.
            Must correspond to an existing entity in Scouter.
        feature:
            Feature name
        value:
            Feature value
    """

created_at property

created_at: datetime

Return the created at timestamp.

feature property

feature: str

Return the feature.

uid property

uid: str

Return the unique identifier.

value property

value: float

Return the sample value.

model_dump_json

model_dump_json() -> str

Return the json representation of the record.

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return the json representation of the record."""

to_dict

to_dict() -> Dict[str, str]

Return the dictionary representation of the record.

Source code in python/scouter/stubs.pyi
def to_dict(self) -> Dict[str, str]:
    """Return the dictionary representation of the record."""

SpeakerVoiceConfig

SpeakerVoiceConfig(speaker: str, voice_config: VoiceConfig)

Voice configuration for a specific speaker.

Maps a speaker identifier to a voice configuration for multi-speaker text-to-speech.

Examples:

>>> config = SpeakerVoiceConfig(
...     speaker="Alice",
...     voice_config=VoiceConfig(
...         prebuilt_voice_config=PrebuiltVoiceConfig(voice_name="Puck")
...     )
... )

Parameters:

Name Type Description Default
speaker str

Speaker identifier/name

required
voice_config VoiceConfig

Voice configuration for this speaker

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    speaker: str,
    voice_config: VoiceConfig,
) -> None:
    """Initialize speaker voice configuration.

    Args:
        speaker (str):
            Speaker identifier/name
        voice_config (VoiceConfig):
            Voice configuration for this speaker
    """

speaker property

speaker: str

The speaker identifier.

voice_config property

voice_config: VoiceConfig

The voice configuration.

SpeechConfig

SpeechConfig(
    voice_config: Optional[VoiceConfig] = None,
    multi_speaker_voice_config: Optional[
        MultiSpeakerVoiceConfig
    ] = None,
    language_code: Optional[str] = None,
)

Configuration for speech synthesis.

Controls text-to-speech generation including voice selection and language.

Examples:

>>> # Single speaker
>>> config = SpeechConfig(
...     voice_config=VoiceConfig(
...         prebuilt_voice_config=PrebuiltVoiceConfig(voice_name="Puck")
...     ),
...     language_code="en-US"
... )
>>> # Multiple speakers
>>> config = SpeechConfig(
...     multi_speaker_voice_config=MultiSpeakerVoiceConfig(...),
...     language_code="en-US"
... )

Parameters:

Name Type Description Default
voice_config Optional[VoiceConfig]

Single voice configuration

None
multi_speaker_voice_config Optional[MultiSpeakerVoiceConfig]

Multi-speaker configuration

None
language_code Optional[str]

ISO 639-1 language code

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    voice_config: Optional[VoiceConfig] = None,
    multi_speaker_voice_config: Optional[MultiSpeakerVoiceConfig] = None,
    language_code: Optional[str] = None,
) -> None:
    """Initialize speech configuration.

    Args:
        voice_config (Optional[VoiceConfig]):
            Single voice configuration
        multi_speaker_voice_config (Optional[MultiSpeakerVoiceConfig]):
            Multi-speaker configuration
        language_code (Optional[str]):
            ISO 639-1 language code
    """

language_code property

language_code: Optional[str]

The language code.

multi_speaker_voice_config property

multi_speaker_voice_config: Optional[
    MultiSpeakerVoiceConfig
]

The multi-speaker configuration.

voice_config property

voice_config: Optional[VoiceConfig]

The voice configuration.

SquareRoot

SquareRoot()

Use the SquareRoot equal-width method.

For more information, please see: https://en.wikipedia.org/wiki/Histogram

Source code in python/scouter/stubs.pyi
def __init__(self):
    """Use the SquareRoot equal-width method.

    For more information, please see: https://en.wikipedia.org/wiki/Histogram
    """

StdoutSpanExporter

StdoutSpanExporter(
    batch_export: bool = False,
    sample_ratio: Optional[float] = None,
)

Exporter that outputs spans to standard output (stdout).

Parameters:

Name Type Description Default
batch_export bool

Whether to use batch exporting. Defaults to False.

False
sample_ratio Optional[float]

The sampling ratio for traces. If None, defaults to always sample.

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    batch_export: bool = False,
    sample_ratio: Optional[float] = None,
) -> None:
    """Initialize the StdoutSpanExporter.

    Args:
        batch_export (bool):
            Whether to use batch exporting. Defaults to False.
        sample_ratio (Optional[float]):
            The sampling ratio for traces. If None, defaults to always sample.
    """

batch_export property

batch_export: bool

Get whether batch exporting is enabled.

sample_ratio property

sample_ratio: Optional[float]

Get the sampling ratio.
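Ratio-based sampling means roughly a fraction sample_ratio of traces is kept. One common, deterministic way to make such a decision hashes the trace ID; this sketch is illustrative and not necessarily how Scouter's sampler works:

```python
# Illustrative ratio-based sampling decision via trace-ID hashing.
# should_sample is an assumed helper name, not part of Scouter's API.
import hashlib

def should_sample(trace_id: str, sample_ratio: float) -> bool:
    digest = hashlib.sha256(trace_id.encode()).digest()
    # Map the first 8 hash bytes to a float in [0, 1)
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return bucket < sample_ratio

always = should_sample("trace-abc", 1.0)  # ratio 1.0 keeps everything
never = should_sample("trace-abc", 0.0)   # ratio 0.0 drops everything
```

Hashing (rather than random sampling) keeps the decision stable for a given trace, so correlated records sampled against the same trace ID agree.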

StopReason

Reason for generation stopping.

Indicates why Claude stopped generating.

Examples:

>>> reason = response.stop_reason
>>> if reason == StopReason.EndTurn:
...     print("Natural stopping point")

EndTurn class-attribute instance-attribute

EndTurn: StopReason

Natural stopping point reached

MaxTokens class-attribute instance-attribute

MaxTokens: StopReason

Maximum token limit reached

StopSequence class-attribute instance-attribute

StopSequence: StopReason

Stop sequence encountered

ToolUse class-attribute instance-attribute

ToolUse: StopReason

Tool was invoked

StreamOptions

StreamOptions(
    include_obfuscation: Optional[bool] = None,
    include_usage: Optional[bool] = None,
)

Options for streaming chat completion responses.

This class provides configuration for streaming behavior, including usage information and obfuscation settings.

Examples:

>>> options = StreamOptions(include_usage=True)
>>> options.include_usage
True

Parameters:

Name Type Description Default
include_obfuscation Optional[bool]

Whether to include obfuscation in the stream

None
include_usage Optional[bool]

Whether to include usage information in the stream

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    include_obfuscation: Optional[bool] = None,
    include_usage: Optional[bool] = None,
) -> None:
    """Initialize stream options.

    Args:
        include_obfuscation (Optional[bool]):
            Whether to include obfuscation in the stream
        include_usage (Optional[bool]):
            Whether to include usage information in the stream
    """

include_obfuscation property

include_obfuscation: Optional[bool]

Whether obfuscation is included.

include_usage property

include_usage: Optional[bool]

Whether usage information is included.

StringStats

char_stats property

char_stats: CharStats

Character statistics

distinct property

distinct: Distinct

Distinct value counts

word_stats property

word_stats: WordStats

Word statistics

Sturges

Sturges()

Use the Sturges equal-width method.

For more information, please see: https://en.wikipedia.org/wiki/Histogram

Source code in python/scouter/stubs.pyi
def __init__(self):
    """Use the Sturges equal-width method.

    For more information, please see: https://en.wikipedia.org/wiki/Histogram
    """

SystemPrompt

SystemPrompt(content: Any)

System prompt for Anthropic messages.

System-level instructions for Claude.

Examples:

>>> # Simple system prompt
>>> prompt = SystemPrompt(content="You are a helpful assistant.")
>>>
>>> # System prompt with multiple blocks
>>> blocks = [
...     TextBlockParam(text="You are helpful.", cache_control=None, citations=None),
...     TextBlockParam(text="Be concise.", cache_control=None, citations=None)
... ]
>>> prompt = SystemPrompt(content=blocks)

Parameters:

Name Type Description Default
content Any

System prompt content (string or list of TextBlockParam)

required
Source code in python/scouter/stubs.pyi
def __init__(self, content: Any) -> None:
    """Initialize system prompt.

    Args:
        content (Any):
            System prompt content (string or list of TextBlockParam)
    """

content property

content: List[TextBlockParam]

System prompt content blocks.

TagRecord

Represents a single tag record associated with an entity.

TagsResponse

Response structure containing a list of tag records.

Task

Task(
    agent_id: str,
    prompt: Prompt,
    id: Optional[str] = None,
    dependencies: List[str] = [],
    max_retries: int = 3,
)

Parameters:

Name Type Description Default
agent_id str

The ID of the agent that will execute the task.

required
prompt Prompt

The prompt to use for the task.

required
id Optional[str]

The ID of the task. If None, a random uuid7 will be generated.

None
dependencies List[str]

The dependencies of the task.

[]
max_retries int

The maximum number of retries for the task if it fails. Defaults to 3.

3
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    agent_id: str,
    prompt: Prompt,
    id: Optional[str] = None,
    dependencies: List[str] = [],
    max_retries: int = 3,
) -> None:
    """Create a Task object.

    Args:
        agent_id (str):
            The ID of the agent that will execute the task.
        prompt (Prompt):
            The prompt to use for the task.
        id (Optional[str]):
            The ID of the task. If None, a random uuid7 will be generated.
        dependencies (List[str]):
            The dependencies of the task.
        max_retries (int):
            The maximum number of retries for the task if it fails. Defaults to 3.
    """

dependencies property

dependencies: List[str]

The dependencies of the task.

id property

id: str

The ID of the task.

prompt property

prompt: Prompt

The prompt to use for the task.

status property

status: TaskStatus

The status of the task.

add_dependency

add_dependency(task_id: str) -> None

Add a dependency to the task.

Source code in python/scouter/stubs.pyi
def add_dependency(self, task_id: str) -> None:
    """Add a dependency to the task."""

TaskComparison

Represents a comparison between the same task in baseline and comparison evaluations

baseline_passed property

baseline_passed: bool

Check if the task passed in the baseline evaluation

comparison_passed property

comparison_passed: bool

Check if the task passed in the comparison evaluation

record_uid property

record_uid: str

Get the record unique identifier associated with this task comparison

status_changed property

status_changed: bool

Check if the task's pass/fail status changed between evaluations

task_id property

task_id: str

Get the task identifier
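The status_changed property reduces to comparing the two pass/fail outcomes. A hedged sketch of that logic, including a "regression" check (the helper names are assumptions, not Scouter's API):

```python
# Illustrative sketch of TaskComparison's status_changed semantics:
# the status changed when the pass/fail outcome differs between the
# baseline and comparison evaluations. Helper names are assumptions.

def status_changed(baseline_passed: bool, comparison_passed: bool) -> bool:
    return baseline_passed != comparison_passed

def regressed(baseline_passed: bool, comparison_passed: bool) -> bool:
    """A regression: passed in baseline but failed in comparison."""
    return baseline_passed and not comparison_passed
```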

TaskEvent

A class representing an event that occurs during the execution of a task in a workflow.

details property

details: EventDetails

Additional details about the event. This can include information such as error messages or other relevant data.

id property

id: str

The ID of the event

status property

status: TaskStatus

The status of the task at the time of the event.

task_id property

task_id: str

The ID of the task that the event is associated with.

timestamp property

timestamp: datetime

The timestamp of the event. This is the time when the event occurred.

updated_at property

updated_at: datetime

The timestamp of when the event was last updated. This is useful for tracking changes to the event.

workflow_id property

workflow_id: str

The ID of the workflow that the task is part of.

TaskList

TaskList is a collection of Task objects used in a Workflow.

items property

items: Dict[str, Task]

Dictionary of tasks in the TaskList where keys are task IDs and values are Task objects.

TaskStatus

Status of a task in a workflow.

Indicates the current state of task execution.

Examples:

>>> status = TaskStatus.Pending
>>> status = TaskStatus.Completed

Completed instance-attribute

Completed: TaskStatus

Task has completed successfully

Failed instance-attribute

Failed: TaskStatus

Task has failed

Pending instance-attribute

Pending: TaskStatus

Task is pending execution

Running instance-attribute

Running: TaskStatus

Task is currently running

TerrellScott

TerrellScott()

Use the Terrell-Scott equal-width method.

For more information, please see: https://en.wikipedia.org/wiki/Histogram

Source code in python/scouter/stubs.pyi
def __init__(self):
    """Use the Terrell-Scott equal-width method.

    For more information, please see: https://en.wikipedia.org/wiki/Histogram
    """

TestSpanExporter

TestSpanExporter(batch_export: bool = True)

Exporter for testing that collects spans in memory.

Parameters:

Name Type Description Default
batch_export bool

Whether to use batch exporting. Defaults to True.

True
Source code in python/scouter/stubs.pyi
def __init__(self, batch_export: bool = True) -> None:
    """Initialize the TestSpanExporter.

    Args:
        batch_export (bool):
            Whether to use batch exporting. Defaults to True.
    """

baggage property

baggage: list[TraceBaggageRecord]

Get the collected trace baggage records.

spans property

spans: list[TraceSpanRecord]

Get the collected trace span records.

traces property

traces: list[TraceRecord]

Get the collected trace records.

clear

clear() -> None

Clear all collected trace records.

Source code in python/scouter/stubs.pyi
def clear(self) -> None:
    """Clear all collected trace records."""

TextBlock

Text content block in response.

Text content with optional citations.

Examples:

>>> block = response.content[0]
>>> print(block.text)
>>> if block.citations:
...     for citation in block.citations:
...         print(citation)

citations property

citations: Optional[List[Any]]

Citations.

text property

text: str

Text content.

type property

type: str

Block type.

TextBlockParam

TextBlockParam(
    text: str,
    cache_control: Optional[CacheControl] = None,
    citations: Optional[Any] = None,
)

Text content block parameter.

Regular text content with optional cache control and citations.

Examples:

>>> # Simple text block
>>> block = TextBlockParam(text="Hello, world!", cache_control=None, citations=None)
>>>
>>> # With cache control
>>> cache = CacheControl(cache_type="ephemeral", ttl="5m")
>>> block = TextBlockParam(text="Hello", cache_control=cache, citations=None)

Parameters:

Name Type Description Default
text str

Text content

required
cache_control Optional[CacheControl]

Cache control settings

None
citations Optional[Any]

Citation information

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    text: str,
    cache_control: Optional["CacheControl"] = None,
    citations: Optional[Any] = None,
) -> None:
    """Initialize text block parameter.

    Args:
        text (str):
            Text content
        cache_control (Optional[CacheControl]):
            Cache control settings
        citations (Optional[Any]):
            Citation information
    """

cache_control property

cache_control: Optional[CacheControl]

Cache control settings.

text property

text: str

The text content.

type property

type: str

Content type (always 'text').

TextContentPart

TextContentPart(text: str)

Text content part for OpenAI chat messages.

This class represents text as part of a message's content.

Examples:

>>> text_part = TextContentPart(text="Hello, world!")
>>> text_part.text
'Hello, world!'
>>> text_part.type
'text'

Parameters:

Name Type Description Default
text str

Text content

required
Source code in python/scouter/stubs.pyi
def __init__(self, text: str) -> None:
    """Initialize text content part.

    Args:
        text (str):
            Text content
    """

text property

text: str

The text content.

type property

type: str

The content part type (always 'text').

TextFormat

TextFormat(type: str)

Text format for custom tool outputs.

This class defines an unconstrained, free-form text output format for custom tools.

Examples:

>>> format = TextFormat(type="text")
>>> format.type
'text'

Parameters:

Name Type Description Default
type str

Format type (typically "text")

required
Source code in python/scouter/stubs.pyi
def __init__(self, type: str) -> None:
    """Initialize text format.

    Args:
        type (str):
            Format type (typically "text")
    """

type property

type: str

The format type.

ThinkingBlock

Thinking content block in response.

Claude's internal thinking process.

Examples:

>>> block = response.content[0]
>>> print(block.thinking)
>>> if block.signature:
...     print(f"Signature: {block.signature}")

signature property

signature: Optional[str]

Cryptographic signature.

thinking property

thinking: str

Thinking content.

type property

type: str

Block type.

ThinkingBlockParam

ThinkingBlockParam(
    thinking: str, signature: Optional[str] = None
)

Thinking content block parameter.

Claude's internal thinking/reasoning process.

Examples:

>>> block = ThinkingBlockParam(
...     thinking="Let me think about this...",
...     signature="signature_string"
... )

Parameters:

Name Type Description Default
thinking str

The thinking content

required
signature Optional[str]

Cryptographic signature

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    thinking: str,
    signature: Optional[str] = None,
) -> None:
    """Initialize thinking block parameter.

    Args:
        thinking (str):
            The thinking content
        signature (Optional[str]):
            Cryptographic signature
    """

signature property

signature: Optional[str]

Cryptographic signature.

thinking property

thinking: str

Thinking content.

type property

type: str

Content type (always 'thinking').

ThinkingLevel

Level of model thinking/reasoning to apply.

Controls the depth of reasoning the model performs before generating its final response.

Examples:

>>> level = ThinkingLevel.High
>>> level.value
'HIGH'

High class-attribute instance-attribute

High = 'ThinkingLevel'

High level of thinking

Low class-attribute instance-attribute

Low = 'ThinkingLevel'

Low level of thinking

ThinkingLevelUnspecified class-attribute instance-attribute

ThinkingLevelUnspecified = 'ThinkingLevel'

Unspecified thinking level

TokenLogProbs

logprob property

logprob: float

The log probability of the token.

token property

token: str

The token for which the log probabilities are calculated.

ToolCall

Tool call information from OpenAI responses.

This class represents a single tool call made by the model during generation.

Examples:

>>> # Accessing tool call from response
>>> choice = response.choices[0]
>>> if choice.message.tool_calls:
...     tool_call = choice.message.tool_calls[0]
...     print(tool_call.function.name)
...     print(tool_call.function.arguments)

function property

function: Function

The function call information.

id property

id: str

The tool call ID.

type property

type: str

The tool call type.

ToolChoiceMode

Mode for tool choice behavior in chat completions.

This enum defines how the model should handle tool calls during generation.

Examples:

>>> mode = ToolChoiceMode.Auto
>>> mode.value
'auto'

Auto class-attribute instance-attribute

Auto = 'ToolChoiceMode'

Model can choose to call tools or generate a message

NA class-attribute instance-attribute

NA = 'ToolChoiceMode'

Model will not call any tools

Required class-attribute instance-attribute

Required = 'ToolChoiceMode'

Model must call one or more tools

ToolConfig

ToolConfig(
    function_calling_config: Optional[
        FunctionCallingConfig
    ] = None,
    retrieval_config: Optional[RetrievalConfig] = None,
)

Configuration for tool usage.

Controls function calling and retrieval behavior across all tools.

Examples:

>>> config = ToolConfig(
...     function_calling_config=FunctionCallingConfig(mode=Mode.Auto),
...     retrieval_config=RetrievalConfig(
...         lat_lng=LatLng(latitude=37.7749, longitude=-122.4194),
...         language_code="en-US"
...     )
... )

Parameters:

Name Type Description Default
function_calling_config Optional[FunctionCallingConfig]

Function calling configuration

None
retrieval_config Optional[RetrievalConfig]

Retrieval configuration

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    function_calling_config: Optional[FunctionCallingConfig] = None,
    retrieval_config: Optional[RetrievalConfig] = None,
) -> None:
    """Initialize tool configuration.

    Args:
        function_calling_config (Optional[FunctionCallingConfig]):
            Function calling configuration
        retrieval_config (Optional[RetrievalConfig]):
            Retrieval configuration
    """

function_calling_config property

function_calling_config: Optional[FunctionCallingConfig]

The function calling configuration.

retrieval_config property

retrieval_config: Optional[RetrievalConfig]

The retrieval configuration.

ToolDefinition

ToolDefinition(function_name: str)

Definition of a tool for allowed tools configuration.

This class defines a tool that can be included in an allowed tools list.

Examples:

>>> tool = ToolDefinition(function_name="get_weather")
>>> tool.type
'function'

Parameters:

Name Type Description Default
function_name str

Name of the function this tool wraps

required
Source code in python/scouter/stubs.pyi
def __init__(self, function_name: str) -> None:
    """Initialize tool definition.

    Args:
        function_name (str):
            Name of the function this tool wraps
    """

function property

function: FunctionChoice

The function specification.

type property

type: str

The tool type (always 'function').

ToolResultBlockParam

ToolResultBlockParam(
    tool_use_id: str,
    is_error: Optional[bool] = None,
    cache_control: Optional[CacheControl] = None,
    content: Optional[List[Any]] = None,
)

Tool result content block parameter.

Contains the result from executing a tool.

Examples:

>>> # Success result
>>> content = [TextBlockParam(text="Result data", cache_control=None, citations=None)]
>>> block = ToolResultBlockParam(
...     tool_use_id="tool_call_123",
...     is_error=False,
...     cache_control=None,
...     content=content
... )
>>>
>>> # Error result
>>> block = ToolResultBlockParam(
...     tool_use_id="tool_call_123",
...     is_error=True,
...     cache_control=None,
...     content=None
... )

Parameters:

Name Type Description Default
tool_use_id str

ID of the tool call this is a result for

required
is_error Optional[bool]

Whether this is an error result

None
cache_control Optional[CacheControl]

Cache control settings

None
content Optional[List[Any]]

Result content blocks

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    tool_use_id: str,
    is_error: Optional[bool] = None,
    cache_control: Optional["CacheControl"] = None,
    content: Optional[List[Any]] = None,
) -> None:
    """Initialize tool result block parameter.

    Args:
        tool_use_id (str):
            ID of the tool call this is a result for
        is_error (Optional[bool]):
            Whether this is an error result
        cache_control (Optional[CacheControl]):
            Cache control settings
        content (Optional[List[Any]]):
            Result content blocks
    """

cache_control property

cache_control: Optional[CacheControl]

Cache control settings.

content property

content: Optional[Any]

Result content.

tool_use_id property

tool_use_id: str

Tool use ID.

type property

type: str

Content type (always 'tool_result').

ToolUseBlock

Tool use content block in response.

Represents a tool call from Claude.

Examples:

>>> block = response.content[0]
>>> print(f"Tool: {block.name}")
>>> print(f"ID: {block.id}")
>>> print(f"Input: {block.input}")

id property

id: str

Tool call ID.

name property

name: str

Tool name.

type property

type: str

Block type.

ToolUseBlockParam

ToolUseBlockParam(
    id: str,
    name: str,
    input: Any,
    cache_control: Optional[CacheControl] = None,
)

Tool use content block parameter.

Represents a tool call made by the model.

Examples:

>>> block = ToolUseBlockParam(
...     id="tool_call_123",
...     name="get_weather",
...     input={"location": "San Francisco"},
...     cache_control=None
... )

Parameters:

Name Type Description Default
id str

Tool call ID

required
name str

Tool name

required
input Any

Tool input parameters

required
cache_control Optional[CacheControl]

Cache control settings

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    id: str,
    name: str,
    input: Any,
    cache_control: Optional["CacheControl"] = None,
) -> None:
    """Initialize tool use block parameter.

    Args:
        id (str):
            Tool call ID
        name (str):
            Tool name
        input (Any):
            Tool input parameters
        cache_control (Optional[CacheControl]):
            Cache control settings
    """

cache_control property

cache_control: Optional[CacheControl]

Cache control settings.

id property

id: str

Tool call ID.

input property

input: Any

Tool input parameters.

name property

name: str

Tool name.

type property

type: str

Content type (always 'tool_use').

TopCandidates

Top token candidates at a decoding step.

List of top candidates sorted by log probability.

Examples:

>>> top = TopCandidates(
...     candidates=[
...         LogprobsCandidate(token="hello", log_probability=-0.5),
...         LogprobsCandidate(token="hi", log_probability=-1.2)
...     ]
... )

candidates property

candidates: Optional[List[LogprobsCandidate]]

List of candidates.

TopLogProbs

Top log probability information for a token.

This class represents one of the top alternative tokens considered by the model, with its log probability.

Examples:

>>> # Accessing top log probs
>>> choice = response.choices[0]
>>> if choice.logprobs and choice.logprobs.content:
...     for log_content in choice.logprobs.content:
...         if log_content.top_logprobs:
...             for top in log_content.top_logprobs:
...                 print(f"{top.token}: {top.logprob}")

bytes property

bytes: Optional[List[int]]

UTF-8 bytes of the token.

logprob property

logprob: float

Log probability of the token.

token property

token: str

The token.
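Log probabilities in these records are natural logarithms; exponentiating recovers the probability mass the model assigned to each candidate token. A small self-contained sketch:

```python
import math

# logprob values are natural logs of token probabilities;
# exp() recovers the probability itself.
def to_probability(logprob: float) -> float:
    return math.exp(logprob)

# A token with logprob -0.5 was assigned roughly 61% probability.
print(f"{to_probability(-0.5):.4f}")  # 0.6065
```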

TraceBaggageRecord

Represents a single baggage record associated with a trace.

TraceBaggageResponse

Response structure containing trace baggage records.

TraceFilters

TraceFilters(
    service_name: Optional[str] = None,
    has_errors: Optional[bool] = None,
    status_code: Optional[int] = None,
    start_time: Optional[datetime] = None,
    end_time: Optional[datetime] = None,
    limit: Optional[int] = None,
    cursor_created_at: Optional[datetime] = None,
    cursor_trace_id: Optional[str] = None,
)

Filter options for trace queries. This class is generated from a Rust pyclass.

Parameters:

Name Type Description Default
service_name Optional[str]

Service name filter

None
has_errors Optional[bool]

Filter by presence of errors

None
status_code Optional[int]

Filter by root span status code

None
start_time Optional[datetime]

Start time boundary (UTC)

None
end_time Optional[datetime]

End time boundary (UTC)

None
limit Optional[int]

Maximum number of results to return

None
cursor_created_at Optional[datetime]

Pagination cursor: created at timestamp

None
cursor_trace_id Optional[str]

Pagination cursor: trace ID

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    service_name: Optional[str] = None,
    has_errors: Optional[bool] = None,
    status_code: Optional[int] = None,
    start_time: Optional[datetime.datetime] = None,
    end_time: Optional[datetime.datetime] = None,
    limit: Optional[int] = None,
    cursor_created_at: Optional[datetime.datetime] = None,
    cursor_trace_id: Optional[str] = None,
) -> None:
    """Initialize trace filters.

    Args:
        service_name:
            Service name filter
        has_errors:
            Filter by presence of errors
        status_code:
            Filter by root span status code
        start_time:
            Start time boundary (UTC)
        end_time:
            End time boundary (UTC)
        limit:
            Maximum number of results to return
        cursor_created_at:
            Pagination cursor: created at timestamp
        cursor_trace_id:
            Pagination cursor: trace ID
    """
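The two cursor fields implement keyset pagination: pass the `created_at` and `trace_id` of the last row from one page to fetch the next. The server's exact sort order isn't documented here, so this self-contained sketch assumes a newest-first ordering purely to illustrate how the cursor advances:

```python
from datetime import datetime, timedelta

# Hypothetical in-memory stand-in for the trace store,
# sorted newest-first by (created_at, trace_id).
base = datetime(2024, 1, 1)
traces = [
    {"created_at": base - timedelta(minutes=i), "trace_id": f"trace-{i:03d}"}
    for i in range(10)
]

def fetch_page(limit, cursor_created_at=None, cursor_trace_id=None):
    """Keyset pagination: return up to `limit` rows past the cursor."""
    rows = traces
    if cursor_created_at is not None:
        rows = [
            r
            for r in rows
            if (r["created_at"], r["trace_id"])
            < (cursor_created_at, cursor_trace_id)
        ]
    return rows[:limit]

page1 = fetch_page(limit=4)
last = page1[-1]  # feed the last row back in as the next cursor
page2 = fetch_page(
    limit=4,
    cursor_created_at=last["created_at"],
    cursor_trace_id=last["trace_id"],
)
print([r["trace_id"] for r in page2])  # trace-004 .. trace-007
```

Unlike offset-based pagination, a keyset cursor stays stable when new traces arrive between requests.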

TraceListItem

Represents a summary item for a trace in a list view.

TraceMetricBucket

Represents aggregated trace metrics for a specific time bucket.

TraceMetricsRequest

TraceMetricsRequest(
    start_time: datetime,
    end_time: datetime,
    bucket_interval: str,
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
)

Request payload for fetching trace metrics.

Parameters:

Name Type Description Default
start_time datetime

Start time boundary (UTC)

required
end_time datetime

End time boundary (UTC)

required
bucket_interval str

The time interval for metric aggregation buckets (e.g., '1 minutes', '30 minutes')

required
space Optional[str]

Model space filter

None
name Optional[str]

Model name filter

None
version Optional[str]

Model version filter

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    start_time: datetime.datetime,
    end_time: datetime.datetime,
    bucket_interval: str,
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
) -> None:
    """Initialize trace metrics request.

    Args:
        start_time:
            Start time boundary (UTC)
        end_time:
            End time boundary (UTC)
        bucket_interval:
            The time interval for metric aggregation buckets (e.g., '1 minutes', '30 minutes')
        space:
            Model space filter
        name:
            Model name filter
        version:
            Model version filter
    """
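The `bucket_interval` strings shown above ('1 minutes', '30 minutes') determine how many aggregation buckets a query window produces. A sketch under the assumption that only the documented '&lt;count&gt; minutes' form is used:

```python
from datetime import timedelta

def parse_bucket_interval(interval: str) -> timedelta:
    """Parse intervals like '1 minutes' or '30 minutes' into a timedelta.

    Only the '<count> minutes' form shown in these docs is handled.
    """
    count, unit = interval.split()
    if unit != "minutes":
        raise ValueError(f"unsupported unit: {unit!r}")
    return timedelta(minutes=int(count))

# A 2-hour query window with '30 minutes' buckets yields 4 buckets.
window = timedelta(hours=2)
n_buckets = window // parse_bucket_interval("30 minutes")
print(n_buckets)  # 4
```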

TraceMetricsResponse

Response structure containing aggregated trace metrics.

TracePaginationResponse

Response structure for paginated trace list requests.

TraceSpan

Detailed information for a single span within a trace.

TraceSpansResponse

Response structure containing a list of spans for a trace.

get_span_by_name

get_span_by_name(span_name: str) -> Optional[TraceSpan]

Retrieve a span by its name.

Source code in python/scouter/stubs.pyi
def get_span_by_name(self, span_name: str) -> Optional[TraceSpan]:
    """Retrieve a span by its name."""

TrafficType

Type of API traffic for billing purposes.

Indicates whether the request uses pay-as-you-go or provisioned quota.

Examples:

>>> traffic = TrafficType.OnDemand
>>> traffic.value
'ON_DEMAND'

OnDemand class-attribute instance-attribute

OnDemand = 'TrafficType'

Pay-as-you-go quota

ProvisionedThroughput class-attribute instance-attribute

ProvisionedThroughput = 'TrafficType'

Provisioned throughput quota

TrafficTypeUnspecified class-attribute instance-attribute

TrafficTypeUnspecified = 'TrafficType'

Unspecified traffic type

UrlCitation

URL citation from OpenAI web search.

This class represents a citation to a web source used by the model when web search is enabled.

Examples:

>>> # Accessing citations from response
>>> choice = response.choices[0]
>>> for annotation in choice.message.annotations:
...     for citation in annotation.url_citations:
...         print(f"{citation.title}: {citation.url}")

end_index property

end_index: int

The end index in the message content.

start_index property

start_index: int

The start index in the message content.

title property

title: str

The page title.

url property

url: str

The URL.
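`start_index` and `end_index` delimit the span of the message content that the citation supports. A small sketch, treating the indices as character offsets (an assumption, with made-up content and offsets):

```python
# Hypothetical message content and citation offsets; whether the
# indices count characters or bytes is an assumption here.
content = "The Eiffel Tower is 330 metres tall."
start_index, end_index = 4, 16

cited_span = content[start_index:end_index]
print(cited_span)  # Eiffel Tower
```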

UrlContext

UrlContext()

URL context tool configuration.

Enables retrieval from user-provided URLs.

This type has no configuration fields.

Examples:

>>> url_context = UrlContext()
Source code in python/scouter/stubs.pyi
def __init__(self) -> None:
    """Initialize URL context tool."""

UrlContextMetadata

Metadata about URL context tool usage.

Contains information about URLs retrieved by the tool.

Examples:

>>> metadata = UrlContextMetadata(
...     url_metadata=[
...         UrlMetadata(retrieved_url="https://example.com", ...)
...     ]
... )

url_metadata property

url_metadata: Optional[List[UrlMetadata]]

List of URL metadata.

UrlImageSource

UrlImageSource(url: str)

URL-based image source.

Image referenced by URL.

Examples:

>>> source = UrlImageSource(url="https://example.com/image.jpg")

Parameters:

Name Type Description Default
url str

Image URL

required
Source code in python/scouter/stubs.pyi
def __init__(self, url: str) -> None:
    """Initialize URL image source.

    Args:
        url (str):
            Image URL
    """

type property

type: str

Source type (always 'url').

url property

url: str

Image URL.

UrlMetadata

Metadata about URL retrieval.

Information about a URL retrieved by the URL context tool.

Examples:

>>> metadata = UrlMetadata(
...     retrieved_url="https://example.com",
...     url_retrieval_status=UrlRetrievalStatus.UrlRetrievalStatusSuccess
... )

retrieved_url property

retrieved_url: Optional[str]

The retrieved URL.

url_retrieval_status property

url_retrieval_status: Optional[UrlRetrievalStatus]

Retrieval status.

UrlPDFSource

UrlPDFSource(url: str)

URL-based PDF source.

PDF document referenced by URL.

Examples:

>>> source = UrlPDFSource(url="https://example.com/document.pdf")

Parameters:

Name Type Description Default
url str

PDF document URL

required
Source code in python/scouter/stubs.pyi
def __init__(self, url: str) -> None:
    """Initialize URL PDF source.

    Args:
        url (str):
            PDF document URL
    """

type property

type: str

Source type (always 'url').

url property

url: str

PDF URL.

UrlRetrievalStatus

Status of URL retrieval operation.

Indicates whether a URL was successfully retrieved by the tool.

Examples:

>>> status = UrlRetrievalStatus.UrlRetrievalStatusSuccess
>>> status.value
'URL_RETRIEVAL_STATUS_SUCCESS'

UrlRetrievalStatusError class-attribute instance-attribute

UrlRetrievalStatusError = 'UrlRetrievalStatus'

URL retrieval failed

UrlRetrievalStatusSuccess class-attribute instance-attribute

UrlRetrievalStatusSuccess = 'UrlRetrievalStatus'

URL retrieved successfully

UrlRetrievalStatusUnspecified class-attribute instance-attribute

UrlRetrievalStatusUnspecified = 'UrlRetrievalStatus'

Unspecified status

Usage

Token usage statistics for OpenAI chat completions.

This class provides comprehensive token usage information, including detailed breakdowns for both prompt and completion tokens.

Examples:

>>> # Accessing usage information
>>> usage = response.usage
>>> print(f"Total tokens: {usage.total_tokens}")
>>> print(f"Prompt tokens: {usage.prompt_tokens}")
>>> print(f"Completion tokens: {usage.completion_tokens}")
>>>
>>> # Detailed breakdown
>>> print(f"Cached tokens: {usage.prompt_tokens_details.cached_tokens}")
>>> print(f"Reasoning tokens: {usage.completion_tokens_details.reasoning_tokens}")

completion_tokens property

completion_tokens: int

Total completion tokens.

completion_tokens_details property

completion_tokens_details: CompletionTokenDetails

Detailed completion token breakdown.

finish_reason property

finish_reason: Optional[str]

Finish reason if applicable.

prompt_tokens property

prompt_tokens: int

Total prompt tokens.

prompt_tokens_details property

prompt_tokens_details: PromptTokenDetails

Detailed prompt token breakdown.

total_tokens property

total_tokens: int

Total tokens (prompt + completion).
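`total_tokens` is the sum of prompt and completion tokens, and the detail objects support derived metrics such as a prompt-cache hit rate. A sketch with hypothetical counts:

```python
# Hypothetical token counts mirroring the Usage fields above.
prompt_tokens = 1200
completion_tokens = 300
cached_tokens = 800  # from prompt_tokens_details.cached_tokens

total_tokens = prompt_tokens + completion_tokens  # 1500
cache_hit_rate = cached_tokens / prompt_tokens
print(total_tokens, f"{cache_hit_rate:.0%}")  # 1500 67%
```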

UsageMetadata

Token usage metadata for a request/response.

Provides detailed breakdown of token usage across different components.

Examples:

>>> usage = UsageMetadata(
...     prompt_token_count=100,
...     candidates_token_count=50,
...     total_token_count=150,
...     cached_content_token_count=20
... )

cache_tokens_details property

cache_tokens_details: Optional[List[ModalityTokenCount]]

Cache tokens by modality.

cached_content_token_count property

cached_content_token_count: Optional[int]

Tokens from cached content.

candidates_token_count property

candidates_token_count: Optional[int]

Tokens in generated candidates.

candidates_tokens_details property

candidates_tokens_details: Optional[
    List[ModalityTokenCount]
]

Candidate tokens by modality.

prompt_token_count property

prompt_token_count: Optional[int]

Tokens in the prompt.

prompt_tokens_details property

prompt_tokens_details: Optional[List[ModalityTokenCount]]

Prompt tokens by modality.

thoughts_token_count property

thoughts_token_count: Optional[int]

Tokens in thinking/reasoning.

tool_use_prompt_token_count property

tool_use_prompt_token_count: Optional[int]

Tokens from tool use results.

tool_use_prompt_tokens_details property

tool_use_prompt_tokens_details: Optional[
    List[ModalityTokenCount]
]

Tool use tokens by modality.

total_token_count property

total_token_count: Optional[int]

Total token count.

traffic_type property

traffic_type: Optional[TrafficType]

Traffic type for billing.

UsageObject

Token usage for embedding request.

This class provides token usage statistics for embedding requests.

Examples:

>>> usage = response.usage
>>> print(f"Prompt tokens: {usage.prompt_tokens}")
>>> print(f"Total tokens: {usage.total_tokens}")

prompt_tokens property

prompt_tokens: int

Tokens in input prompts.

total_tokens property

total_tokens: int

Total tokens processed.

VertexAISearch

VertexAISearch(
    datastore: Optional[str] = None,
    engine: Optional[str] = None,
    max_results: Optional[int] = None,
    filter: Optional[str] = None,
    data_store_specs: Optional[List[DataStoreSpec]] = None,
)

Vertex AI Search retrieval configuration.

Configures retrieval from Vertex AI Search datastores or engines.

Examples:

>>> # Using a datastore
>>> search = VertexAISearch(
...     datastore="projects/my-project/locations/us/collections/default/dataStores/my-store",
...     max_results=5
... )
>>> # Using an engine with multiple datastores
>>> search = VertexAISearch(
...     engine="projects/my-project/locations/us/collections/default/engines/my-engine",
...     data_store_specs=[
...         DataStoreSpec(data_store="store1", filter="category:a"),
...         DataStoreSpec(data_store="store2", filter="category:b")
...     ]
... )

Parameters:

Name Type Description Default
datastore Optional[str]

Datastore resource name

None
engine Optional[str]

Engine resource name

None
max_results Optional[int]

Maximum number of results (default 10, max 10)

None
filter Optional[str]

Filter expression

None
data_store_specs Optional[List[DataStoreSpec]]

Datastore specifications (for engines)

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    datastore: Optional[str] = None,
    engine: Optional[str] = None,
    max_results: Optional[int] = None,
    filter: Optional[str] = None,
    data_store_specs: Optional[List[DataStoreSpec]] = None,
) -> None:
    """Initialize Vertex AI Search configuration.

    Args:
        datastore (Optional[str]):
            Datastore resource name
        engine (Optional[str]):
            Engine resource name
        max_results (Optional[int]):
            Maximum number of results (default 10, max 10)
        filter (Optional[str]):
            Filter expression
        data_store_specs (Optional[List[DataStoreSpec]]):
            Datastore specifications (for engines)
    """

data_store_specs property

data_store_specs: Optional[List[DataStoreSpec]]

Datastore specifications.

datastore property

datastore: Optional[str]

The datastore resource name.

engine property

engine: Optional[str]

The engine resource name.

filter property

filter: Optional[str]

The filter expression.

max_results property

max_results: Optional[int]

Maximum results to return.

VertexGoogleSearch

VertexGoogleSearch(
    exclude_domains: Optional[List[str]] = None,
    blocking_confidence: Optional[
        PhishBlockThreshold
    ] = None,
)

Google Search tool configuration (Vertex API).

Configures Google Search with domain blocking and phishing filters.

Examples:

>>> search = VertexGoogleSearch(
...     exclude_domains=["example.com", "spam.com"],
...     blocking_confidence=PhishBlockThreshold.BlockMediumAndAbove
... )

Parameters:

Name Type Description Default
exclude_domains Optional[List[str]]

Domains to exclude from results

None
blocking_confidence Optional[PhishBlockThreshold]

Phishing blocking threshold

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    exclude_domains: Optional[List[str]] = None,
    blocking_confidence: Optional[PhishBlockThreshold] = None,
) -> None:
    """Initialize Vertex Google Search configuration.

    Args:
        exclude_domains (Optional[List[str]]):
            Domains to exclude from results
        blocking_confidence (Optional[PhishBlockThreshold]):
            Phishing blocking threshold
    """

blocking_confidence property

blocking_confidence: Optional[PhishBlockThreshold]

Phishing blocking threshold.

exclude_domains property

exclude_domains: Optional[List[str]]

Domains to exclude.

VertexRagStore

Vertex RAG Store retrieval configuration.

Configures retrieval from Vertex RAG Store.

Examples:

>>> store = VertexRagStore(
...     rag_resources=[
...         RagResource(
...             rag_corpus="projects/my-project/locations/us/ragCorpora/my-corpus"
...         )
...     ],
...     rag_retrieval_config=RagRetrievalConfig(top_k=5),
...     similarity_top_k=10
... )

rag_resources property

rag_resources: Optional[List[RagResource]]

RAG resources to use.

rag_retrieval_config property

rag_retrieval_config: Optional[RagRetrievalConfig]

Retrieval configuration.

similarity_top_k property

similarity_top_k: Optional[int]

Number of similar results.

vector_distance_threshold property

vector_distance_threshold: Optional[float]

Vector distance threshold.

VideoMetadata

Metadata for video content.

Specifies time ranges and frame rates for video processing.

end_offset property

end_offset: Optional[str]

The end offset.

start_offset property

start_offset: Optional[str]

The start offset.

VoiceConfig

VoiceConfig(prebuilt_voice_config: PrebuiltVoiceConfig)

Voice configuration for speech generation.

Configures the voice to use for text-to-speech.

Examples:

>>> config = VoiceConfig(
...     prebuilt_voice_config=PrebuiltVoiceConfig(voice_name="Puck")
... )

Parameters:

Name Type Description Default
prebuilt_voice_config PrebuiltVoiceConfig

Prebuilt voice to use

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    prebuilt_voice_config: PrebuiltVoiceConfig,
) -> None:
    """Initialize voice configuration.

    Args:
        prebuilt_voice_config (PrebuiltVoiceConfig):
            Prebuilt voice to use
    """

prebuilt_voice_config property

prebuilt_voice_config: PrebuiltVoiceConfig

The prebuilt voice configuration.

Web

Web source information.

Information about a web source used for grounding.

Examples:

>>> web = Web(
...     uri="https://example.com/page",
...     title="Example Page",
...     domain="example.com"
... )

domain property

domain: Optional[str]

The domain name.

title property

title: Optional[str]

The page title.

uri property

uri: Optional[str]

The source URI.

WebSearchResultBlock

Web search result block in response.

Single web search result.

Examples:

>>> result = block.content[0]
>>> print(f"{result.title}: {result.url}")
>>> if result.page_age:
...     print(f"Age: {result.page_age}")

encrypted_content property

encrypted_content: str

Encrypted content.

page_age property

page_age: Optional[str]

Page age.

title property

title: str

Result title.

type property

type: str

Block type.

url property

url: str

Result URL.

WebSearchResultBlockParam

WebSearchResultBlockParam(
    encrypted_content: str,
    title: str,
    url: str,
    page_agent: Optional[str] = None,
)

Web search result block parameter.

Contains a single web search result.

Examples:

>>> block = WebSearchResultBlockParam(
...     encrypted_content="encrypted_data",
...     title="Search Result",
...     url="https://example.com",
...     page_agent="5 hours ago"
... )

Parameters:

Name Type Description Default
encrypted_content str

Encrypted content data

required
title str

Result title

required
url str

Result URL

required
page_agent Optional[str]

Page age information

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    encrypted_content: str,
    title: str,
    url: str,
    page_agent: Optional[str] = None,
) -> None:
    """Initialize web search result block parameter.

    Args:
        encrypted_content (str):
            Encrypted content data
        title (str):
            Result title
        url (str):
            Result URL
        page_agent (Optional[str]):
            Page age information
    """

encrypted_content property

encrypted_content: str

Encrypted content.

page_agent property

page_agent: Optional[str]

Page age information.

title property

title: str

Result title.

type property

type: str

Content type (always 'web_search_result').

url property

url: str

Result URL.

WebSearchToolResultBlock

Web search tool result block in response.

Contains web search results or error.

Examples:

>>> block = response.content[0]
>>> print(f"Tool use ID: {block.tool_use_id}")
>>> if isinstance(block.content, list):
...     for result in block.content:
...         print(result.title)

content property

content: Any

Search results or error.

tool_use_id property

tool_use_id: str

Tool use ID.

type property

type: str

Block type.

WebSearchToolResultBlockParam

WebSearchToolResultBlockParam(
    tool_use_id: str,
    content: List[WebSearchResultBlockParam],
    cache_control: Optional[CacheControl] = None,
)

Web search tool result block parameter.

Contains multiple web search results from a tool call.

Examples:

>>> results = [WebSearchResultBlockParam(...), WebSearchResultBlockParam(...)]
>>> block = WebSearchToolResultBlockParam(
...     tool_use_id="search_123",
...     content=results,
...     cache_control=None
... )

Parameters:

Name Type Description Default
tool_use_id str

ID of the web search tool call

required
content List[WebSearchResultBlockParam]

List of search results

required
cache_control Optional[CacheControl]

Cache control settings

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    tool_use_id: str,
    content: List[WebSearchResultBlockParam],
    cache_control: Optional["CacheControl"] = None,
) -> None:
    """Initialize web search tool result block parameter.

    Args:
        tool_use_id (str):
            ID of the web search tool call
        content (List[WebSearchResultBlockParam]):
            List of search results
        cache_control (Optional[CacheControl]):
            Cache control settings
    """

cache_control property

cache_control: Optional[CacheControl]

Cache control settings.

content property

content: List[WebSearchResultBlockParam]

Search results.

tool_use_id property

tool_use_id: str

Tool use ID.

type property

type: str

Content type (always 'web_search_tool_result').

WebSearchToolResultError

Web search tool error result.

Error information from web search tool.

Examples:

>>> error = block.content
>>> print(f"Error: {error.error_code}")

error_code property

error_code: str

Error code.

type property

type: str

Error type.

WordStats

words property

words: Dict[str, Distinct]

Distinct word counts

Workflow

Workflow(name: str)

Parameters:

Name Type Description Default
name str

The name of the workflow.

required
Source code in python/scouter/stubs.pyi
def __init__(self, name: str) -> None:
    """Create a Workflow object.

    Args:
        name (str):
            The name of the workflow.
    """

agents property

agents: Dict[str, Agent]

The agents in the workflow.

is_workflow property

is_workflow: bool

Returns True if the workflow is a valid workflow, otherwise False. This is used to determine if the workflow can be executed.

name property

name: str

The name of the workflow.

task_list property

task_list: TaskList

The tasks in the workflow.

add_agent

add_agent(agent: Agent) -> None

Add an agent to the workflow.

Parameters:

Name Type Description Default
agent Agent

The agent to add to the workflow.

required
Source code in python/scouter/stubs.pyi
def add_agent(self, agent: Agent) -> None:
    """Add an agent to the workflow.

    Args:
        agent (Agent):
            The agent to add to the workflow.
    """

add_agents

add_agents(agents: List[Agent]) -> None

Add multiple agents to the workflow.

Source code in python/scouter/stubs.pyi
def add_agents(self, agents: List[Agent]) -> None:
    """Add multiple agents to the workflow."""

add_task

add_task(task: Task, output_type: Optional[Any]) -> None

Add a task to the workflow.

Parameters:

Name Type Description Default
task Task

The task to add to the workflow.

required
output_type Optional[Any]

The output type to use for the task. This can either be a Pydantic BaseModel class or a supported potato_head response type such as Score.

required
Source code in python/scouter/stubs.pyi
def add_task(self, task: Task, output_type: Optional[Any]) -> None:
    """Add a task to the workflow.

    Args:
        task (Task):
            The task to add to the workflow.
        output_type (Optional[Any]):
            The output type to use for the task. This can either be a Pydantic `BaseModel` class
            or a supported potato_head response type such as `Score`.
    """

add_task_output_types

add_task_output_types(
    task_output_types: Dict[str, Any]
) -> None

Add output types for tasks in the workflow. This is primarily used for rehydrating the task output types when loading a workflow from JSON, as Python objects are not serializable.

Parameters:

Name Type Description Default
task_output_types Dict[str, Any]

A dictionary mapping task IDs to their output types. This can either be a Pydantic BaseModel class or a supported potato_head response type such as Score.

required
Source code in python/scouter/stubs.pyi
def add_task_output_types(self, task_output_types: Dict[str, Any]) -> None:
    """Add output types for tasks in the workflow. This is primarily used for
    rehydrating the task output types when loading a workflow from JSON,
    as Python objects are not serializable.

    Args:
        task_output_types (Dict[str, Any]):
            A dictionary mapping task IDs to their output types.
            This can either be a Pydantic `BaseModel` class or a supported potato_head response type such as `Score`.
    """

add_tasks

add_tasks(tasks: List[Task]) -> None

Add multiple tasks to the workflow.

Parameters:

Name Type Description Default
tasks List[Task]

The tasks to add to the workflow.

required
Source code in python/scouter/stubs.pyi
def add_tasks(self, tasks: List[Task]) -> None:
    """Add multiple tasks to the workflow.

    Args:
        tasks (List[Task]):
            The tasks to add to the workflow.
    """

execute_task

execute_task(
    task_id: str, global_context: Optional[Any] = None
) -> Any

Execute a single task in the workflow by its ID.

Parameters:

Name Type Description Default
task_id str

The ID of the task to execute.

required
global_context Optional[Any]

Any serializable global context to bind to the task before execution. This is typically a dictionary or Pydantic BaseModel.

None

Returns:

Name Type Description
Any Any

The result of the task execution.

Source code in python/scouter/stubs.pyi
def execute_task(
    self,
    task_id: str,
    global_context: Optional[Any] = None,
) -> Any:
    """Execute a single task in the workflow by its ID.
    Args:
        task_id (str):
            The ID of the task to execute.
        global_context (Optional[Any]):
            Any serializable global context to bind to the task before execution.
            This is typically a dictionary or Pydantic BaseModel.
    Returns:
        Any:
            The result of the task execution.
    """

execution_plan

execution_plan() -> Dict[str, List[str]]

Get the execution plan for the workflow.

Returns:

Type Description
Dict[str, List[str]]

Dict[str, List[str]]: A dictionary where the keys are task IDs and the values are lists of task IDs that the task depends on.

Source code in python/scouter/stubs.pyi
def execution_plan(self) -> Dict[str, List[str]]:
    """Get the execution plan for the workflow.

    Returns:
        Dict[str, List[str]]:
            A dictionary where the keys are task IDs and the values are lists of task IDs
            that the task depends on.
    """

is_complete

is_complete() -> bool

Check if the workflow is complete.

Returns:

Name Type Description
bool bool

True if the workflow is complete, False otherwise.

Source code in python/scouter/stubs.pyi
def is_complete(self) -> bool:
    """Check if the workflow is complete.

    Returns:
        bool:
            True if the workflow is complete, False otherwise.
    """

model_dump_json

model_dump_json() -> str

Dump the workflow to a JSON string.

Returns:

Name Type Description
str str

The JSON string.

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Dump the workflow to a JSON string.

    Returns:
        str:
            The JSON string.
    """

model_validate_json staticmethod

model_validate_json(
    json_string: str,
    output_types: Optional[Dict[str, Any]] = None,
) -> Workflow

Load a workflow from a JSON string.

Parameters:

Name Type Description Default
json_string str

The JSON string to validate.

required
output_types Optional[Dict[str, Any]]

A dictionary mapping task IDs to their output types. This can either be a Pydantic BaseModel class or a supported potato_head response type such as Score.

None

Returns:

Name Type Description
Workflow Workflow

The workflow object.

Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate_json(
    json_string: str,
    output_types: Optional[Dict[str, Any]] = None,
) -> "Workflow":
    """Load a workflow from a JSON string.

    Args:
        json_string (str):
            The JSON string to validate.
        output_types (Optional[Dict[str, Any]]):
            A dictionary mapping task IDs to their output types.
            This can either be a Pydantic `BaseModel` class or a supported potato_head response type such as `Score`.

    Returns:
        Workflow:
            The workflow object.
    """

pending_count

pending_count() -> int

Get the number of pending tasks in the workflow.

Returns:

Name Type Description
int int

The number of pending tasks in the workflow.

Source code in python/scouter/stubs.pyi
def pending_count(self) -> int:
    """Get the number of pending tasks in the workflow.

    Returns:
        int:
            The number of pending tasks in the workflow.
    """

run

run(
    global_context: Optional[Dict[str, Any]] = None
) -> WorkflowResult

Run the workflow. This will execute all tasks in the workflow and return when all tasks are complete.

Parameters:

Name Type Description Default
global_context Optional[Dict[str, Any]]

A dictionary of global context to bind to the workflow. All tasks in the workflow will have this context bound to them.

None
Source code in python/scouter/stubs.pyi
def run(
    self,
    global_context: Optional[Dict[str, Any]] = None,
) -> "WorkflowResult":
    """Run the workflow. This will execute all tasks in the workflow and return when all tasks are complete.

    Args:
        global_context (Optional[Dict[str, Any]]):
            A dictionary of global context to bind to the workflow.
            All tasks in the workflow will have this context bound to them.
    """

WorkflowComparison

Represents a comparison between matching workflows in baseline and comparison evaluations

baseline_pass_rate property

baseline_pass_rate: float

Get the baseline workflow pass rate (0.0 to 1.0)

baseline_uid property

baseline_uid: str

Get the baseline workflow unique identifier

comparison_pass_rate property

comparison_pass_rate: float

Get the comparison workflow pass rate (0.0 to 1.0)

comparison_uid property

comparison_uid: str

Get the comparison workflow unique identifier

is_regression property

is_regression: bool

Check if this workflow shows a significant regression

pass_rate_delta property

pass_rate_delta: float

Get the change in pass rate (positive = improvement, negative = regression)

task_comparisons property

task_comparisons: List[TaskComparison]

Get detailed task-by-task comparisons for this workflow
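The pass-rate properties above relate arithmetically: a positive delta means the comparison run improved on the baseline. A small illustrative sketch (the rates are made up, and the naive negative-delta check is only an approximation; is_regression applies the library's own significance criterion):

```python
# Hypothetical rates mirroring baseline_pass_rate / comparison_pass_rate.
baseline_pass_rate = 0.90
comparison_pass_rate = 0.75

# pass_rate_delta: positive = improvement, negative = regression.
pass_rate_delta = comparison_pass_rate - baseline_pass_rate

# Naive check for illustration only; the real is_regression property
# decides whether the drop is *significant*.
looks_like_regression = pass_rate_delta < 0
print(round(pass_rate_delta, 2), looks_like_regression)
```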

WorkflowResult

events property

events: List[TaskEvent]

The events that occurred during the workflow execution. This is a list of TaskEvent objects, each containing information about the event such as the task ID, status, and timestamp.

result property

result: Optional[Any]

The result from the last task of the workflow if it has been executed, otherwise None.

tasks property

tasks: Dict[str, WorkflowTask]

The tasks in the workflow result.

WorkflowTask

Python-specific task interface for Task objects and results

agent_id property

agent_id: str

The ID of the agent that will execute the task.

dependencies property

dependencies: List[str]

The dependencies of the task.

id property

id: str

The ID of the task.

prompt property

prompt: Prompt

The prompt to use for the task.

result property

result: Optional[Any]

The result of the task if it has been executed, otherwise None.

status property

status: TaskStatus

The status of the task.

flush_tracer

flush_tracer() -> None

Force flush the tracer's exporter.

Source code in python/scouter/stubs.pyi
def flush_tracer() -> None:
    """Force flush the tracer's exporter."""

get_current_active_span

get_current_active_span() -> ActiveSpan

Get the current active span.

Returns:

Name Type Description
ActiveSpan ActiveSpan

The current active span. Raises an error if no active span exists.

Source code in python/scouter/stubs.pyi
def get_current_active_span() -> ActiveSpan:
    """Get the current active span.

    Returns:
        ActiveSpan:
            The current active span.
            Raises an error if no active span exists.
    """

get_function_type

get_function_type(func: Callable[..., Any]) -> FunctionType

Determine the function type (sync, async, generator, async generator).

Parameters:

Name Type Description Default
func Callable[..., Any]

The function to analyze.

required
Source code in python/scouter/stubs.pyi
def get_function_type(func: Callable[..., Any]) -> "FunctionType":
    """Determine the function type (sync, async, generator, async generator).

    Args:
        func (Callable[..., Any]):
            The function to analyze.
    """

get_tracing_headers_from_current_span

get_tracing_headers_from_current_span() -> Dict[str, str]

Get tracing headers from the current active span and global propagator.

Returns:

Type Description
Dict[str, str]

Dict[str, str]: A dictionary of tracing headers.

Source code in python/scouter/stubs.pyi
def get_tracing_headers_from_current_span() -> Dict[str, str]:
    """Get tracing headers from the current active span and global propagator.

    Returns:
        Dict[str, str]:
            A dictionary of tracing headers.
    """

init_tracer

init_tracer(
    service_name: str = "scouter_service",
    scope: str = "scouter.tracer.{version}",
    transport_config: Optional[
        HttpConfig
        | KafkaConfig
        | RabbitMQConfig
        | RedisConfig
        | GrpcConfig
    ] = None,
    exporter: Optional[
        HttpSpanExporter
        | GrpcSpanExporter
        | StdoutSpanExporter
        | TestSpanExporter
    ] = None,
    batch_config: Optional[BatchConfig] = None,
    sample_ratio: Optional[float] = None,
) -> None

Initialize the tracer for a service with dual export capability.

╔════════════════════════════════════════════╗
║          DUAL EXPORT ARCHITECTURE          ║
╠════════════════════════════════════════════╣
║                                            ║
║  Your Application                          ║
║       │                                    ║
║       │  init_tracer()                     ║
║       │                                    ║
║       ├──────────────────┬                 ║
║       │                  │                 ║
║       ▼                  ▼                 ║
║  ┌─────────────┐   ┌──────────────┐        ║
║  │  Transport  │   │   Optional   │        ║
║  │   to        │   │     OTEL     │        ║
║  │  Scouter    │   │  Exporter    │        ║
║  │  (Required) │   │              │        ║
║  └──────┬──────┘   └──────┬───────┘        ║
║         │                 │                ║
║         │                 │                ║
║    ┌────▼────┐       ┌────▼────┐           ║
║    │ Scouter │       │  OTEL   │           ║
║    │ Server  │       │Collector│           ║
║    └─────────┘       └─────────┘           ║
║                                            ║
╚════════════════════════════════════════════╝
Configuration Overview: This function sets up a service tracer with mandatory export to Scouter and optional export to OpenTelemetry-compatible backends.

┌─ REQUIRED: Scouter Export ────────────────────────────────────────────────┐
│                                                                           │
│  All spans are ALWAYS exported to Scouter via transport_config:           │
│    • HttpConfig    → HTTP endpoint (default)                              │
│    • GrpcConfig    → gRPC endpoint                                        │
│    • KafkaConfig   → Kafka topic                                          │
│    • RabbitMQConfig→ RabbitMQ queue                                       │
│    • RedisConfig   → Redis stream/channel                                 │
│                                                                           │
└───────────────────────────────────────────────────────────────────────────┘

┌─ OPTIONAL: OTEL Export ───────────────────────────────────────────────────┐
│                                                                           │
│  Optionally export spans to external OTEL-compatible systems:             │
│    • HttpSpanExporter   → OTEL Collector (HTTP)                           │
│    • GrpcSpanExporter   → OTEL Collector (gRPC)                           │
│    • StdoutSpanExporter → Console output (debugging)                      │
│    • TestSpanExporter   → In-memory (testing)                             │
│                                                                           │
│  If None: Only Scouter export is active (NoOpExporter)                    │
│                                                                           │
└───────────────────────────────────────────────────────────────────────────┘

Parameters:

Name Type Description Default
service_name str

The required name of the service this tracer is associated with. This is typically a logical identifier for the application or component. Default: "scouter_service"

'scouter_service'
scope str

The scope for the tracer. Used to differentiate tracers by version or environment. Default: "scouter.tracer.{version}"

'scouter.tracer.{version}'
transport_config HttpConfig | GrpcConfig | KafkaConfig | RabbitMQConfig | RedisConfig | None

Configuration for sending spans to Scouter. If None, defaults to HttpConfig.

Supported transports:

• HttpConfig : Export to Scouter via HTTP
• GrpcConfig : Export to Scouter via gRPC
• KafkaConfig : Export to Scouter via Kafka
• RabbitMQConfig : Export to Scouter via RabbitMQ
• RedisConfig : Export to Scouter via Redis

None
exporter HttpSpanExporter | GrpcSpanExporter | StdoutSpanExporter | TestSpanExporter | None

Optional secondary exporter for OpenTelemetry-compatible backends. If None, spans are ONLY sent to Scouter (NoOpExporter used internally).

Available exporters:

• HttpSpanExporter : Send to OTEL Collector via HTTP
• GrpcSpanExporter : Send to OTEL Collector via gRPC
• StdoutSpanExporter : Write to stdout (debugging)
• TestSpanExporter : Collect in-memory (testing)

None
batch_config BatchConfig | None

Configuration for batch span export. If provided, spans are queued and exported in batches. If None and the exporter supports batching, default batch settings apply.

Batching improves performance for high-throughput applications.

None
sample_ratio float | None

Sampling ratio for tracing, a value between 0.0 and 1.0. Provided values are clamped to the [0.0, 1.0] range. If None, all spans are sampled (no spans are dropped).

None
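The documented clamping and head-based sampling decision can be sketched as follows; this is an illustration of the stated behavior, not the library's internal sampler:

```python
import random

def should_sample(sample_ratio, rng=random.random):
    """Illustrative head-based sampling decision.

    None means no sampling: every span is kept. Other values are clamped
    to [0.0, 1.0] before the random comparison, matching the documented
    behavior of init_tracer's sample_ratio.
    """
    if sample_ratio is None:
        return True
    ratio = max(0.0, min(1.0, sample_ratio))
    return rng() < ratio

print(should_sample(None), should_sample(2.5), should_sample(-1.0))
# → True True False (2.5 clamps to 1.0, -1.0 clamps to 0.0)
```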

Examples:

Basic setup (Scouter only via HTTP):

>>> init_tracer(service_name="my-service")

Scouter via Kafka + OTEL Collector:

>>> init_tracer(
...     service_name="my-service",
...     transport_config=KafkaConfig(brokers="kafka:9092"),
...     exporter=HttpSpanExporter(
...         export_config=OtelExportConfig(
...             endpoint="http://otel-collector:4318"
...         )
...     )
... )

Scouter via gRPC + stdout debugging:

>>> init_tracer(
...     service_name="my-service",
...     transport_config=GrpcConfig(server_uri="grpc://scouter:50051"),
...     exporter=StdoutSpanExporter()
... )

Notes

• Spans are ALWAYS exported to Scouter via transport_config
• OTEL export via exporter is completely optional
• Both exports happen in parallel without blocking each other
• Use batch_config to optimize performance for high-volume tracing

See Also
  • HttpConfig, GrpcConfig, KafkaConfig, RabbitMQConfig, RedisConfig
  • HttpSpanExporter, GrpcSpanExporter, StdoutSpanExporter, TestSpanExporter
  • BatchConfig
Source code in python/scouter/stubs.pyi
def init_tracer(
    service_name: str = "scouter_service",
    scope: str = "scouter.tracer.{version}",
    transport_config: Optional[HttpConfig | KafkaConfig | RabbitMQConfig | RedisConfig | GrpcConfig] = None,
    exporter: Optional[HttpSpanExporter | GrpcSpanExporter | StdoutSpanExporter | TestSpanExporter] = None,
    batch_config: Optional[BatchConfig] = None,
    sample_ratio: Optional[float] = None,
) -> None:
    """
    Initialize the tracer for a service with dual export capability.
    ```
    ╔════════════════════════════════════════════╗
    ║          DUAL EXPORT ARCHITECTURE          ║
    ╠════════════════════════════════════════════╣
    ║                                            ║
    ║  Your Application                          ║
    ║       │                                    ║
    ║       │  init_tracer()                     ║
    ║       │                                    ║
    ║       ├──────────────────┬                 ║
    ║       │                  │                 ║
    ║       ▼                  ▼                 ║
    ║  ┌─────────────┐   ┌──────────────┐        ║
    ║  │  Transport  │   │   Optional   │        ║
    ║  │   to        │   │     OTEL     │        ║
    ║  │  Scouter    │   │  Exporter    │        ║
    ║  │  (Required) │   │              │        ║
    ║  └──────┬──────┘   └──────┬───────┘        ║
    ║         │                 │                ║
    ║         │                 │                ║
    ║    ┌────▼────┐       ┌────▼────┐           ║
    ║    │ Scouter │       │  OTEL   │           ║
    ║    │ Server  │       │Collector│           ║
    ║    └─────────┘       └─────────┘           ║
    ║                                            ║
    ╚════════════════════════════════════════════╝
    ```
    Configuration Overview:
        This function sets up a service tracer with **mandatory** export to Scouter
        and **optional** export to OpenTelemetry-compatible backends.

    ```
    ┌─ REQUIRED: Scouter Export ────────────────────────────────────────────────┐
    │                                                                           │
    │  All spans are ALWAYS exported to Scouter via transport_config:           │
    │    • HttpConfig    → HTTP endpoint (default)                              │
    │    • GrpcConfig    → gRPC endpoint                                        │
    │    • KafkaConfig   → Kafka topic                                          │
    │    • RabbitMQConfig→ RabbitMQ queue                                       │
    │    • RedisConfig   → Redis stream/channel                                 │
    │                                                                           │
    └───────────────────────────────────────────────────────────────────────────┘

    ┌─ OPTIONAL: OTEL Export ───────────────────────────────────────────────────┐
    │                                                                           │
    │  Optionally export spans to external OTEL-compatible systems:             │
    │    • HttpSpanExporter   → OTEL Collector (HTTP)                           │
    │    • GrpcSpanExporter   → OTEL Collector (gRPC)                           │
    │    • StdoutSpanExporter → Console output (debugging)                      │
    │    • TestSpanExporter   → In-memory (testing)                             │
    │                                                                           │
    │  If None: Only Scouter export is active (NoOpExporter)                    │
    │                                                                           │
    └───────────────────────────────────────────────────────────────────────────┘
    ```

    Args:
        service_name (str):
            The **required** name of the service this tracer is associated with.
            This is typically a logical identifier for the application or component.
            Default: "scouter_service"

        scope (str):
            The scope for the tracer. Used to differentiate tracers by version
            or environment.
            Default: "scouter.tracer.{version}"

        transport_config (HttpConfig | GrpcConfig | KafkaConfig | RabbitMQConfig | RedisConfig | None):

            Configuration for sending spans to Scouter. If None, defaults to HttpConfig.

            Supported transports:
                • HttpConfig     : Export to Scouter via HTTP
                • GrpcConfig     : Export to Scouter via gRPC
                • KafkaConfig    : Export to Scouter via Kafka
                • RabbitMQConfig : Export to Scouter via RabbitMQ
                • RedisConfig    : Export to Scouter via Redis

        exporter (HttpSpanExporter | GrpcSpanExporter | StdoutSpanExporter | TestSpanExporter | None):

            Optional secondary exporter for OpenTelemetry-compatible backends.
            If None, spans are ONLY sent to Scouter (NoOpExporter used internally).

            Available exporters:
                • HttpSpanExporter   : Send to OTEL Collector via HTTP
                • GrpcSpanExporter   : Send to OTEL Collector via gRPC
                • StdoutSpanExporter : Write to stdout (debugging)
                • TestSpanExporter   : Collect in-memory (testing)

        batch_config (BatchConfig | None):
            Configuration for batch span export. If provided, spans are queued
            and exported in batches. If None and the exporter supports batching,
            default batch settings apply.

            Batching improves performance for high-throughput applications.

        sample_ratio (float | None):
            Sampling ratio for tracing. A value between 0.0 and 1.0.
            All provided values are clamped between 0.0 and 1.0.
            If None, all spans are sampled (no sampling).

    Examples:
        Basic setup (Scouter only via HTTP):
            >>> init_tracer(service_name="my-service")

        Scouter via Kafka + OTEL Collector:
            >>> init_tracer(
            ...     service_name="my-service",
            ...     transport_config=KafkaConfig(brokers="kafka:9092"),
            ...     exporter=HttpSpanExporter(
            ...         export_config=OtelExportConfig(
            ...             endpoint="http://otel-collector:4318"
            ...         )
            ...     )
            ... )

        Scouter via gRPC + stdout debugging:
            >>> init_tracer(
            ...     service_name="my-service",
            ...     transport_config=GrpcConfig(server_uri="grpc://scouter:50051"),
            ...     exporter=StdoutSpanExporter()
            ... )

    Notes:
        • Spans are ALWAYS exported to Scouter via transport_config
        • OTEL export via exporter is completely optional
        • Both exports happen in parallel without blocking each other
        • Use batch_config to optimize performance for high-volume tracing

    See Also:
        - HttpConfig, GrpcConfig, KafkaConfig, RabbitMQConfig, RedisConfig
        - HttpSpanExporter, GrpcSpanExporter, StdoutSpanExporter, TestSpanExporter
        - BatchConfig
    """

shutdown_tracer

shutdown_tracer() -> None

Shutdown the tracer and flush any remaining spans.

Source code in python/scouter/stubs.pyi
def shutdown_tracer() -> None:
    """Shutdown the tracer and flush any remaining spans."""