
API

ActiveSpan

Represents an active tracing span.

context_id property

context_id: str

Get the context ID of the active span.

add_event

add_event(name: str, attributes: Any) -> None

Add an event to the active span.

Parameters:

Name Type Description Default
name str

The name of the event.

required
attributes Any

Optional attributes for the event. Can be any serializable type or pydantic BaseModel.

required
Source code in python/scouter/stubs.pyi
def add_event(self, name: str, attributes: Any) -> None:
    """Add an event to the active span.

    Args:
        name (str):
            The name of the event.
        attributes (Any):
            Optional attributes for the event.
            Can be any serializable type or pydantic `BaseModel`.
    """

set_attribute

set_attribute(key: str, value: SerializedType) -> None

Set an attribute on the active span.

Parameters:

Name Type Description Default
key str

The attribute key.

required
value SerializedType

The attribute value.

required
Source code in python/scouter/stubs.pyi
def set_attribute(self, key: str, value: SerializedType) -> None:
    """Set an attribute on the active span.

    Args:
        key (str):
            The attribute key.
        value (SerializedType):
            The attribute value.
    """

set_input

set_input(input: Any, max_length: int = 1000) -> None

Set the input for the active span.

Parameters:

Name Type Description Default
input Any

The input to set. Can be any serializable primitive type (str, int, float, bool, list, dict), or a pydantic BaseModel.

required
max_length int

The maximum length for a given string input. Defaults to 1000.

1000
Source code in python/scouter/stubs.pyi
def set_input(self, input: Any, max_length: int = 1000) -> None:
    """Set the input for the active span.

    Args:
        input (Any):
            The input to set. Can be any serializable primitive type (str, int, float, bool, list, dict),
            or a pydantic `BaseModel`.
        max_length (int):
            The maximum length for a given string input. Defaults to 1000.
    """
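The `max_length` behavior can be illustrated with a plain-Python sketch (the truncation rule here is an assumption for illustration; the library's exact handling of non-string inputs may differ):

```python
from typing import Any

def truncate_input(value: Any, max_length: int = 1000) -> Any:
    """Illustrative sketch: clamp string inputs to max_length characters.

    Non-string values pass through unchanged (assumed behavior).
    """
    if isinstance(value, str) and len(value) > max_length:
        return value[:max_length]
    return value

print(len(truncate_input("x" * 2000)))  # 1000
```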

set_output

set_output(output: Any, max_length: int = 1000) -> None

Set the output for the active span.

Parameters:

Name Type Description Default
output Any

The output to set. Can be any serializable primitive type (str, int, float, bool, list, dict), or a pydantic BaseModel.

required
max_length int

The maximum length for a given string output. Defaults to 1000.

1000
Source code in python/scouter/stubs.pyi
def set_output(self, output: Any, max_length: int = 1000) -> None:
    """Set the output for the active span.

    Args:
        output (Any):
            The output to set. Can be any serializable primitive type (str, int, float, bool, list, dict),
            or a pydantic `BaseModel`.
        max_length (int):
            The maximum length for a given string output. Defaults to 1000.

    """

set_status

set_status(
    status: str, description: Optional[str] = None
) -> None

Set the status of the active span.

Parameters:

Name Type Description Default
status str

The status code (e.g., "OK", "ERROR").

required
description Optional[str]

Optional description for the status.

None
Source code in python/scouter/stubs.pyi
def set_status(self, status: str, description: Optional[str] = None) -> None:
    """Set the status of the active span.

    Args:
        status (str):
            The status code (e.g., "OK", "ERROR").
        description (Optional[str]):
            Optional description for the status.
    """
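Taken together, the `ActiveSpan` methods above form a small recording surface: events, attributes, and a final status. The stand-in below mimics that surface in plain Python for experimentation; it is an illustrative sketch, not the scouter implementation:

```python
from typing import Any, Optional

class FakeSpan:
    """Illustrative stand-in for ActiveSpan (not the scouter implementation)."""

    def __init__(self) -> None:
        self.events: list[tuple[str, Any]] = []
        self.attributes: dict[str, Any] = {}
        self.status: Optional[tuple[str, Optional[str]]] = None

    def add_event(self, name: str, attributes: Any) -> None:
        # Record the event name alongside its (arbitrary) attributes.
        self.events.append((name, attributes))

    def set_attribute(self, key: str, value: Any) -> None:
        self.attributes[key] = value

    def set_status(self, status: str, description: Optional[str] = None) -> None:
        self.status = (status, description)

span = FakeSpan()
span.add_event("retry", {"attempt": 2})
span.set_attribute("user.id", "abc")
span.set_status("ERROR", "upstream timeout")
```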

Agent

Agent(
    provider: Provider | str,
    system_instruction: Optional[
        str | List[str] | Message | List[Message]
    ] = None,
)

Parameters:

Name Type Description Default
provider Provider | str

The provider to use for the agent. This can be a Provider enum or a string representing the provider.

required
system_instruction Optional[str | List[str] | Message | List[Message]]

The system message to use for the agent. This can be a string, a list of strings, a Message object, or a list of Message objects. If None, no system message will be used. This is added to all tasks that the agent executes. If a given task contains its own system message, the agent's system message will be prepended to the task's system message.

None

Example:

    agent = Agent(
        provider=Provider.OpenAI,
        system_instruction="You are a helpful assistant.",
    )

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    provider: Provider | str,
    system_instruction: Optional[str | List[str] | Message | List[Message]] = None,
) -> None:
    """Create an Agent object.

    Args:
        provider (Provider | str):
            The provider to use for the agent. This can be a Provider enum or a string
            representing the provider.
        system_instruction (Optional[str | List[str] | Message | List[Message]]):
            The system message to use for the agent. This can be a string, a list of strings,
            a Message object, or a list of Message objects. If None, no system message will be used.
            This is added to all tasks that the agent executes. If a given task contains its own
            system message, the agent's system message will be prepended to the task's system message.

    Example:
    ```python
        agent = Agent(
            provider=Provider.OpenAI,
            system_instruction="You are a helpful assistant.",
        )
    ```
    """

id property

id: str

The ID of the agent. This is a random uuid7 that is generated when the agent is created.

system_instruction property

system_instruction: List[Message]

The system message to use for the agent. This is a list of Message objects.

execute_prompt

execute_prompt(
    prompt: Prompt,
    output_type: Optional[Any] = None,
    model: Optional[str] = None,
) -> AgentResponse

Execute a prompt.

Parameters:

Name Type Description Default
prompt Prompt

The prompt to execute.

required
output_type Optional[Any]

The output type to use for the task. This can either be a Pydantic BaseModel class or a supported potato_head response type such as Score.

None
model Optional[str]

The model to use for the task. If not provided, defaults to the model provided within the Prompt. If the Prompt does not have a model, an error will be raised.

None

Returns:

Name Type Description
AgentResponse AgentResponse

The response from the agent after executing the task.

Source code in python/scouter/stubs.pyi
def execute_prompt(
    self,
    prompt: Prompt,
    output_type: Optional[Any] = None,
    model: Optional[str] = None,
) -> AgentResponse:
    """Execute a prompt.

    Args:
        prompt (Prompt):
            The prompt to execute.
        output_type (Optional[Any]):
            The output type to use for the task. This can either be a Pydantic `BaseModel` class
            or a supported potato_head response type such as `Score`.
        model (Optional[str]):
            The model to use for the task. If not provided, defaults to the `model` provided within
            the Prompt. If the Prompt does not have a model, an error will be raised.

    Returns:
        AgentResponse:
            The response from the agent after executing the task.
    """

execute_task

execute_task(
    task: Task,
    output_type: Optional[Any] = None,
    model: Optional[str] = None,
) -> AgentResponse

Execute a task.

Parameters:

Name Type Description Default
task Task

The task to execute.

required
output_type Optional[Any]

The output type to use for the task. This can either be a Pydantic BaseModel class or a supported PotatoHead response type such as Score.

None
model Optional[str]

The model to use for the task. If not provided, defaults to the model provided within the Task's prompt. If the Task's prompt does not have a model, an error will be raised.

None

Returns:

Name Type Description
AgentResponse AgentResponse

The response from the agent after executing the task.

Source code in python/scouter/stubs.pyi
def execute_task(
    self,
    task: Task,
    output_type: Optional[Any] = None,
    model: Optional[str] = None,
) -> AgentResponse:
    """Execute a task.

    Args:
        task (Task):
            The task to execute.
        output_type (Optional[Any]):
            The output type to use for the task. This can either be a Pydantic `BaseModel` class
            or a supported PotatoHead response type such as `Score`.
        model (Optional[str]):
            The model to use for the task. If not provided, defaults to the `model` provided within
            the Task's prompt. If the Task's prompt does not have a model, an error will be raised.

    Returns:
        AgentResponse:
            The response from the agent after executing the task.
    """

AgentResponse

id property

id: str

The ID of the agent response.

log_probs property

log_probs: List[ResponseLogProbs]

Returns the log probabilities of the agent response if supported. This is primarily used for debugging and analysis purposes.

result property

result: Any

The result of the agent response. This can be a Pydantic BaseModel class or a supported potato_head response type such as Score. If neither is provided, the response json will be returned as a dictionary.

token_usage property

token_usage: Usage

Returns the token usage of the agent response if supported

AlertDispatchType

to_string staticmethod

to_string() -> str

Return the string representation of the alert dispatch type

Source code in python/scouter/stubs.pyi
@staticmethod
def to_string() -> str:
    """Return the string representation of the alert dispatch type"""

AlertThreshold

Enum representing different alert conditions for monitoring metrics.

Attributes:

Name Type Description
Below AlertThreshold

Indicates that an alert should be triggered when the metric is below a threshold.

Above AlertThreshold

Indicates that an alert should be triggered when the metric is above a threshold.

Outside AlertThreshold

Indicates that an alert should be triggered when the metric is outside a specified range.

from_value staticmethod

from_value(value: str) -> AlertThreshold

Creates an AlertThreshold enum member from a string value.

Parameters:

Name Type Description Default
value str

The string representation of the alert condition.

required

Returns:

Name Type Description
AlertThreshold AlertThreshold

The corresponding AlertThreshold enum member.

Source code in python/scouter/stubs.pyi
@staticmethod
def from_value(value: str) -> "AlertThreshold":
    """
    Creates an AlertThreshold enum member from a string value.

    Args:
        value (str): The string representation of the alert condition.

    Returns:
        AlertThreshold: The corresponding AlertThreshold enum member.
    """
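The three conditions can be sketched in plain Python. The `boundary` handling mirrors the `alert_threshold_value` description used elsewhere in this API; treat the exact widening rule as an assumption for illustration:

```python
from typing import Optional

def should_alert(observed: float, threshold: float, condition: str,
                 boundary: Optional[float] = None) -> bool:
    """Illustrative sketch of the Below / Above / Outside conditions.

    `boundary`, when given, widens the comparison around `threshold`
    (assumed semantics; consult the library for the exact rule).
    """
    delta = boundary or 0.0
    if condition == "Below":
        return observed < threshold - delta
    if condition == "Above":
        return observed > threshold + delta
    if condition == "Outside":
        return abs(observed - threshold) > delta
    raise ValueError(f"unknown condition: {condition}")
```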

Attribute

Represents a key-value attribute associated with a span.

AudioUrl

AudioUrl(
    url: str, kind: Literal["audio-url"] = "audio-url"
)

Parameters:

Name Type Description Default
url str

The URL of the audio.

required
kind Literal['audio-url']

The kind of the content.

'audio-url'
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    url: str,
    kind: Literal["audio-url"] = "audio-url",
) -> None:
    """Create an AudioUrl object.

    Args:
        url (str):
            The URL of the audio.
        kind (Literal["audio-url"]):
            The kind of the content.
    """

format property

format: str

The format of the audio URL.

kind property

kind: str

The kind of the content.

media_type property

media_type: str

The media type of the audio URL.

url property

url: str

The URL of the audio.

BaseModel

Bases: Protocol

Protocol for pydantic BaseModel to ensure compatibility with context

model_dump

model_dump() -> Dict[str, Any]

Dump the model as a dictionary

Source code in python/scouter/stubs.pyi
def model_dump(self) -> Dict[str, Any]:
    """Dump the model as a dictionary"""

model_dump_json

model_dump_json() -> str

Dump the model as a JSON string

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Dump the model as a JSON string"""
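Because `BaseModel` here is a `Protocol`, any object that provides these two methods is accepted; it does not need to inherit from pydantic. A minimal conforming class (the class name `Payload` is just for illustration):

```python
import json
from dataclasses import asdict, dataclass
from typing import Any, Dict

@dataclass
class Payload:
    """A plain dataclass that structurally satisfies the BaseModel protocol."""

    name: str
    score: float

    def model_dump(self) -> Dict[str, Any]:
        # Dump the model as a dictionary.
        return asdict(self)

    def model_dump_json(self) -> str:
        # Dump the model as a JSON string.
        return json.dumps(self.model_dump())

p = Payload("accuracy", 0.95)
```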

BaseTracer

BaseTracer(name: str)

Parameters:

Name Type Description Default
name str

The name of the service for tracing.

required
Source code in python/scouter/stubs.pyi
def __init__(self, name: str) -> None:
    """Initialize the BaseTracer with a service name.

    Args:
        name (str):
            The name of the service for tracing.
    """

current_span

current_span() -> ActiveSpan

Get the current active span.

Returns:

Name Type Description
ActiveSpan ActiveSpan

The current active span. Raises an error if no active span exists.

Source code in python/scouter/stubs.pyi
def current_span(self) -> ActiveSpan:
    """Get the current active span.

    Returns:
        ActiveSpan:
            The current active span.
            Raises an error if no active span exists.
    """

start_as_current_span

start_as_current_span(
    name: str,
    kind: Optional[SpanKind] = SpanKind.Internal,
    label: Optional[str] = None,
    attributes: Optional[dict[str, str]] = None,
    baggage: Optional[dict[str, str]] = None,
    tags: Optional[dict[str, str]] = None,
    parent_context_id: Optional[str] = None,
) -> ActiveSpan

Context manager to start a new span as the current span.

Parameters:

Name Type Description Default
name str

The name of the span.

required
kind Optional[SpanKind]

The kind of span (e.g., "SERVER", "CLIENT").

Internal
label Optional[str]

An optional label for the span.

None
attributes Optional[dict[str, str]]

Optional attributes to set on the span.

None
baggage Optional[dict[str, str]]

Optional baggage items to attach to the span.

None
tags Optional[dict[str, str]]

Optional tags to set on the span and trace.

None
parent_context_id Optional[str]

Optional parent span context ID.

None

Returns:

Name Type Description
ActiveSpan ActiveSpan

The newly started active span.

Source code in python/scouter/stubs.pyi
def start_as_current_span(
    self,
    name: str,
    kind: Optional[SpanKind] = SpanKind.Internal,
    label: Optional[str] = None,
    attributes: Optional[dict[str, str]] = None,
    baggage: Optional[dict[str, str]] = None,
    tags: Optional[dict[str, str]] = None,
    parent_context_id: Optional[str] = None,
) -> ActiveSpan:
    """Context manager to start a new span as the current span.

    Args:
        name (str):
            The name of the span.
        kind (Optional[SpanKind]):
            The kind of span (e.g., "SERVER", "CLIENT").
        label (Optional[str]):
            An optional label for the span.
        attributes (Optional[dict[str, str]]):
            Optional attributes to set on the span.
        baggage (Optional[dict[str, str]]):
            Optional baggage items to attach to the span.
        tags (Optional[dict[str, str]]):
            Optional tags to set on the span and trace.
        parent_context_id (Optional[str]):
            Optional parent span context ID.
    Returns:
        ActiveSpan:
            The newly started active span.
    """
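The current-span bookkeeping that `start_as_current_span` and `current_span` imply can be emulated with `contextvars`. This is an illustrative stand-in showing the context-manager pattern (parenting, restore-on-exit), not the library's implementation:

```python
from contextlib import contextmanager
from contextvars import ContextVar
from typing import Iterator, Optional

_current: ContextVar[Optional["MiniSpan"]] = ContextVar("current_span", default=None)

class MiniSpan:
    """Toy span carrying a name and a parent pointer."""

    def __init__(self, name: str, parent: Optional["MiniSpan"] = None) -> None:
        self.name = name
        self.parent = parent

@contextmanager
def start_as_current_span(name: str) -> Iterator[MiniSpan]:
    # New span is parented to whatever span is current, then becomes current.
    span = MiniSpan(name, parent=_current.get())
    token = _current.set(span)
    try:
        yield span
    finally:
        # Restore the previous current span on exit.
        _current.reset(token)

def current_span() -> MiniSpan:
    span = _current.get()
    if span is None:
        raise RuntimeError("no active span")
    return span
```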

BatchConfig

BatchConfig(
    max_queue_size: int = 2048,
    scheduled_delay_ms: int = 5000,
    max_export_batch_size: int = 512,
)

Configuration for batch exporting of spans.

Parameters:

Name Type Description Default
max_queue_size int

The maximum queue size for spans. Defaults to 2048.

2048
scheduled_delay_ms int

The delay in milliseconds between export attempts. Defaults to 5000.

5000
max_export_batch_size int

The maximum batch size for exporting spans. Defaults to 512.

512
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    max_queue_size: int = 2048,
    scheduled_delay_ms: int = 5000,
    max_export_batch_size: int = 512,
) -> None:
    """Initialize the BatchConfig.

    Args:
        max_queue_size (int):
            The maximum queue size for spans. Defaults to 2048.
        scheduled_delay_ms (int):
            The delay in milliseconds between export attempts. Defaults to 5000.
        max_export_batch_size (int):
            The maximum batch size for exporting spans. Defaults to 512.
    """
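The interplay of `max_queue_size` and `max_export_batch_size` can be sketched with a toy batcher. Drop-on-full and the synchronous `flush` are assumptions for illustration; the real exporter runs on a schedule governed by `scheduled_delay_ms`:

```python
from typing import Callable, List

class MiniBatcher:
    """Illustrative sketch: queue spans, drop when the queue is full,
    export in chunks of at most max_export_batch_size."""

    def __init__(self, export: Callable[[List[str]], None],
                 max_queue_size: int = 2048,
                 max_export_batch_size: int = 512) -> None:
        self.export = export
        self.max_queue_size = max_queue_size
        self.max_export_batch_size = max_export_batch_size
        self.queue: List[str] = []
        self.dropped = 0

    def enqueue(self, span: str) -> None:
        if len(self.queue) >= self.max_queue_size:
            self.dropped += 1  # assumed drop-on-full behavior
            return
        self.queue.append(span)

    def flush(self) -> None:
        # Export the queue in batch-sized chunks.
        while self.queue:
            batch = self.queue[: self.max_export_batch_size]
            self.queue = self.queue[self.max_export_batch_size :]
            self.export(batch)
```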

Bin

id property

id: int

Return the bin id.

lower_limit property

lower_limit: float

Return the lower limit of the bin.

proportion property

proportion: float

Return the proportion of data found in the bin.

upper_limit property

upper_limit: Optional[float]

Return the upper limit of the bin.

BinaryContent

BinaryContent(
    data: bytes, media_type: str, kind: str = "binary"
)

Parameters:

Name Type Description Default
data bytes

The binary data.

required
media_type str

The media type of the binary data.

required
kind str

The kind of the content

'binary'
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    data: bytes,
    media_type: str,
    kind: str = "binary",
) -> None:
    """Create a BinaryContent object.

    Args:
        data (bytes):
            The binary data.
        media_type (str):
            The media type of the binary data.
        kind (str):
            The kind of the content
    """

data property

data: bytes

The binary data.

format property

format: str

The format of the binary content.

kind property

kind: str

The kind of the content.

media_type property

media_type: str

The media type of the binary content.

CharStats

max_length property

max_length: int

Maximum string length

mean_length property

mean_length: float

Mean string length

median_length property

median_length: int

Median string length

min_length property

min_length: int

Minimum string length

ChatResponse

to_py

to_py() -> Any

Convert the ChatResponse to its Python representation.

Source code in python/scouter/stubs.pyi
def to_py(self) -> Any:
    """Convert the ChatResponse to its Python representation."""

CommonCrons

cron property

cron: str

Return the cron

get_next

get_next() -> str

Return the next cron time

Source code in python/scouter/stubs.pyi
def get_next(self) -> str:
    """Return the next cron time"""

CompletionTokenDetails

Details about the completion tokens used in a model response.

accepted_prediction_tokens property

accepted_prediction_tokens: int

The number of accepted prediction tokens used in the response.

audio_tokens property

audio_tokens: int

The number of audio tokens used in the response.

reasoning_tokens property

reasoning_tokens: int

The number of reasoning tokens used in the response.

rejected_prediction_tokens property

rejected_prediction_tokens: int

The number of rejected prediction tokens used in the response.

ConsoleDispatchConfig

ConsoleDispatchConfig()
Source code in python/scouter/stubs.pyi
def __init__(self):
    """Initialize alert config"""

enabled property

enabled: bool

Return whether console dispatch is enabled

CustomDriftProfile

CustomDriftProfile(
    config: CustomMetricDriftConfig,
    metrics: list[CustomMetric],
)

Parameters:

Name Type Description Default
config CustomMetricDriftConfig

The configuration for custom metric drift detection.

required
metrics list[CustomMetric]

A list of CustomMetric objects representing the metrics to be monitored.

required
Example

    config = CustomMetricDriftConfig(...)
    metrics = [
        CustomMetric("accuracy", 0.95, AlertThreshold.Below),
        CustomMetric("f1_score", 0.88, AlertThreshold.Below),
    ]
    profile = CustomDriftProfile(config, metrics)

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    config: CustomMetricDriftConfig,
    metrics: list[CustomMetric],
):
    """Initialize a CustomDriftProfile instance.

    Args:
        config (CustomMetricDriftConfig):
            The configuration for custom metric drift detection.
        metrics (list[CustomMetric]):
            A list of CustomMetric objects representing the metrics to be monitored.

    Example:
        config = CustomMetricDriftConfig(...)
        metrics = [
            CustomMetric("accuracy", 0.95, AlertThreshold.Below),
            CustomMetric("f1_score", 0.88, AlertThreshold.Below),
        ]
        profile = CustomDriftProfile(config, metrics)
    """

config property

config: CustomMetricDriftConfig

Return the drift config

custom_metrics property

custom_metrics: list[CustomMetric]

Return custom metric objects that were used to create the drift profile

metrics property

metrics: dict[str, float]

Return custom metrics and their corresponding values

scouter_version property

scouter_version: str

Return scouter version used to create DriftProfile

from_file staticmethod

from_file(path: Path) -> CustomDriftProfile

Load drift profile from file

Parameters:

Name Type Description Default
path Path

Path to the file

required
Source code in python/scouter/stubs.pyi
@staticmethod
def from_file(path: Path) -> "CustomDriftProfile":
    """Load drift profile from file

    Args:
        path: Path to the file
    """

model_dump

model_dump() -> Dict[str, Any]

Return dictionary representation of drift profile

Source code in python/scouter/stubs.pyi
def model_dump(self) -> Dict[str, Any]:
    """Return dictionary representation of drift profile"""

model_dump_json

model_dump_json() -> str

Return json representation of drift profile

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return json representation of drift profile"""

model_validate staticmethod

model_validate(data: Dict[str, Any]) -> CustomDriftProfile

Load drift profile from dictionary

Parameters:

Name Type Description Default
data Dict[str, Any]

DriftProfile dictionary

required
Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate(data: Dict[str, Any]) -> "CustomDriftProfile":
    """Load drift profile from dictionary

    Args:
        data:
            DriftProfile dictionary
    """

model_validate_json staticmethod

model_validate_json(json_string: str) -> CustomDriftProfile

Load drift profile from json

Parameters:

Name Type Description Default
json_string str

JSON string representation of the drift profile

required
Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate_json(json_string: str) -> "CustomDriftProfile":
    """Load drift profile from json

    Args:
        json_string:
            JSON string representation of the drift profile

    """

save_to_json

save_to_json(path: Optional[Path] = None) -> Path

Save drift profile to json file

Parameters:

Name Type Description Default
path Optional[Path]

Optional path to save the drift profile. If None, outputs to custom_drift_profile.json

None

Returns:

Type Description
Path

Path to the saved json file

Source code in python/scouter/stubs.pyi
def save_to_json(self, path: Optional[Path] = None) -> Path:
    """Save drift profile to json file

    Args:
        path:
            Optional path to save the drift profile. If None, outputs to `custom_drift_profile.json`

    Returns:
        Path to the saved json file
    """

update_config_args

update_config_args(
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    alert_config: Optional[CustomMetricAlertConfig] = None,
) -> None

Inplace operation that updates config args

Parameters:

Name Type Description Default
space Optional[str]

Model space

None
name Optional[str]

Model name

None
version Optional[str]

Model version

None
alert_config Optional[CustomMetricAlertConfig]

Custom metric alert configuration

None

Returns:

Type Description
None

None

Source code in python/scouter/stubs.pyi
def update_config_args(
    self,
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    alert_config: Optional[CustomMetricAlertConfig] = None,
) -> None:
    """Inplace operation that updates config args

    Args:
        space (Optional[str]):
            Model space
        name (Optional[str]):
            Model name
        version (Optional[str]):
            Model version
        alert_config (Optional[CustomMetricAlertConfig]):
            Custom metric alert configuration

    Returns:
        None
    """

CustomMetric

CustomMetric(
    name: str,
    value: float,
    alert_threshold: AlertThreshold,
    alert_threshold_value: Optional[float] = None,
)

This class represents a custom metric that uses comparison-based alerting. It applies an alert condition to a single metric value.

Parameters:

Name Type Description Default
name str

The name of the metric being monitored. This should be a descriptive identifier for the metric.

required
value float

The current value of the metric.

required
alert_threshold AlertThreshold

The condition used to determine when an alert should be triggered.

required
alert_threshold_value Optional[float]

The threshold or boundary value used in conjunction with the alert_threshold. If supplied, this value will be added or subtracted from the provided metric value to determine if an alert should be triggered.

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    name: str,
    value: float,
    alert_threshold: AlertThreshold,
    alert_threshold_value: Optional[float] = None,
):
    """
    Initialize a custom metric for alerting.

    This class represents a custom metric that uses comparison-based alerting. It applies
    an alert condition to a single metric value.

    Args:
        name (str): The name of the metric being monitored. This should be a
            descriptive identifier for the metric.
        value (float): The current value of the metric.
        alert_threshold (AlertThreshold):
            The condition used to determine when an alert should be triggered.
        alert_threshold_value (Optional[float]):
            The threshold or boundary value used in conjunction with the alert_threshold.
            If supplied, this value will be added or subtracted from the provided metric value to
            determine if an alert should be triggered.

    """

alert_condition property writable

alert_condition: CustomMetricAlertCondition

Return the alert_condition

alert_threshold property

alert_threshold: AlertThreshold

Return the alert_threshold

alert_threshold_value property

alert_threshold_value: Optional[float]

Return the alert_threshold_value

name property writable

name: str

Return the metric name

value property writable

value: float

Return the metric value

CustomMetricAlertCondition

CustomMetricAlertCondition(
    alert_threshold: AlertThreshold,
    alert_threshold_value: Optional[float],
)
Parameters:

Name Type Description Default
alert_threshold AlertThreshold

The condition that determines when an alert should be triggered. This could be comparisons like 'greater than', 'less than', 'equal to', etc.

required
alert_threshold_value Optional[float]

A numerical boundary used in conjunction with the alert_threshold. This can be None for certain types of comparisons that don't require a fixed boundary.

required

Example:

    alert_threshold = CustomMetricAlertCondition(AlertThreshold.Below, 2.0)

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    alert_threshold: AlertThreshold,
    alert_threshold_value: Optional[float],
):
    """Initialize a CustomMetricAlertCondition instance.
    Args:
        alert_threshold (AlertThreshold): The condition that determines when an alert
            should be triggered. This could be comparisons like 'greater than',
            'less than', 'equal to', etc.
        alert_threshold_value (Optional[float], optional): A numerical boundary used in
            conjunction with the alert_threshold. This can be None for certain
            types of comparisons that don't require a fixed boundary.
    Example:
        alert_threshold = CustomMetricAlertCondition(AlertThreshold.Below, 2.0)
    """

alert_threshold property writable

alert_threshold: AlertThreshold

Return the alert_threshold

alert_threshold_value property writable

alert_threshold_value: float

Return the alert_threshold_value

CustomMetricAlertConfig

CustomMetricAlertConfig(
    dispatch_config: Optional[
        SlackDispatchConfig | OpsGenieDispatchConfig
    ] = None,
    schedule: Optional[str | CommonCrons] = None,
)

Parameters:

Name Type Description Default
dispatch_config Optional[SlackDispatchConfig | OpsGenieDispatchConfig]

Alert dispatch config. Defaults to console

None
schedule Optional[str | CommonCrons]

Schedule to run monitor. Defaults to daily at midnight

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    dispatch_config: Optional[SlackDispatchConfig | OpsGenieDispatchConfig] = None,
    schedule: Optional[str | CommonCrons] = None,
):
    """Initialize alert config

    Args:
        dispatch_config:
            Alert dispatch config. Defaults to console
        schedule:
            Schedule to run monitor. Defaults to daily at midnight

    """

alert_conditions property writable

alert_conditions: dict[str, CustomMetricAlertCondition]

Return the alert conditions that were set during metric definition

dispatch_config property

dispatch_config: DispatchConfigType

Return the dispatch config

dispatch_type property

dispatch_type: AlertDispatchType

Return the alert dispatch type

schedule property writable

schedule: str

Return the schedule

CustomMetricDriftConfig

CustomMetricDriftConfig(
    space: str = "__missing__",
    name: str = "__missing__",
    version: str = "0.1.0",
    sample_size: int = 25,
    alert_config: CustomMetricAlertConfig = CustomMetricAlertConfig(),
)
Parameters:

Name Type Description Default
space str

Model space

'__missing__'
name str

Model name

'__missing__'
version str

Model version. Defaults to 0.1.0.

'0.1.0'
sample_size int

Sample size. Defaults to 25.

25
alert_config CustomMetricAlertConfig

Custom metric alert configuration

CustomMetricAlertConfig()
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    space: str = "__missing__",
    name: str = "__missing__",
    version: str = "0.1.0",
    sample_size: int = 25,
    alert_config: CustomMetricAlertConfig = CustomMetricAlertConfig(),
):
    """Initialize drift config
    Args:
        space:
            Model space
        name:
            Model name
        version:
            Model version. Defaults to 0.1.0
        sample_size:
            Sample size
        alert_config:
            Custom metric alert configuration
    """

alert_config property writable

alert_config: CustomMetricAlertConfig

Return the alert_config

drift_type property

drift_type: DriftType

Drift type

name property writable

name: str

Model Name

space property writable

space: str

Model space

version property writable

version: str

Model version

load_from_json_file staticmethod

load_from_json_file(path: Path) -> CustomMetricDriftConfig

Load config from json file.

Parameters:

Name Type Description Default
path Path

Path to json file to load config from.

required

Source code in python/scouter/stubs.pyi
@staticmethod
def load_from_json_file(path: Path) -> "CustomMetricDriftConfig":
    """Load config from json file
    Args:
        path:
            Path to json file to load config from.
    """

model_dump_json

model_dump_json() -> str

Return the json representation of the config.

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return the json representation of the config."""

update_config_args

update_config_args(
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    alert_config: Optional[CustomMetricAlertConfig] = None,
) -> None

Inplace operation that updates config args.

Parameters:

Name Type Description Default
space Optional[str]

Model space

None
name Optional[str]

Model name

None
version Optional[str]

Model version

None
alert_config Optional[CustomMetricAlertConfig]

Custom metric alert configuration

None

Source code in python/scouter/stubs.pyi
def update_config_args(
    self,
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    alert_config: Optional[CustomMetricAlertConfig] = None,
) -> None:
    """Inplace operation that updates config args
    Args:
        space:
            Model space
        name:
            Model name
        version:
            Model version
        alert_config:
            Custom metric alert configuration
    """

CustomMetricServerRecord

CustomMetricServerRecord(
    space: str,
    name: str,
    version: str,
    metric: str,
    value: float,
)

Parameters:

Name Type Description Default
space str

Model space

required
name str

Model name

required
version str

Model version

required
metric str

Metric name

required
value float

Metric value

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    space: str,
    name: str,
    version: str,
    metric: str,
    value: float,
):
    """Initialize spc drift server record

    Args:
        space:
            Model space
        name:
            Model name
        version:
            Model version
        metric:
            Metric name
        value:
            Metric value
    """

created_at property

created_at: datetime

Return the created at timestamp.

metric property

metric: str

Return the metric name.

name property

name: str

Return the name.

space property

space: str

Return the space.

value property

value: float

Return the metric value.

version property

version: str

Return the version.

model_dump_json

model_dump_json() -> str

Return the json representation of the record.

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return the json representation of the record."""

to_dict

to_dict() -> Dict[str, str]

Return the dictionary representation of the record.

Source code in python/scouter/stubs.pyi
def to_dict(self) -> Dict[str, str]:
    """Return the dictionary representation of the record."""

DataProfile

Data profile of features

features property

features: Dict[str, FeatureProfile]

Returns dictionary of features and their data profiles

model_dump_json

model_dump_json() -> str

Return json representation of data profile

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return json representation of data profile"""

model_validate_json staticmethod

model_validate_json(json_string: str) -> DataProfile

Load Data profile from json

Parameters:

Name Type Description Default
json_string str

JSON string representation of the data profile

required
Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate_json(json_string: str) -> "DataProfile":
    """Load Data profile from json

    Args:
        json_string:
            JSON string representation of the data profile
    """

save_to_json

save_to_json(path: Optional[Path] = None) -> Path

Save data profile to json file

Parameters:

Name Type Description Default
path Optional[Path]

Optional path to save the data profile. If None, outputs to data_profile.json

None

Returns:

Type Description
Path

Path to the saved data profile

Source code in python/scouter/stubs.pyi
def save_to_json(self, path: Optional[Path] = None) -> Path:
    """Save data profile to json file

    Args:
        path:
            Optional path to save the data profile. If None, outputs to `data_profile.json`

    Returns:
        Path to the saved data profile

    """

DataProfiler

DataProfiler()
Source code in python/scouter/stubs.pyi
def __init__(self):
    """Instantiate DataProfiler class that is
    used to profile data"""

create_data_profile

create_data_profile(
    data: Any,
    data_type: Optional[ScouterDataType] = None,
    bin_size: int = 20,
    compute_correlations: bool = False,
) -> DataProfile

Create a data profile from data.

Parameters:

Name Type Description Default
data Any

Data to create a data profile from. Data can be a numpy array, a polars dataframe or pandas dataframe.

Data is expected to not contain any missing values, NaNs or infinities

These types are incompatible with computing quantiles, histograms, and correlations. These values must be removed or imputed.

required
data_type Optional[ScouterDataType]

Optional data type. Inferred from data if not provided.

None
bin_size int

Optional bin size for histograms. Defaults to 20 bins.

20
compute_correlations bool

Whether to compute correlations or not.

False

Returns:

Type Description
DataProfile

DataProfile

Source code in python/scouter/stubs.pyi
def create_data_profile(
    self,
    data: Any,
    data_type: Optional[ScouterDataType] = None,
    bin_size: int = 20,
    compute_correlations: bool = False,
) -> DataProfile:
    """Create a data profile from data.

    Args:
        data:
            Data to create a data profile from. Data can be a numpy array,
            a polars dataframe or pandas dataframe.

            **Data is expected to not contain any missing values, NaNs or infinities**

            These types are incompatible with computing
            quantiles, histograms, and correlations. These values must be removed or imputed.

        data_type:
            Optional data type. Inferred from data if not provided.
        bin_size:
            Optional bin size for histograms. Defaults to 20 bins.
        compute_correlations:
            Whether to compute correlations or not.

    Returns:
        DataProfile
    """

Distinct

count property

count: int

total unique value counts

percent property

percent: float

percent value uniqueness

Doane

Doane()

For more information, please see: https://en.wikipedia.org/wiki/Histogram

Source code in python/scouter/stubs.pyi
def __init__(self):
    """Use the Doane equal-width method.

    For more information, please see: https://en.wikipedia.org/wiki/Histogram
    """

DocumentUrl

DocumentUrl(
    url: str, kind: Literal["document-url"] = "document-url"
)

Parameters:

Name Type Description Default
url str

The URL of the document.

required
kind Literal['document-url']

The kind of the content.

'document-url'
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    url: str,
    kind: Literal["document-url"] = "document-url",
) -> None:
    """Create a DocumentUrl object.

    Args:
        url (str):
            The URL of the document.
        kind (Literal["document-url"]):
            The kind of the content.
    """

format property

format: str

The format of the document URL.

kind property

kind: str

The kind of the content.

media_type property

media_type: str

The media type of the document URL.

url property

url: str

The URL of the document.

DriftAlertRequest

DriftAlertRequest(
    name: str,
    space: str,
    version: str,
    active: bool = False,
    limit_datetime: Optional[datetime] = None,
    limit: Optional[int] = None,
)

Parameters:

Name Type Description Default
name str

Name

required
space str

Space

required
version str

Version

required
active bool

Whether to get active alerts only

False
limit_datetime Optional[datetime]

Limit datetime for alerts

None
limit Optional[int]

Limit for number of alerts to return

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    name: str,
    space: str,
    version: str,
    active: bool = False,
    limit_datetime: Optional[datetime] = None,
    limit: Optional[int] = None,
) -> None:
    """Initialize drift alert request

    Args:
        name:
            Name
        space:
            Space
        version:
            Version
        active:
            Whether to get active alerts only
        limit_datetime:
            Limit datetime for alerts
        limit:
            Limit for number of alerts to return
    """

DriftRequest

DriftRequest(
    name: str,
    space: str,
    version: str,
    time_interval: TimeInterval,
    max_data_points: int,
    drift_type: DriftType,
)

Parameters:

Name Type Description Default
name str

Model name

required
space str

Model space

required
version str

Model version

required
time_interval TimeInterval

Time window for drift request

required
max_data_points int

Maximum data points to return

required
drift_type DriftType

Drift type for request

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    name: str,
    space: str,
    version: str,
    time_interval: TimeInterval,
    max_data_points: int,
    drift_type: DriftType,
) -> None:
    """Initialize drift request

    Args:
        name:
            Model name
        space:
            Model space
        version:
            Model version
        time_interval:
            Time window for drift request
        max_data_points:
            Maximum data points to return
        drift_type:
            Drift type for request
    """

Drifter

Drifter()
Source code in python/scouter/stubs.pyi
def __init__(self) -> None:
    """Instantiate Rust Drifter class that is
    used to create monitoring profiles and compute drifts.
    """

compute_drift

compute_drift(
    data: Any,
    drift_profile: SpcDriftProfile,
    data_type: Optional[ScouterDataType] = None,
) -> SpcDriftMap
compute_drift(
    data: Any,
    drift_profile: PsiDriftProfile,
    data_type: Optional[ScouterDataType] = None,
) -> PsiDriftMap
compute_drift(
    data: Union[LLMRecord, List[LLMRecord]],
    drift_profile: LLMDriftProfile,
    data_type: Optional[ScouterDataType] = None,
) -> LLMDriftMap
compute_drift(
    data: Any,
    drift_profile: Union[
        SpcDriftProfile, PsiDriftProfile, LLMDriftProfile
    ],
    data_type: Optional[ScouterDataType] = None,
) -> Union[SpcDriftMap, PsiDriftMap, LLMDriftMap]

Create a drift map from data.

Parameters:

Name Type Description Default
data Any

Data to create a data profile from. Data can be a numpy array, a polars dataframe or a pandas dataframe.

required
drift_profile Union[SpcDriftProfile, PsiDriftProfile, LLMDriftProfile]

Drift profile to use to compute drift map

required
data_type Optional[ScouterDataType]

Optional data type. Inferred from data if not provided.

None

Returns:

Type Description
Union[SpcDriftMap, PsiDriftMap, LLMDriftMap]

SpcDriftMap, PsiDriftMap or LLMDriftMap

Source code in python/scouter/stubs.pyi
def compute_drift(  # type: ignore
    self,
    data: Any,
    drift_profile: Union[SpcDriftProfile, PsiDriftProfile, LLMDriftProfile],
    data_type: Optional[ScouterDataType] = None,
) -> Union[SpcDriftMap, PsiDriftMap, LLMDriftMap]:
    """Create a drift map from data.

    Args:
        data:
            Data to create a data profile from. Data can be a numpy array,
            a polars dataframe or a pandas dataframe.
        drift_profile:
            Drift profile to use to compute drift map
        data_type:
            Optional data type. Inferred from data if not provided.

    Returns:
        SpcDriftMap, PsiDriftMap or LLMDriftMap
    """

create_drift_profile

create_drift_profile(
    data: Any,
    config: SpcDriftConfig,
    data_type: Optional[ScouterDataType] = None,
) -> SpcDriftProfile
create_drift_profile(
    data: Any, data_type: Optional[ScouterDataType] = None
) -> SpcDriftProfile
create_drift_profile(
    data: Any,
    config: PsiDriftConfig,
    data_type: Optional[ScouterDataType] = None,
) -> PsiDriftProfile
create_drift_profile(
    data: Union[CustomMetric, List[CustomMetric]],
    config: CustomMetricDriftConfig,
    data_type: Optional[ScouterDataType] = None,
) -> CustomDriftProfile
create_drift_profile(
    data: Any,
    config: Optional[
        Union[
            SpcDriftConfig,
            PsiDriftConfig,
            CustomMetricDriftConfig,
        ]
    ] = None,
    data_type: Optional[ScouterDataType] = None,
) -> Union[
    SpcDriftProfile, PsiDriftProfile, CustomDriftProfile
]

Create a drift profile from data.

Parameters:

Name Type Description Default
data Any

Data to create a data profile from. Data can be a numpy array, a polars dataframe, pandas dataframe or a list of CustomMetric if creating a custom metric profile.

Data is expected to not contain any missing values, NaNs or infinities

required
config Optional[Union[SpcDriftConfig, PsiDriftConfig, CustomMetricDriftConfig]]

Drift config that will be used for monitoring

None
data_type Optional[ScouterDataType]

Optional data type. Inferred from data if not provided.

None

Returns:

Type Description
Union[SpcDriftProfile, PsiDriftProfile, CustomDriftProfile]

SpcDriftProfile, PsiDriftProfile or CustomDriftProfile

Source code in python/scouter/stubs.pyi
def create_drift_profile(  # type: ignore
    self,
    data: Any,
    config: Optional[Union[SpcDriftConfig, PsiDriftConfig, CustomMetricDriftConfig]] = None,
    data_type: Optional[ScouterDataType] = None,
) -> Union[SpcDriftProfile, PsiDriftProfile, CustomDriftProfile]:
    """Create a drift profile from data.

    Args:
        data:
            Data to create a data profile from. Data can be a numpy array,
            a polars dataframe, pandas dataframe or a list of CustomMetric if creating
            a custom metric profile.

            **Data is expected to not contain any missing values, NaNs or infinities**

        config:
            Drift config that will be used for monitoring
        data_type:
            Optional data type. Inferred from data if not provided.

    Returns:
        SpcDriftProfile, PsiDriftProfile or CustomDriftProfile
    """

create_llm_drift_profile

create_llm_drift_profile(
    config: LLMDriftConfig,
    metrics: List[LLMDriftMetric],
    workflow: Optional[Workflow] = None,
) -> LLMDriftProfile

Initialize an LLMDriftProfile for LLM evaluation and drift detection.

LLM evaluations are run asynchronously on the scouter server.

Logic flow
  1. If only metrics are provided, a workflow will be created automatically from the metrics. In this case a prompt is required for each metric.
  2. If a workflow is provided, it will be parsed and validated for compatibility:
     - A list of metrics to evaluate workflow output must be provided
     - Metric names must correspond to the final task names in the workflow

Baseline metrics and thresholds will be extracted from the LLMDriftMetric objects.

Parameters:

Name Type Description Default
config LLMDriftConfig

The configuration for the LLM drift profile containing space, name, version, and alert settings.

required
metrics list[LLMDriftMetric]

A list of LLMDriftMetric objects representing the metrics to be monitored. Each metric defines evaluation criteria and alert thresholds.

required
workflow Optional[Workflow]

Optional custom workflow for advanced evaluation scenarios. If provided, the workflow will be validated to ensure proper parameter and response type configuration.

None

Returns:

Name Type Description
LLMDriftProfile LLMDriftProfile

Configured profile ready for LLM drift monitoring.

Raises:

Type Description
ProfileError

If workflow validation fails, metrics are empty when no workflow is provided, or if workflow tasks don't match metric names.

Examples:

Basic usage with metrics only:

>>> config = LLMDriftConfig("my_space", "my_model", "1.0")
>>> metrics = [
...     LLMDriftMetric("accuracy", 0.95, AlertThreshold.Above, 0.1, prompt),
...     LLMDriftMetric("relevance", 0.85, AlertThreshold.Below, 0.2, prompt2)
... ]
>>> profile = Drifter().create_llm_drift_profile(config, metrics)

Advanced usage with custom workflow:

>>> workflow = create_custom_workflow()  # Your custom workflow
>>> metrics = [LLMDriftMetric("final_task", 0.9, AlertThreshold.Above)]
>>> profile = Drifter().create_llm_drift_profile(config, metrics, workflow)
Note
  • When using custom workflows, ensure final tasks have Score response types
  • Initial workflow tasks must include "input" and/or "response" parameters
  • All metric names must match corresponding workflow task names
Source code in python/scouter/stubs.pyi
def create_llm_drift_profile(
    self,
    config: LLMDriftConfig,
    metrics: List[LLMDriftMetric],
    workflow: Optional[Workflow] = None,
) -> LLMDriftProfile:
    """Initialize a LLMDriftProfile for LLM evaluation and drift detection.

    LLM evaluations are run asynchronously on the scouter server.

    Logic flow:
        1. If only metrics are provided, a workflow will be created automatically
           from the metrics. In this case a prompt is required for each metric.
        2. If a workflow is provided, it will be parsed and validated for compatibility:
           - A list of metrics to evaluate workflow output must be provided
           - Metric names must correspond to the final task names in the workflow

    Baseline metrics and thresholds will be extracted from the LLMDriftMetric objects.

    Args:
        config (LLMDriftConfig):
            The configuration for the LLM drift profile containing space, name,
            version, and alert settings.
        metrics (list[LLMDriftMetric]):
            A list of LLMDriftMetric objects representing the metrics to be monitored.
            Each metric defines evaluation criteria and alert thresholds.
        workflow (Optional[Workflow]):
            Optional custom workflow for advanced evaluation scenarios. If provided,
            the workflow will be validated to ensure proper parameter and response
            type configuration.

    Returns:
        LLMDriftProfile: Configured profile ready for LLM drift monitoring.

    Raises:
        ProfileError: If workflow validation fails, metrics are empty when no
            workflow is provided, or if workflow tasks don't match metric names.

    Examples:
        Basic usage with metrics only:

        >>> config = LLMDriftConfig("my_space", "my_model", "1.0")
        >>> metrics = [
        ...     LLMDriftMetric("accuracy", 0.95, AlertThreshold.Above, 0.1, prompt),
        ...     LLMDriftMetric("relevance", 0.85, AlertThreshold.Below, 0.2, prompt2)
        ... ]
        >>> profile = Drifter().create_llm_drift_profile(config, metrics)

        Advanced usage with custom workflow:

        >>> workflow = create_custom_workflow()  # Your custom workflow
        >>> metrics = [LLMDriftMetric("final_task", 0.9, AlertThreshold.Above)]
        >>> profile = Drifter().create_llm_drift_profile(config, metrics, workflow)

    Note:
        - When using custom workflows, ensure final tasks have Score response types
        - Initial workflow tasks must include "input" and/or "response" parameters
        - All metric names must match corresponding workflow task names
    """

Embedder

Embedder(
    provider: Provider | str,
    config: Optional[
        OpenAIEmbeddingConfig | GeminiEmbeddingConfig
    ] = None,
)

Class for creating embeddings.

Parameters:

Name Type Description Default
provider Provider | str

The provider to use for the embedder. This can be a Provider enum or a string representing the provider.

required
config Optional[OpenAIEmbeddingConfig | GeminiEmbeddingConfig]

The configuration to use for the embedder. This can be a Pydantic BaseModel class representing the configuration for the provider. If no config is provided, defaults to OpenAI provider configuration.

None
Source code in python/scouter/stubs.pyi
def __init__(  # type: ignore
    self,
    provider: Provider | str,
    config: Optional[OpenAIEmbeddingConfig | GeminiEmbeddingConfig] = None,
) -> None:
    """Create an Embedder object.

    Args:
        provider (Provider | str):
            The provider to use for the embedder. This can be a Provider enum or a string
            representing the provider.
        config (Optional[OpenAIEmbeddingConfig | GeminiEmbeddingConfig]):
            The configuration to use for the embedder. This can be a Pydantic BaseModel class
            representing the configuration for the provider. If no config is provided,
            defaults to OpenAI provider configuration.
    """

embed

embed(
    input: str | List[str] | PredictRequest,
) -> (
    OpenAIEmbeddingResponse
    | GeminiEmbeddingResponse
    | PredictResponse
)

Create embeddings for input.

Parameters:

Name Type Description Default
input str | List[str] | PredictRequest

The input to embed. Type depends on provider:

- OpenAI/Gemini: str | List[str]
- Vertex: PredictRequest

required

Returns:

Type Description
OpenAIEmbeddingResponse | GeminiEmbeddingResponse | PredictResponse

Provider-specific response type: OpenAIEmbeddingResponse for OpenAI, GeminiEmbeddingResponse for Gemini, PredictResponse for Vertex.

Examples:

## OpenAI
embedder = Embedder(Provider.OpenAI)
response = embedder.embed(input="Test input")

## Gemini
embedder = Embedder(Provider.Gemini, config=GeminiEmbeddingConfig(model="gemini-embedding-001"))
response = embedder.embed(input="Test input")

## Vertex
from potato_head.google import PredictRequest
embedder = Embedder(Provider.Vertex)
response = embedder.embed(input=PredictRequest(text="Test input"))
Source code in python/scouter/stubs.pyi
def embed(
    self,
    input: str | List[str] | PredictRequest,
) -> OpenAIEmbeddingResponse | GeminiEmbeddingResponse | PredictResponse:
    """Create embeddings for input.

    Args:
        input: The input to embed. Type depends on provider:
            - OpenAI/Gemini: str | List[str]
            - Vertex: PredictRequest

    Returns:
        Provider-specific response type.
        OpenAIEmbeddingResponse for OpenAI,
        GeminiEmbeddingResponse for Gemini,
        PredictResponse for Vertex.

    Examples:
        ```python
        ## OpenAI
        embedder = Embedder(Provider.OpenAI)
        response = embedder.embed(input="Test input")

        ## Gemini
        embedder = Embedder(Provider.Gemini, config=GeminiEmbeddingConfig(model="gemini-embedding-001"))
        response = embedder.embed(input="Test input")

        ## Vertex
        from potato_head.google import PredictRequest
        embedder = Embedder(Provider.Vertex)
        response = embedder.embed(input=PredictRequest(text="Test input"))
        ```
    """

EqualWidthBinning

EqualWidthBinning(method: EqualWidthMethods = Doane())

This strategy divides the range of values into bins of equal width. Several binning rules are supported to automatically determine the appropriate number of bins based on the input distribution.

Note

Detailed explanations of each method are provided in their respective constructors or documentation.

Parameters:

Name Type Description Default
method EqualWidthMethods

Specifies how the number of bins should be determined. Options include:

- Manual(num_bins): Explicitly sets the number of bins.
- SquareRoot, Sturges, Rice, Doane, Scott, TerrellScott, FreedmanDiaconis: Rules that infer bin counts from data.

Defaults to Doane().

Doane()
Source code in python/scouter/stubs.pyi
def __init__(self, method: EqualWidthMethods = Doane()):
    """Initialize the equal-width binning configuration.

    This strategy divides the range of values into bins of equal width.
    Several binning rules are supported to automatically determine the
    appropriate number of bins based on the input distribution.

    Note:
        Detailed explanations of each method are provided in their respective
        constructors or documentation.

    Args:
        method:
            Specifies how the number of bins should be determined.
            Options include:
              - Manual(num_bins): Explicitly sets the number of bins.
              - SquareRoot, Sturges, Rice, Doane, Scott, TerrellScott,
                FreedmanDiaconis: Rules that infer bin counts from data.
            Defaults to Doane().
    """

method property writable

method: EqualWidthMethods

Specifies how the number of bins should be determined.

EvaluationConfig

EvaluationConfig(
    embedder: Optional[Embedder] = None,
    embedding_targets: Optional[List[str]] = None,
    compute_similarity: bool = False,
    cluster: bool = False,
    compute_histograms: bool = False,
)

Configuration options for LLM evaluation.

Parameters:

Name Type Description Default
embedder Optional[Embedder]

Optional Embedder instance to use for generating embeddings for similarity-based metrics. If not provided, no embeddings will be generated.

None
embedding_targets Optional[List[str]]

Optional list of context keys to generate embeddings for. If not provided, embeddings will be generated for all string fields in the record context.

None
compute_similarity bool

Whether to compute similarity between embeddings. Default is False.

False
cluster bool

Whether to perform clustering on the embeddings. Default is False.

False
compute_histograms bool

Whether to compute histograms for all calculated features (metrics, embeddings, similarities). Default is False.

False
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    embedder: Optional[Embedder] = None,
    embedding_targets: Optional[List[str]] = None,
    compute_similarity: bool = False,
    cluster: bool = False,
    compute_histograms: bool = False,
):
    """
    Initialize the EvaluationConfig with optional parameters.

    Args:
        embedder (Optional[Embedder]):
            Optional Embedder instance to use for generating embeddings for similarity-based metrics.
            If not provided, no embeddings will be generated.
        embedding_targets (Optional[List[str]]):
            Optional list of context keys to generate embeddings for. If not provided, embeddings will
            be generated for all string fields in the record context.
        compute_similarity (bool):
            Whether to compute similarity between embeddings. Default is False.
        cluster (bool):
            Whether to perform clustering on the embeddings. Default is False.
        compute_histograms (bool):
            Whether to compute histograms for all calculated features (metrics, embeddings, similarities).
            Default is False.
    """

EventDetails

duration property

duration: Optional[timedelta]

The duration of the task execution.

end_time property

end_time: Optional[datetime]

The end time of the task execution.

error property

error: Optional[str]

The error message if the task failed, otherwise None.

prompt property

prompt: Optional[Prompt]

The prompt used for the task.

response property

response: Optional[ChatResponse]

The response from the agent after executing the task.

start_time property

start_time: Optional[datetime]

The start time of the task execution.

ExportConfig

ExportConfig(
    endpoint: Optional[str],
    protocol: OtelProtocol = OtelProtocol.HttpBinary,
    timeout: Optional[int] = None,
)

Configuration for exporting spans.

Parameters:

Name Type Description Default
endpoint Optional[str]

The HTTP endpoint for exporting spans.

required
protocol OtelProtocol

The protocol to use for exporting spans. Defaults to OtelProtocol.HttpBinary.

OtelProtocol.HttpBinary
timeout Optional[int]

The timeout for HTTP requests in seconds.

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    endpoint: Optional[str],
    protocol: OtelProtocol = OtelProtocol.HttpBinary,
    timeout: Optional[int] = None,
) -> None:
    """Initialize the ExportConfig.

    Args:
        endpoint (Optional[str]):
            The HTTP endpoint for exporting spans.
        protocol (OtelProtocol):
            The protocol to use for exporting spans. Defaults to OtelProtocol.HttpBinary.
        timeout (Optional[int]):
            The timeout for HTTP requests in seconds.
    """

endpoint property

endpoint: Optional[str]

Get the HTTP endpoint for exporting spans.

protocol property

protocol: OtelProtocol

Get the protocol used for exporting spans.

timeout property

timeout: Optional[int]

Get the timeout for HTTP requests in seconds.

FeatureDrift

drift property

drift: List[float]

Return list of drift values

samples property

samples: List[float]

Return list of samples

FeatureMap

features property

features: Dict[str, Dict[str, int]]

Return the feature map.

FeatureProfile

correlations property

correlations: Optional[Dict[str, float]]

Feature correlation values

id property

id: str

Return the id.

numeric_stats property

numeric_stats: Optional[NumericStats]

Return the numeric stats.

string_stats property

string_stats: Optional[StringStats]

Return the string stats.

timestamp property

timestamp: str

Return the timestamp.

Features

Features(
    features: (
        List[QueueFeature]
        | Dict[str, Union[int, float, str]]
    )
)

Parameters:

Name Type Description Default
features List[QueueFeature] | Dict[str, Union[int, float, str]]

List of features or a dictionary of key-value pairs. If a list, each item must be an instance of Feature. If a dictionary, each key is the feature name and each value is the feature value. Supported types for values are int, float, and string.

required
Example
# Passing a list of features
features = Features(
    features=[
        Feature.int("feature_1", 1),
        Feature.float("feature_2", 2.0),
        Feature.string("feature_3", "value"),
    ]
)

# Passing a dictionary (pydantic model) of features
class MyFeatures(BaseModel):
    feature1: int
    feature2: float
    feature3: str

my_features = MyFeatures(
    feature1=1,
    feature2=2.0,
    feature3="value",
)

features = Features(my_features.model_dump())
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    features: List[QueueFeature] | Dict[str, Union[int, float, str]],
) -> None:
    """Initialize a features class

    Args:
        features:
            List of features or a dictionary of key-value pairs.
            If a list, each item must be an instance of Feature.
            If a dictionary, each key is the feature name and each value is the feature value.
            Supported types for values are int, float, and string.

    Example:
        ```python
        # Passing a list of features
        features = Features(
            features=[
                Feature.int("feature_1", 1),
                Feature.float("feature_2", 2.0),
                Feature.string("feature_3", "value"),
            ]
        )

        # Passing a dictionary (pydantic model) of features
        class MyFeatures(BaseModel):
            feature1: int
            feature2: float
            feature3: str

        my_features = MyFeatures(
            feature1=1,
            feature2=2.0,
            feature3="value",
        )

        features = Features(my_features.model_dump())
        ```
    """

entity_type property

entity_type: EntityType

Return the entity type

features property

features: List[QueueFeature]

Return the list of features

FreedmanDiaconis

FreedmanDiaconis()

For more information, please see: https://en.wikipedia.org/wiki/Histogram

Source code in python/scouter/stubs.pyi
def __init__(self):
    """Use the Freedman–Diaconis equal-width method.

    For more information, please see: https://en.wikipedia.org/wiki/Histogram
    """

FunctionType

Enumeration of function types.

GeminiEmbeddingConfig

GeminiEmbeddingConfig(
    model: Optional[str] = None,
    output_dimensionality: Optional[int] = None,
    task_type: Optional[EmbeddingTaskType | str] = None,
)

Parameters:

Name Type Description Default
model Optional[str]

The embedding model to use. If not specified, the default gemini model will be used.

None
output_dimensionality Optional[int]

The output dimensionality of the embeddings. If not specified, a default value will be used.

None
task_type Optional[EmbeddingTaskType | str]

The type of embedding task to perform. If not specified, the default gemini task type will be used.

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    model: Optional[str] = None,
    output_dimensionality: Optional[int] = None,
    task_type: Optional[EmbeddingTaskType | str] = None,
) -> None:
    """Configuration to pass to the Gemini Embedding API when creating a request


    Args:
        model (Optional[str]):
            The embedding model to use. If not specified, the default gemini model will be used.
        output_dimensionality (Optional[int]):
            The output dimensionality of the embeddings. If not specified, a default value will be used.
        task_type (Optional[EmbeddingTaskType]):
            The type of embedding task to perform. If not specified, the default gemini task type will be used.
    """

GeminiSettings

GeminiSettings(
    labels: Optional[dict[str, str]] = None,
    tool_config: Optional[ToolConfig] = None,
    generation_config: Optional[GenerationConfig] = None,
    safety_settings: Optional[list[SafetySetting]] = None,
    model_armor_config: Optional[ModelArmorConfig] = None,
    extra_body: Optional[dict] = None,
)
Reference

https://cloud.google.com/vertex-ai/generative-ai/docs/reference/rest/v1beta1/projects.locations.endpoints/generateContent

Parameters:

Name Type Description Default
labels Optional[dict[str, str]]

An optional dictionary of labels for the settings.

None
tool_config Optional[ToolConfig]

Configuration for tools like function calling and retrieval.

None
generation_config Optional[GenerationConfig]

Configuration for content generation parameters.

None
safety_settings Optional[list[SafetySetting]]

List of safety settings to apply.

None
model_armor_config Optional[ModelArmorConfig]

Configuration for model armor templates.

None
extra_body Optional[dict]

Additional configuration as a dictionary.

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    labels: Optional[dict[str, str]] = None,
    tool_config: Optional[ToolConfig] = None,
    generation_config: Optional[GenerationConfig] = None,
    safety_settings: Optional[list[SafetySetting]] = None,
    model_armor_config: Optional[ModelArmorConfig] = None,
    extra_body: Optional[dict] = None,
) -> None:
    """Settings to pass to the Gemini API when creating a request

    Reference:
        https://cloud.google.com/vertex-ai/generative-ai/docs/reference/rest/v1beta1/projects.locations.endpoints/generateContent

    Args:
        labels (Optional[dict[str, str]]):
            An optional dictionary of labels for the settings.
        tool_config (Optional[ToolConfig]):
            Configuration for tools like function calling and retrieval.
        generation_config (Optional[GenerationConfig]):
            Configuration for content generation parameters.
        safety_settings (Optional[list[SafetySetting]]):
            List of safety settings to apply.
        model_armor_config (Optional[ModelArmorConfig]):
            Configuration for model armor templates.
        extra_body (Optional[dict]):
            Additional configuration as a dictionary.
    """

GenerationConfig

GenerationConfig(
    stop_sequences: Optional[List[str]] = None,
    response_mime_type: Optional[str] = None,
    response_modalities: Optional[List[Modality]] = None,
    thinking_config: Optional[ThinkingConfig] = None,
    temperature: Optional[float] = None,
    top_p: Optional[float] = None,
    top_k: Optional[int] = None,
    candidate_count: Optional[int] = None,
    max_output_tokens: Optional[int] = None,
    response_logprobs: Optional[bool] = None,
    logprobs: Optional[int] = None,
    presence_penalty: Optional[float] = None,
    frequency_penalty: Optional[float] = None,
    seed: Optional[int] = None,
    audio_timestamp: Optional[bool] = None,
    media_resolution: Optional[MediaResolution] = None,
    speech_config: Optional[SpeechConfig] = None,
    enable_affective_dialog: Optional[bool] = None,
)

Configuration for content generation with comprehensive parameter control.

This class provides fine-grained control over the generation process including sampling parameters, output format, modalities, and various specialized features.

Examples:

Basic usage with temperature control:

GenerationConfig(temperature=0.7, max_output_tokens=1000)

Multi-modal configuration:

config = GenerationConfig(
    response_modalities=[Modality.TEXT, Modality.AUDIO],
    speech_config=SpeechConfig(language_code="en-US")
)

Advanced sampling with penalties:

config = GenerationConfig(
    temperature=0.8,
    top_p=0.9,
    top_k=40,
    presence_penalty=0.1,
    frequency_penalty=0.2
)

Parameters:

Name Type Description Default
stop_sequences Optional[List[str]]

List of strings that will stop generation when encountered

None
response_mime_type Optional[str]

MIME type for the response format

None
response_modalities Optional[List[Modality]]

List of modalities to include in the response

None
thinking_config Optional[ThinkingConfig]

Configuration for reasoning/thinking capabilities

None
temperature Optional[float]

Controls randomness in generation (0.0-1.0)

None
top_p Optional[float]

Nucleus sampling parameter (0.0-1.0)

None
top_k Optional[int]

Top-k sampling parameter

None
candidate_count Optional[int]

Number of response candidates to generate

None
max_output_tokens Optional[int]

Maximum number of tokens to generate

None
response_logprobs Optional[bool]

Whether to return log probabilities

None
logprobs Optional[int]

Number of log probabilities to return per token

None
presence_penalty Optional[float]

Penalty for token presence (-2.0 to 2.0)

None
frequency_penalty Optional[float]

Penalty for token frequency (-2.0 to 2.0)

None
seed Optional[int]

Random seed for deterministic generation

None
audio_timestamp Optional[bool]

Whether to include timestamps in audio responses

None
media_resolution Optional[MediaResolution]

Resolution setting for media content

None
speech_config Optional[SpeechConfig]

Configuration for speech synthesis

None
enable_affective_dialog Optional[bool]

Whether to enable emotional dialog features

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    stop_sequences: Optional[List[str]] = None,
    response_mime_type: Optional[str] = None,
    response_modalities: Optional[List[Modality]] = None,
    thinking_config: Optional[ThinkingConfig] = None,
    temperature: Optional[float] = None,
    top_p: Optional[float] = None,
    top_k: Optional[int] = None,
    candidate_count: Optional[int] = None,
    max_output_tokens: Optional[int] = None,
    response_logprobs: Optional[bool] = None,
    logprobs: Optional[int] = None,
    presence_penalty: Optional[float] = None,
    frequency_penalty: Optional[float] = None,
    seed: Optional[int] = None,
    audio_timestamp: Optional[bool] = None,
    media_resolution: Optional[MediaResolution] = None,
    speech_config: Optional[SpeechConfig] = None,
    enable_affective_dialog: Optional[bool] = None,
) -> None:
    """Initialize GenerationConfig with optional parameters.

    Args:
        stop_sequences (Optional[List[str]]):
            List of strings that will stop generation when encountered
        response_mime_type (Optional[str]):
            MIME type for the response format
        response_modalities (Optional[List[Modality]]):
            List of modalities to include in the response
        thinking_config (Optional[ThinkingConfig]):
            Configuration for reasoning/thinking capabilities
        temperature (Optional[float]):
            Controls randomness in generation (0.0-1.0)
        top_p (Optional[float]):
            Nucleus sampling parameter (0.0-1.0)
        top_k (Optional[int]):
            Top-k sampling parameter
        candidate_count (Optional[int]):
            Number of response candidates to generate
        max_output_tokens (Optional[int]):
            Maximum number of tokens to generate
        response_logprobs (Optional[bool]):
            Whether to return log probabilities
        logprobs (Optional[int]):
            Number of log probabilities to return per token
        presence_penalty (Optional[float]):
            Penalty for token presence (-2.0 to 2.0)
        frequency_penalty (Optional[float]):
            Penalty for token frequency (-2.0 to 2.0)
        seed (Optional[int]):
            Random seed for deterministic generation
        audio_timestamp (Optional[bool]):
            Whether to include timestamps in audio responses
        media_resolution (Optional[MediaResolution]):
            Resolution setting for media content
        speech_config (Optional[SpeechConfig]):
            Configuration for speech synthesis
        enable_affective_dialog (Optional[bool]):
            Whether to enable emotional dialog features
    """

GetProfileRequest

GetProfileRequest(
    name: str,
    space: str,
    version: str,
    drift_type: DriftType,
)

Parameters:

Name Type Description Default
name str

Profile name

required
space str

Profile space

required
version str

Profile version

required
drift_type DriftType

Profile drift type (a space/name/version can be associated with more than one drift type).

required
Source code in python/scouter/stubs.pyi
def __init__(self, name: str, space: str, version: str, drift_type: DriftType) -> None:
    """Initialize get profile request

    Args:
        name:
            Profile name
        space:
            Profile space
        version:
            Profile version
        drift_type:
            Profile drift type (a space/name/version can be associated with more than one drift type).
    """

GrpcConfig

GrpcConfig(compression: Optional[CompressionType] = None)

Configuration for gRPC exporting.

Parameters:

Name Type Description Default
compression Optional[CompressionType]

Optional compression type for gRPC requests.

None
Source code in python/scouter/stubs.pyi
def __init__(self, compression: Optional[CompressionType] = None) -> None:
    """Initialize the GrpcConfig.

    Args:
        compression (Optional[CompressionType]):
            Optional compression type for gRPC requests.
    """

compression property

compression: Optional[CompressionType]

Get the compression type.

GrpcSpanExporter

GrpcSpanExporter(
    batch_export: bool = True,
    export_config: Optional[ExportConfig] = None,
    grpc_config: Optional[GrpcConfig] = None,
    sample_ratio: Optional[float] = None,
)

Exporter that sends spans to a gRPC endpoint.

Parameters:

Name Type Description Default
batch_export bool

Whether to use batch exporting. Defaults to True.

True
export_config Optional[ExportConfig]

Configuration for exporting spans.

None
grpc_config Optional[GrpcConfig]

Configuration for the gRPC exporter.

None
sample_ratio Optional[float]

The sampling ratio for traces. If None, defaults to always sample.

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    batch_export: bool = True,
    export_config: Optional[ExportConfig] = None,
    grpc_config: Optional[GrpcConfig] = None,
    sample_ratio: Optional[float] = None,
) -> None:
    """Initialize the GrpcSpanExporter.

    Args:
        batch_export (bool):
            Whether to use batch exporting. Defaults to True.
        export_config (Optional[ExportConfig]):
            Configuration for exporting spans.
        grpc_config (Optional[GrpcConfig]):
            Configuration for the gRPC exporter.
        sample_ratio (Optional[float]):
            The sampling ratio for traces. If None, defaults to always sample.
    """

batch_export property

batch_export: bool

Get whether batch exporting is enabled.

compression property

compression: Optional[CompressionType]

Get the compression type used for exporting spans.

endpoint property

endpoint: Optional[str]

Get the gRPC endpoint for exporting spans.

protocol property

protocol: OtelProtocol

Get the protocol used for exporting spans.

sample_ratio property

sample_ratio: Optional[float]

Get the sampling ratio.

timeout property

timeout: Optional[int]

Get the timeout for gRPC requests in seconds.

Histogram

bin_counts property

bin_counts: List[int]

Bin counts

bins property

bins: List[float]

Bin values

HttpConfig

HttpConfig(
    server_uri: Optional[str] = None,
    username: Optional[str] = None,
    password: Optional[str] = None,
    auth_token: Optional[str] = None,
)

Parameters:

Name Type Description Default
server_uri Optional[str]

URL of the HTTP server to publish messages to. If not provided, the value of the HTTP_SERVER_URI environment variable is used.

None
username Optional[str]

Username for basic authentication.

None
password Optional[str]

Password for basic authentication.

None
auth_token Optional[str]

Authorization token to use for authentication.

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    server_uri: Optional[str] = None,
    username: Optional[str] = None,
    password: Optional[str] = None,
    auth_token: Optional[str] = None,
) -> None:
    """HTTP configuration to use with the HTTPProducer.

    Args:
        server_uri:
            URL of the HTTP server to publish messages to.
            If not provided, the value of the HTTP_SERVER_URI environment variable is used.

        username:
            Username for basic authentication.

        password:
            Password for basic authentication.

        auth_token:
            Authorization token to use for authentication.

    """

HttpSpanExporter

HttpSpanExporter(
    batch_export: bool = True,
    export_config: Optional[ExportConfig] = None,
    http_config: Optional[OtelHttpConfig] = None,
    sample_ratio: Optional[float] = None,
)

Exporter that sends spans to an HTTP endpoint.

Parameters:

Name Type Description Default
batch_export bool

Whether to use batch exporting. Defaults to True.

True
export_config Optional[ExportConfig]

Configuration for exporting spans.

None
http_config Optional[OtelHttpConfig]

Configuration for the HTTP exporter.

None
sample_ratio Optional[float]

The sampling ratio for traces. If None, defaults to always sample.

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    batch_export: bool = True,
    export_config: Optional[ExportConfig] = None,
    http_config: Optional[OtelHttpConfig] = None,
    sample_ratio: Optional[float] = None,
) -> None:
    """Initialize the HttpSpanExporter.

    Args:
        batch_export (bool):
            Whether to use batch exporting. Defaults to True.
        export_config (Optional[ExportConfig]):
            Configuration for exporting spans.
        http_config (Optional[OtelHttpConfig]):
            Configuration for the HTTP exporter.
        sample_ratio (Optional[float]):
            The sampling ratio for traces. If None, defaults to always sample.
    """

batch_export property

batch_export: bool

Get whether batch exporting is enabled.

compression property

compression: Optional[CompressionType]

Get the compression type used for exporting spans.

endpoint property

endpoint: Optional[str]

Get the HTTP endpoint for exporting spans.

headers property

headers: Optional[dict[str, str]]

Get the HTTP headers used for exporting spans.

protocol property

protocol: OtelProtocol

Get the protocol used for exporting spans.

sample_ratio property

sample_ratio: Optional[float]

Get the sampling ratio.

timeout property

timeout: Optional[int]

Get the timeout for HTTP requests in seconds.

ImageUrl

ImageUrl(
    url: str, kind: Literal["image-url"] = "image-url"
)

Parameters:

Name Type Description Default
url str

The URL of the image.

required
kind Literal['image-url']

The kind of the content.

'image-url'
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    url: str,
    kind: Literal["image-url"] = "image-url",
) -> None:
    """Create an ImageUrl object.

    Args:
        url (str):
            The URL of the image.
        kind (Literal["image-url"]):
            The kind of the content.
    """

format property

format: str

The format of the image URL.

kind property

kind: str

The kind of the content.

media_type property

media_type: str

The media type of the image URL.

url property

url: str

The URL of the image.

KafkaConfig

KafkaConfig(
    username: Optional[str] = None,
    password: Optional[str] = None,
    brokers: Optional[str] = None,
    topic: Optional[str] = None,
    compression_type: Optional[str] = None,
    message_timeout_ms: int = 600000,
    message_max_bytes: int = 2097164,
    log_level: LogLevel = LogLevel.Info,
    config: Dict[str, str] = {},
    max_retries: int = 3,
)

This configuration supports both authenticated (SASL) and unauthenticated connections. When credentials are provided, SASL authentication is automatically enabled with secure defaults.

Authentication Priority (first match wins):

  1. Direct parameters (username/password)
  2. Environment variables (KAFKA_USERNAME/KAFKA_PASSWORD)
  3. Configuration dictionary (sasl.username/sasl.password)

SASL Security Defaults
  • security.protocol: "SASL_SSL" (override via KAFKA_SECURITY_PROTOCOL env var)
  • sasl.mechanism: "PLAIN" (override via KAFKA_SASL_MECHANISM env var)

Parameters:

Name Type Description Default
username Optional[str]

SASL username for authentication. Fallback: KAFKA_USERNAME environment variable.

None
password Optional[str]

SASL password for authentication. Fallback: KAFKA_PASSWORD environment variable.

None
brokers Optional[str]

Comma-separated list of Kafka broker addresses (host:port). Fallback: KAFKA_BROKERS environment variable. Default: "localhost:9092"

None
topic Optional[str]

Target Kafka topic for message publishing. Fallback: KAFKA_TOPIC environment variable. Default: "scouter_monitoring"

None
compression_type Optional[str]

Message compression algorithm. Options: "none", "gzip", "snappy", "lz4", "zstd" Default: "gzip"

None
message_timeout_ms int

Maximum time to wait for message delivery (milliseconds). Default: 600000 (10 minutes)

600000
message_max_bytes int

Maximum message size in bytes. Default: 2097164 (~2MB)

2097164
log_level LogLevel

Logging verbosity for the Kafka producer. Default: LogLevel.Info

Info
config Dict[str, str]

Additional Kafka producer configuration parameters. See: https://kafka.apache.org/documentation/#producerconfigs Note: Direct parameters take precedence over config dictionary values.

{}
max_retries int

Maximum number of retry attempts for failed message deliveries. Default: 3

3

Examples:

Basic usage (unauthenticated):

config = KafkaConfig(
    brokers="kafka1:9092,kafka2:9092",
    topic="my_topic"
)

SASL authentication:

config = KafkaConfig(
    username="my_user",
    password="my_password",
    brokers="secure-kafka:9093",
    topic="secure_topic"
)

Advanced configuration:

config = KafkaConfig(
    brokers="kafka:9092",
    compression_type="lz4",
    config={
        "acks": "all",
        "batch.size": "32768",
        "linger.ms": "10"
    }
)

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    username: Optional[str] = None,
    password: Optional[str] = None,
    brokers: Optional[str] = None,
    topic: Optional[str] = None,
    compression_type: Optional[str] = None,
    message_timeout_ms: int = 600_000,
    message_max_bytes: int = 2097164,
    log_level: LogLevel = LogLevel.Info,
    config: Dict[str, str] = {},
    max_retries: int = 3,
) -> None:
    """Kafka configuration for connecting to and publishing messages to Kafka brokers.

    This configuration supports both authenticated (SASL) and unauthenticated connections.
    When credentials are provided, SASL authentication is automatically enabled with
    secure defaults.

    Authentication Priority (first match wins):
        1. Direct parameters (username/password)
        2. Environment variables (KAFKA_USERNAME/KAFKA_PASSWORD)
        3. Configuration dictionary (sasl.username/sasl.password)

    SASL Security Defaults:
        - security.protocol: "SASL_SSL" (override via KAFKA_SECURITY_PROTOCOL env var)
        - sasl.mechanism: "PLAIN" (override via KAFKA_SASL_MECHANISM env var)

    Args:
        username:
            SASL username for authentication.
            Fallback: KAFKA_USERNAME environment variable.
        password:
            SASL password for authentication.
            Fallback: KAFKA_PASSWORD environment variable.
        brokers:
            Comma-separated list of Kafka broker addresses (host:port).
            Fallback: KAFKA_BROKERS environment variable.
            Default: "localhost:9092"
        topic:
            Target Kafka topic for message publishing.
            Fallback: KAFKA_TOPIC environment variable.
            Default: "scouter_monitoring"
        compression_type:
            Message compression algorithm.
            Options: "none", "gzip", "snappy", "lz4", "zstd"
            Default: "gzip"
        message_timeout_ms:
            Maximum time to wait for message delivery (milliseconds).
            Default: 600000 (10 minutes)
        message_max_bytes:
            Maximum message size in bytes.
            Default: 2097164 (~2MB)
        log_level:
            Logging verbosity for the Kafka producer.
            Default: LogLevel.Info
        config:
            Additional Kafka producer configuration parameters.
            See: https://kafka.apache.org/documentation/#producerconfigs
            Note: Direct parameters take precedence over config dictionary values.
        max_retries:
            Maximum number of retry attempts for failed message deliveries.
            Default: 3

    Examples:
        Basic usage (unauthenticated):
        ```python
        config = KafkaConfig(
            brokers="kafka1:9092,kafka2:9092",
            topic="my_topic"
        )
        ```

        SASL authentication:
        ```python
        config = KafkaConfig(
            username="my_user",
            password="my_password",
            brokers="secure-kafka:9093",
            topic="secure_topic"
        )
        ```

        Advanced configuration:
        ```python
        config = KafkaConfig(
            brokers="kafka:9092",
            compression_type="lz4",
            config={
                "acks": "all",
                "batch.size": "32768",
                "linger.ms": "10"
            }
        )
        ```
    """

LLMAlertConfig

LLMAlertConfig(
    dispatch_config: Optional[
        SlackDispatchConfig | OpsGenieDispatchConfig
    ] = None,
    schedule: Optional[str | CommonCrons] = None,
)

Parameters:

Name Type Description Default
dispatch_config Optional[SlackDispatchConfig | OpsGenieDispatchConfig]

Alert dispatch config. Defaults to console

None
schedule Optional[str | CommonCrons]

Schedule to run monitor. Defaults to daily at midnight

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    dispatch_config: Optional[SlackDispatchConfig | OpsGenieDispatchConfig] = None,
    schedule: Optional[str | CommonCrons] = None,
):
    """Initialize alert config

    Args:
        dispatch_config:
            Alert dispatch config. Defaults to console
        schedule:
            Schedule to run monitor. Defaults to daily at midnight

    """

alert_conditions property

alert_conditions: Optional[
    Dict[str, LLMMetricAlertCondition]
]

Return the alert conditions

dispatch_config property

dispatch_config: DispatchConfigType

Return the dispatch config

dispatch_type property

dispatch_type: AlertDispatchType

Return the alert dispatch type

schedule property writable

schedule: str

Return the schedule

LLMDriftConfig

LLMDriftConfig(
    space: str = "__missing__",
    name: str = "__missing__",
    version: str = "0.1.0",
    sample_rate: int = 5,
    alert_config: LLMAlertConfig = LLMAlertConfig(),
)
Parameters:

Name Type Description Default
space str

Space to associate with the config

'__missing__'
name str

Name to associate with the config

'__missing__'
version str

Version to associate with the config. Defaults to 0.1.0

'0.1.0'
sample_rate int

Sample rate for LLM drift detection. Defaults to 5.

5
alert_config LLMAlertConfig

Custom metric alert configuration

LLMAlertConfig()
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    space: str = "__missing__",
    name: str = "__missing__",
    version: str = "0.1.0",
    sample_rate: int = 5,
    alert_config: LLMAlertConfig = LLMAlertConfig(),
):
    """Initialize drift config
    Args:
        space:
            Space to associate with the config
        name:
            Name to associate with the config
        version:
            Version to associate with the config. Defaults to 0.1.0
        sample_rate:
            Sample rate for LLM drift detection. Defaults to 5.
        alert_config:
            Custom metric alert configuration
    """

alert_config property writable

alert_config: LLMAlertConfig

Return the alert config

drift_type property

drift_type: DriftType

Drift type

name property writable

name: str

Model Name

space property writable

space: str

Model space

version property writable

version: str

Model version

load_from_json_file staticmethod

load_from_json_file(path: Path) -> LLMDriftConfig

Load config from a json file.

Parameters:

Name Type Description Default
path Path

Path to json file to load config from.

required

Source code in python/scouter/stubs.pyi
@staticmethod
def load_from_json_file(path: Path) -> "LLMDriftConfig":
    """Load config from json file
    Args:
        path:
            Path to json file to load config from.
    """

model_dump_json

model_dump_json() -> str

Return the json representation of the config.

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return the json representation of the config."""

update_config_args

update_config_args(
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    alert_config: Optional[LLMAlertConfig] = None,
) -> None

Inplace operation that updates config args.

Parameters:

Name Type Description Default
space Optional[str]

Space to associate with the config

None
name Optional[str]

Name to associate with the config

None
version Optional[str]

Version to associate with the config

None
alert_config Optional[LLMAlertConfig]

LLM alert configuration

None

Source code in python/scouter/stubs.pyi
def update_config_args(
    self,
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    alert_config: Optional[LLMAlertConfig] = None,
) -> None:
    """Inplace operation that updates config args
    Args:
        space:
            Space to associate with the config
        name:
            Name to associate with the config
        version:
            Version to associate with the config
        alert_config:
            LLM alert configuration
    """

LLMDriftMap

records property

records: List[LLMMetricRecord]

Return the list of LLM records.

LLMDriftMetric

LLMDriftMetric(
    name: str,
    value: float,
    alert_threshold: AlertThreshold,
    alert_threshold_value: Optional[float] = None,
    prompt: Optional[Prompt] = None,
)

Metric for monitoring LLM performance.

Parameters:

Name Type Description Default
name str

The name of the metric being monitored. This should be a descriptive identifier for the metric.

required
value float

The current value of the metric.

required
alert_threshold AlertThreshold

The condition used to determine when an alert should be triggered.

required
alert_threshold_value Optional[float]

The threshold or boundary value used in conjunction with the alert_threshold. If supplied, this value will be added or subtracted from the provided metric value to determine if an alert should be triggered.

None
prompt Optional[Prompt]

Optional prompt associated with the metric. This can be used to provide context or additional information about the metric being monitored. If creating an LLM drift profile from a pre-defined workflow, this can be None.

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    name: str,
    value: float,
    alert_threshold: AlertThreshold,
    alert_threshold_value: Optional[float] = None,
    prompt: Optional[Prompt] = None,
):
    """
    Initialize a metric for monitoring LLM performance.

    Args:
        name (str):
            The name of the metric being monitored. This should be a
            descriptive identifier for the metric.
        value (float):
            The current value of the metric.
        alert_threshold (AlertThreshold):
            The condition used to determine when an alert should be triggered.
        alert_threshold_value (Optional[float]):
            The threshold or boundary value used in conjunction with the alert_threshold.
            If supplied, this value will be added or subtracted from the provided metric value to
            determine if an alert should be triggered.
        prompt (Optional[Prompt]):
            Optional prompt associated with the metric. This can be used to provide context or
            additional information about the metric being monitored. If creating an LLM drift profile
            from a pre-defined workflow, this can be None.
    """

alert_threshold property

alert_threshold: AlertThreshold

Return the alert_threshold

alert_threshold_value property

alert_threshold_value: Optional[float]

Return the alert_threshold_value

name property

name: str

Return the metric name

prompt property

prompt: Optional[Prompt]

Return the prompt associated with the metric

value property

value: float

Return the metric value

LLMDriftProfile

LLMDriftProfile(
    config: LLMDriftConfig,
    metrics: list[LLMDriftMetric],
    workflow: Optional[Workflow] = None,
)

LLM evaluations are run asynchronously on the scouter server.

Logic flow
  1. If only metrics are provided, a workflow will be created automatically from the metrics. In this case a prompt is required for each metric.
  2. If a workflow is provided, it will be parsed and validated for compatibility:
     • A list of metrics to evaluate workflow output must be provided
     • Metric names must correspond to the final task names in the workflow

Baseline metrics and thresholds will be extracted from the LLMDriftMetric objects.

Parameters:

Name Type Description Default
config LLMDriftConfig

The configuration for the LLM drift profile containing space, name, version, and alert settings.

required
metrics list[LLMDriftMetric]

A list of LLMDriftMetric objects representing the metrics to be monitored. Each metric defines evaluation criteria and alert thresholds.

required
workflow Optional[Workflow]

Optional custom workflow for advanced evaluation scenarios. If provided, the workflow will be validated to ensure proper parameter and response type configuration.

None

Returns:

Name Type Description
LLMDriftProfile

Configured profile ready for LLM drift monitoring.

Raises:

Type Description
ProfileError

If workflow validation fails, metrics are empty when no workflow is provided, or if workflow tasks don't match metric names.

Examples:

Basic usage with metrics only:

>>> config = LLMDriftConfig("my_space", "my_model", "1.0")
>>> metrics = [
...     LLMDriftMetric("accuracy", 0.95, AlertThreshold.Above, 0.1, prompt),
...     LLMDriftMetric("relevance", 0.85, AlertThreshold.Below, 0.2, prompt2)
... ]
>>> profile = LLMDriftProfile(config, metrics)

Advanced usage with custom workflow:

>>> workflow = create_custom_workflow()  # Your custom workflow
>>> metrics = [LLMDriftMetric("final_task", 0.9, AlertThreshold.Above)]
>>> profile = LLMDriftProfile(config, metrics, workflow)
Note
  • When using custom workflows, ensure final tasks have Score response types
  • Initial workflow tasks must include "input" and/or "response" parameters
  • All metric names must match corresponding workflow task names
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    config: LLMDriftConfig,
    metrics: list[LLMDriftMetric],
    workflow: Optional[Workflow] = None,
):
    """Initialize a LLMDriftProfile for LLM evaluation and drift detection.

    LLM evaluations are run asynchronously on the scouter server.

    Logic flow:
        1. If only metrics are provided, a workflow will be created automatically
           from the metrics. In this case a prompt is required for each metric.
        2. If a workflow is provided, it will be parsed and validated for compatibility:
           - A list of metrics to evaluate workflow output must be provided
           - Metric names must correspond to the final task names in the workflow

    Baseline metrics and thresholds will be extracted from the LLMDriftMetric objects.

    Args:
        config (LLMDriftConfig):
            The configuration for the LLM drift profile containing space, name,
            version, and alert settings.
        metrics (list[LLMDriftMetric]):
            A list of LLMDriftMetric objects representing the metrics to be monitored.
            Each metric defines evaluation criteria and alert thresholds.
        workflow (Optional[Workflow]):
            Optional custom workflow for advanced evaluation scenarios. If provided,
            the workflow will be validated to ensure proper parameter and response
            type configuration.

    Returns:
        LLMDriftProfile: Configured profile ready for LLM drift monitoring.

    Raises:
        ProfileError: If workflow validation fails, metrics are empty when no
            workflow is provided, or if workflow tasks don't match metric names.

    Examples:
        Basic usage with metrics only:

        >>> config = LLMDriftConfig("my_space", "my_model", "1.0")
        >>> metrics = [
        ...     LLMDriftMetric("accuracy", 0.95, AlertThreshold.Above, 0.1, prompt),
        ...     LLMDriftMetric("relevance", 0.85, AlertThreshold.Below, 0.2, prompt2)
        ... ]
        >>> profile = LLMDriftProfile(config, metrics)

        Advanced usage with custom workflow:

        >>> workflow = create_custom_workflow()  # Your custom workflow
        >>> metrics = [LLMDriftMetric("final_task", 0.9, AlertThreshold.Above)]
        >>> profile = LLMDriftProfile(config, metrics, workflow)

    Note:
        - When using custom workflows, ensure final tasks have Score response types
        - Initial workflow tasks must include "input" and/or "response" parameters
        - All metric names must match corresponding workflow task names
    """

config property

config: LLMDriftConfig

Return the drift config

metrics property

metrics: List[LLMDriftMetric]

Return LLM metrics and their corresponding values

scouter_version property

scouter_version: str

Return scouter version used to create DriftProfile

from_file staticmethod

from_file(path: Path) -> LLMDriftProfile

Load drift profile from file

Parameters:

Name Type Description Default
path Path

Path to the json file

required

Returns:

Type Description
LLMDriftProfile

LLMDriftProfile

Source code in python/scouter/stubs.pyi
@staticmethod
def from_file(path: Path) -> "LLMDriftProfile":
    """Load drift profile from file

    Args:
        path: Path to the json file

    Returns:
        LLMDriftProfile
    """

model_dump

model_dump() -> Dict[str, Any]

Return dictionary representation of drift profile

Source code in python/scouter/stubs.pyi
def model_dump(self) -> Dict[str, Any]:
    """Return dictionary representation of drift profile"""

model_dump_json

model_dump_json() -> str

Return json representation of drift profile

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return json representation of drift profile"""

model_validate staticmethod

model_validate(data: Dict[str, Any]) -> LLMDriftProfile

Load drift profile from dictionary

Parameters:

Name Type Description Default
data Dict[str, Any]

DriftProfile dictionary

required
Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate(data: Dict[str, Any]) -> "LLMDriftProfile":
    """Load drift profile from dictionary

    Args:
        data:
            DriftProfile dictionary
    """

model_validate_json staticmethod

model_validate_json(json_string: str) -> LLMDriftProfile

Load drift profile from json

Parameters:

Name Type Description Default
json_string str

JSON string representation of the drift profile

required
Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate_json(json_string: str) -> "LLMDriftProfile":
    """Load drift profile from json

    Args:
        json_string:
            JSON string representation of the drift profile
    """

save_to_json

save_to_json(path: Optional[Path] = None) -> Path

Save drift profile to json file

Parameters:

Name Type Description Default
path Optional[Path]

Optional path to save the json file. If not provided, a default path will be used.

None

Returns:

Type Description
Path

Path to the saved json file.

Source code in python/scouter/stubs.pyi
def save_to_json(self, path: Optional[Path] = None) -> Path:
    """Save drift profile to json file

    Args:
        path: Optional path to save the json file. If not provided, a default path will be used.

    Returns:
        Path to the saved json file.
    """

update_config_args

update_config_args(
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    sample_size: Optional[int] = None,
    alert_config: Optional[LLMAlertConfig] = None,
) -> None

In-place operation that updates config args

Parameters:

Name Type Description Default
name Optional[str]

Model name

None
space Optional[str]

Model space

None
version Optional[str]

Model version

None
sample_size Optional[int]

Sample size

None
alert_config Optional[LLMAlertConfig]

Alert configuration

None
Source code in python/scouter/stubs.pyi
def update_config_args(
    self,
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    sample_size: Optional[int] = None,
    alert_config: Optional[LLMAlertConfig] = None,
) -> None:
    """Inplace operation that updates config args

    Args:
        name:
            Model name
        space:
            Model space
        version:
            Model version
        sample_size:
            Sample size
        alert_config:
            Alert configuration
    """

LLMEvalMetric

LLMEvalMetric(name: str, prompt: Prompt)

Defines an LLM eval metric to use when evaluating LLMs. This is most commonly used in conjunction with evaluate_llm, where LLM inputs and responses can be evaluated against a variety of user-defined metrics.

Parameters:

Name Type Description Default
name str

Name of the metric

required
prompt Prompt

Prompt to use for the metric. For example, a user may create an accuracy analysis prompt or a query reformulation analysis prompt.

required
Source code in python/scouter/stubs.pyi
def __init__(self, name: str, prompt: Prompt):
    """
    Initialize an LLMEvalMetric to use for evaluating LLMs. This is
    most commonly used in conjunction with `evaluate_llm` where LLM inputs
    and responses can be evaluated against a variety of user-defined metrics.

    Args:
        name (str):
            Name of the metric
        prompt (Prompt):
            Prompt to use for the metric. For example, a user may create
            an accuracy analysis prompt or a query reformulation analysis prompt.
    """

LLMEvalRecord

LLMEvalRecord(context: Context, id: Optional[str] = None)

LLM record containing context tied to a Large Language Model interaction that is used to evaluate LLM responses.

Examples:

>>> record = LLMEvalRecord(
...     id="123",
...     context={
...         "input": "What is the capital of France?",
...         "response": "Paris is the capital of France."
...     },
... )
>>> print(record.context["input"])
"What is the capital of France?"

The record is sent to the Scouter server via the ScouterQueue and is then used to inject context into the evaluation prompts.

Parameters:

Name Type Description Default
context Context

Additional context information as a dictionary or a pydantic BaseModel. During evaluation, this will be merged with the input and response data and passed to the assigned evaluation prompts. So if your evaluation prompts expect additional context via bound variables (e.g., ${foo}), you can pass it here as key-value pairs.

required
id Optional[str]

Unique identifier for the record. If not provided, a new UUID will be generated. This is helpful when joining evaluation results back to the original request.

None

Raises:

Type Description
TypeError

If context is not a dict or a pydantic BaseModel.

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    context: Context,
    id: Optional[str] = None,
) -> None:
    """Creates a new LLM record to associate with an `LLMDriftProfile`.
    The record is sent to the `Scouter` server via the `ScouterQueue` and is
    then used to inject context into the evaluation prompts.

    Args:
        context:
            Additional context information as a dictionary or a pydantic BaseModel. During evaluation,
            this will be merged with the input and response data and passed to the assigned
            evaluation prompts. So if your evaluation prompts expect additional context via
            bound variables (e.g., `${foo}`), you can pass it here as key-value pairs,
            e.g. `{"foo": "bar"}`.
        id:
            Unique identifier for the record. If not provided, a new UUID will be generated.
            This is helpful when joining evaluation results back to the original request.

    Raises:
        TypeError: If context is not a dict or a pydantic BaseModel.

    """

context property

context: Dict[str, Any]

Get the contextual information.

Returns:

Type Description
Dict[str, Any]

The context data as a Python object (deserialized from JSON).

LLMEvalResults

Defines the results of an LLM eval metric

errored_tasks property

errored_tasks: List[str]

Get a list of record IDs that had errors during evaluation

histograms property

histograms: Optional[Dict[str, Histogram]]

Get histograms for all calculated features (metrics, embeddings, similarities)

model_dump_json

model_dump_json() -> str

Dump the results as a JSON string

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Dump the results as a JSON string"""

model_validate_json staticmethod

model_validate_json(json_string: str) -> LLMEvalResults

Validate and create an LLMEvalResults instance from a JSON string

Parameters:

Name Type Description Default
json_string str

JSON string to validate and create the LLMEvalResults instance from.

required
Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate_json(json_string: str) -> "LLMEvalResults":
    """Validate and create an LLMEvalResults instance from a JSON string

    Args:
        json_string (str):
            JSON string to validate and create the LLMEvalResults instance from.
    """

to_dataframe

to_dataframe(polars: bool = False) -> Any

Convert the results to a Pandas or Polars DataFrame.

Parameters:

Name Type Description Default
polars bool

Whether to return a Polars DataFrame. If False, a Pandas DataFrame will be returned.

False

Returns:

Name Type Description
DataFrame Any

A Pandas or Polars DataFrame containing the results.

Source code in python/scouter/stubs.pyi
def to_dataframe(self, polars: bool = False) -> Any:
    """
    Convert the results to a Pandas or Polars DataFrame.

    Args:
        polars (bool):
            Whether to return a Polars DataFrame. If False, a Pandas DataFrame will be returned.

    Returns:
        DataFrame:
            A Pandas or Polars DataFrame containing the results.
    """

LLMEvalTaskResult

Eval Result for a specific evaluation

embedding property

embedding: Dict[str, List[float]]

Get embeddings of embedding targets

id property

id: str

Get the record id associated with this result

metrics property

metrics: Dict[str, Score]

Get the list of metrics

LLMMetricAlertCondition

LLMMetricAlertCondition(
    alert_threshold: AlertThreshold,
    alert_threshold_value: Optional[float],
)

Parameters:

Name Type Description Default
alert_threshold AlertThreshold

The condition that determines when an alert should be triggered. Must be one of the AlertThreshold enum members, such as Below, Above, or Outside.

required
alert_threshold_value Optional[float]

A numerical boundary used in conjunction with the alert_threshold. This can be None for certain types of comparisons that don't require a fixed boundary.

required

Example: alert_threshold = LLMMetricAlertCondition(AlertThreshold.Below, 2.0)

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    alert_threshold: AlertThreshold,
    alert_threshold_value: Optional[float],
):
    """Initialize a LLMMetricAlertCondition instance.
    Args:
        alert_threshold (AlertThreshold):
            The condition that determines when an alert should be triggered.
            Must be one of the AlertThreshold enum members like Below, Above, or Outside.
        alert_threshold_value (Optional[float], optional):
            A numerical boundary used in conjunction with the alert_threshold.
            This can be None for certain types of comparisons that don't require a fixed boundary.
    Example:
        alert_threshold = LLMMetricAlertCondition(AlertThreshold.Below, 2.0)
    """

LLMMetricRecord

created_at property

created_at: datetime

Return the timestamp when the record was created

metric property

metric: str

Return the name of the metric associated with the record

name property

name: str

Return the name associated with the record

record_uid property

record_uid: str

Return the record id

space property

space: str

Return the space associated with the record

value property

value: float

Return the value of the metric associated with the record

version property

version: str

Return the version associated with the record

LLMRecord

LLMRecord(
    context: Context,
    prompt: Optional[Prompt | SerializedType] = None,
)

LLM record containing context tied to a Large Language Model interaction that is used to evaluate drift in LLM responses.

Examples:

>>> record = LLMRecord(
...     context={
...         "input": "What is the capital of France?",
...         "response": "Paris is the capital of France."
...     },
... )
>>> print(record.context["input"])
"What is the capital of France?"

The record is sent to the Scouter server via the ScouterQueue and is then used to inject context into the evaluation prompts.

Parameters:

Name Type Description Default
context Context

Additional context information as a dictionary or a pydantic BaseModel. During evaluation, this will be merged with the input and response data and passed to the assigned evaluation prompts. So if your evaluation prompts expect additional context via bound variables (e.g., ${foo}), you can pass it here as key-value pairs.

required
prompt Optional[Prompt | SerializedType]

Optional prompt configuration associated with this record. Can be a Potatohead Prompt or a JSON-serializable type.

None

Raises:

Type Description
TypeError

If context is not a dict or a pydantic BaseModel.

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    context: Context,
    prompt: Optional[Prompt | SerializedType] = None,
) -> None:
    """Creates a new LLM record to associate with an `LLMDriftProfile`.
    The record is sent to the `Scouter` server via the `ScouterQueue` and is
    then used to inject context into the evaluation prompts.

    Args:
        context:
            Additional context information as a dictionary or a pydantic BaseModel. During evaluation,
            this will be merged with the input and response data and passed to the assigned
            evaluation prompts. So if your evaluation prompts expect additional context via
            bound variables (e.g., `${foo}`), you can pass it here as key-value pairs,
            e.g. `{"foo": "bar"}`.
        prompt:
            Optional prompt configuration associated with this record. Can be a Potatohead Prompt or
            a JSON-serializable type.

    Raises:
        TypeError: If context is not a dict or a pydantic BaseModel.

    """

context property

context: Dict[str, Any]

Get the contextual information.

Returns:

Type Description
Dict[str, Any]

The context data as a Python object (deserialized from JSON).

Raises:

Type Description
TypeError

If the stored JSON cannot be converted to a Python object.

entity_type instance-attribute

entity_type: EntityType

Type of entity, always EntityType.LLM for LLMRecord instances.

prompt instance-attribute

prompt: Optional[Prompt]

Optional prompt configuration associated with this record.

LLMTestServer

LLMTestServer()

Mock server for OpenAI API. This class is used to simulate the OpenAI API for testing purposes.

Source code in python/scouter/stubs.pyi
def __init__(self): ...

LatLng

LatLng(latitude: float, longitude: float)

Parameters:

Name Type Description Default
latitude float

The latitude value.

required
longitude float

The longitude value.

required
Source code in python/scouter/stubs.pyi
def __init__(self, latitude: float, longitude: float) -> None:
    """Initialize LatLng with latitude and longitude.

    Args:
        latitude (float):
            The latitude value.
        longitude (float):
            The longitude value.
    """

LatencyMetrics

p25 property

p25: float

25th percentile

p5 property

p5: float

5th percentile

p50 property

p50: float

50th percentile

p95 property

p95: float

95th percentile

p99 property

p99: float

99th percentile
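The percentile properties above summarize request latencies. Their meaning can be illustrated with Python's `statistics.quantiles` as a stand-in for scouter's own aggregation (the sample data is made up):

```python
from statistics import quantiles

# Hypothetical request latencies in milliseconds
latencies = [12.0, 15.0, 18.0, 22.0, 30.0, 45.0, 60.0, 90.0, 120.0, 250.0]

# n=100 yields 99 cut points; index k-1 is the kth percentile
cuts = quantiles(latencies, n=100, method="inclusive")
p50, p95, p99 = cuts[49], cuts[94], cuts[98]
```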

LogProbs

tokens property

tokens: List[ResponseLogProbs]

The log probabilities of the tokens in the response. This is primarily used for debugging and analysis purposes.

LoggingConfig

LoggingConfig(
    show_threads: bool = True,
    log_level: LogLevel = LogLevel.Info,
    write_level: WriteLevel = WriteLevel.Stdout,
    use_json: bool = False,
)

Parameters:

Name Type Description Default
show_threads bool

Whether to include thread information in log messages. Default is True.

True
log_level LogLevel

Log level for the logger. Default is LogLevel.Info.

Info
write_level WriteLevel

Write level for the logger. Default is WriteLevel.Stdout.

Stdout
use_json bool

Whether to write log messages in JSON format. Default is False.

False
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    show_threads: bool = True,
    log_level: LogLevel = LogLevel.Info,
    write_level: WriteLevel = WriteLevel.Stdout,
    use_json: bool = False,
) -> None:
    """
    Logging configuration options.

    Args:
        show_threads:
            Whether to include thread information in log messages.
            Default is True.

        log_level:
            Log level for the logger.
            Default is LogLevel.Info.

        write_level:
            Write level for the logger.
            Default is WriteLevel.Stdout.

        use_json:
            Whether to write log messages in JSON format.
            Default is False.
    """

Manual

Manual(num_bins: int)

Divides the feature range into a fixed number of equally sized bins.

Parameters:

Name Type Description Default
num_bins int

The exact number of bins to create.

required
Source code in python/scouter/stubs.pyi
def __init__(self, num_bins: int):
    """Manual equal-width binning strategy.

    Divides the feature range into a fixed number of equally sized bins.

    Args:
        num_bins:
            The exact number of bins to create.
    """

num_bins property writable

num_bins: int

The number of bins you want created
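Equal-width binning as described above can be sketched in plain Python (illustrative, not scouter's implementation):

```python
def equal_width_bins(values, num_bins):
    """Return bin edges for `num_bins` equally sized bins over the data range."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / num_bins
    return [lo + i * width for i in range(num_bins + 1)]

# Range [2.0, 10.0] split into 4 bins of width 2.0
edges = equal_width_bins([2.0, 4.0, 6.0, 10.0], 4)
```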

MediaResolution

Media resolution settings for content generation.

Message

Message(
    content: (
        str
        | ImageUrl
        | AudioUrl
        | BinaryContent
        | DocumentUrl
    ),
)

Parameters:

Name Type Description Default
content str | ImageUrl | AudioUrl | BinaryContent | DocumentUrl

The content of the message.

required
Source code in python/scouter/stubs.pyi
def __init__(self, content: str | ImageUrl | AudioUrl | BinaryContent | DocumentUrl) -> None:
    """Create a Message object.

    Args:
        content (str | ImageUrl | AudioUrl | BinaryContent | DocumentUrl):
            The content of the message.
    """

content property

content: (
    str | ImageUrl | AudioUrl | BinaryContent | DocumentUrl
)

The content of the message

bind

bind(name: str, value: str) -> Message

Bind context to a specific variable in the prompt. This is an immutable operation meaning that it will return a new Message object with the context bound.

Example with Prompt that contains two messages

```python
    prompt = Prompt(
        model="openai:gpt-4o",
        message=[
            "My prompt variable is ${variable}",
            "This is another message",
        ],
        system_instruction="system_prompt",
    )
    bound_message = prompt.message[0].bind("variable", "hello world").unwrap() # we bind "hello world" to "variable"
```

Parameters:

Name Type Description Default
name str

The name of the variable to bind.

required
value str

The value to bind the variable to.

required

Returns:

Name Type Description
Message Message

The message with the context bound.

Source code in python/scouter/stubs.pyi
def bind(self, name: str, value: str) -> "Message":
    """Bind context to a specific variable in the prompt. This is an immutable operation meaning that it
    will return a new Message object with the context bound.

        Example with Prompt that contains two messages

        ```python
            prompt = Prompt(
                model="openai:gpt-4o",
                message=[
                    "My prompt variable is ${variable}",
                    "This is another message",
                ],
                system_instruction="system_prompt",
            )
            bound_message = prompt.message[0].bind("variable", "hello world").unwrap() # we bind "hello world" to "variable"
        ```

    Args:
        name (str):
            The name of the variable to bind.
        value (str):
            The value to bind the variable to.

    Returns:
        Message:
            The message with the context bound.
    """

bind_mut

bind_mut(name: str, value: str) -> Message

Bind context to a specific variable in the prompt. This is a mutable operation meaning that it will modify the current Message object.

Example with Prompt that contains two messages

```python
    prompt = Prompt(
        model="openai:gpt-4o",
        message=[
            "My prompt variable is ${variable}",
            "This is another message",
        ],
        system_instruction="system_prompt",
    )
    prompt.message[0].bind_mut("variable", "hello world") # we bind "hello world" to "variable"
```

Parameters:

Name Type Description Default
name str

The name of the variable to bind.

required
value str

The value to bind the variable to.

required

Returns:

Name Type Description
Message Message

The message with the context bound.

Source code in python/scouter/stubs.pyi
def bind_mut(self, name: str, value: str) -> "Message":
    """Bind context to a specific variable in the prompt. This is a mutable operation meaning that it
    will modify the current Message object.

        Example with Prompt that contains two messages

        ```python
            prompt = Prompt(
                model="openai:gpt-4o",
                message=[
                    "My prompt variable is ${variable}",
                    "This is another message",
                ],
                system_instruction="system_prompt",
            )
            prompt.message[0].bind_mut("variable", "hello world") # we bind "hello world" to "variable"
        ```

    Args:
        name (str):
            The name of the variable to bind.
        value (str):
            The value to bind the variable to.

    Returns:
        Message:
            The message with the context bound.
    """

model_dump

model_dump() -> Dict[str, Any]

Unwrap the message content and serialize it to a dictionary.

Returns:

Type Description
Dict[str, Any]

Dict[str, Any]: The message dictionary with keys "content" and "role".

Source code in python/scouter/stubs.pyi
def model_dump(self) -> Dict[str, Any]:
    """Unwrap the message content and serialize it to a dictionary.

    Returns:
        Dict[str, Any]:
            The message dictionary with keys "content" and "role".
    """

unwrap

unwrap() -> Any

Unwrap the message content.

Returns:

Type Description
Any

A serializable representation of the message content, which can be a string, list, or dict.

Source code in python/scouter/stubs.pyi
def unwrap(self) -> Any:
    """Unwrap the message content.

    Returns:
        A serializable representation of the message content, which can be a string, list, or dict.
    """

Metric

Metric(name: str, value: float | int)

Parameters:

Name Type Description Default
name str

Name of the metric

required
value float | int

Value to assign to the metric. Can be an int or float but will be converted to float.

required
Source code in python/scouter/stubs.pyi
def __init__(self, name: str, value: float | int) -> None:
    """Initialize metric

    Args:
        name:
            Name of the metric
        value:
            Value to assign to the metric. Can be an int or float but will be converted to float.
    """

Metrics

Metrics(
    metrics: List[Metric] | Dict[str, Union[int, float]]
)

Parameters:

Name Type Description Default
metrics List[Metric] | Dict[str, Union[int, float]]

List of metrics or a dictionary of key-value pairs. If a list, each item must be an instance of Metric. If a dictionary, each key is the metric name and each value is the metric value.

required
Example

```python
# Passing a list of metrics
metrics = Metrics(
    metrics=[
        Metric("metric_1", 1.0),
        Metric("metric_2", 2.5),
        Metric("metric_3", 3),
    ]
)

# Passing a dictionary (pydantic model) of metrics
class MyMetrics(BaseModel):
    metric1: float
    metric2: int

my_metrics = MyMetrics(
    metric1=1.0,
    metric2=2,
)

metrics = Metrics(my_metrics.model_dump())
```

Source code in python/scouter/stubs.pyi
def __init__(self, metrics: List[Metric] | Dict[str, Union[int, float]]) -> None:
    """Initialize metrics

    Args:
        metrics:
            List of metrics or a dictionary of key-value pairs.
            If a list, each item must be an instance of Metric.
            If a dictionary, each key is the metric name and each value is the metric value.


    Example:
        ```python

        # Passing a list of metrics
        metrics = Metrics(
            metrics=[
                Metric("metric_1", 1.0),
                Metric("metric_2", 2.5),
                Metric("metric_3", 3),
            ]
        )

        # Passing a dictionary (pydantic model) of metrics
        class MyMetrics(BaseModel):
            metric1: float
            metric2: int

        my_metrics = MyMetrics(
            metric1=1.0,
            metric2=2,
        )

        metrics = Metrics(my_metrics.model_dump())
        ```
    """

entity_type property

entity_type: EntityType

Return the entity type

metrics property

metrics: List[Metric]

Return the list of metrics

MockConfig

MockConfig(**kwargs)

Parameters:

Name Type Description Default
**kwargs

Arbitrary keyword arguments to set as attributes.

{}
Source code in python/scouter/stubs.pyi
def __init__(self, **kwargs) -> None:
    """Mock configuration for the ScouterQueue

    Args:
        **kwargs: Arbitrary keyword arguments to set as attributes.
    """

Modality

Represents different modalities for content generation.

ModelArmorConfig

ModelArmorConfig(
    prompt_template_name: Optional[str],
    response_template_name: Optional[str],
)

Parameters:

Name Type Description Default
prompt_template_name Optional[str]

The name of the prompt template to use.

required
response_template_name Optional[str]

The name of the response template to use.

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    prompt_template_name: Optional[str],
    response_template_name: Optional[str],
) -> None:
    """
    Args:
        prompt_template_name (Optional[str]):
            The name of the prompt template to use.
        response_template_name (Optional[str]):
            The name of the response template to use.
    """

ModelSettings

ModelSettings(
    settings: OpenAIChatSettings | GeminiSettings,
)

Parameters:

Name Type Description Default
settings OpenAIChatSettings | GeminiSettings

The settings to use for the model. Currently supports OpenAI and Gemini settings.

required
Source code in python/scouter/stubs.pyi
def __init__(self, settings: OpenAIChatSettings | GeminiSettings) -> None:
    """ModelSettings for configuring the model.

    Args:
        settings (OpenAIChatSettings | GeminiSettings):
            The settings to use for the model. Currently supports OpenAI and Gemini settings.
    """

settings property

settings: OpenAIChatSettings | GeminiSettings

The settings to use for the model.

model_dump_json

model_dump_json() -> str

The JSON representation of the model settings.

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """The JSON representation of the model settings."""

NumericStats

distinct property

distinct: Distinct

Distinct value counts

histogram property

histogram: Histogram

Value histograms

max property

max: float

Return the max.

mean property

mean: float

Return the mean.

min property

min: float

Return the min.

quantiles property

quantiles: Quantiles

Value quantiles

stddev property

stddev: float

Return the stddev.

ObservabilityMetrics

error_count property

error_count: int

Error count

name property

name: str

Return the name

request_count property

request_count: int

Request count

route_metrics property

route_metrics: List[RouteMetrics]

Route metrics object

space property

space: str

Return the space

version property

version: str

Return the version

model_dump_json

model_dump_json() -> str

Return the json representation of the observability metrics

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return the json representation of the observability metrics"""

Observer

Observer(space: str, name: str, version: str)

Parameters:

Name Type Description Default
space str

Model space

required
name str

Model name

required
version str

Model version

required
Source code in python/scouter/stubs.pyi
def __init__(self, space: str, name: str, version: str) -> None:
    """Initializes an api metric observer

    Args:
        space:
            Model space
        name:
            Model name
        version:
            Model version
    """

collect_metrics

collect_metrics() -> Optional[ServerRecords]

Collect metrics from observer

Source code in python/scouter/stubs.pyi
def collect_metrics(self) -> Optional[ServerRecords]:
    """Collect metrics from observer"""

increment

increment(
    route: str, latency: float, status_code: int
) -> None

Increment the feature value

Parameters:

Name Type Description Default
route str

Route name

required
latency float

Latency of request

required
status_code int

Status code of request

required
Source code in python/scouter/stubs.pyi
def increment(self, route: str, latency: float, status_code: int) -> None:
    """Increment the feature value

    Args:
        route:
            Route name
        latency:
            Latency of request
        status_code:
            Status code of request
    """

reset_metrics

reset_metrics() -> None

Reset the observer metrics

Source code in python/scouter/stubs.pyi
def reset_metrics(self) -> None:
    """Reset the observer metrics"""

OpenAIChatSettings

OpenAIChatSettings(
    *,
    max_completion_tokens: Optional[int] = None,
    temperature: Optional[float] = None,
    top_p: Optional[float] = None,
    top_k: Optional[int] = None,
    frequency_penalty: Optional[float] = None,
    timeout: Optional[float] = None,
    parallel_tool_calls: Optional[bool] = None,
    seed: Optional[int] = None,
    logit_bias: Optional[Dict[str, int]] = None,
    stop_sequences: Optional[List[str]] = None,
    logprobs: Optional[bool] = None,
    audio: Optional[AudioParam] = None,
    metadata: Optional[Dict[str, str]] = None,
    modalities: Optional[List[str]] = None,
    n: Optional[int] = None,
    prediction: Optional[Prediction] = None,
    presence_penalty: Optional[float] = None,
    prompt_cache_key: Optional[str] = None,
    reasoning_effort: Optional[str] = None,
    safety_identifier: Optional[str] = None,
    service_tier: Optional[str] = None,
    store: Optional[bool] = None,
    stream: Optional[bool] = None,
    stream_options: Optional[StreamOptions] = None,
    tool_choice: Optional[ToolChoice] = None,
    tools: Optional[List[Tool]] = None,
    top_logprobs: Optional[int] = None,
    verbosity: Optional[str] = None,
    extra_body: Optional[Any] = None
)

OpenAI chat completion settings configuration.

This class provides configuration options for OpenAI chat completions, including model parameters, tool usage, and request options.

Examples:

>>> settings = OpenAIChatSettings(
...     temperature=0.7,
...     max_completion_tokens=1000,
...     stream=True
... )
>>> settings.temperature = 0.5

Parameters:

Name Type Description Default
max_completion_tokens Optional[int]

Maximum number of tokens to generate

None
temperature Optional[float]

Sampling temperature (0.0 to 2.0)

None
top_p Optional[float]

Nucleus sampling parameter

None
top_k Optional[int]

Top-k sampling parameter

None
frequency_penalty Optional[float]

Frequency penalty (-2.0 to 2.0)

None
timeout Optional[float]

Request timeout in seconds

None
parallel_tool_calls Optional[bool]

Whether to enable parallel tool calls

None
seed Optional[int]

Random seed for deterministic outputs

None
logit_bias Optional[Dict[str, int]]

Token bias modifications

None
stop_sequences Optional[List[str]]

Sequences where generation should stop

None
logprobs Optional[bool]

Whether to return log probabilities

None
audio Optional[AudioParam]

Audio generation parameters

None
metadata Optional[Dict[str, str]]

Additional metadata for the request

None
modalities Optional[List[str]]

List of modalities to use

None
n Optional[int]

Number of completions to generate

None
prediction Optional[Prediction]

Prediction configuration

None
presence_penalty Optional[float]

Presence penalty (-2.0 to 2.0)

None
prompt_cache_key Optional[str]

Key for prompt caching

None
reasoning_effort Optional[str]

Reasoning effort level

None
safety_identifier Optional[str]

Safety configuration identifier

None
service_tier Optional[str]

Service tier to use

None
store Optional[bool]

Whether to store the conversation

None
stream Optional[bool]

Whether to stream the response

None
stream_options Optional[StreamOptions]

Streaming configuration options

None
tool_choice Optional[ToolChoice]

Tool choice configuration

None
tools Optional[List[Tool]]

Available tools for the model

None
top_logprobs Optional[int]

Number of top log probabilities to return

None
verbosity Optional[str]

Verbosity level for the response

None
extra_body Optional[Any]

Additional request body parameters

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    *,
    max_completion_tokens: Optional[int] = None,
    temperature: Optional[float] = None,
    top_p: Optional[float] = None,
    top_k: Optional[int] = None,
    frequency_penalty: Optional[float] = None,
    timeout: Optional[float] = None,
    parallel_tool_calls: Optional[bool] = None,
    seed: Optional[int] = None,
    logit_bias: Optional[Dict[str, int]] = None,
    stop_sequences: Optional[List[str]] = None,
    logprobs: Optional[bool] = None,
    audio: Optional[AudioParam] = None,
    metadata: Optional[Dict[str, str]] = None,
    modalities: Optional[List[str]] = None,
    n: Optional[int] = None,
    prediction: Optional[Prediction] = None,
    presence_penalty: Optional[float] = None,
    prompt_cache_key: Optional[str] = None,
    reasoning_effort: Optional[str] = None,
    safety_identifier: Optional[str] = None,
    service_tier: Optional[str] = None,
    store: Optional[bool] = None,
    stream: Optional[bool] = None,
    stream_options: Optional[StreamOptions] = None,
    tool_choice: Optional[ToolChoice] = None,
    tools: Optional[List[Tool]] = None,
    top_logprobs: Optional[int] = None,
    verbosity: Optional[str] = None,
    extra_body: Optional[Any] = None,
) -> None:
    """Initialize OpenAI chat settings.

    Args:
        max_completion_tokens (Optional[int]):
            Maximum number of tokens to generate
        temperature (Optional[float]):
            Sampling temperature (0.0 to 2.0)
        top_p (Optional[float]):
            Nucleus sampling parameter
        top_k (Optional[int]):
            Top-k sampling parameter
        frequency_penalty (Optional[float]):
            Frequency penalty (-2.0 to 2.0)
        timeout (Optional[float]):
            Request timeout in seconds
        parallel_tool_calls (Optional[bool]):
            Whether to enable parallel tool calls
        seed (Optional[int]):
            Random seed for deterministic outputs
        logit_bias (Optional[Dict[str, int]]):
            Token bias modifications
        stop_sequences (Optional[List[str]]):
            Sequences where generation should stop
        logprobs (Optional[bool]):
            Whether to return log probabilities
        audio (Optional[AudioParam]):
            Audio generation parameters
        metadata (Optional[Dict[str, str]]):
            Additional metadata for the request
        modalities (Optional[List[str]]):
            List of modalities to use
        n (Optional[int]):
            Number of completions to generate
        prediction (Optional[Prediction]):
            Prediction configuration
        presence_penalty (Optional[float]):
            Presence penalty (-2.0 to 2.0)
        prompt_cache_key (Optional[str]):
            Key for prompt caching
        reasoning_effort (Optional[str]):
            Reasoning effort level
        safety_identifier (Optional[str]):
            Safety configuration identifier
        service_tier (Optional[str]):
            Service tier to use
        store (Optional[bool]):
            Whether to store the conversation
        stream (Optional[bool]):
            Whether to stream the response
        stream_options (Optional[StreamOptions]):
            Streaming configuration options
        tool_choice (Optional[ToolChoice]):
            Tool choice configuration
        tools (Optional[List[Tool]]):
            Available tools for the model
        top_logprobs (Optional[int]):
            Number of top log probabilities to return
        verbosity (Optional[str]):
            Verbosity level for the response
        extra_body (Optional[Any]):
            Additional request body parameters
    """

OpenAIEmbeddingConfig

OpenAIEmbeddingConfig(
    model: str,
    dimensions: Optional[int] = None,
    encoding_format: Optional[str] = None,
    user: Optional[str] = None,
)

OpenAI embedding configuration settings.

Parameters:

Name Type Description Default
model str

The embedding model to use.

required
dimensions Optional[int]

The output dimensionality of the embeddings.

None
encoding_format Optional[str]

The encoding format to use for the embeddings. Can be either "float" or "base64".

None
user Optional[str]

The user ID for the embedding request.

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    model: str,
    dimensions: Optional[int] = None,
    encoding_format: Optional[str] = None,
    user: Optional[str] = None,
) -> None:
    """Initialize OpenAI embedding configuration.

    Args:
        model (str):
            The embedding model to use.
        dimensions (Optional[int]):
            The output dimensionality of the embeddings.
        encoding_format (Optional[str]):
            The encoding format to use for the embeddings.
            Can be either "float" or "base64".
        user (Optional[str]):
            The user ID for the embedding request.
    """

OpsGenieDispatchConfig

OpsGenieDispatchConfig(team: str)

Parameters:

Name Type Description Default
team str

OpsGenie team to be notified in the event of drift

required
Source code in python/scouter/stubs.pyi
def __init__(self, team: str):
    """Initialize alert config

    Args:
        team:
            OpsGenie team to be notified in the event of drift
    """

team property writable

team: str

Return the OpsGenie team name

OtelHttpConfig

OtelHttpConfig(
    headers: Optional[dict[str, str]] = None,
    compression: Optional[CompressionType] = None,
)

Configuration for HTTP span exporting.

Parameters:

Name Type Description Default
headers Optional[dict[str, str]]

Optional HTTP headers to include in requests.

None
compression Optional[CompressionType]

Optional compression type for HTTP requests.

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    headers: Optional[dict[str, str]] = None,
    compression: Optional[CompressionType] = None,
) -> None:
    """Initialize the HttpConfig.

    Args:
        headers (Optional[dict[str, str]]):
            Optional HTTP headers to include in requests.
        compression (Optional[CompressionType]):
            Optional compression type for HTTP requests.
    """

compression property

compression: Optional[CompressionType]

Get the compression type.

headers property

headers: Optional[dict[str, str]]

Get the HTTP headers.

OtelProtocol

Enumeration of protocols for HTTP exporting.

PrebuiltVoiceConfig

PrebuiltVoiceConfig(voice_name: str)

Configuration for prebuilt voice models.

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    voice_name: str,
) -> None: ...

PredictRequest

PredictRequest(
    instances: List[dict], parameters: Optional[dict] = None
)

Parameters:

Name Type Description Default
instances List[dict]

A list of instances to be sent in the request.

required
parameters Optional[dict]

Optional parameters for the request.

None
Source code in python/scouter/stubs.pyi
def __init__(self, instances: List[dict], parameters: Optional[dict] = None) -> None:
    """Request to pass to the Vertex Predict API when creating a request

    Args:
        instances (List[dict]):
            A list of instances to be sent in the request.
        parameters (Optional[dict]):
            Optional parameters for the request.
    """

ProfileStatusRequest

ProfileStatusRequest(
    name: str,
    space: str,
    version: str,
    drift_type: DriftType,
    active: bool,
)

Parameters:

Name Type Description Default
name str

Model name

required
space str

Model space

required
version str

Model version

required
drift_type DriftType

Profile drift type. (A space/name/version can be associated with more than one drift type.)

required
active bool

Whether to set the profile as active or inactive

required
Source code in python/scouter/stubs.pyi
def __init__(self, name: str, space: str, version: str, drift_type: DriftType, active: bool) -> None:
    """Initialize profile status request

    Args:
        name:
            Model name
        space:
            Model space
        version:
            Model version
        drift_type:
            Profile drift type. (A space/name/version can be associated with more than one drift type.)
        active:
            Whether to set the profile as active or inactive
    """

Prompt

Prompt(
    message: (
        str
        | Sequence[
            str
            | ImageUrl
            | AudioUrl
            | BinaryContent
            | DocumentUrl
        ]
        | Message
        | List[Message]
        | List[Dict[str, Any]]
    ),
    model: str,
    provider: Provider | str,
    system_instruction: Optional[str | List[str]] = None,
    model_settings: Optional[
        ModelSettings | OpenAIChatSettings | GeminiSettings
    ] = None,
    response_format: Optional[Any] = None,
)

Parameters:

Name Type Description Default
message str | Sequence[str | ImageUrl | AudioUrl | BinaryContent | DocumentUrl] | Message | List[Message] | List[Dict[str, Any]]

The prompt to use.

required
model str

The model to use for the prompt

required
provider Provider | str

The provider to use for the prompt.

required
system_instruction Optional[str | List[str]]

The system prompt to use in the prompt.

None
model_settings Optional[ModelSettings | OpenAIChatSettings | GeminiSettings]

The model settings to use for the prompt. Defaults to None, meaning no model settings are applied.

None
response_format Optional[BaseModel | Score]

The response format to use for the prompt. This is used for Structured Outputs (https://platform.openai.com/docs/guides/structured-outputs?api-mode=chat). Currently, response_format only supports Pydantic BaseModel classes and the PotatoHead Score class. The provided response_format will be parsed into a JSON schema.

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    message: (
        str
        | Sequence[str | ImageUrl | AudioUrl | BinaryContent | DocumentUrl]
        | Message
        | List[Message]
        | List[Dict[str, Any]]
    ),
    model: str,
    provider: Provider | str,
    system_instruction: Optional[str | List[str]] = None,
    model_settings: Optional[ModelSettings | OpenAIChatSettings | GeminiSettings] = None,
    response_format: Optional[Any] = None,
) -> None:
    """Prompt for interacting with an LLM API.

    Args:
        message (str | Sequence[str | ImageUrl | AudioUrl | BinaryContent | DocumentUrl] | Message | List[Message] | List[Dict[str, Any]]):
            The prompt to use.
        model (str):
            The model to use for the prompt
        provider (Provider | str):
            The provider to use for the prompt.
        system_instruction (Optional[str | List[str]]):
            The system prompt to use in the prompt.
        model_settings (Optional[ModelSettings | OpenAIChatSettings | GeminiSettings]):
            The model settings to use for the prompt.
            Defaults to None, meaning no model settings are applied.
        response_format (Optional[BaseModel | Score]):
            The response format to use for the prompt. This is used for Structured Outputs
            (https://platform.openai.com/docs/guides/structured-outputs?api-mode=chat).
            Currently, response_format only supports Pydantic BaseModel classes and the PotatoHead Score class.
            The provided response_format will be parsed into a JSON schema.

    """

message property

message: List[Message]

The user message to use in the prompt.

model property

model: str

The model to use for the prompt.

model_identifier property

model_identifier: Any

Concatenation of provider and model, used for identifying the model in the prompt. This is commonly used with pydantic_ai to identify the model to use for the agent.

Example
    prompt = Prompt(
        model="gpt-4o",
        message="My prompt variable is ${variable}",
        system_instruction="system_instruction",
        provider="openai",
    )
    agent = Agent(
        prompt.model_identifier, # "openai:gpt-4o"
        system_instructions=prompt.system_instruction[0].unwrap(),
    )

model_settings property

model_settings: ModelSettings

The model settings to use for the prompt.

provider property

provider: str

The provider to use for the prompt.

response_json_schema property

response_json_schema: Optional[str]

The JSON schema for the response if provided.

system_instruction property

system_instruction: List[Message]

The system message to use in the prompt.

bind

bind(
    name: Optional[str] = None,
    value: Optional[str | int | float | bool | list] = None,
    **kwargs: Any
) -> Prompt

Bind context to a specific variable in the prompt. This is an immutable operation, meaning it returns a new Prompt object with the context bound. It iterates over all user messages.

Parameters:

Name Type Description Default
name str

The name of the variable to bind.

None
value str | int | float | bool | list

The value to bind the variable to. Must be a JSON serializable type.

None
**kwargs Any

Additional keyword arguments to bind to the prompt. This can be used to bind multiple variables at once.

{}

Returns:

Name Type Description
Prompt Prompt

The prompt with the context bound.

Source code in python/scouter/stubs.pyi
def bind(
    self,
    name: Optional[str] = None,
    value: Optional[str | int | float | bool | list] = None,
    **kwargs: Any,
) -> "Prompt":
    """Bind context to a specific variable in the prompt. This is an immutable operation meaning that it
    will return a new Prompt object with the context bound. This will iterate over all user messages.

    Args:
        name (str):
            The name of the variable to bind.
        value (str | int | float | bool | list):
            The value to bind the variable to. Must be a JSON serializable type.
        **kwargs (Any):
            Additional keyword arguments to bind to the prompt. This can be used to bind multiple variables at once.

    Returns:
        Prompt:
            The prompt with the context bound.
    """

bind_mut

bind_mut(
    name: Optional[str] = None,
    value: Optional[str | int | float | bool | list] = None,
    **kwargs: Any
) -> Prompt

Bind context to a specific variable in the prompt. This is a mutable operation, meaning it modifies the current Prompt object in place. It iterates over all user messages.

Parameters:

Name Type Description Default
name str

The name of the variable to bind.

None
value str | int | float | bool | list

The value to bind the variable to. Must be a JSON serializable type.

None
**kwargs Any

Additional keyword arguments to bind to the prompt. This can be used to bind multiple variables at once.

{}

Returns:

Name Type Description
Prompt Prompt

The prompt with the context bound.

Source code in python/scouter/stubs.pyi
def bind_mut(
    self,
    name: Optional[str] = None,
    value: Optional[str | int | float | bool | list] = None,
    **kwargs: Any,
) -> "Prompt":
    """Bind context to a specific variable in the prompt. This is a mutable operation meaning that it
    will modify the current Prompt object. This will iterate over all user messages.

    Args:
        name (str):
            The name of the variable to bind.
        value (str | int | float | bool | list):
            The value to bind the variable to. Must be a JSON serializable type.
        **kwargs (Any):
            Additional keyword arguments to bind to the prompt. This can be used to bind multiple variables at once.

    Returns:
        Prompt:
            The prompt with the context bound.
    """

from_path staticmethod

from_path(path: Path) -> Prompt

Load a prompt from a file.

Parameters:

Name Type Description Default
path Path

The path to the prompt file.

required

Returns:

Name Type Description
Prompt Prompt

The loaded prompt.

Source code in python/scouter/stubs.pyi
@staticmethod
def from_path(path: Path) -> "Prompt":
    """Load a prompt from a file.

    Args:
        path (Path):
            The path to the prompt file.

    Returns:
        Prompt:
            The loaded prompt.
    """

model_dump_json

model_dump_json() -> str

Dump the model to a JSON string.

Returns:

Name Type Description
str str

The JSON string.

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Dump the model to a JSON string.

    Returns:
        str:
            The JSON string.
    """

model_validate_json staticmethod

model_validate_json(json_string: str) -> Prompt

Validate the model JSON.

Parameters:

Name Type Description Default
json_string str

The JSON string to validate.

required

Returns:

Name Type Description
Prompt Prompt

The prompt object.

Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate_json(json_string: str) -> "Prompt":
    """Validate the model JSON.

    Args:
        json_string (str):
            The JSON string to validate.

    Returns:
        Prompt:
            The prompt object.
    """

save_prompt

save_prompt(path: Optional[Path] = None) -> None

Save the prompt to a file.

Parameters:

Name Type Description Default
path Optional[Path]

The path to save the prompt to. If None, the prompt will be saved to the current working directory.

None
Source code in python/scouter/stubs.pyi
def save_prompt(self, path: Optional[Path] = None) -> None:
    """Save the prompt to a file.

    Args:
        path (Optional[Path]):
            The path to save the prompt to. If None, the prompt will be saved to
            the current working directory.
    """

PromptTokenDetails

Details about the prompt tokens used in a request.

audio_tokens property

audio_tokens: int

The number of audio tokens used in the request.

cached_tokens property

cached_tokens: int

The number of cached tokens used in the request.

PsiAlertConfig

PsiAlertConfig(
    dispatch_config: Optional[
        SlackDispatchConfig | OpsGenieDispatchConfig
    ] = None,
    schedule: Optional[str | CommonCrons] = None,
    features_to_monitor: List[str] = [],
    threshold: Optional[
        PsiThresholdType
    ] = PsiChiSquareThreshold(),
)

Parameters:

Name Type Description Default
dispatch_config Optional[SlackDispatchConfig | OpsGenieDispatchConfig]

Alert dispatch configuration to use. Defaults to an internal "Console" type where the alerts will be logged to the console

None
schedule Optional[str | CommonCrons]

Schedule to run monitor. Defaults to daily at midnight

None
features_to_monitor List[str]

List of features to monitor. Defaults to empty list, which means all features

[]
threshold Optional[PsiThresholdType]

Configuration that helps determine how to calculate PSI critical values. Defaults to PsiChiSquareThreshold, which uses the chi-square distribution.

PsiChiSquareThreshold()
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    dispatch_config: Optional[SlackDispatchConfig | OpsGenieDispatchConfig] = None,
    schedule: Optional[str | CommonCrons] = None,
    features_to_monitor: List[str] = [],
    threshold: Optional[PsiThresholdType] = PsiChiSquareThreshold(),
):
    """Initialize alert config

    Args:
        dispatch_config:
            Alert dispatch configuration to use. Defaults to an internal "Console" type where
            the alerts will be logged to the console
        schedule:
            Schedule to run monitor. Defaults to daily at midnight
        features_to_monitor:
            List of features to monitor. Defaults to empty list, which means all features
        threshold:
            Configuration that helps determine how to calculate PSI critical values.
            Defaults to PsiChiSquareThreshold, which uses the chi-square distribution.
    """

dispatch_config property

dispatch_config: DispatchConfigType

Return the dispatch config

dispatch_type property

dispatch_type: AlertDispatchType

Return the alert dispatch type

features_to_monitor property writable

features_to_monitor: List[str]

Return the features to monitor

schedule property writable

schedule: str

Return the schedule

threshold property

threshold: PsiThresholdType

Return the threshold config

PsiChiSquareThreshold

PsiChiSquareThreshold(alpha: float = 0.05)

Uses the asymptotic chi-square distribution of PSI.

The chi-square method is generally more statistically rigorous than normal approximation, especially for smaller sample sizes.

Parameters:

Name Type Description Default
alpha float

Significance level (0.0 to 1.0, exclusive). Common values: 0.05 (95% confidence), 0.01 (99% confidence)

0.05

Raises:

Type Description
ValueError

If alpha not in range (0.0, 1.0)

Source code in python/scouter/stubs.pyi
def __init__(self, alpha: float = 0.05):
    """Initialize PSI threshold using chi-square approximation.

    Uses the asymptotic chi-square distribution of PSI.

    The chi-square method is generally more statistically rigorous than
    normal approximation, especially for smaller sample sizes.

    Args:
        alpha: Significance level (0.0 to 1.0, exclusive). Common values:
               0.05 (95% confidence), 0.01 (99% confidence)

    Raises:
        ValueError: If alpha not in range (0.0, 1.0)
    """

alpha property writable

alpha: float

Statistical significance level for drift detection.

PsiDriftConfig

PsiDriftConfig(
    space: str = "__missing__",
    name: str = "__missing__",
    version: str = "0.1.0",
    alert_config: PsiAlertConfig = PsiAlertConfig(),
    config_path: Optional[Path] = None,
    categorical_features: Optional[list[str]] = None,
    binning_strategy: (
        QuantileBinning | EqualWidthBinning
    ) = QuantileBinning(num_bins=10),
)

Parameters:

Name Type Description Default
space str

Model space

'__missing__'
name str

Model name

'__missing__'
version str

Model version. Defaults to 0.1.0

'0.1.0'
alert_config PsiAlertConfig

Alert configuration

PsiAlertConfig()
config_path Optional[Path]

Optional path to load config from.

None
categorical_features Optional[list[str]]

List of features to treat as categorical for PSI calculation.

None
binning_strategy QuantileBinning | EqualWidthBinning

Strategy for binning continuous features during PSI calculation. Supports QuantileBinning (R-7 method, Hyndman & Fan Type 7) and EqualWidthBinning, which divides the range of values into fixed-width bins. Default is QuantileBinning with 10 bins. Methods such as Doane's rule can be specified with EqualWidthBinning.

QuantileBinning(num_bins=10)
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    space: str = "__missing__",
    name: str = "__missing__",
    version: str = "0.1.0",
    alert_config: PsiAlertConfig = PsiAlertConfig(),
    config_path: Optional[Path] = None,
    categorical_features: Optional[list[str]] = None,
    binning_strategy: QuantileBinning | EqualWidthBinning = QuantileBinning(num_bins=10),
):
    """Initialize monitor config

    Args:
        space:
            Model space
        name:
            Model name
        version:
            Model version. Defaults to 0.1.0
        alert_config:
            Alert configuration
        config_path:
            Optional path to load config from.
        categorical_features:
            List of features to treat as categorical for PSI calculation.
        binning_strategy:
            Strategy for binning continuous features during PSI calculation.
            Supports:
              - QuantileBinning (R-7 method, Hyndman & Fan Type 7).
              - EqualWidthBinning which divides the range of values into fixed-width bins.
            Default is QuantileBinning with 10 bins. You can also specify methods like Doane's rule with EqualWidthBinning.
    """

alert_config property writable

alert_config: PsiAlertConfig

Alert configuration

binning_strategy property writable

binning_strategy: QuantileBinning | EqualWidthBinning

Binning strategy for continuous features

categorical_features property writable

categorical_features: list[str]

List of categorical features

drift_type property

drift_type: DriftType

Drift type

feature_map property

feature_map: Optional[FeatureMap]

Feature map

name property writable

name: str

Model name

space property writable

space: str

Model space

version property writable

version: str

Model version

load_from_json_file staticmethod

load_from_json_file(path: Path) -> PsiDriftConfig

Load config from json file

Parameters:

Name Type Description Default
path Path

Path to json file to load config from.

required
Source code in python/scouter/stubs.pyi
@staticmethod
def load_from_json_file(path: Path) -> "PsiDriftConfig":
    """Load config from json file

    Args:
        path:
            Path to json file to load config from.
    """

model_dump_json

model_dump_json() -> str

Return the json representation of the config.

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return the json representation of the config."""

update_config_args

update_config_args(
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    alert_config: Optional[PsiAlertConfig] = None,
    categorical_features: Optional[list[str]] = None,
    binning_strategy: Optional[
        QuantileBinning | EqualWidthBinning
    ] = None,
) -> None

Inplace operation that updates config args

Parameters:

Name Type Description Default
space Optional[str]

Model space

None
name Optional[str]

Model name

None
version Optional[str]

Model version

None
alert_config Optional[PsiAlertConfig]

Alert configuration

None
categorical_features Optional[list[str]]

Categorical features

None
binning_strategy Optional[QuantileBinning | EqualWidthBinning]

Binning strategy

None
Source code in python/scouter/stubs.pyi
def update_config_args(
    self,
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    alert_config: Optional[PsiAlertConfig] = None,
    categorical_features: Optional[list[str]] = None,
    binning_strategy: Optional[QuantileBinning | EqualWidthBinning] = None,
) -> None:
    """Inplace operation that updates config args

    Args:
        space:
            Model space
        name:
            Model name
        version:
            Model version
        alert_config:
            Alert configuration
        categorical_features:
            Categorical features
        binning_strategy:
            Binning strategy
    """

PsiDriftMap

Drift map of features

features property

features: Dict[str, float]

Returns dictionary of features and their reported drift, if any

name property

name: str

Name to associate with drift map

space property

space: str

Space to associate with drift map

version property

version: str

Version to associate with drift map

model_dump_json

model_dump_json() -> str

Return json representation of data drift

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return json representation of data drift"""

model_validate_json staticmethod

model_validate_json(json_string: str) -> PsiDriftMap

Load drift map from a JSON string.

Parameters:

Name Type Description Default
json_string str

JSON string representation of the drift map

required
Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate_json(json_string: str) -> "PsiDriftMap":
    """Load drift map from json file.

    Args:
        json_string:
            JSON string representation of the drift map
    """

save_to_json

save_to_json(path: Optional[Path] = None) -> Path

Save drift map to json file

Parameters:

Name Type Description Default
path Optional[Path]

Optional path to save the drift map. If None, outputs to psi_drift_map.json

None

Returns:

Type Description
Path

Path to the saved json file

Source code in python/scouter/stubs.pyi
def save_to_json(self, path: Optional[Path] = None) -> Path:
    """Save drift map to json file

    Args:
        path:
            Optional path to save the drift map. If None, outputs to `psi_drift_map.json`

    Returns:
        Path to the saved json file

    """

PsiDriftProfile

config property

config: PsiDriftConfig

Return the monitor config.

features property

features: Dict[str, PsiFeatureDriftProfile]

Return the list of features.

scouter_version property

scouter_version: str

Return scouter version used to create DriftProfile

from_file staticmethod

from_file(path: Path) -> PsiDriftProfile

Load drift profile from file

Parameters:

Name Type Description Default
path Path

Path to the file

required
Source code in python/scouter/stubs.pyi
@staticmethod
def from_file(path: Path) -> "PsiDriftProfile":
    """Load drift profile from file

    Args:
        path: Path to the file
    """

model_dump

model_dump() -> Dict[str, Any]

Return dictionary representation of drift profile

Source code in python/scouter/stubs.pyi
def model_dump(self) -> Dict[str, Any]:
    """Return dictionary representation of drift profile"""

model_dump_json

model_dump_json() -> str

Return json representation of drift profile

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return json representation of drift profile"""

model_validate staticmethod

model_validate(data: Dict[str, Any]) -> PsiDriftProfile

Load drift profile from dictionary

Parameters:

Name Type Description Default
data Dict[str, Any]

DriftProfile dictionary

required
Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate(data: Dict[str, Any]) -> "PsiDriftProfile":
    """Load drift profile from dictionary

    Args:
        data:
            DriftProfile dictionary
    """

model_validate_json staticmethod

model_validate_json(json_string: str) -> PsiDriftProfile

Load drift profile from json

Parameters:

Name Type Description Default
json_string str

JSON string representation of the drift profile

required
Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate_json(json_string: str) -> "PsiDriftProfile":
    """Load drift profile from json

    Args:
        json_string:
            JSON string representation of the drift profile

    """

save_to_json

save_to_json(path: Optional[Path] = None) -> Path

Save drift profile to json file

Parameters:

Name Type Description Default
path Optional[Path]

Optional path to save the drift profile. If None, outputs to psi_drift_profile.json

None

Returns:

Type Description
Path

Path to the saved json file

Source code in python/scouter/stubs.pyi
def save_to_json(self, path: Optional[Path] = None) -> Path:
    """Save drift profile to json file

    Args:
        path:
            Optional path to save the drift profile. If None, outputs to `psi_drift_profile.json`

    Returns:
        Path to the saved json file
    """

update_config_args

update_config_args(
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    alert_config: Optional[PsiAlertConfig] = None,
    categorical_features: Optional[list[str]] = None,
    binning_strategy: Optional[
        QuantileBinning | EqualWidthBinning
    ] = None,
) -> None

Inplace operation that updates config args

Parameters:

Name Type Description Default
name Optional[str]

Model name

None
space Optional[str]

Model space

None
version Optional[str]

Model version

None
alert_config Optional[PsiAlertConfig]

Alert configuration

None
categorical_features Optional[list[str]]

Categorical features

None
binning_strategy Optional[QuantileBinning | EqualWidthBinning]

Binning strategy

None
Source code in python/scouter/stubs.pyi
def update_config_args(
    self,
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    alert_config: Optional[PsiAlertConfig] = None,
    categorical_features: Optional[list[str]] = None,
    binning_strategy: Optional[QuantileBinning | EqualWidthBinning] = None,
) -> None:
    """Inplace operation that updates config args

    Args:
        name:
            Model name
        space:
            Model space
        version:
            Model version
        alert_config:
            Alert configuration
        categorical_features:
            Categorical features
        binning_strategy:
            Binning strategy
    """

PsiFeatureDriftProfile

bin_type property

bin_type: BinType

Return the bin type.

bins property

bins: List[Bin]

Return the bins

id property

id: str

Return the feature name

timestamp property

timestamp: str

Return the timestamp.

PsiFixedThreshold

PsiFixedThreshold(threshold: float = 0.25)

Uses a predetermined PSI threshold value, similar to traditional "rule of thumb" approaches (e.g., 0.10 for moderate drift, 0.25 for significant drift).

Parameters:

Name Type Description Default
threshold float

Fixed PSI threshold value (must be positive). Common industry values: 0.10, 0.25

0.25

Raises:

Type Description
ValueError

If threshold is not positive

Source code in python/scouter/stubs.pyi
def __init__(self, threshold: float = 0.25):
    """Initialize PSI threshold using a fixed value.

    Uses a predetermined PSI threshold value, similar to traditional
    "rule of thumb" approaches (e.g., 0.10 for moderate drift, 0.25
    for significant drift).

    Args:
        threshold: Fixed PSI threshold value (must be positive).
                  Common industry values: 0.10, 0.25

    Raises:
        ValueError: If threshold is not positive
    """

threshold property writable

threshold: float

Fixed PSI threshold value for drift detection.

PsiNormalThreshold

PsiNormalThreshold(alpha: float = 0.05)

Uses the asymptotic normal distribution of PSI to calculate critical values for population drift detection.

Parameters:

Name Type Description Default
alpha float

Significance level (0.0 to 1.0, exclusive). Common values: 0.05 (95% confidence), 0.01 (99% confidence)

0.05

Raises:

Type Description
ValueError

If alpha not in range (0.0, 1.0)

Source code in python/scouter/stubs.pyi
def __init__(self, alpha: float = 0.05):
    """Initialize PSI threshold using normal approximation.

    Uses the asymptotic normal distribution of PSI to calculate critical values
    for population drift detection.

    Args:
        alpha: Significance level (0.0 to 1.0, exclusive). Common values:
               0.05 (95% confidence), 0.01 (99% confidence)

    Raises:
        ValueError: If alpha not in range (0.0, 1.0)
    """

alpha property writable

alpha: float

Statistical significance level for drift detection.
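As context for both threshold classes above, PSI itself has a simple closed form over binned proportions. The sketch below is an illustrative stdlib-only implementation of the standard PSI formula plus a fixed-threshold check in the style of `PsiFixedThreshold`; it is not Scouter's internal implementation, and the bin proportions are invented for the example.

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index between two binned distributions.

    `expected` and `actual` are per-bin proportions that each sum to 1.
    PSI = sum_i (actual_i - expected_i) * ln(actual_i / expected_i)
    """
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

# Invented example: four equal-probability reference bins vs. a shifted sample.
expected = [0.25, 0.25, 0.25, 0.25]
actual = [0.30, 0.25, 0.25, 0.20]

value = psi(expected, actual)
# Fixed-threshold rule of thumb, as PsiFixedThreshold applies with its default:
drifted = value > 0.25
```

`PsiNormalThreshold` replaces the fixed cutoff with a critical value derived from PSI's asymptotic sampling distribution, so the threshold adapts to sample size instead of being hard-coded.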

PsiServerRecord

PsiServerRecord(
    space: str,
    name: str,
    version: str,
    feature: str,
    bin_id: int,
    bin_count: int,
)

Parameters:

Name Type Description Default
space str

Model space

required
name str

Model name

required
version str

Model version

required
feature str

Feature name

required
bin_id int

Bin ID

required
bin_count int

Bin count

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    space: str,
    name: str,
    version: str,
    feature: str,
    bin_id: int,
    bin_count: int,
):
    """Initialize spc drift server record

    Args:
        space:
            Model space
        name:
            Model name
        version:
            Model version
        feature:
            Feature name
        bin_id:
            Bin ID
        bin_count:
            Bin count
    """

bin_count property

bin_count: int

Return the bin count.

bin_id property

bin_id: int

Return the bin id.

created_at property

created_at: datetime

Return the created at timestamp.

feature property

feature: str

Return the feature.

name property

name: str

Return the name.

space property

space: str

Return the space.

version property

version: str

Return the version.

model_dump_json

model_dump_json() -> str

Return the json representation of the record.

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return the json representation of the record."""

to_dict

to_dict() -> Dict[str, str]

Return the dictionary representation of the record.

Source code in python/scouter/stubs.pyi
def to_dict(self) -> Dict[str, str]:
    """Return the dictionary representation of the record."""

PyTask

Python-specific task interface for Task objects and results

agent_id property

agent_id: str

The ID of the agent that will execute the task.

dependencies property

dependencies: List[str]

The dependencies of the task.

id property

id: str

The ID of the task.

prompt property

prompt: Prompt

The prompt to use for the task.

result property

result: Optional[AgentResponse]

The result of the task if it has been executed, otherwise None.

status property

status: TaskStatus

The status of the task.

QuantileBinning

QuantileBinning(num_bins: int = 10)

This strategy uses the R-7 quantile method (Hyndman & Fan Type 7) to compute bin edges. It is the default quantile method in R and provides continuous, median-unbiased estimates that are approximately unbiased for normal distributions.

The R-7 method defines quantiles using
  • m = 1 - p
  • j = floor(n * p + m)
  • h = n * p + m - j
  • Q(p) = (1 - h) * x[j] + h * x[j+1]
Reference

Hyndman, R. J. & Fan, Y. (1996). "Sample quantiles in statistical packages." The American Statistician, 50(4), pp. 361–365. PDF: https://www.amherst.edu/media/view/129116/original/Sample+Quantiles.pdf

Parameters:

Name Type Description Default
num_bins int

Number of bins to compute using the R-7 quantile method.

10
Source code in python/scouter/stubs.pyi
def __init__(self, num_bins: int = 10):
    """Initialize the quantile binning strategy.

    This strategy uses the R-7 quantile method (Hyndman & Fan Type 7) to
    compute bin edges. It is the default quantile method in R and provides
    continuous, median-unbiased estimates that are approximately unbiased
    for normal distributions.

    The R-7 method defines quantiles using:
        - m = 1 - p
        - j = floor(n * p + m)
        - h = n * p + m - j
        - Q(p) = (1 - h) * x[j] + h * x[j+1]

    Reference:
        Hyndman, R. J. & Fan, Y. (1996). "Sample quantiles in statistical packages."
        The American Statistician, 50(4), pp. 361–365.
        PDF: https://www.amherst.edu/media/view/129116/original/Sample+Quantiles.pdf

    Args:
        num_bins:
            Number of bins to compute using the R-7 quantile method.
    """

num_bins property writable

num_bins: int

The number of bins to create using the R-7 quantile method
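The R-7 formulas above can be transcribed directly. This stdlib-only sketch is illustrative rather than Scouter's implementation; the reference's 1-based indexing is converted to Python's 0-based indexing, with clamping at the ends so `p = 0` and `p = 1` return the minimum and maximum.

```python
import math

def r7_quantile(sorted_x, p):
    """R-7 (Hyndman & Fan Type 7) sample quantile of pre-sorted data.

    Transcribes the formulas above:
        m = 1 - p
        j = floor(n * p + m)      # 1-based index
        h = n * p + m - j
        Q(p) = (1 - h) * x[j] + h * x[j + 1]
    """
    n = len(sorted_x)
    m = 1.0 - p
    j = math.floor(n * p + m)
    h = n * p + m - j
    lo = sorted_x[min(max(j - 1, 0), n - 1)]  # x[j], converted to 0-based
    hi = sorted_x[min(j, n - 1)]              # x[j + 1], clamped at the end
    return (1.0 - h) * lo + h * hi

def quantile_bin_edges(data, num_bins=10):
    """Equal-probability bin edges at quantiles 0, 1/num_bins, ..., 1."""
    xs = sorted(data)
    return [r7_quantile(xs, k / num_bins) for k in range(num_bins + 1)]
```

Python's `statistics.quantiles(..., method="inclusive")` uses the same Type 7 definition, so it serves as a convenient cross-check for the interior edges.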

Quantiles

q25 property

q25: float

25th quantile

q50 property

q50: float

50th quantile

q75 property

q75: float

75th quantile

q99 property

q99: float

99th quantile

Queue

Individual queue associated with a drift profile

identifier property

identifier: str

Return the identifier of the queue

insert

insert(entity: Union[Features, Metrics, LLMRecord]) -> None

Insert a record into the queue

Parameters:

Name Type Description Default
entity Union[Features, Metrics, LLMRecord]

Entity to insert into the queue. Can be an instance of Features, Metrics, or LLMRecord.

required
Example
features = Features(
    features=[
        Feature("feature_1", 1),
        Feature("feature_2", 2.0),
        Feature("feature_3", "value"),
    ]
)
queue.insert(features)
Source code in python/scouter/stubs.pyi
def insert(self, entity: Union[Features, Metrics, LLMRecord]) -> None:
    """Insert a record into the queue

    Args:
        entity:
            Entity to insert into the queue.
            Can be an instance of Features, Metrics, or LLMRecord.

    Example:
        ```python
        features = Features(
            features=[
                Feature("feature_1", 1),
                Feature("feature_2", 2.0),
                Feature("feature_3", "value"),
            ]
        )
        queue.insert(features)
        ```
    """

QueueFeature

QueueFeature(name: str, value: Any)

Parameters:

Name Type Description Default
name str

Name of the feature

required
value Any

Value of the feature. Can be an int, float, or string.

required
Example
feature = Feature("feature_1", 1) # int feature
feature = Feature("feature_2", 2.0) # float feature
feature = Feature("feature_3", "value") # string feature
Source code in python/scouter/stubs.pyi
def __init__(self, name: str, value: Any) -> None:
    """Initialize feature. Will attempt to convert the value to it's corresponding feature type.
    Current support types are int, float, string.

    Args:
        name:
            Name of the feature
        value:
            Value of the feature. Can be an int, float, or string.

    Example:
        ```python
        feature = Feature("feature_1", 1) # int feature
        feature = Feature("feature_2", 2.0) # float feature
        feature = Feature("feature_3", "value") # string feature
        ```
    """

categorical staticmethod

categorical(name: str, value: str) -> QueueFeature

Create a categorical feature

Parameters:

Name Type Description Default
name str

Name of the feature

required
value str

Value of the feature

required
Source code in python/scouter/stubs.pyi
@staticmethod
def categorical(name: str, value: str) -> "QueueFeature":
    """Create a categorical feature

    Args:
        name:
            Name of the feature
        value:
            Value of the feature
    """

float staticmethod

float(name: str, value: float) -> QueueFeature

Create a float feature

Parameters:

Name Type Description Default
name str

Name of the feature

required
value float

Value of the feature

required
Source code in python/scouter/stubs.pyi
@staticmethod
def float(name: str, value: float) -> "QueueFeature":
    """Create a float feature

    Args:
        name:
            Name of the feature
        value:
            Value of the feature
    """

int staticmethod

int(name: str, value: int) -> QueueFeature

Create an integer feature

Parameters:

Name Type Description Default
name str

Name of the feature

required
value int

Value of the feature

required
Source code in python/scouter/stubs.pyi
@staticmethod
def int(name: str, value: int) -> "QueueFeature":
    """Create an integer feature

    Args:
        name:
            Name of the feature
        value:
            Value of the feature
    """

string staticmethod

string(name: str, value: str) -> QueueFeature

Create a string feature

Parameters:

Name Type Description Default
name str

Name of the feature

required
value str

Value of the feature

required
Source code in python/scouter/stubs.pyi
@staticmethod
def string(name: str, value: str) -> "QueueFeature":
    """Create a string feature

    Args:
        name:
            Name of the feature
        value:
            Value of the feature
    """

RabbitMQConfig

RabbitMQConfig(
    host: Optional[str] = None,
    port: Optional[int] = None,
    username: Optional[str] = None,
    password: Optional[str] = None,
    queue: Optional[str] = None,
    max_retries: int = 3,
)

Parameters:

Name Type Description Default
host Optional[str]

RabbitMQ host. If not provided, the value of the RABBITMQ_HOST environment variable is used.

None
port Optional[int]

RabbitMQ port. If not provided, the value of the RABBITMQ_PORT environment variable is used.

None
username Optional[str]

RabbitMQ username. If not provided, the value of the RABBITMQ_USERNAME environment variable is used.

None
password Optional[str]

RabbitMQ password. If not provided, the value of the RABBITMQ_PASSWORD environment variable is used.

None
queue Optional[str]

RabbitMQ queue to publish messages to. If not provided, the value of the RABBITMQ_QUEUE environment variable is used.

None
max_retries int

Maximum number of retries to attempt when publishing messages. Default is 3.

3
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    host: Optional[str] = None,
    port: Optional[int] = None,
    username: Optional[str] = None,
    password: Optional[str] = None,
    queue: Optional[str] = None,
    max_retries: int = 3,
) -> None:
    """RabbitMQ configuration to use with the RabbitMQProducer.

    Args:
        host:
            RabbitMQ host.
            If not provided, the value of the RABBITMQ_HOST environment variable is used.

        port:
            RabbitMQ port.
            If not provided, the value of the RABBITMQ_PORT environment variable is used.

        username:
            RabbitMQ username.
            If not provided, the value of the RABBITMQ_USERNAME environment variable is used.

        password:
            RabbitMQ password.
            If not provided, the value of the RABBITMQ_PASSWORD environment variable is used.

        queue:
            RabbitMQ queue to publish messages to.
            If not provided, the value of the RABBITMQ_QUEUE environment variable is used.

        max_retries:
            Maximum number of retries to attempt when publishing messages.
            Default is 3.
    """

RedisConfig

RedisConfig(
    address: Optional[str] = None,
    chanel: Optional[str] = None,
)

Parameters:

Name Type Description Default
address str

Redis address. If not provided, the value of the REDIS_ADDR environment variable is used and defaults to "redis://localhost:6379".

None
channel str

Redis channel to publish messages to.

If not provided, the value of the REDIS_CHANNEL environment variable is used and defaults to "scouter_monitoring".

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    address: Optional[str] = None,
    chanel: Optional[str] = None,
) -> None:
    """Redis configuration to use with a Redis producer

    Args:
        address (str):
            Redis address.
            If not provided, the value of the REDIS_ADDR environment variable is used and defaults to
            "redis://localhost:6379".

        channel (str):
            Redis channel to publish messages to.

            If not provided, the value of the REDIS_CHANNEL environment variable is used and defaults to "scouter_monitoring".
    """

ResponseLogProbs

logprob property

logprob: float

The log probability of the token.

token property

token: str

The token for which the log probabilities are calculated.

RetrievalConfig

RetrievalConfig(lat_lng: LatLng, language_code: str)

Parameters:

Name Type Description Default
lat_lng LatLng

The latitude and longitude configuration.

required
language_code str

The language code for the retrieval.

required
Source code in python/scouter/stubs.pyi
def __init__(self, lat_lng: LatLng, language_code: str) -> None:
    """Initialize RetrievalConfig with latitude/longitude and language code.

    Args:
        lat_lng (LatLng):
            The latitude and longitude configuration.
        language_code (str):
            The language code for the retrieval.
    """

Rice

Rice()

For more information, please see: https://en.wikipedia.org/wiki/Histogram

Source code in python/scouter/stubs.pyi
def __init__(self):
    """Use the Rice equal-width method.

    For more information, please see: https://en.wikipedia.org/wiki/Histogram
    """

RouteMetrics

error_count property

error_count: int

Error count

error_latency property

error_latency: float

Error latency

metrics property

metrics: LatencyMetrics

Return the metrics

request_count property

request_count: int

Request count

route_name property

route_name: str

Return the route name

status_codes property

status_codes: Dict[int, int]

Dictionary of status codes and counts

RustyLogger

debug

debug(message: str, *args: Any) -> None

Log a debug message.

Parameters:

Name Type Description Default
message str

Message to log.

required
args Any

Additional arguments to format the message.

()
Source code in python/scouter/stubs.pyi
def debug(self, message: str, *args: Any) -> None:
    """Log a debug message.

    Args:
        message:
            Message to log.

        args:
            Additional arguments to format the message.
    """

error

error(message: str, *args: Any) -> None

Log an error message.

Parameters:

Name Type Description Default
message str

Message to log.

required
args Any

Additional arguments to format the message.

()
Source code in python/scouter/stubs.pyi
def error(self, message: str, *args: Any) -> None:
    """Log an error message.

    Args:
        message:
            Message to log.

        args:
            Additional arguments to format the message.
    """

get_logger staticmethod

get_logger(
    config: Optional[LoggingConfig] = None,
) -> RustyLogger

Get a logger with the provided name.

Parameters:

Name Type Description Default
config Optional[LoggingConfig]

Logging configuration options.

None
Source code in python/scouter/stubs.pyi
@staticmethod
def get_logger(config: Optional[LoggingConfig] = None) -> "RustyLogger":
    """Get a logger with the provided name.

    Args:
        config:
            Logging configuration options.
    """

info

info(message: str, *args: Any) -> None

Log an info message.

Parameters:

Name Type Description Default
message str

Message to log.

required
args Any

Additional arguments to format the message.

()
Source code in python/scouter/stubs.pyi
def info(self, message: str, *args: Any) -> None:
    """Log an info message.

    Args:
        message:
            Message to log.

        args:
            Additional arguments to format the message.
    """

setup_logging staticmethod

setup_logging(
    config: Optional[LoggingConfig] = None,
) -> None

Setup logging with the provided configuration.

Parameters:

Name Type Description Default
config Optional[LoggingConfig]

Logging configuration options.

None
Source code in python/scouter/stubs.pyi
@staticmethod
def setup_logging(config: Optional[LoggingConfig] = None) -> None:
    """Setup logging with the provided configuration.

    Args:
        config:
            Logging configuration options.
    """

trace

trace(message: str, *args: Any) -> None

Log a trace message.

Parameters:

Name Type Description Default
message str

Message to log.

required
args Any

Additional arguments to format the message.

()
Source code in python/scouter/stubs.pyi
def trace(self, message: str, *args: Any) -> None:
    """Log a trace message.

    Args:
        message:
            Message to log.

        args:
            Additional arguments to format the message.
    """

warn

warn(message: str, *args: Any) -> None

Log a warning message.

Parameters:

Name Type Description Default
message str

Message to log.

required
args Any

Additional arguments to format the message.

()
Source code in python/scouter/stubs.pyi
def warn(self, message: str, *args: Any) -> None:
    """Log a warning message.

    Args:
        message:
            Message to log.

        args:
            Additional arguments to format the message.
    """

SafetySetting

SafetySetting(
    category: HarmCategory,
    threshold: HarmBlockThreshold,
    method: Optional[HarmBlockMethod] = None,
)

Parameters:

Name Type Description Default
category HarmCategory

The category of harm to protect against.

required
threshold HarmBlockThreshold

The threshold for blocking content.

required
method Optional[HarmBlockMethod]

The method used for blocking (if any).

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    category: HarmCategory,
    threshold: HarmBlockThreshold,
    method: Optional[HarmBlockMethod] = None,
) -> None:
    """Initialize SafetySetting with required and optional parameters.

    Args:
        category (HarmCategory):
            The category of harm to protect against.
        threshold (HarmBlockThreshold):
            The threshold for blocking content.
        method (Optional[HarmBlockMethod]):
            The method used for blocking (if any).
    """

Score

A class representing a score with a score value and a reason. This is typically used as a response type for tasks/prompts that require scoring or evaluation of results.

Example:

    Prompt(
        model="openai:gpt-4o",
        message="What is the score of this response?",
        system_instruction="system_prompt",
        response_format=Score,
    )

reason property

reason: str

The reason for the score.

score property

score: int

The score value.

model_validate_json staticmethod

model_validate_json(json_string: str) -> Score

Validate the score JSON.

Parameters:

Name Type Description Default
json_string str

The JSON string to validate.

required

Returns:

Name Type Description
Score Score

The score object.

Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate_json(json_string: str) -> "Score":
    """Validate the score JSON.

    Args:
        json_string (str):
            The JSON string to validate.

    Returns:
        Score:
            The score object.
    """

Scott

Scott()

For more information, please see: https://en.wikipedia.org/wiki/Histogram

Source code in python/scouter/stubs.pyi
def __init__(self):
    """Use the Scott equal-width method.

    For more information, please see: https://en.wikipedia.org/wiki/Histogram
    """

ScouterClient

ScouterClient(config: Optional[HttpConfig] = None)

Helper client for interacting with Scouter Server

Parameters:

Name Type Description Default
config Optional[HttpConfig]

HTTP configuration for interacting with the server.

None
Source code in python/scouter/stubs.pyi
def __init__(self, config: Optional[HttpConfig] = None) -> None:
    """Initialize ScouterClient

    Args:
        config:
            HTTP configuration for interacting with the server.
    """

download_profile

download_profile(
    request: GetProfileRequest, path: Optional[Path]
) -> str

Download profile

Parameters:

Name Type Description Default
request GetProfileRequest

GetProfileRequest

required
path Optional[Path]

Path to save profile

required

Returns:

Type Description
str

Path to downloaded profile

Source code in python/scouter/stubs.pyi
def download_profile(self, request: GetProfileRequest, path: Optional[Path]) -> str:
    """Download profile

    Args:
        request:
            GetProfileRequest
        path:
            Path to save profile

    Returns:
        Path to downloaded profile
    """

get_alerts

get_alerts(request: DriftAlertRequest) -> List[Alert]

Get alerts

Parameters:

Name Type Description Default
request DriftAlertRequest

DriftAlertRequest

required

Returns:

Type Description
List[Alert]

List[Alert]

Source code in python/scouter/stubs.pyi
def get_alerts(self, request: DriftAlertRequest) -> List[Alert]:
    """Get alerts

    Args:
        request:
            DriftAlertRequest

    Returns:
        List[Alert]
    """

get_binned_drift

get_binned_drift(drift_request: DriftRequest) -> Any

Get drift map from server

Parameters:

Name Type Description Default
drift_request DriftRequest

DriftRequest object

required

Returns:

Type Description
Any

Drift map of type BinnedMetrics | BinnedPsiFeatureMetrics | BinnedSpcFeatureMetrics

Source code in python/scouter/stubs.pyi
def get_binned_drift(self, drift_request: DriftRequest) -> Any:
    """Get drift map from server

    Args:
        drift_request:
            DriftRequest object

    Returns:
        Drift map of type BinnedMetrics | BinnedPsiFeatureMetrics | BinnedSpcFeatureMetrics
    """

get_paginated_traces

get_paginated_traces(
    filters: TraceFilters,
) -> TracePaginationResponse

Get paginated traces

Parameters:

Name Type Description Default
filters TraceFilters

TraceFilters object

required

Returns:

Type Description
TracePaginationResponse

TracePaginationResponse

Source code in python/scouter/stubs.pyi
def get_paginated_traces(self, filters: TraceFilters) -> TracePaginationResponse:
    """Get paginated traces
    Args:
        filters:
            TraceFilters object
    Returns:
        TracePaginationResponse
    """

get_tags

get_tags(entity_type: str, entity_id: str) -> TagsResponse

Get tags for an entity

Parameters:

Name Type Description Default
entity_type str

Entity type

required
entity_id str

Entity ID

required

Returns:

Type Description
TagsResponse

TagsResponse

Source code in python/scouter/stubs.pyi
def get_tags(self, entity_type: str, entity_id: str) -> TagsResponse:
    """Get tags for an entity

    Args:
        entity_type:
            Entity type
        entity_id:
            Entity ID

    Returns:
        TagsResponse
    """

get_trace_baggage

get_trace_baggage(trace_id: str) -> TraceBaggageResponse

Get trace baggage

Parameters:

Name Type Description Default
trace_id str

Trace ID

required

Returns:

Type Description
TraceBaggageResponse

TraceBaggageResponse

Source code in python/scouter/stubs.pyi
def get_trace_baggage(self, trace_id: str) -> TraceBaggageResponse:
    """Get trace baggage

    Args:
        trace_id:
            Trace ID

    Returns:
        TraceBaggageResponse
    """

get_trace_metrics

get_trace_metrics(
    request: TraceMetricsRequest,
) -> TraceMetricsResponse

Get trace metrics

Parameters:

Name Type Description Default
request TraceMetricsRequest

TraceMetricsRequest

required

Returns:

Type Description
TraceMetricsResponse

TraceMetricsResponse

Source code in python/scouter/stubs.pyi
def get_trace_metrics(self, request: TraceMetricsRequest) -> TraceMetricsResponse:
    """Get trace metrics

    Args:
        request:
            TraceMetricsRequest

    Returns:
        TraceMetricsResponse
    """

get_trace_spans

get_trace_spans(trace_id: str) -> TraceSpansResponse

Get trace spans

Parameters:

Name Type Description Default
trace_id str

Trace ID

required

Returns:

Type Description
TraceSpansResponse

TraceSpansResponse

Source code in python/scouter/stubs.pyi
def get_trace_spans(self, trace_id: str) -> TraceSpansResponse:
    """Get trace spans

    Args:
        trace_id:
            Trace ID

    Returns:
        TraceSpansResponse
    """

refresh_trace_summary

refresh_trace_summary() -> bool

Refresh trace summary cache

Returns:

Type Description
bool

boolean

Source code in python/scouter/stubs.pyi
def refresh_trace_summary(self) -> bool:
    """Refresh trace summary cache

    Returns:
        boolean
    """

register_profile

register_profile(
    profile: Any, set_active: bool = False
) -> bool

Registers a drift profile with the server

Parameters:

Name Type Description Default
profile Any

Drift profile

required
set_active bool

Whether to set the profile as active or inactive

False

Returns:

Type Description
bool

boolean

Source code in python/scouter/stubs.pyi
def register_profile(self, profile: Any, set_active: bool = False) -> bool:
    """Registers a drift profile with the server

    Args:
        profile:
            Drift profile
        set_active:
            Whether to set the profile as active or inactive

    Returns:
        boolean
    """

update_profile_status

update_profile_status(
    request: ProfileStatusRequest,
) -> bool

Update profile status

Parameters:

Name Type Description Default
request ProfileStatusRequest

ProfileStatusRequest

required

Returns:

Type Description
bool

boolean

Source code in python/scouter/stubs.pyi
def update_profile_status(self, request: ProfileStatusRequest) -> bool:
    """Update profile status

    Args:
        request:
            ProfileStatusRequest

    Returns:
        boolean
    """

ScouterQueue

Main queue class for Scouter. Publishes drift records to the configured transport

transport_config property

transport_config: Union[
    KafkaConfig,
    RabbitMQConfig,
    RedisConfig,
    HttpConfig,
    MockConfig,
]

Return the transport configuration used by the queue

from_path staticmethod

from_path(
    path: Dict[str, Path],
    transport_config: Union[
        KafkaConfig, RabbitMQConfig, RedisConfig, HttpConfig
    ],
) -> ScouterQueue

Initializes Scouter queue from one or more drift profile paths

Parameters:

Name Type Description Default
path Dict[str, Path]

Dictionary of drift profile paths. Each key is a user-defined alias for accessing a queue

required
transport_config Union[KafkaConfig, RabbitMQConfig, RedisConfig, HttpConfig]

Transport configuration for the queue publisher. Can be KafkaConfig, RabbitMQConfig, RedisConfig, or HttpConfig

required
Example
queue = ScouterQueue.from_path(
    path={
        "spc": Path("spc_profile.json"),
        "psi": Path("psi_profile.json"),
    },
    transport_config=KafkaConfig(
        brokers="localhost:9092",
        topic="scouter_topic",
    ),
)

queue["psi"].insert(
    Features(
        features=[
            Feature("feature_1", 1),
            Feature("feature_2", 2.0),
            Feature("feature_3", "value"),
        ]
    )
)
Source code in python/scouter/stubs.pyi
@staticmethod
def from_path(
    path: Dict[str, Path],
    transport_config: Union[
        KafkaConfig,
        RabbitMQConfig,
        RedisConfig,
        HttpConfig,
    ],
) -> "ScouterQueue":
    """Initializes Scouter queue from one or more drift profile paths

    Args:
        path (Dict[str, Path]):
            Dictionary of drift profile paths.
            Each key is a user-defined alias for accessing a queue
        transport_config (Union[KafkaConfig, RabbitMQConfig, RedisConfig, HttpConfig]):
            Transport configuration for the queue publisher
            Can be KafkaConfig, RabbitMQConfig, RedisConfig, or HttpConfig

    Example:
        ```python
        queue = ScouterQueue.from_path(
            path={
                "spc": Path("spc_profile.json"),
                "psi": Path("psi_profile.json"),
            },
            transport_config=KafkaConfig(
                brokers="localhost:9092",
                topic="scouter_topic",
            ),
        )

        queue["psi"].insert(
            Features(
                features=[
                    Feature("feature_1", 1),
                    Feature("feature_2", 2.0),
                    Feature("feature_3", "value"),
                ]
            )
        )
        ```
    """

shutdown

shutdown() -> None

Shutdown the queue. This will close and flush all queues and transports

Source code in python/scouter/stubs.pyi
def shutdown(self) -> None:
    """Shutdown the queue. This will close and flush all queues and transports"""

ScouterTestServer

ScouterTestServer(
    cleanup: bool = True,
    rabbit_mq: bool = False,
    kafka: bool = False,
    openai: bool = False,
    base_path: Optional[Path] = None,
)

When the test server is used as a context manager, it will start the server in a background thread and set the appropriate env vars so that the client can connect to the server. The server will be stopped when the context manager exits and the env vars will be reset.

Parameters:

Name Type Description Default
cleanup bool

Whether to cleanup the server after the test. Defaults to True.

True
rabbit_mq bool

Whether to use RabbitMQ as the transport. Defaults to False.

False
kafka bool

Whether to use Kafka as the transport. Defaults to False.

False
openai bool

Whether to create a mock OpenAITest server. Defaults to False.

False
base_path Optional[Path]

The base path for the server. Defaults to None. This is primarily used for testing loading attributes from a pyproject.toml file.

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    cleanup: bool = True,
    rabbit_mq: bool = False,
    kafka: bool = False,
    openai: bool = False,
    base_path: Optional[Path] = None,
) -> None:
    """Instantiates the test server.

    When the test server is used as a context manager, it will start the server
    in a background thread and set the appropriate env vars so that the client
    can connect to the server. The server will be stopped when the context manager
    exits and the env vars will be reset.

    Args:
        cleanup (bool, optional):
            Whether to cleanup the server after the test. Defaults to True.
        rabbit_mq (bool, optional):
            Whether to use RabbitMQ as the transport. Defaults to False.
        kafka (bool, optional):
            Whether to use Kafka as the transport. Defaults to False.
        openai (bool, optional):
            Whether to create a mock OpenAITest server. Defaults to False.
        base_path (Optional[Path], optional):
            The base path for the server. Defaults to None. This is primarily
            used for testing loading attributes from a pyproject.toml file.
    """

cleanup staticmethod

cleanup() -> None

Cleans up the test server.

Source code in python/scouter/stubs.pyi
@staticmethod
def cleanup() -> None:
    """Cleans up the test server."""

remove_env_vars_for_client

remove_env_vars_for_client() -> None

Removes the env vars for the client to connect to the server.

Source code in python/scouter/stubs.pyi
def remove_env_vars_for_client(self) -> None:
    """Removes the env vars for the client to connect to the server."""

set_env_vars_for_client

set_env_vars_for_client() -> None

Sets the env vars for the client to connect to the server.

Source code in python/scouter/stubs.pyi
def set_env_vars_for_client(self) -> None:
    """Sets the env vars for the client to connect to the server."""

start_server

start_server() -> None

Starts the test server.

Source code in python/scouter/stubs.pyi
def start_server(self) -> None:
    """Starts the test server."""

stop_server

stop_server() -> None

Stops the test server.

Source code in python/scouter/stubs.pyi
def stop_server(self) -> None:
    """Stops the test server."""

ServerRecord

ServerRecord(record: Any)

Parameters:

Name Type Description Default
record Any

Server record to initialize

required
Source code in python/scouter/stubs.pyi
def __init__(self, record: Any) -> None:
    """Initialize server record

    Args:
        record:
            Server record to initialize
    """

record property

record: Union[
    SpcServerRecord,
    PsiServerRecord,
    CustomMetricServerRecord,
    ObservabilityMetrics,
]

Return the drift server record.

ServerRecords

ServerRecords(records: List[ServerRecord])

Parameters:

Name Type Description Default
records List[ServerRecord]

List of server records

required
Source code in python/scouter/stubs.pyi
def __init__(self, records: List[ServerRecord]) -> None:
    """Initialize server records

    Args:
        records:
            List of server records
    """

records property

records: List[ServerRecord]

Return the drift server records.

model_dump_json

model_dump_json() -> str

Return the json representation of the record.

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return the json representation of the record."""

SlackDispatchConfig

SlackDispatchConfig(channel: str)

Parameters:

Name Type Description Default
channel str

Slack channel name where alerts will be reported

required
Source code in python/scouter/stubs.pyi
def __init__(self, channel: str):
    """Initialize alert config

    Args:
        channel:
            Slack channel name where alerts will be reported
    """

channel property writable

channel: str

Return the Slack channel name

SpanEvent

Represents an event within a span.

SpanKind

Enumeration of span kinds.

SpanLink

Represents a link to another span.

SpcAlert

SpcAlert(kind: SpcAlertType, zone: AlertZone)
Source code in python/scouter/stubs.pyi
def __init__(self, kind: SpcAlertType, zone: AlertZone):
    """Initialize alert"""

kind property

kind: SpcAlertType

Alert kind

zone property

zone: AlertZone

Zone associated with alert

SpcAlertConfig

SpcAlertConfig(
    rule: Optional[SpcAlertRule] = None,
    dispatch_config: Optional[
        SlackDispatchConfig | OpsGenieDispatchConfig
    ] = None,
    schedule: Optional[str | CommonCrons] = None,
    features_to_monitor: List[str] = [],
)

Parameters:

Name Type Description Default
rule Optional[SpcAlertRule]

Alert rule to use. Defaults to Standard

None
dispatch_config Optional[SlackDispatchConfig | OpsGenieDispatchConfig]

Alert dispatch config. Defaults to console

None
schedule Optional[str | CommonCrons]

Schedule to run monitor. Defaults to daily at midnight

None
features_to_monitor List[str]

List of features to monitor. Defaults to empty list, which means all features

[]
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    rule: Optional[SpcAlertRule] = None,
    dispatch_config: Optional[SlackDispatchConfig | OpsGenieDispatchConfig] = None,
    schedule: Optional[str | CommonCrons] = None,
    features_to_monitor: List[str] = [],
):
    """Initialize alert config

    Args:
        rule:
            Alert rule to use. Defaults to Standard
        dispatch_config:
            Alert dispatch config. Defaults to console
        schedule:
            Schedule to run monitor. Defaults to daily at midnight
        features_to_monitor:
            List of features to monitor. Defaults to empty list, which means all features

    """

dispatch_config property

dispatch_config: DispatchConfigType

Return the dispatch config

dispatch_type property

dispatch_type: AlertDispatchType

Return the alert dispatch type

features_to_monitor property writable

features_to_monitor: List[str]

Return the features to monitor

rule property writable

rule: SpcAlertRule

Return the alert rule

schedule property writable

schedule: str

Return the schedule

SpcAlertRule

SpcAlertRule(
    rule: str = "8 16 4 8 2 4 1 1",
    zones_to_monitor: List[AlertZone] = [
        AlertZone.Zone1,
        AlertZone.Zone2,
        AlertZone.Zone3,
        AlertZone.Zone4,
    ],
)

Parameters:

Name Type Description Default
rule str

Rule to use for alerting. Eight digit integer string. Defaults to '8 16 4 8 2 4 1 1'

'8 16 4 8 2 4 1 1'
zones_to_monitor List[AlertZone]

List of zones to monitor. Defaults to all zones.

[Zone1, Zone2, Zone3, Zone4]
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    rule: str = "8 16 4 8 2 4 1 1",
    zones_to_monitor: List[AlertZone] = [
        AlertZone.Zone1,
        AlertZone.Zone2,
        AlertZone.Zone3,
        AlertZone.Zone4,
    ],
) -> None:
    """Initialize alert rule

    Args:
        rule:
            Rule to use for alerting. Eight digit integer string.
            Defaults to '8 16 4 8 2 4 1 1'
        zones_to_monitor:
            List of zones to monitor. Defaults to all zones.
    """

rule property writable

rule: str

Return the alert rule

zones_to_monitor property writable

zones_to_monitor: List[AlertZone]

Return the zones to monitor

SpcDriftConfig

SpcDriftConfig(
    space: str = "__missing__",
    name: str = "__missing__",
    version: str = "0.1.0",
    sample_size: int = 25,
    alert_config: SpcAlertConfig = SpcAlertConfig(),
    config_path: Optional[Path] = None,
)

Parameters:

Name Type Description Default
space str

Model space

'__missing__'
name str

Model name

'__missing__'
version str

Model version. Defaults to 0.1.0

'0.1.0'
sample_size int

Sample size

25
alert_config SpcAlertConfig

Alert configuration

SpcAlertConfig()
config_path Optional[Path]

Optional path to load config from.

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    space: str = "__missing__",
    name: str = "__missing__",
    version: str = "0.1.0",
    sample_size: int = 25,
    alert_config: SpcAlertConfig = SpcAlertConfig(),
    config_path: Optional[Path] = None,
):
    """Initialize monitor config

    Args:
        space:
            Model space
        name:
            Model name
        version:
            Model version. Defaults to 0.1.0
        sample_size:
            Sample size
        alert_config:
            Alert configuration
        config_path:
            Optional path to load config from.
    """

alert_config property writable

alert_config: SpcAlertConfig

Alert configuration

drift_type property

drift_type: DriftType

Drift type

feature_map property

feature_map: Optional[FeatureMap]

Feature map

name property writable

name: str

Model name

sample_size property writable

sample_size: int

Return the sample size.

space property writable

space: str

Model space

version property writable

version: str

Model version

load_from_json_file staticmethod

load_from_json_file(path: Path) -> SpcDriftConfig

Load config from json file

Parameters:

Name Type Description Default
path Path

Path to json file to load config from.

required
Source code in python/scouter/stubs.pyi
@staticmethod
def load_from_json_file(path: Path) -> "SpcDriftConfig":
    """Load config from json file

    Args:
        path:
            Path to json file to load config from.
    """

model_dump_json

model_dump_json() -> str

Return the json representation of the config.

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return the json representation of the config."""

update_config_args

update_config_args(
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    sample_size: Optional[int] = None,
    alert_config: Optional[SpcAlertConfig] = None,
) -> None

Inplace operation that updates config args

Parameters:

Name Type Description Default
space Optional[str]

Model space

None
name Optional[str]

Model name

None
version Optional[str]

Model version

None
sample_size Optional[int]

Sample size

None
alert_config Optional[SpcAlertConfig]

Alert configuration

None
Source code in python/scouter/stubs.pyi
def update_config_args(
    self,
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    sample_size: Optional[int] = None,
    alert_config: Optional[SpcAlertConfig] = None,
) -> None:
    """Inplace operation that updates config args

    Args:
        space:
            Model space
        name:
            Model name
        version:
            Model version
        sample_size:
            Sample size
        alert_config:
            Alert configuration
    """

SpcDriftMap

Drift map of features

features property

features: Dict[str, SpcFeatureDrift]

Returns dictionary of features and their data profiles

name property

name: str

Name to associate with drift map

space property

space: str

Space to associate with drift map

version property

version: str

Version to associate with drift map

model_dump_json

model_dump_json() -> str

Return json representation of data drift

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return json representation of data drift"""

model_validate_json staticmethod

model_validate_json(json_string: str) -> SpcDriftMap

Load drift map from json file.

Parameters:

Name Type Description Default
json_string str

JSON string representation of the drift map

required
Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate_json(json_string: str) -> "SpcDriftMap":
    """Load drift map from json file.

    Args:
        json_string:
            JSON string representation of the drift map
    """

save_to_json

save_to_json(path: Optional[Path] = None) -> Path

Save drift map to json file

Parameters:

Name Type Description Default
path Optional[Path]

Optional path to save the drift map. If None, outputs to spc_drift_map.json

None

Returns:

Type Description
Path

Path to the saved json file

Source code in python/scouter/stubs.pyi
def save_to_json(self, path: Optional[Path] = None) -> Path:
    """Save drift map to json file

    Args:
        path:
            Optional path to save the drift map. If None, outputs to `spc_drift_map.json`

    Returns:
        Path to the saved json file

    """

to_numpy

to_numpy() -> Any

Return drift map as a tuple of sample_array, drift_array and list of features

Source code in python/scouter/stubs.pyi
def to_numpy(self) -> Any:
    """Return drift map as a tuple of sample_array, drift_array and list of features"""

SpcDriftProfile

config property

config: SpcDriftConfig

Return the monitor config.

features property

features: Dict[str, SpcFeatureDriftProfile]

Return the list of features.

scouter_version property

scouter_version: str

Return scouter version used to create DriftProfile

from_file staticmethod

from_file(path: Path) -> SpcDriftProfile

Load drift profile from file

Parameters:

Name Type Description Default
path Path

Path to the file

required
Source code in python/scouter/stubs.pyi
@staticmethod
def from_file(path: Path) -> "SpcDriftProfile":
    """Load drift profile from file

    Args:
        path: Path to the file
    """

model_dump

model_dump() -> Dict[str, Any]

Return dictionary representation of drift profile

Source code in python/scouter/stubs.pyi
def model_dump(self) -> Dict[str, Any]:
    """Return dictionary representation of drift profile"""

model_dump_json

model_dump_json() -> str

Return json representation of drift profile

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return json representation of drift profile"""

model_validate staticmethod

model_validate(data: Dict[str, Any]) -> SpcDriftProfile

Load drift profile from dictionary

Parameters:

Name Type Description Default
data Dict[str, Any]

DriftProfile dictionary

required
Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate(data: Dict[str, Any]) -> "SpcDriftProfile":
    """Load drift profile from dictionary

    Args:
        data:
            DriftProfile dictionary
    """

model_validate_json staticmethod

model_validate_json(json_string: str) -> SpcDriftProfile

Load drift profile from json

Parameters:

Name Type Description Default
json_string str

JSON string representation of the drift profile

required
Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate_json(json_string: str) -> "SpcDriftProfile":
    """Load drift profile from json

    Args:
        json_string:
            JSON string representation of the drift profile

    """

save_to_json

save_to_json(path: Optional[Path] = None) -> Path

Save drift profile to json file

Parameters:

Name Type Description Default
path Optional[Path]

Optional path to save the drift profile. If None, outputs to spc_drift_profile.json

None

Returns:

Type Description
Path

Path to the saved json file

Source code in python/scouter/stubs.pyi
def save_to_json(self, path: Optional[Path] = None) -> Path:
    """Save drift profile to json file

    Args:
        path:
            Optional path to save the drift profile. If None, outputs to `spc_drift_profile.json`


    Returns:
        Path to the saved json file
    """

update_config_args

update_config_args(
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    sample_size: Optional[int] = None,
    alert_config: Optional[SpcAlertConfig] = None,
) -> None

Inplace operation that updates config args

Parameters:

Name Type Description Default
name Optional[str]

Model name

None
space Optional[str]

Model space

None
version Optional[str]

Model version

None
sample_size Optional[int]

Sample size

None
alert_config Optional[SpcAlertConfig]

Alert configuration

None
Source code in python/scouter/stubs.pyi
def update_config_args(
    self,
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    sample_size: Optional[int] = None,
    alert_config: Optional[SpcAlertConfig] = None,
) -> None:
    """Inplace operation that updates config args

    Args:
        name:
            Model name
        space:
            Model space
        version:
            Model version
        sample_size:
            Sample size
        alert_config:
            Alert configuration
    """

SpcFeatureDrift

drift property

drift: List[float]

Return list of drift values

samples property

samples: List[float]

Return list of samples

SpcFeatureDriftProfile

center property

center: float

Return the center.

id property

id: str

Return the id.

one_lcl property

one_lcl: float

Return the zone 1 lower control limit.

one_ucl property

one_ucl: float

Return the zone 1 upper control limit.

three_lcl property

three_lcl: float

Return the zone 3 lower control limit.

three_ucl property

three_ucl: float

Return the zone 3 upper control limit.

timestamp property

timestamp: str

Return the timestamp.

two_lcl property

two_lcl: float

Return the zone 2 lower control limit.

two_ucl property

two_ucl: float

Return the zone 2 upper control limit.

SpcServerRecord

SpcServerRecord(
    space: str,
    name: str,
    version: str,
    feature: str,
    value: float,
)

Parameters:

Name Type Description Default
space str

Model space

required
name str

Model name

required
version str

Model version

required
feature str

Feature name

required
value float

Feature value

required
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    space: str,
    name: str,
    version: str,
    feature: str,
    value: float,
):
    """Initialize spc drift server record

    Args:
        space:
            Model space
        name:
            Model name
        version:
            Model version
        feature:
            Feature name
        value:
            Feature value
    """

created_at property

created_at: datetime

Return the created at timestamp.

feature property

feature: str

Return the feature.

name property

name: str

Return the name.

space property

space: str

Return the space.

value property

value: float

Return the sample value.

version property

version: str

Return the version.

model_dump_json

model_dump_json() -> str

Return the json representation of the record.

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Return the json representation of the record."""

to_dict

to_dict() -> Dict[str, str]

Return the dictionary representation of the record.

Source code in python/scouter/stubs.pyi
def to_dict(self) -> Dict[str, str]:
    """Return the dictionary representation of the record."""

SpeechConfig

SpeechConfig(
    voice_config: Optional[VoiceConfig] = None,
    language_code: Optional[str] = None,
)

Configuration for speech generation.

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    voice_config: Optional["VoiceConfig"] = None,
    language_code: Optional[str] = None,
) -> None: ...

SquareRoot

SquareRoot()

Use the SquareRoot equal-width method.

For more information, please see: https://en.wikipedia.org/wiki/Histogram

Source code in python/scouter/stubs.pyi
def __init__(self):
    """Use the SquareRoot equal-width method.

    For more information, please see: https://en.wikipedia.org/wiki/Histogram
    """

StdoutSpanExporter

StdoutSpanExporter(
    batch_export: bool = False,
    sample_ratio: Optional[float] = None,
)

Exporter that outputs spans to standard output (stdout).

Parameters:

Name Type Description Default
batch_export bool

Whether to use batch exporting. Defaults to False.

False
sample_ratio Optional[float]

The sampling ratio for traces. If None, defaults to always sample.

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    batch_export: bool = False,
    sample_ratio: Optional[float] = None,
) -> None:
    """Initialize the StdoutSpanExporter.

    Args:
        batch_export (bool):
            Whether to use batch exporting. Defaults to False.
        sample_ratio (Optional[float]):
            The sampling ratio for traces. If None, defaults to always sample.
    """

batch_export property

batch_export: bool

Get whether batch exporting is enabled.

sample_ratio property

sample_ratio: Optional[float]

Get the sampling ratio.

StringStats

char_stats property

char_stats: CharStats

Character statistics

distinct property

distinct: Distinct

Distinct value counts

word_stats property

word_stats: WordStats

Word statistics

Sturges

Sturges()

Use the Sturges equal-width method.

For more information, please see: https://en.wikipedia.org/wiki/Histogram

Source code in python/scouter/stubs.pyi
def __init__(self):
    """Use the Sturges equal-width method.

    For more information, please see: https://en.wikipedia.org/wiki/Histogram
    """

TagRecord

Represents a single tag record associated with an entity.

TagsResponse

Response structure containing a list of tag records.

Task

Task(
    agent_id: str,
    prompt: Prompt,
    dependencies: List[str] = [],
    id: Optional[str] = None,
)

Parameters:

Name Type Description Default
agent_id str

The ID of the agent that will execute the task.

required
prompt Prompt

The prompt to use for the task.

required
dependencies List[str]

The dependencies of the task.

[]
id Optional[str]

The ID of the task. If None, a random uuid7 will be generated.

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    agent_id: str,
    prompt: Prompt,
    dependencies: List[str] = [],
    id: Optional[str] = None,
) -> None:
    """Create a Task object.

    Args:
        agent_id (str):
            The ID of the agent that will execute the task.
        prompt (Prompt):
            The prompt to use for the task.
        dependencies (List[str]):
            The dependencies of the task.
        id (Optional[str]):
            The ID of the task. If None, a random uuid7 will be generated.
    """

dependencies property

dependencies: List[str]

The dependencies of the task.

id property

id: str

The ID of the task.

prompt property

prompt: Prompt

The prompt to use for the task.

status property

status: TaskStatus

The status of the task.

TaskEvent

details property

details: EventDetails

Additional details about the event. This can include information such as error messages or other relevant data.

id property

id: str

The ID of the event

status property

status: TaskStatus

The status of the task at the time of the event.

task_id property

task_id: str

The ID of the task that the event is associated with.

timestamp property

timestamp: datetime

The timestamp of the event. This is the time when the event occurred.

updated_at property

updated_at: datetime

The timestamp of when the event was last updated. This is useful for tracking changes to the event.

workflow_id property

workflow_id: str

The ID of the workflow that the task is part of.

TaskList

TaskList()
Source code in python/scouter/stubs.pyi
def __init__(self) -> None:
    """Create a TaskList object."""

TerrellScott

TerrellScott()

Use the Terrell-Scott equal-width method.

For more information, please see: https://en.wikipedia.org/wiki/Histogram

Source code in python/scouter/stubs.pyi
def __init__(self):
    """Use the Terrell-Scott equal-width method.

    For more information, please see: https://en.wikipedia.org/wiki/Histogram
    """

TestSpanExporter

TestSpanExporter(batch_export: bool = True)

Exporter for testing that collects spans in memory.

Parameters:

Name Type Description Default
batch_export bool

Whether to use batch exporting. Defaults to True.

True
Source code in python/scouter/stubs.pyi
def __init__(self, batch_export: bool = True) -> None:
    """Initialize the TestSpanExporter.

    Args:
        batch_export (bool):
            Whether to use batch exporting. Defaults to True.
    """

baggage property

baggage: list[TraceBaggageRecord]

Get the collected trace baggage records.

spans property

spans: list[TraceSpanRecord]

Get the collected trace span records.

traces property

traces: list[TraceRecord]

Get the collected trace records.

clear

clear() -> None

Clear all collected trace records.

Source code in python/scouter/stubs.pyi
def clear(self) -> None:
    """Clear all collected trace records."""

ThinkingConfig

ThinkingConfig(
    include_thoughts: Optional[bool] = None,
    thinking_budget: Optional[int] = None,
)

Configuration for thinking/reasoning capabilities.

Source code in python/scouter/stubs.pyi
def __init__(
    self,
    include_thoughts: Optional[bool] = None,
    thinking_budget: Optional[int] = None,
) -> None: ...

TraceBaggageRecord

Represents a single baggage record associated with a trace.

TraceBaggageResponse

Response structure containing trace baggage records.

TraceFilters

TraceFilters(
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    service_name: Optional[str] = None,
    has_errors: Optional[bool] = None,
    status_code: Optional[int] = None,
    start_time: Optional[datetime] = None,
    end_time: Optional[datetime] = None,
    limit: Optional[int] = None,
    cursor_created_at: Optional[datetime] = None,
    cursor_trace_id: Optional[str] = None,
)

A struct for filtering traces, generated from Rust pyclass.

Parameters:

Name Type Description Default
space Optional[str]

Model space filter

None
name Optional[str]

Model name filter

None
version Optional[str]

Model version filter

None
service_name Optional[str]

Service name filter

None
has_errors Optional[bool]

Filter by presence of errors

None
status_code Optional[int]

Filter by root span status code

None
start_time Optional[datetime]

Start time boundary (UTC)

None
end_time Optional[datetime]

End time boundary (UTC)

None
limit Optional[int]

Maximum number of results to return

None
cursor_created_at Optional[datetime]

Pagination cursor: created at timestamp

None
cursor_trace_id Optional[str]

Pagination cursor: trace ID

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
    service_name: Optional[str] = None,
    has_errors: Optional[bool] = None,
    status_code: Optional[int] = None,
    start_time: Optional[datetime] = None,
    end_time: Optional[datetime] = None,
    limit: Optional[int] = None,
    cursor_created_at: Optional[datetime] = None,
    cursor_trace_id: Optional[str] = None,
) -> None:
    """Initialize trace filters.

    Args:
        space:
            Model space filter
        name:
            Model name filter
        version:
            Model version filter
        service_name:
            Service name filter
        has_errors:
            Filter by presence of errors
        status_code:
            Filter by root span status code
        start_time:
            Start time boundary (UTC)
        end_time:
            End time boundary (UTC)
        limit:
            Maximum number of results to return
        cursor_created_at:
            Pagination cursor: created at timestamp
        cursor_trace_id:
            Pagination cursor: trace ID
    """

TraceListItem

Represents a summary item for a trace in a list view.

TraceMetricBucket

Represents aggregated trace metrics for a specific time bucket.

TraceMetricsRequest

TraceMetricsRequest(
    start_time: datetime,
    end_time: datetime,
    bucket_interval: str,
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
)

Request payload for fetching trace metrics.

Parameters:

Name Type Description Default
start_time datetime

Start time boundary (UTC)

required
end_time datetime

End time boundary (UTC)

required
bucket_interval str

The time interval for metric aggregation buckets (e.g., '1 minutes', '30 minutes')

required
space Optional[str]

Model space filter

None
name Optional[str]

Model name filter

None
version Optional[str]

Model version filter

None
Source code in python/scouter/stubs.pyi
def __init__(
    self,
    start_time: datetime,
    end_time: datetime,
    bucket_interval: str,
    space: Optional[str] = None,
    name: Optional[str] = None,
    version: Optional[str] = None,
) -> None:
    """Initialize trace metrics request.

    Args:
        start_time:
            Start time boundary (UTC)
        end_time:
            End time boundary (UTC)
        bucket_interval:
            The time interval for metric aggregation buckets (e.g., '1 minutes', '30 minutes')
        space:
            Model space filter
        name:
            Model name filter
        version:
            Model version filter
    """

TraceMetricsResponse

Response structure containing aggregated trace metrics.

TracePaginationResponse

Response structure for paginated trace list requests.

TraceSpan

Detailed information for a single span within a trace.

TraceSpansResponse

Response structure containing a list of spans for a trace.

Usage

Usage statistics for a model response.

completion_tokens property

completion_tokens: int

The number of completion tokens used in the response.

completion_tokens_details property

completion_tokens_details: CompletionTokenDetails

Details about the completion tokens used in the response.

finish_reason property

finish_reason: str

The reason why the model stopped generating tokens

prompt_tokens property

prompt_tokens: int

The number of prompt tokens used in the request.

prompt_tokens_details property

prompt_tokens_details: PromptTokenDetails

Details about the prompt tokens used in the request.

total_tokens property

total_tokens: int

The total number of tokens used in the request and response.

VoiceConfig

VoiceConfig(voice_config: VoiceConfigMode)

Configuration for voice generation.

Source code in python/scouter/stubs.pyi
def __init__(self, voice_config: VoiceConfigMode) -> None: ...

WordStats

words property

words: Dict[str, Distinct]

Distinct word counts

Workflow

Workflow(name: str)

Parameters:

Name Type Description Default
name str

The name of the workflow.

required
Source code in python/scouter/stubs.pyi
def __init__(self, name: str) -> None:
    """Create a Workflow object.

    Args:
        name (str):
            The name of the workflow.
    """

agents property

agents: Dict[str, Agent]

The agents in the workflow.

is_workflow property

is_workflow: bool

Returns True if the workflow is a valid workflow, otherwise False. This is used to determine if the workflow can be executed.

name property

name: str

The name of the workflow.

task_list property

task_list: TaskList

The tasks in the workflow.

add_agent

add_agent(agent: Agent) -> None

Add an agent to the workflow.

Parameters:

Name Type Description Default
agent Agent

The agent to add to the workflow.

required
Source code in python/scouter/stubs.pyi
def add_agent(self, agent: Agent) -> None:
    """Add an agent to the workflow.

    Args:
        agent (Agent):
            The agent to add to the workflow.
    """

add_task

add_task(task: Task, output_type: Optional[Any]) -> None

Add a task to the workflow.

Parameters:

Name Type Description Default
task Task

The task to add to the workflow.

required
output_type Optional[Any]

The output type to use for the task. This can either be a Pydantic BaseModel class or a supported potato_head response type such as Score.

required
Source code in python/scouter/stubs.pyi
def add_task(self, task: Task, output_type: Optional[Any]) -> None:
    """Add a task to the workflow.

    Args:
        task (Task):
            The task to add to the workflow.
        output_type (Optional[Any]):
            The output type to use for the task. This can either be a Pydantic `BaseModel` class
            or a supported potato_head response type such as `Score`.
    """

add_task_output_types

add_task_output_types(
    task_output_types: Dict[str, Any]
) -> None

Add output types for tasks in the workflow. This is primarily used when loading a workflow, since Python objects are not serializable.

Parameters:

Name Type Description Default
task_output_types Dict[str, Any]

A dictionary mapping task IDs to their output types. This can either be a Pydantic BaseModel class or a supported potato_head response type such as Score.

required
Source code in python/scouter/stubs.pyi
def add_task_output_types(self, task_output_types: Dict[str, Any]) -> None:
    """Add output types for tasks in the workflow. This is primarily used
    when loading a workflow, since Python objects are not serializable.

    Args:
        task_output_types (Dict[str, Any]):
            A dictionary mapping task IDs to their output types.
            This can either be a Pydantic `BaseModel` class or a supported potato_head response type such as `Score`.
    """

add_tasks

add_tasks(tasks: List[Task]) -> None

Add multiple tasks to the workflow.

Parameters:

Name Type Description Default
tasks List[Task]

The tasks to add to the workflow.

required
Source code in python/scouter/stubs.pyi
def add_tasks(self, tasks: List[Task]) -> None:
    """Add multiple tasks to the workflow.

    Args:
        tasks (List[Task]):
            The tasks to add to the workflow.
    """

execution_plan

execution_plan() -> Dict[str, List[str]]

Get the execution plan for the workflow.

Returns:

Type Description
Dict[str, List[str]]

Dict[str, List[str]]: A dictionary where the keys are task IDs and the values are lists of task IDs that the task depends on.

Source code in python/scouter/stubs.pyi
def execution_plan(self) -> Dict[str, List[str]]:
    """Get the execution plan for the workflow.

    Returns:
        Dict[str, List[str]]:
            A dictionary where the keys are task IDs and the values are lists of task IDs
            that the task depends on.
    """
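
The returned mapping is dependency-shaped: each key's list names the task IDs that task waits on. The sketch below uses a hypothetical plan dict in that shape (standing in for a real workflow's output) and turns it into a valid execution order with the standard library:

```python
from graphlib import TopologicalSorter

# Hypothetical plan in the shape returned by Workflow.execution_plan():
# task ID -> IDs of the tasks it depends on.
plan = {
    "summarize": ["fetch"],
    "score": ["summarize"],
    "fetch": [],
}

# TopologicalSorter accepts exactly this predecessor mapping and yields
# each task only after all of its dependencies.
order = list(TopologicalSorter(plan).static_order())
print(order)  # "fetch" comes first, "score" last
```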

is_complete

is_complete() -> bool

Check if the workflow is complete.

Returns:

Name Type Description
bool bool

True if the workflow is complete, False otherwise.

Source code in python/scouter/stubs.pyi
def is_complete(self) -> bool:
    """Check if the workflow is complete.

    Returns:
        bool:
            True if the workflow is complete, False otherwise.
    """

model_dump_json

model_dump_json() -> str

Dump the workflow to a JSON string.

Returns:

Name Type Description
str str

The JSON string.

Source code in python/scouter/stubs.pyi
def model_dump_json(self) -> str:
    """Dump the workflow to a JSON string.

    Returns:
        str:
            The JSON string.
    """

model_validate_json staticmethod

model_validate_json(
    json_string: str, output_types: Optional[Dict[str, Any]]
) -> Workflow

Load a workflow from a JSON string.

Parameters:

Name Type Description Default
json_string str

The JSON string to validate.

required
output_types Optional[Dict[str, Any]]

A dictionary mapping task IDs to their output types. This can either be a Pydantic BaseModel class or a supported potato_head response type such as Score.

required

Returns:

Name Type Description
Workflow Workflow

The workflow object.

Source code in python/scouter/stubs.pyi
@staticmethod
def model_validate_json(json_string: str, output_types: Optional[Dict[str, Any]]) -> "Workflow":
    """Load a workflow from a JSON string.

    Args:
        json_string (str):
            The JSON string to validate.
        output_types (Optional[Dict[str, Any]]):
            A dictionary mapping task IDs to their output types.
            This can either be a Pydantic `BaseModel` class or a supported potato_head response type such as `Score`.

    Returns:
        Workflow:
            The workflow object.
    """
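
`model_dump_json` and `model_validate_json` pair into a round trip, with output types supplied separately on load. A minimal sketch of that round trip; it takes the workflow object as a parameter rather than assuming an import path:

```python
def roundtrip(workflow, output_types):
    """Serialize a workflow to JSON and load it back.

    Output types travel separately because Python classes are not
    serialized into the JSON payload (see add_task_output_types).
    """
    payload = workflow.model_dump_json()
    # model_validate_json is a staticmethod on the Workflow class.
    restored = type(workflow).model_validate_json(payload, output_types)
    return restored
```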

pending_count

pending_count() -> int

Get the number of pending tasks in the workflow.

Returns:

Name Type Description
int int

The number of pending tasks in the workflow.

Source code in python/scouter/stubs.pyi
def pending_count(self) -> int:
    """Get the number of pending tasks in the workflow.

    Returns:
        int:
            The number of pending tasks in the workflow.
    """

run

run(
    global_context: Optional[Dict[str, Any]] = None
) -> WorkflowResult

Run the workflow. This will execute all tasks in the workflow and return when all tasks are complete.

Parameters:

Name Type Description Default
global_context Optional[Dict[str, Any]]

A dictionary of global context to bind to the workflow. All tasks in the workflow will have this context bound to them.

None
Source code in python/scouter/stubs.pyi
def run(
    self,
    global_context: Optional[Dict[str, Any]] = None,
) -> "WorkflowResult":
    """Run the workflow. This will execute all tasks in the workflow and return when all tasks are complete.

    Args:
        global_context (Optional[Dict[str, Any]]):
            A dictionary of global context to bind to the workflow.
            All tasks in the workflow will have this context bound to them.
    """
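
Taken together, a typical build-and-run sequence looks like the sketch below. Since this section does not show the `Agent` or `Task` constructors, the sketch takes pre-built objects and the `Workflow` class as parameters; treat it as an outline of the documented call order, not a definitive implementation:

```python
def build_and_run(workflow_cls, agent, tasks, context=None):
    """Outline of the Workflow call sequence documented above.

    workflow_cls is expected to be the Workflow class; agent and tasks
    are pre-built Agent/Task objects (their constructors are not shown
    in this section).
    """
    wf = workflow_cls("review-pipeline")  # Workflow(name: str)
    wf.add_agent(agent)                   # register the executing agent
    wf.add_tasks(tasks)                   # bulk-add; add_task() also works
    if not wf.is_workflow:                # validity check before running
        raise ValueError("workflow is not executable")
    # Bind shared context to every task, then block until completion.
    return wf.run(global_context=context or {})
```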

WorkflowResult

events property

events: List[TaskEvent]

The events that occurred during the workflow execution. Each event records information such as the task ID, status, and timestamp.

tasks property

tasks: Dict[str, PyTask]

The tasks in the workflow result.

evaluate_llm

evaluate_llm(
    records: List[LLMEvalRecord],
    metrics: List[LLMEvalMetric],
    config: Optional[EvaluationConfig] = None,
) -> LLMEvalResults

Evaluate LLM responses using the provided evaluation metrics.

Parameters:

Name Type Description Default
records List[LLMEvalRecord]

List of LLM evaluation records to evaluate.

required
metrics List[LLMEvalMetric]

List of LLMEvalMetric instances to use for evaluation.

required
config Optional[EvaluationConfig]

Optional EvaluationConfig instance to configure evaluation options.

None

Returns:

Type Description
LLMEvalResults

The evaluation results.

Source code in python/scouter/stubs.pyi
def evaluate_llm(
    records: List[LLMEvalRecord],
    metrics: List[LLMEvalMetric],
    config: Optional[EvaluationConfig] = None,
) -> LLMEvalResults:
    """
    Evaluate LLM responses using the provided evaluation metrics.

    Args:
        records (List[LLMEvalRecord]):
            List of LLM evaluation records to evaluate.
        metrics (List[LLMEvalMetric]):
            List of LLMEvalMetric instances to use for evaluation.
        config (Optional[EvaluationConfig]):
            Optional EvaluationConfig instance to configure evaluation options.

    Returns:
        LLMEvalResults:
            The evaluation results.
    """

flush_tracer

flush_tracer() -> None

Force flush the tracer's exporter.

Source code in python/scouter/stubs.pyi
def flush_tracer() -> None:
    """Force flush the tracer's exporter."""

get_function_type

get_function_type(func: Callable[..., Any]) -> FunctionType

Determine the function type (sync, async, generator, async generator).

Parameters:

Name Type Description Default
func Callable[..., Any]

The function to analyze.

required
Source code in python/scouter/stubs.pyi
def get_function_type(func: Callable[..., Any]) -> "FunctionType":
    """Determine the function type (sync, async, generator, async generator).

    Args:
        func (Callable[..., Any]):
            The function to analyze.
    """
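
The four categories map onto standard `inspect` predicates. A stdlib-only sketch of the same classification (the string labels here are illustrative, not scouter's actual `FunctionType` values):

```python
import inspect

def classify(func):
    """Classify a callable into the four categories get_function_type
    distinguishes. Check the most specific predicates first."""
    if inspect.isasyncgenfunction(func):
        return "async_generator"
    if inspect.iscoroutinefunction(func):
        return "async"
    if inspect.isgeneratorfunction(func):
        return "generator"
    return "sync"

async def fetch(): ...
def stream(): yield 1

print(classify(fetch), classify(stream))  # prints: async generator
```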

init_tracer

init_tracer(
    service_name: str = "scouter_service",
    transport_config: Optional[
        HttpConfig
        | KafkaConfig
        | RabbitMQConfig
        | RedisConfig
    ] = None,
    exporter: (
        HttpSpanExporter
        | StdoutSpanExporter
        | TestSpanExporter
    ) = StdoutSpanExporter(),
    batch_config: Optional[BatchConfig] = None,
    profile_space: Optional[str] = None,
    profile_name: Optional[str] = None,
    profile_version: Optional[str] = None,
) -> None

Initialize the tracer for a service with specific transport and exporter configurations.

This function configures a service tracer, allowing for the specification of the service name, the transport mechanism for exporting spans, and the chosen span exporter.

Parameters:

Name Type Description Default
service_name str

The name of the service this tracer is associated with. This is typically a logical identifier for the application or component.

'scouter_service'
transport_config HttpConfig | KafkaConfig | RabbitMQConfig | RedisConfig | None

The configuration detailing how spans should be sent out. If None, a default HttpConfig will be used.

The supported configuration types are:

* HttpConfig: Configuration for exporting via HTTP/gRPC.
* KafkaConfig: Configuration for exporting to a Kafka topic.
* RabbitMQConfig: Configuration for exporting to a RabbitMQ queue.
* RedisConfig: Configuration for exporting to a Redis stream or channel.

None
exporter HttpSpanExporter | StdoutSpanExporter | TestSpanExporter | None

The span exporter implementation to use. If None, a default StdoutSpanExporter is used.

Available exporters:

* HttpSpanExporter: Sends spans to an HTTP endpoint (e.g., an OpenTelemetry collector).
* StdoutSpanExporter: Writes spans directly to standard output for debugging.
* TestSpanExporter: Collects spans in memory, primarily for unit testing.

StdoutSpanExporter()
batch_config BatchConfig | None

Configuration for the batching process. If provided, spans will be queued and exported in batches according to these settings. If None, and the exporter supports batching, default batch settings will be applied.

None

Drift Profile Association (Optional): Use these parameters to associate the tracer with a specific drift profile.

profile_space (str | None):
    The space for the drift profile.
profile_name (str | None):
    A name of the associated drift profile or service.
profile_version (str | None):
    The version of the drift profile.
Source code in python/scouter/stubs.pyi
def init_tracer(
    service_name: str = "scouter_service",
    transport_config: Optional[HttpConfig | KafkaConfig | RabbitMQConfig | RedisConfig] = None,
    exporter: HttpSpanExporter | StdoutSpanExporter | TestSpanExporter = StdoutSpanExporter(),  # noqa: F821
    batch_config: Optional[BatchConfig] = None,
    profile_space: Optional[str] = None,
    profile_name: Optional[str] = None,
    profile_version: Optional[str] = None,
) -> None:
    """Initialize the tracer for a service with specific transport and exporter configurations.

    This function configures a service tracer, allowing for the specification of
    the service name, the transport mechanism for exporting spans, and the chosen
    span exporter.

    Args:
        service_name (str):
            The name of the service this tracer is associated with.
            This is typically a logical identifier for the application or component.
        transport_config (HttpConfig | KafkaConfig | RabbitMQConfig | RedisConfig | None):
            The configuration detailing how spans should be sent out.
            If **None**, a default `HttpConfig` will be used.

            The supported configuration types are:
            * `HttpConfig`: Configuration for exporting via HTTP/gRPC.
            * `KafkaConfig`: Configuration for exporting to a Kafka topic.
            * `RabbitMQConfig`: Configuration for exporting to a RabbitMQ queue.
            * `RedisConfig`: Configuration for exporting to a Redis stream or channel.
        exporter (HttpSpanExporter | StdoutSpanExporter | TestSpanExporter | None):
            The span exporter implementation to use.
            If **None**, a default `StdoutSpanExporter` is used.

            Available exporters:
            * `HttpSpanExporter`: Sends spans to an HTTP endpoint (e.g., an OpenTelemetry collector).
            * `StdoutSpanExporter`: Writes spans directly to standard output for debugging.
            * `TestSpanExporter`: Collects spans in memory, primarily for unit testing.
        batch_config (BatchConfig | None):
            Configuration for the batching process. If provided, spans will be queued
            and exported in batches according to these settings. If `None`, and the
            exporter supports batching, default batch settings will be applied.

    Drift Profile Association (Optional):
        Use these parameters to associate the tracer with a specific drift profile.

        profile_space (str | None):
            The space for the drift profile.
        profile_name (str | None):
            A name of the associated drift profile or service.
        profile_version (str | None):
            The version of the drift profile.
    """
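
A minimal setup-and-teardown sketch using the calls above. To avoid assuming an import path, it takes the imported scouter module as a parameter; the `profile_*` wiring is optional, as documented:

```python
def init_tracing(scouter_mod, drift_profile=None):
    """Outline of tracer setup and teardown.

    scouter_mod stands in for the imported scouter module so this
    sketch does not hard-code an import path; drift_profile is an
    optional dict with "space", "name", and "version" keys.
    """
    scouter_mod.init_tracer(
        service_name="orders-api",                  # logical service identifier
        exporter=scouter_mod.StdoutSpanExporter(),  # debug-friendly default
        # Optionally associate spans with a drift profile:
        profile_space=drift_profile and drift_profile.get("space"),
        profile_name=drift_profile and drift_profile.get("name"),
        profile_version=drift_profile and drift_profile.get("version"),
    )
    try:
        pass  # ... application work; spans are exported per batch_config
    finally:
        scouter_mod.flush_tracer()     # force-flush pending spans
        scouter_mod.shutdown_tracer()  # flush remaining spans and tear down
```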

shutdown_tracer

shutdown_tracer() -> None

Shutdown the tracer and flush any remaining spans.

Source code in python/scouter/stubs.pyi
def shutdown_tracer() -> None:
    """Shutdown the tracer and flush any remaining spans."""