## GeminiEmbeddingConfig

Source code in `python/scouter/llm/google/_google.pyi`
### __init__(model=None, output_dimensionality=None, task_type=None)

Configuration to pass to the Gemini Embedding API when creating a request.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `Optional[str]` | The embedding model to use. If not specified, the default Gemini model will be used. | `None` |
| `output_dimensionality` | `Optional[int]` | The output dimensionality of the embeddings. If not specified, a default value will be used. | `None` |
| `task_type` | `Optional[EmbeddingTaskType]` | The type of embedding task to perform. If not specified, the default Gemini task type will be used. | `None` |
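A minimal construction sketch. The import path is assumed from the stub location above, and the model id is illustrative only:

```python
# Sketch only: the import path and model id are assumptions, not confirmed by this page.
from scouter.llm.google import GeminiEmbeddingConfig

# Unset fields fall back to the Gemini defaults described in the table above.
config = GeminiEmbeddingConfig(
    model="gemini-embedding-001",   # illustrative model id
    output_dimensionality=768,      # request 768-dimensional vectors
)
```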
## GeminiSettings
### __init__(labels=None, tool_config=None, generation_config=None, safety_settings=None, model_armor_config=None, extra_body=None)

Settings to pass to the Gemini API when creating a request.

Reference: https://cloud.google.com/vertex-ai/generative-ai/docs/reference/rest/v1beta1/projects.locations.endpoints/generateContent

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `labels` | `Optional[dict[str, str]]` | An optional dictionary of labels for the settings. | `None` |
| `tool_config` | `Optional[ToolConfig]` | Configuration for tools like function calling and retrieval. | `None` |
| `generation_config` | `Optional[GenerationConfig]` | Configuration for content generation parameters. | `None` |
| `safety_settings` | `Optional[list[SafetySetting]]` | List of safety settings to apply. | `None` |
| `model_armor_config` | `Optional[ModelArmorConfig]` | Configuration for Model Armor templates. | `None` |
| `extra_body` | `Optional[dict]` | Additional configuration as a dictionary. | `None` |
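A minimal sketch combining request labels with a `GenerationConfig`; the import path is assumed from the stub location above:

```python
# Sketch only: the import path is an assumption, not confirmed by this page.
from scouter.llm.google import GeminiSettings, GenerationConfig

# Any field left unset falls back to the Gemini API defaults.
settings = GeminiSettings(
    labels={"team": "search", "env": "dev"},
    generation_config=GenerationConfig(temperature=0.2, max_output_tokens=512),
)
```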
## GenerationConfig
Configuration for content generation with comprehensive parameter control.
This class provides fine-grained control over the generation process including sampling parameters, output format, modalities, and various specialized features.
Examples:
Basic usage with temperature control:
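A minimal illustrative sketch (the parameter values are arbitrary, not defaults):

    config = GenerationConfig(temperature=0.7, max_output_tokens=1024)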
Multi-modal configuration:

    config = GenerationConfig(
        response_modalities=[Modality.TEXT, Modality.AUDIO],
        speech_config=SpeechConfig(language_code="en-US"),
    )

Advanced sampling with penalties:

    config = GenerationConfig(
        temperature=0.8,
        top_p=0.9,
        top_k=40,
        presence_penalty=0.1,
        frequency_penalty=0.2,
    )
### __init__(stop_sequences=None, response_mime_type=None, response_modalities=None, thinking_config=None, temperature=None, top_p=None, top_k=None, candidate_count=None, max_output_tokens=None, response_logprobs=None, logprobs=None, presence_penalty=None, frequency_penalty=None, seed=None, audio_timestamp=None, media_resolution=None, speech_config=None, enable_affective_dialog=None)

Initialize GenerationConfig with optional parameters.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `stop_sequences` | `Optional[List[str]]` | List of strings that will stop generation when encountered. | `None` |
| `response_mime_type` | `Optional[str]` | MIME type for the response format. | `None` |
| `response_modalities` | `Optional[List[Modality]]` | List of modalities to include in the response. | `None` |
| `thinking_config` | `Optional[ThinkingConfig]` | Configuration for reasoning/thinking capabilities. | `None` |
| `temperature` | `Optional[float]` | Controls randomness in generation (0.0-1.0). | `None` |
| `top_p` | `Optional[float]` | Nucleus sampling parameter (0.0-1.0). | `None` |
| `top_k` | `Optional[int]` | Top-k sampling parameter. | `None` |
| `candidate_count` | `Optional[int]` | Number of response candidates to generate. | `None` |
| `max_output_tokens` | `Optional[int]` | Maximum number of tokens to generate. | `None` |
| `response_logprobs` | `Optional[bool]` | Whether to return log probabilities. | `None` |
| `logprobs` | `Optional[int]` | Number of log probabilities to return per token. | `None` |
| `presence_penalty` | `Optional[float]` | Penalty for token presence (-2.0 to 2.0). | `None` |
| `frequency_penalty` | `Optional[float]` | Penalty for token frequency (-2.0 to 2.0). | `None` |
| `seed` | `Optional[int]` | Random seed for deterministic generation. | `None` |
| `audio_timestamp` | `Optional[bool]` | Whether to include timestamps in audio responses. | `None` |
| `media_resolution` | `Optional[MediaResolution]` | Resolution setting for media content. | `None` |
| `speech_config` | `Optional[SpeechConfig]` | Configuration for speech synthesis. | `None` |
| `enable_affective_dialog` | `Optional[bool]` | Whether to enable emotional dialog features. | `None` |
## LatLng

### __init__(latitude, longitude)

Initialize LatLng with latitude and longitude.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `latitude` | `float` | The latitude value. | *required* |
| `longitude` | `float` | The longitude value. | *required* |
## MediaResolution

Media resolution settings for content generation.
## Modality
## ModelArmorConfig

### __init__(prompt_template_name, response_template_name)

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `prompt_template_name` | `Optional[str]` | The name of the prompt template to use. | *required* |
| `response_template_name` | `Optional[str]` | The name of the response template to use. | *required* |
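A minimal construction sketch. The import path is assumed from the stub location above, and the template names follow a Model Armor-style resource-name pattern purely for illustration:

```python
# Sketch only: the import path and template resource names are assumptions.
from scouter.llm.google import ModelArmorConfig

armor = ModelArmorConfig(
    prompt_template_name="projects/my-project/locations/us-central1/templates/prompt-guard",
    response_template_name="projects/my-project/locations/us-central1/templates/response-guard",
)
```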
## PrebuiltVoiceConfig
## PredictRequest

### __init__(instances, parameters=None)

Request to pass to the Vertex Predict API when creating a request.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `instances` | `List[dict]` | A list of instances to be sent in the request. | *required* |
| `parameters` | `Optional[dict]` | Optional parameters for the request. | `None` |
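A minimal sketch; the import path is assumed from the stub location above, and the instance and parameter keys are illustrative since they depend on the target Vertex model:

```python
# Sketch only: the import path and payload keys are assumptions.
from scouter.llm.google import PredictRequest

request = PredictRequest(
    instances=[{"content": "What is drift detection?"}],  # illustrative instance payload
    parameters={"temperature": 0.2},                       # illustrative model parameters
)
```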
## RetrievalConfig

### __init__(lat_lng, language_code)

Initialize RetrievalConfig with latitude/longitude and language code.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `lat_lng` | `LatLng` | The latitude and longitude configuration. | *required* |
| `language_code` | `str` | The language code for the retrieval. | *required* |
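A minimal sketch pairing `RetrievalConfig` with `LatLng`; the import path is assumed from the stub location above:

```python
# Sketch only: the import path is an assumption, not confirmed by this page.
from scouter.llm.google import LatLng, RetrievalConfig

retrieval = RetrievalConfig(
    lat_lng=LatLng(latitude=37.7749, longitude=-122.4194),  # San Francisco
    language_code="en-US",
)
```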
## SafetySetting

### __init__(category, threshold, method=None)

Initialize SafetySetting with required and optional parameters.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `category` | `HarmCategory` | The category of harm to protect against. | *required* |
| `threshold` | `HarmBlockThreshold` | The threshold for blocking content. | *required* |
| `method` | `Optional[HarmBlockMethod]` | The method used for blocking (if any). | `None` |
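A minimal sketch, assuming `HarmCategory` and `HarmBlockThreshold` are importable from the same module and that their members mirror the Vertex AI names (both are assumptions; check the stub for the exact members):

```python
# Sketch only: the import path and enum member names are assumptions (Vertex AI-style names).
from scouter.llm.google import HarmBlockThreshold, HarmCategory, SafetySetting

setting = SafetySetting(
    category=HarmCategory.HARM_CATEGORY_HATE_SPEECH,      # assumed member name
    threshold=HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,  # assumed member name
)
```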
## SpeechConfig

## ThinkingConfig

Configuration for thinking/reasoning capabilities.