# Experiment

## BaseModel

Bases: Protocol

Protocol for pydantic BaseModel to ensure compatibility with context.
## EvalMetrics

Map of metrics that can be used to evaluate a model. These metrics are also used when comparing a model with other models.
### __getitem__(key)

Get the value of a metric by name. A RuntimeError will be raised if the metric does not exist.
### __init__(metrics)

Initialize EvalMetrics.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| metrics | Dict[str, float] | Dictionary of metrics containing the name of the metric as the key and its value as the value. | required |
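A minimal sketch of building and indexing EvalMetrics. The import path is an assumption based on the source file location; adjust it to your installation's public API.

```python
from opsml.experiment import EvalMetrics  # assumed import path

# Build the metric map from plain name -> value pairs.
eval_metrics = EvalMetrics(metrics={"accuracy": 0.94, "f1": 0.91})

# __getitem__ looks a metric up by name; a missing name raises RuntimeError.
print(eval_metrics["accuracy"])  # 0.94
```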
## Experiment
### card (property)

ExperimentCard associated with the Experiment.
### llm (property)

Access to LLM evaluation methods.
### log_artifact(lpath, rpath=None)

Log an artifact.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| lpath | Path | The local path where the artifact has been saved. | required |
| rpath | Optional[str] | The path to associate with the artifact in the experiment artifact directory {experiment_path}/artifacts. If not provided, defaults to {experiment}/artifacts/{filename}. | None |
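A hedged usage sketch: the file path, space, and name below are illustrative, and start_experiment is assumed to be usable as a context manager that yields an Experiment.

```python
from pathlib import Path

from opsml.experiment import start_experiment  # assumed import path

with start_experiment(space="demo", name="artifact-demo") as exp:
    # Log a single local file into the experiment's artifact directory.
    exp.log_artifact(lpath=Path("outputs/model_report.json"))
```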
### log_artifacts(paths)

Log multiple artifacts.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| paths | Path | Path to a directory containing artifacts. All files in the directory will be logged. | required |
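And a sketch for a whole directory; the directory name is illustrative and the same context-manager assumption applies.

```python
from pathlib import Path

from opsml.experiment import start_experiment  # assumed import path

with start_experiment(space="demo", name="artifact-demo") as exp:
    # Every file under outputs/ is logged as an artifact.
    exp.log_artifacts(paths=Path("outputs"))
```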
### log_eval_metrics(metrics)

Log evaluation metrics.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| metrics | EvalMetrics | Evaluation metrics to log. | required |
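A sketch combining EvalMetrics with log_eval_metrics (import paths and the context-manager form are assumptions):

```python
from opsml.experiment import EvalMetrics, start_experiment  # assumed imports

with start_experiment(space="demo", name="eval-demo") as exp:
    # Evaluation metrics are logged as a named map rather than one at a time.
    exp.log_eval_metrics(EvalMetrics(metrics={"rmse": 0.21, "mae": 0.17}))
```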
### log_figure(name, figure, kwargs=None)

Log a figure. This method logs a matplotlib Figure object to the experiment artifacts.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Name of the figure, including its file extension. | required |
| figure | Any | Figure to log. | required |
| kwargs | Optional[Dict[str, Any]] | Additional keyword arguments. | None |
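A sketch with matplotlib; the figure content is illustrative and kwargs is left at its default.

```python
import matplotlib.pyplot as plt

from opsml.experiment import start_experiment  # assumed import path

with start_experiment(space="demo", name="figure-demo") as exp:
    fig, ax = plt.subplots()
    ax.plot([1, 2, 3], [0.9, 0.7, 0.5])
    ax.set_title("training loss")

    # The name must include the file extension, e.g. .png.
    exp.log_figure(name="training_loss.png", figure=fig)
```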
### log_figure_from_path(lpath, rpath=None)

Log a figure from a local path.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| lpath | Path | The local path where the figure has been saved. Must be an image type (e.g. jpeg, tiff, png). | required |
| rpath | Optional[str] | The path to associate with the figure in the experiment artifact directory {experiment_path}/artifacts/figures. If not provided, defaults to {experiment}/artifacts/figures/{filename}. | None |
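A sketch for an image that already exists on disk (path illustrative):

```python
from pathlib import Path

from opsml.experiment import start_experiment  # assumed import path

with start_experiment(space="demo", name="figure-demo") as exp:
    # The file is placed under {experiment}/artifacts/figures/ by default.
    exp.log_figure_from_path(lpath=Path("plots/confusion_matrix.png"))
```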
### log_metric(name, value, step=None, timestamp=None, created_at=None)

Log a metric.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Name of the metric. | required |
| value | float | Value of the metric. | required |
| step | int \| None | Step of the metric. | None |
| timestamp | int \| None | Timestamp of the metric. | None |
| created_at | datetime \| None | Creation time of the metric. | None |
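A sketch that logs a scalar per training step; the loop values are illustrative.

```python
from opsml.experiment import start_experiment  # assumed import path

with start_experiment(space="demo", name="metric-demo") as exp:
    for step, loss in enumerate([0.9, 0.6, 0.4]):
        # One call per observation; step is optional but useful for curves.
        exp.log_metric(name="loss", value=loss, step=step)
```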
### log_metrics(metrics)

Log multiple metrics.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| metrics | list[Metric] | List of metrics to log. | required |
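A sketch batching Metric objects (imports assumed):

```python
from opsml.experiment import Metric, start_experiment  # assumed imports

with start_experiment(space="demo", name="metric-demo") as exp:
    # Log several metrics in one call instead of repeated log_metric calls.
    exp.log_metrics(
        [
            Metric(name="accuracy", value=0.94),
            Metric(name="f1", value=0.91, step=10),
        ]
    )
```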
### log_parameter(name, value)

Log a parameter.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Name of the parameter. | required |
| value | int \| float \| str | Value of the parameter. | required |
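A sketch for a single hyperparameter (same context-manager assumption as above):

```python
from opsml.experiment import start_experiment  # assumed import path

with start_experiment(space="demo", name="param-demo") as exp:
    # Parameter values may be int, float, or str.
    exp.log_parameter(name="learning_rate", value=3e-4)
```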
### log_parameters(parameters)

Log multiple parameters.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| parameters | list[Parameter] \| Dict[str, Union[int, float, str]] | Parameters to log. | required |
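Since the method also accepts a plain dictionary, a config dict can be logged directly (sketch; import path assumed):

```python
from opsml.experiment import start_experiment  # assumed import path

with start_experiment(space="demo", name="param-demo") as exp:
    # A dict of name -> value works as well as a list of Parameter objects.
    exp.log_parameters(
        {"learning_rate": 3e-4, "batch_size": 64, "optimizer": "adamw"}
    )
```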
### register_card(card, version_type=VersionType.Minor, pre_tag=None, build_tag=None, save_kwargs=None)

Register a Card as part of an experiment.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| card | DataCard \| ModelCard | Card to register. Can be a DataCard or a ModelCard. | required |
| version_type | VersionType | How to increment the SemVer version. Default is VersionType.Minor. | Minor |
| pre_tag | str | Optional pre tag to associate with the version. | None |
| build_tag | str | Optional build_tag to associate with the version. | None |
| save_kwargs | SaveKwargs | Optional SaveKwargs to pass to the Card interface (if using DataCards or ModelCards). | None |
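A hedged sketch of registering a card inside an experiment. The `model_card` variable is a placeholder for a DataCard or ModelCard built elsewhere, and both imports are assumptions rather than documented paths.

```python
from opsml import VersionType  # assumed import
from opsml.experiment import start_experiment  # assumed import path

# `model_card` stands in for a DataCard or ModelCard built elsewhere;
# constructing cards is outside the scope of this page.
with start_experiment(space="demo", name="register-demo") as exp:
    # Registers the card and bumps the minor version (the default).
    exp.register_card(card=model_card, version_type=VersionType.Minor)
```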
### start_experiment(space=None, name=None, code_dir=None, log_hardware=False, experiment_uid=None)

Start an Experiment.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| space | str \| None | Space to associate with the experiment. | None |
| name | str \| None | Name to associate with the experiment. | None |
| code_dir | Path \| None | Directory to log code from. | None |
| log_hardware | bool | Whether to log hardware information or not. | False |
| experiment_uid | str \| None | Experiment UID. If provided, the experiment will be loaded from the server. | None |

Returns:

| Type | Description |
|---|---|
| Experiment | Experiment |
## LLMEvaluator
### evaluate(records, metrics, config=None) (staticmethod)

Evaluate LLM responses using the provided evaluation metrics.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| records | List[LLMEvalRecord] | List of LLM evaluation records to evaluate. | required |
| metrics | List[LLMEvalMetric] | List of LLMEvalMetric instances to use for evaluation. | required |
| config | Optional[EvaluationConfig] | Optional EvaluationConfig instance to configure evaluation options. | None |

Returns:

| Type | Description |
|---|---|
| LLMEvalResults | LLMEvalResults |
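A deliberately thin sketch: LLMEvalRecord and LLMEvalMetric construction is not documented on this page, so the `records` and `metrics` variables below are assumed to have been built elsewhere from your evaluation data and metric definitions.

```python
from opsml.experiment import LLMEvaluator  # assumed import path

# `records` and `metrics` are placeholders for pre-built
# List[LLMEvalRecord] and List[LLMEvalMetric] values.
results = LLMEvaluator.evaluate(records=records, metrics=metrics)

# `results` is an LLMEvalResults instance; see its own documentation
# for how to inspect individual scores.
```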
## Metric

### created_at (property)

Creation time of the metric.

### name (property)

Name of the metric.

### step (property)

Step of the metric.

### timestamp (property)

Timestamp of the metric.

### value (property)

Value of the metric.
### __init__(name, value, step=None, timestamp=None, created_at=None)

Initialize a Metric.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Name of the metric. | required |
| value | float | Value of the metric. | required |
| step | int \| None | Step of the metric. | None |
| timestamp | int \| None | Timestamp of the metric. | None |
| created_at | datetime \| None | Creation time of the metric. | None |
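A sketch constructing a Metric and reading its properties back (import path assumed):

```python
from opsml.experiment import Metric  # assumed import path

metric = Metric(name="accuracy", value=0.94, step=5)

# Properties mirror the constructor arguments.
print(metric.name, metric.value, metric.step)  # accuracy 0.94 5
```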
## Parameter

### name (property)

Name of the parameter.

### value (property)

Value of the parameter.
### __init__(name, value)

Initialize a Parameter.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Name of the parameter. | required |
| value | int \| float \| str | Value of the parameter. | required |
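A sketch constructing a Parameter, e.g. for use with log_parameters (import path assumed):

```python
from opsml.experiment import Parameter  # assumed import path

param = Parameter(name="batch_size", value=64)
print(param.name, param.value)  # batch_size 64
```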
## get_experiment_metrics(experiment_uid, names=None)

Get metrics of an experiment.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| experiment_uid | str | UID of the experiment. | required |
| names | list[str] \| None | Names of the metrics to get. If None, all metrics will be returned. | None |

Returns:

| Type | Description |
|---|---|
| Metrics | Metrics |
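A sketch fetching metrics back after a run; the UID shown is a placeholder for a real experiment UID.

```python
from opsml.experiment import get_experiment_metrics  # assumed import path

# Placeholder UID; use the uid of a previously logged experiment.
metrics = get_experiment_metrics(experiment_uid="<experiment-uid>", names=["loss"])
```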
## get_experiment_parameters(experiment_uid, names=None)

Get parameters of an experiment.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| experiment_uid | str | UID of the experiment. | required |
| names | list[str] \| None | Names of the parameters to get. If None, all parameters will be returned. | None |

Returns:

| Type | Description |
|---|---|
| Parameters | Parameters |
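And the analogous call for parameters (UID again a placeholder):

```python
from opsml.experiment import get_experiment_parameters  # assumed import path

params = get_experiment_parameters(experiment_uid="<experiment-uid>")
```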
## start_experiment(space=None, name=None, code_dir=None, log_hardware=False, experiment_uid=None)

Start an Experiment.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| space | str \| None | Space to associate with the experiment. | None |
| name | str \| None | Name to associate with the experiment. | None |
| code_dir | Path \| None | Directory to log code from. | None |
| log_hardware | bool | Whether to log hardware information or not. | False |
| experiment_uid | str \| None | Experiment UID. If provided, the experiment will be loaded from the server. | None |

Returns:

| Type | Description |
|---|---|
| Experiment | Experiment |
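A hedged end-to-end sketch, assuming start_experiment can be used as a context manager and that it is importable from opsml.experiment; the space, name, and file path are illustrative.

```python
from pathlib import Path

from opsml.experiment import start_experiment  # assumed import path

# Start a new experiment; pass experiment_uid instead to reload an
# existing one from the server.
with start_experiment(space="demo", name="quickstart", log_hardware=True) as exp:
    exp.log_parameter(name="epochs", value=3)
    exp.log_metric(name="loss", value=0.42, step=0)
    exp.log_artifact(lpath=Path("outputs/model_report.json"))
```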